What Serial ATA RAIDs are you guys running? Building a new system and trying to make some determinations. This new system will likely be built around AMD 252 procs.
I'm not just nervous about RAID 0, I've actually been badly burnt by it in the past; that's why this time I went for a decent controller. There are two reasons why I want RAID 0: one, I have plans to work with uncompressed 4:2:2 footage, so disk throughput is vital; two, running network rendering on the one machine means disk performance is going to be a big issue as well. I'm hoping command queueing will also speed things up.
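To put a rough number on "vital", here's a quick back-of-envelope sketch (Python) of the data rate for uncompressed 8-bit 4:2:2 video. The resolutions and frame rates below are illustrative assumptions, not anyone's exact project settings:

    # Back-of-envelope data rate for uncompressed 8-bit 4:2:2 video.
    # 4:2:2 at 8 bits per component averages 16 bits per pixel.

    def rate_mb_per_sec(width, height, fps, bits_per_pixel=16):
        bytes_per_frame = width * height * bits_per_pixel / 8
        return bytes_per_frame * fps / 1e6  # decimal MB/sec

    formats = {
        "SD 625 (PAL)":  (720, 576, 25),
        "SD 525 (NTSC)": (720, 486, 29.97),
        "HD 1080i":      (1920, 1080, 29.97),
    }
    for name, (w, h, fps) in formats.items():
        print(f"{name:14s} ~{rate_mb_per_sec(w, h, fps):6.1f} MB/sec per stream")

A single uncompressed SD stream works out at around 21MB/sec, which almost any modern disk can manage on its own; it's multiple simultaneous streams (plus rendering on the same box) that push you toward RAID 0.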
Bob.
I'm talking from experience, having discussed this at length with dealers who serve both the uncompressed SD/D1 and HD markets as well as multi-stream DV solutions.
RAIDCore and 3ware Escalade controllers are able to break the 133MB/sec limitation of anything more generic, the exception being the Intel ICH6 southbridges (those bridges are bandwidth-limited to 266MBytes/sec across the drives, which is good but not king).
Interestingly, the higher-end SATA solutions are also able to use their accelerators to keep host CPU utilisation down and to perform with A/V material as well as file-server type applications. Lesser cards are good at burst performance for enterprise applications, not digital video workstations. If you don't use the so-called onboard hardware RAID functions of PCI32 cards, then you can usually get above 120MByte/sec sustained by striping in Windows/NT instead. Extended BIOS RAID modes are typically poor; the ICH6R is a likely exception.
For Spot, the choice depends on the onboard chipset's SATA performance. Are you using an AMD chipset with your CPUs, Spot? If the SATA ports have access to a fast pipeline into the CPU/memory, then the chances are you can get fast performance too.
Do you want 200MB/sec performance?
Well, actually you might, even with DV and HDV. Performance depends on whether the application seeks frames or moves to navigated points in the geometry of the file structure. However, RAID-10 (the benefits of striping, but with only half the overall capacity of the array usable) can halve random access times, or better, especially with larger streams.
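To make that concrete, here's a very simplified sketch of the capacity/throughput trade-off between RAID-0 and RAID-10 for four drives. This is my own idealised model (linear scaling with spindle count), not a benchmark:

    # Simplified RAID capacity/throughput model. Assumes ideal striping
    # that scales linearly with spindle count; real arrays lose some
    # efficiency to the controller and the bus.

    def raid_summary(level, n_drives, drive_gb, drive_mb_s):
        if level == "RAID-0":        # stripe across all drives
            usable, read_x, write_x = n_drives, n_drives, n_drives
        elif level == "RAID-10":     # striped mirrors: half the capacity,
            usable = n_drives / 2    # but reads can hit either side of a mirror
            read_x, write_x = n_drives, n_drives / 2
        else:
            raise ValueError(level)
        return usable * drive_gb, read_x * drive_mb_s, write_x * drive_mb_s

    for level in ("RAID-0", "RAID-10"):
        gb, rd, wr = raid_summary(level, n_drives=4, drive_gb=74, drive_mb_s=60)
        print(f"{level:8s} {gb:5.0f}GB usable, ~{rd:.0f}MB/s read, ~{wr:.0f}MB/s write")

The read side is where RAID-10's random access benefit comes from: each request can be served by whichever half of a mirror gets there first.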
SCSI is still great, mostly due to the drive smarts and the rotational speed. However, almost without exception, the internal hardware RAID solutions for SCSI are geared for server performance, not the digital video workstation. NT striping is king for throughput. People frown on NT striping because they believe that raw hardware must be able to do a better job; however, you must remember that many sub-$100 controllers are not endowed with their own processors. M$, for all their faults, do know something about storage and about marrying up data across two fast-moving mechanisms, each doing what it does slightly out of time with the other.
The hybrid solution is a SCSI host controller plus an external chassis that holds SATA/PATA drives but has an outboard distributed/parallel controller to present the array. The Medea RTR etc. are good.
Medea (SCSI presented SATA drives) and 3ware are moderately expensive. However RAIDCore and HighPoint SATA solutions are cheap and very high performing where a PC has either PCI64/PCI-X or PCI-Express peripheral buses.
I've yet to test an older PATA-based 3ware board with twelve or sixteen 2GB solid-state compact-flash drives. However, I suspect this would make a great software-striped RAID for reliability, throughput and random access performance alike. Not up there with BitMicro SSD solutions, but I think SSDs are about to come upon us for multi-stream HDV and DV digital video workstation purposes.
There are many dealers who can save you from expensive experiments. Decide what you want from your secondary storage (and system boot disc for that matter) and take those specs to them (sustained throughput, average seek latency, drive warranty period, hot-swap, redundancy of data, backup/archive solution).
The guys I bought my box from deal exclusively in high-end workstations, render farms and solutions for handling DIs etc. This is the first time in my life I've had someone build the box for me. I probably could have put it together a bit cheaper if I really knew what would work and what wouldn't, but having spent a lot of time picking their brains I felt I had to do the right thing.
One thing I did learn from them: gigabit Ethernet is plenty fast enough for uncompressed SD if you use decent switches, and it's way cheaper than the next step up (the arithmetic is sketched below). SATA RAID is fast enough for anything apart from uncompressed HD, and even then you might just squeeze it in.
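The arithmetic behind that, as a rough sanity check; the 75% usable-throughput figure for real-world gigabit Ethernet is my own assumption, not a measurement:

    # Why gigabit Ethernet copes with uncompressed SD but not HD.
    # The 0.75 efficiency factor for throughput after protocol
    # overheads is an assumption.

    gbe_raw    = 1_000_000_000 / 8 / 1e6   # 125 MB/sec on the wire
    gbe_usable = gbe_raw * 0.75            # ~94 MB/sec usable

    sd_stream = 720 * 576 * 2 * 25 / 1e6       # ~21 MB/sec, 8-bit 4:2:2 PAL
    hd_stream = 1920 * 1080 * 2 * 29.97 / 1e6  # ~124 MB/sec, 8-bit 4:2:2 1080i

    print(f"GbE usable       ~{gbe_usable:.0f} MB/sec")
    print(f"Uncompressed SD  ~{sd_stream:.0f} MB/sec -> fits with room to spare")
    print(f"Uncompressed HD  ~{hd_stream:.0f} MB/sec -> doesn't fit")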
So I'd agree, if you're looking to get a high end system find a company that's got a reputation in this business.
Bob.
As RBartlett and Cline point out, configuration is king. For serious output with multi-CPU boards, you should seriously consider a couple of RAID controllers: one configured for JohnCline's file-size requirements, one configured for the media library, still pictures, etc., and one RAID configured for maximum read/write speed for all temp, swap and work files. I would keep the temp/swap/work file space away from the media and video streams.
I use the 36GB Western Digital Raptor as my C: drive and I'm happy with it.
I had a 250GB Western Digital 2500JD (SATA) as my internal data drive and it failed after about five months. They replaced it under warranty, but I would probably choose Hitachi in preference next time as a result.
Remember that Hitachi bought that drive business from IBM, which had huge failures; one would hope that Hitachi has fixed all of the problems with the Deskstars.
I was seriously trying to forget the IBM Deathstar experience. A cluster of 70GB disks operated flawlessly from 2001 to August 2004, then over 20 drives failed within three months. IBM and I are still working on recovering the data. IBM storage modules are great, but I'm looking around for alternate solutions.
Hitachi and IBM merged their hard disk divisions in 2003; IBM has about a 15% stake in the new company. IBM did have a serious design issue with the 75GXP series of drives, but that was isolated to that one particular model and there have been no problems since. The 75GXP fiasco was compounded because the IBM "suits" denied the high failure rate of those drives when it was quite clear there was a serious problem with them. That was really too bad, because it was IBM that invented and developed all the breakthrough hard disk technologies over the years, including AFC "pixie dust" media and giant magnetoresistive (GMR) head technology. It was primarily these two things that allowed the sudden and dramatic increase in hard drive capacity that occurred just a few years ago. IBM's credibility was fatally damaged by management, and that prompted the merger with Hitachi. IBM's engineers were, and continue to be, the best; only now it's under the Hitachi banner.
Anyway, I don't see any reason to stay away from Hitachi drives, they continue to be pretty cutting edge. Here is a link to a comprehensive review of the Hitachi 7K250 and 7K400 drives:
I had horrible luck with Maxtor drives and just quit buying them. I bought Western Digital drives instead because of a technology partnership they had with IBM, but that partnership ended and WD started to lag behind in performance and reliability. Then I started buying Hitachi drives and I've been quite satisfied with them. These days I've been buying a lot of Seagate 7200.8 series drives and they seem to work very well and I like the five-year warranty.
OT to Spot's question, but for an interesting idea coming down the pike, read up on iSCSI SANs. For those who don't know, a SAN is a "storage area network", i.e., the storage hardware is physically separated from the computers that access it. SANs have been expensive, but iSCSI is a relatively cheap alternative that might make sense for some people.
It almost seems with drives that you have to catch them on a good roll, as almost all of them have had issues at one time or another. I had Western Digital, IBM and Seagate drives that crapped out regularly, so I switched to Maxtor and Fujitsu.
Great comments, thanks.
I do want at least 180MB/sec throughput, yes, but I'm challenged to find it in an affordable package. I'll likely use an AMD chipset in this new machine, yes.
The most attractive system I can find these days, at a high-performance, low-price point, is the G-RAID storage system. Unfortunately, these require 1394b FireWire buses. Have any of the AMD mobos implemented 1394b yet? Last I checked, they had not. To the best of my knowledge, a 1394b PCI card will only give 1394a throughput on a standard PCI bus.
180MB/sec sustained transfer rate? Yeah, that's going to be difficult to do without throwing a lot of cash at the problem. A lot of the 15k rpm drives can hit 90 MB/sec on the outer cylinders, but most of them fall off to "only" 60-70 MB/sec on the inner cylinders. 15k drives max out at around 147 gig of storage each and, currently, they are all SCSI.
In the SATA camp, there is the 10k RPM WD WD740GD Raptor. The outer- and inner-zone scores of the Raptor hit 71.8 MB/sec and 53.8 MB/sec respectively but only top out at 74 GB capacity. You could do a 4-drive SATA RAID0 with the Raptor and probably hit your desired max data rate, but it would "only" be a 296 gig array.
Of course, the other important factor to consider is that the controller can't be tied to the plain PCI bus, since 32-bit PCI is only good for a maximum of 133 MB/sec. 64-bit PCI can hit 266 MB/sec, and the newer PCI-X bus can easily hit the speeds you need.
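A quick sketch of those bus ceilings against the ideal 4-Raptor RAID 0 above. These are theoretical peaks; real buses deliver noticeably less, especially with other devices sharing them:

    # Nominal peak bus bandwidths vs. an ideal 4-drive Raptor RAID 0.
    # Peak = bus width (bytes) x clock; sustained rates fall well short.

    buses = {
        "PCI 32-bit/33MHz":    133.0,   # 4 bytes x 33.3MHz
        "PCI 64-bit/33MHz":    266.0,   # 8 bytes x 33.3MHz
        "PCI-X 64-bit/133MHz": 1066.0,  # 8 bytes x 133.3MHz
    }

    outer, inner = 71.8, 53.8           # per-drive MB/sec (WD740GD Raptor)
    array_hi, array_lo = 4 * outer, 4 * inner

    print(f"4x Raptor RAID 0: ~{array_lo:.0f}-{array_hi:.0f} MB/sec ideal")
    for name, peak in buses.items():
        verdict = "bottleneck" if peak < array_hi else "headroom"
        print(f"{name:21s} ~{peak:4.0f} MB/sec peak -> {verdict}")

Plain PCI caps out well below the array; even 64-bit/33MHz PCI sits just under the ideal outer-zone rate, which is why PCI-X (or PCI-Express) is where the headroom is.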
To get the speeds you're looking for with enough "headroom" to actually sustain those transfer rates, you're probably still looking at SCSI using 15K drives. But, like you said, a challenge to find at an affordable price.
The upcoming SATA II drives using a PCI-Express SATA controller will probably do what you need, but it seems that the choices are pretty limited at this point. Perhaps in a couple of months...
The Canopus "Raptest" software mentioned near the top of this thread is quite interesting. However, it is not very consistent. It gave me readings ranging from 42 to 52MB/sec for both read and write, but it jumpped all over the map, and changed every time I ran it. That is QUITE a margin of error. Not only that, but sometimes read would be faster than write, and sometimes the other way, bouncing independently on the scale.
This is kind of what I was alluding to with the SATA II configurations. A guy at CES told me that it was possible to hit sustained speeds of 225MB/sec with SATA II, but I found that pretty darn hard to swallow. I don't need it just this second, so I think I'll hold off and see what comes down the release pike in mid-spring/early summer.
FWIW guys, I'm disappointed that 1394b PCI cards won't give the full 800Mbits/sec on a PCI 2.0 mobo. Still, I needed some external drives, so I went out and bought some ADS Technology FireWire 800 DV Drive Kits. I put some WD2500JBs in them and have been running them as 1394a devices on my P4P800 mobo. Using HDTach to test them, I'm getting some pretty impressive data rates, although they fall off quite a bit nearer the center of the platters. I haven't tried RAID 0 with the Win XP dynamic drive setup, but that should yield even faster speeds.
You can't stripe external drives with Windows when they are presented over USB or FireWire interfaces, because of the generic storage device model that applies to those interface types.
FireWire 800 is 100MBytes/sec peak (800Mbits/sec), and that's with a fair wind behind it.
Using a 3ware Escalade SATA or a RAIDCore solution on a 64-bit PCI bus with no (or next to no) contention on the slot (typically E7505 or PCI-Express based motherboards), you can sustain between 150MBytes/sec and 250MBytes/sec depending on the drives. Command-queueing drives might be worth looking for if you can't afford for all the SATA drives to be 10k RPM; CQ will help the poor random access performance of SATA compared to similar SCSI models. RPM isn't the only differentiator.
So DSE, you could wait, but you needn't. Typically the drives give the same performance on the new interfaces for a good 12 months, so SATA-2 or serial-SCSI probably won't buy you much unless the host adapter/controller is somehow propelled into greater parallelism too.