What Serial ATA RAID setups are you guys running? I'm building a new system and trying to make some determinations. This new system will likely be built around AMD Opteron 252 procs.
I'm running two types of SATA drives:
- Western Digital Raptors (74GB, 10,000 RPM, 4.5ms access time), 2x RAID 0 for the system disk
- Hitachi (250GB, 7200 RPM, 8.5ms access time), 4x RAID 5 for video content
My RAID 0 runs on the Intel ICH6R with 4 SATA ports; the RAID 5 runs on the Silicon Image Sil3114R with 4 SATA ports. Both controllers are on the ASUS P5AD2-E Premium motherboard (Intel CPU) that I'm running.
They've been running 24/7 for about 2 months. No problems of any kind.
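For anyone doing the math at home, here's roughly what those two arrays yield in usable space. A quick Python sketch of the standard RAID capacity arithmetic (drive sizes taken from the post above):

```python
# Usable capacity of the two arrays described above.
# RAID 0 stripes across all members; RAID 5 gives up one drive's worth to parity.

def raid0_capacity(drive_gb: float, n: int) -> float:
    return drive_gb * n

def raid5_capacity(drive_gb: float, n: int) -> float:
    return drive_gb * (n - 1)

print(f"2x 74GB Raptors, RAID 0: {raid0_capacity(74, 2):.0f} GB usable")
print(f"4x 250GB Hitachis, RAID 5: {raid5_capacity(250, 4):.0f} GB usable")
```

So the RAID 0 system disk nets the full 148GB, while the RAID 5 array trades one drive's worth of capacity (750GB usable out of 1000GB raw) for the ability to survive a single drive failure.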
I'm also running the Hitachi 7K250 series SATA drives. I have six 250 gig drives and a couple of 400 gig drives. Four of the 250's are raided and the rest are running as single drives. I have been extremely satisfied with them.
Recently, I have gotten a few Seagate SATA drives on sale from Fry's Outpost. I got a couple of 250's and a 400. They, too, have been very good and come with a five-year warranty as opposed to the three-year warranty of the Hitachis and everyone else. I used to use Seagates back in my SCSI days and hadn't bought any of their drives in a while. Turns out that I'm really pretty impressed with their new SATA drives. Very fast, quiet and, so far, reliable. I'll definitely be buying some more.
2x Maxtor 120GB SATAs.
3-year warranty.
Can't say I've noticed a difference with non-SATA drives. I don't push my system like you probably do though.
The smaller round cables alone are a great plus. Makes for better ventilation.
Rest of the set-up is an Athlon XP 2800+ matched to an ASUS A7N8X-E Deluxe. Very stable!
Make sure to check out the ASUS boards with the NVIDIA nForce4 chipset. BOXXtech made a killer dual AMD 64-bit system with the nForce4.
1) The motherboard approach, rather than a dedicated board, does cost a CPU performance penalty. I've never measured it myself, but I've seen some claim 10%. I don't consider this a problem, but it does tie CPU/RAM into any problems with the drive. A dedicated board is safer than a solution that leans on the system's CPU and RAM.
2) I had bad RAM for a long time causing BSODs. One night I had 3 renders going in 1GB of RAM and I crashed the hard drive. Nonrecoverable, so a fresh Windows install. I've finally got things fixed up, but I've had 3 reinstall nightmares in 14 months.
If I did it again (and I eventually will), I'd definitely go with a dedicated RAID board.
RAID 0 on Highpoint controller, dual Xeons @ 3GHz on SuperMicro mobo, only way to fly. Nvidia pro vid card.
Ah, and a decent FireWire card; looks like it's PCI-X. With 2x 800 and 1x 400 ports all running flat out, plain PCI might have an issue.
Bob.
Not that you won't research this before you buy your equipment, but I've heard of a few cases (either cards or mobos, I don't know which was causing the problem, or possibly a combination of the two) where the card gave slower performance than going directly through the mobo. Sorry, but I don't remember any brand names etc. associated with this; I only read it in passing.
So make sure to double check before you combine =)
Seagate drives run cooler than most drives, too, which I would think would be a factor as you start RAIDing a bunch of drives together in one system. Cool, quiet, and a 5-year warranty! I don't think they're the absolute fastest drives, but they're competitive.
Tom's Hardware just posted a review of the new WD drives (WD3200JB) that found it to be the coolest 7200rpm drive they've tested and very quiet, too. Not available in SATA yet, though.
Using RAID controllers on the ASUS P5AD2-E motherboard (i925XE chipset, LGA775 Pentium 4 3.4GHz, 800MHz FSB, 1GB 533MHz DDR2 memory).
John
EDIT: I just ran the thing again and got wildly different numbers. It gets better for a few runs, then gets much worse and starts building back up again.
My average over 20 attempts is 71MB/s read and 98MB/s write.
It is definitely making the disks grind each time. Anyone know why the results vary so much run to run?
I just ran across the press release from Hitachi announcing the 500 gig SATA II drives. Just think... four drives, two terabytes, no waiting!
"Deskstar 7K500 – The Hitachi 7K500 offers 500 GB, the industry's largest available capacity for high-end media center PCs, DVRs, nearline storage and other enterprise ATA applications. The SATA version of the drive ships with a large 16 MB buffer to deliver breakthrough performance for bandwidth-intensive applications. In addition to the complete set of SATA II features, the drive includes Rotational Vibration Safeguard (RVS), a technology used to ensure drive performance in high rotational vibration environments, where there are multiple drives in a single enclosure."
2 x ST3300831AS Seagate
Capacity: 300 GB, Speed: 7200 rpm
Seek time: 8 ms avg, Interface: SATA
Native Command Queuing
RAID-0 striped to 600GB with MSI mainboard controller.
Ran the test four times and got:
34R / 43W
37R / 105W
36R / 103W
37R / 104W
And FYI, here are results for other plain ATA drives in the box:
160GB WD: 31R / 38W (system drive)
250GB Hitachi: 34R / 31W
250GB Hitachi: 53R / 52W
300GB Seagate ATA: 50R / 58W
I'm curious as to why most people are running RAID configurations: performance or redundancy? What specifically do you do that requires raw throughput? I'm just curious.
Of course faster is always better with drives, as with all computer components, but unless you are working with uncompressed video, are transfer rates over 30MB/sec or so actually required?
I remember being very concerned about drive performance when my "benchmark" was capturing lossless video using HuffYUV, which required sustained rates of about 10MB/sec. By 2001 any drive could do that, no sweat. Since then I look for a fast drive just to make Windows boot faster, load applications quickly, reduce the time for moving large files around, etc., but I haven't noticed or seen any data supporting drives being a bottleneck in video editing, outside of working with uncompressed streams as I noted above.
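Here's the back-of-the-envelope math behind those rates, as a quick Python sketch. The stream rates are nominal round figures for illustration, not measurements:

```python
# Rough sustained-throughput needs for a few capture formats.
# Rates are nominal; real captures vary with content and overhead.

def mb_per_s(mbit_per_s: float) -> float:
    """Convert a nominal Mbit/s stream rate to MB/s."""
    return mbit_per_s / 8

uncompressed_sd = 720 * 486 * 2 * 29.97 / 1e6   # 8-bit 4:2:2, bytes/s -> MB/s
huffyuv = uncompressed_sd / 2                    # lossless, roughly 2:1 on typical footage
dv = mb_per_s(25 + 1.5)                          # DV: ~25 Mbit/s video plus audio

for name, rate in [("DV", dv),
                   ("HuffYUV lossless SD (~2:1)", huffyuv),
                   ("Uncompressed SD 4:2:2", uncompressed_sd)]:
    print(f"{name}: ~{rate:.1f} MB/s sustained, ~{rate * 3600 / 1024:.0f} GB/hour")
```

DV comes out around 3.3MB/s, HuffYUV SD around 10MB/s (matching the figure above), and uncompressed SD around 21MB/s, which is why only uncompressed work gets anywhere near that 30MB/sec threshold.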
Here is one review from a very reputable source showing minimal gains from a RAID setup in real world usage. Artificial benchmarks look great but there are no significant gains in most applications. Again I want to stress that if you are constantly moving large files around or working with uncompressed video then your usage patterns will mimic the benchmark testing and you will realize speed gains. Other than that though it doesn't look promising from this report.
In my particular case, I work on projects with up to 10-12 hours or more of source material. I sometimes have 4 or 5 of these projects on the system at the same time, so my interest in RAID is about having a huge, single drive and not worrying about running out of space. Otherwise, if I use a single 250 gig drive, for example, I can get the source files for maybe two projects on the drive and have a fair amount of space left over but not enough for a third project. With a huge RAID array, it doesn't really matter (until you actually fill up the drive.)
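To put rough numbers on that, assuming plain DV source footage at the commonly quoted ~13GB per hour (other formats will differ):

```python
# Rough project-size math for DV source material.
# ~13 GB/hour is the usual ballpark for DV; treat it as approximate.

DV_GB_PER_HOUR = 13
DRIVE_GB = 250

for hours in (8, 10, 12):
    project_gb = hours * DV_GB_PER_HOUR
    print(f"{hours}h of DV source = ~{project_gb} GB; "
          f"a {DRIVE_GB} GB drive fits {DRIVE_GB // project_gb} such project(s)")
```

A couple of mid-sized projects fill a 250 gig drive with room to spare but not enough for a third, which is exactly the situation described above; a 750GB array absorbs several before space becomes a concern.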
Now, when I render these projects, I usually render to a separate physical drive and keep my temp files on yet another physical drive. A lot of my finished video doesn't actually need to be "rendered"; there are large sections of untouched DV footage. In that case, Vegas just copies the source video to the destination, and copying from one drive to another is MUCH faster than copying within the same physical drive, where the heads have to seek back and forth between the source and destination files. RAIDs don't necessarily copy internally much faster than single drives.
If all your footage has filters applied, like color correction or CG, and the whole project really needs to be rendered, then the render speed is going to be the bottleneck, not the speed of the drives.
The "why not" of RAID has to do with the most common RAID implementation, a "RAID 0" striped array. RAID 0 is not a "true" RAID because it is NOT fault-tolerant. If one of the drives in the RAID array fails, you have lost ALL the data in the array.
As I was writing about "moving large files around", I realized after I posted that someone would correct me on that! Yes, moving files around will only be faster from one RAID "performance setup" array to another.
I understand your use and had not thought of that. That's why I asked. So you are in effect creating a huge logical drive, right?
How effective is Vegas at SmartRendering, i.e., passing unaltered portions of video on the timeline straight through to the export? I used to work with MediaStudio Pro, and it was very good at what Ulead calls SmartRendering IF you knew MSP's behavior. For example, for VBR MPEG you needed to export VBR with a bit rate above the source's; for CBR you had to be at the exact source bit rate. There were other things I can't recall at the moment.
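Those MSP rules boil down to a little predicate like the following. This is a hypothetical sketch of the behavior described above, not Ulead's actual logic, and the names are made up:

```python
from dataclasses import dataclass

@dataclass
class MpegSettings:
    mode: str      # "VBR" or "CBR"
    bitrate: int   # kbit/s (max bit rate for VBR)

def can_smart_render(source: MpegSettings, export: MpegSettings) -> bool:
    """MSP rules as described above: VBR export needs a bit rate above the
    source's; CBR export must match the source bit rate exactly."""
    if export.mode != source.mode:
        return False
    if export.mode == "VBR":
        return export.bitrate > source.bitrate
    return export.bitrate == source.bitrate  # CBR

src = MpegSettings("VBR", 6000)
print(can_smart_render(src, MpegSettings("VBR", 8000)))  # True: segments pass through
print(can_smart_render(src, MpegSettings("VBR", 6000)))  # False: full re-encode
```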
And yes, RAID 0 has always made me nervous. It's scary enough thinking about 1 drive going down, but 1 drive taking down 2 is really a nightmare.
What is the drawback to just spreading a project over a few drives? It seems like you are trying to keep projects contained to one array. I guess I'm not understanding the type of array you're setting up. It's interesting, and I'd like to learn more about your preferred workflow regarding storage if you have the time.