OT: Anyone using RAID0

farss wrote on 8/8/2003, 8:31 PM
Being a speed freak, I decided that using two 80 GB drives with the Promise RAID 0 controller on the mobo would be a good idea. Now I'm far from certain that was a wise decision.

I've developed an ECC error on the array and I can't even get Win2K's ScanDisk to run on it. As soon as the array controller sees the error it takes the array offline, and the only way to get it back is a restart. Not very elegant.

I know this isn't a VV issue, but I figure video is one of the things that should benefit from the higher data throughput of RAID 0. If it means these sorts of hassles, though, I just don't think it's worth it.

I know DV doesn't stress drives as much as we think - during capture the array is just loafing along - and I've seen others in the past mention that RAID is overkill for DV. Maybe my experience can serve as a warning to others.

Comments

Flack wrote on 8/8/2003, 8:38 PM
Yes farss, you found out the hard way... I also went that way at first and ran into some similar problems, but to be honest the new drives of today are fast enough for AV editing.
I stuck two 80 gig drives in a RAID 0 array and had heaps of problems, but now I just have them on the controller as two slave drives and have had no problems at all.

Flack
farss wrote on 8/8/2003, 9:19 PM
The other thing I didn't realise until I did a bit more research is that in this type of config it's only software RAID. I suspect this imposes some overhead on the CPU, so although I might be getting faster disk throughput I'm losing out on CPU performance.

For rendering and capturing I may be losing more in one area than I gain in the other. The seemingly nice thing about RAID 0 wasn't just the speed but ending up with one large volume. I do a lot of work that involves capturing, say, 4 hours off tape. With 160 GB I just don't have to watch the capture; even if I end up with an hour of nothing at the end of the capture, it isn't an issue. But now that I've done the sums, even five hours of DV is going to fit onto an 80 GB drive. Having two of them means I can edit from one to the other, which is probably going to perform better than having the input and output files on the one giant drive anyway.
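(Rough numbers, for anyone checking the sums: DV video runs at a fixed 25 Mbit/s, around 3.6 MB/sec once audio and overhead are included, which works out to roughly 13 GB per hour - so five hours is about 65 GB and fits comfortably on an 80 GB drive.)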
CrazyRussian wrote on 8/8/2003, 9:40 PM
I have RAID 0 myself and I'm not complaining. I have a lot of experience with RAID and I know Promise (FastTrak) is a good company, but IDE was never meant to be in any RAID, so configuring it is pushing your luck, and you don't play with your luck when it comes to HDs. I mean if anything goes bad, your data is still OK if your HDs are solid. With RAID 0 it is even more of a gamble because there is no redundancy.... Sorry you had to find out the hard way.
farss wrote on 8/9/2003, 12:15 AM
Mostly just an annoyance really - I was still able to get everything vaguely useful off the drives. I'm just going to go back to basic drives; if I ever really need to let the speed demon loose again I'll mortgage the kids and buy a big SCSI array.
kentwolf wrote on 8/9/2003, 1:13 AM
My motherboard also has RAID capability, but I thought it much more reliable to just use the RAID ports as regular IDE ports.

I have never had a problem running in this manner.
RBartlett wrote on 8/9/2003, 1:55 AM
Reliability of a disc spinning at 7200, 10k or 15k rpm isn't always something best assured by spending more on SCSI technology. These are, however, the companies' premium drives that don't get the short cuts implemented quite so quickly. IDE, on the other hand, has the quantity and has to keep working for 1 to 3 years to save the manufacturer from significant costs on warranty claims (and any bad reputation that bubbles on from the failure).

If SCSI discs were solid state then I'd completely agree that they are more reliable.
Fact is, by being of similar engineering spec, the WD Raptor 36GB 10k rpm SATA-150 drives are likely to be SCSI like. We'll know in a year or so.

RAID-5 or RAID-10 are the best options for reliability. With the right number of ports to suit RAID-10 on a 3ware Escalade PCI64 (capable) controller, you get the best throughput and strong data protection.

Farss, I've done the same as you have now. On my last PC I ran RAID-0, but since I get the same throughput on a single drive, I have two separate drives in the new PC. Using plug-in ATA-133 controllers (Silicon Image) I keep the drives as master devices with no slave. For video I'd always stripe RAID-0 in Windows and not in the Promise/HighPoint/SI BIOS interface - something I gather at least one of you above does but didn't quite say.
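
(For reference, striping in Windows rather than in the controller's BIOS means a dynamic-disk striped volume - built through the Disk Management snap-in on Win2K/XP, or on XP Pro from the command line with diskpart. A minimal sketch, assuming two empty drives show up as disk 1 and disk 2:)

   diskpart
   DISKPART> list disk
   DISKPART> select disk 1
   DISKPART> convert dynamic
   DISKPART> select disk 2
   DISKPART> convert dynamic
   DISKPART> create volume stripe disk=1,2
   (then format the new striped volume and give it a letter or mount point; anything already on the two disks is at risk, so only do this on empty drives)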

I've used the mountvol command to graft the drives I've formatted without any drive letters onto C: - takes some doing and doesn't help programs detect how much spare space is left on the video storage area. I just like one flat space like in UNIX - darn geek!
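
(A minimal sketch of the mountvol trick, for anyone curious - the folder name and the volume GUID below are placeholders, not the actual setup described above:)

   mountvol
   (run with no arguments it lists every volume's \\?\Volume{GUID}\ name and any existing mount points)
   md C:\Video
   mountvol C:\Video \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
   (C:\Video must be an empty folder on an NTFS drive; the GUID is whatever the first command reports for the letterless video volume)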

Separate drives are nicely supported by Vegas. Capture allows you to specify multiple directories to fill.

Uncompressed video, HDCAM and some multi-layer applications can still make good use of a performance RAID. Storage won't really improve for the non-elite until PCI-Express reaches the interface bus of mere mortals. PCI-X still carries a fat premium.

I'm even less likely to run a two-drive IDE RAID now that the manufacturers are all trying to stay at the one-year warranty point for high-performance drives. Although the extra cache is of little use for a video drive, 8MB cache drives are worth considering just for their warranty. I'm slightly reassured by the engineering being common between the 8MB and 2MB cache devices - a difference worth watching should the premium ATA drive type ever disappear.

I found out that the Maxtor 160GB Plus 9 - 2MB(L0) cache drive that recently failed on me was in fact a NEC Calypso drive. This appeared on the replacement drive's customs paperwork and also in the broken drive's BIOS detection phase - I guess it didn't load up the Maxtor identity from its own boot loader because the motor failed to start somehow. They exchanged it quite happily. I worry slightly about the sister drive, which was also manufactured on 24Dec2002. The performance of the 2002 model gives away the fact that Maxtor have shortened a 200GB unit to become the 160GB. As such the performance remains at 42-48MB/sec (read) across the entire disc. If my replacement drive is the 160GB that Maxtor were actually hoping to "make", then my new drive might work out slower towards the end of the disc. Another reason not to stripe it RAID-0: the timing of the drives you stripe should be as closely matched as possible.

I only have a PCI32 bus, so I couldn't expect anything better than circa 95MB/sec from any drive array. There isn't enough headroom for other stuff to go on within the 133MB/sec max theoretical bandwidth of this shared bus.
farss wrote on 8/9/2003, 9:01 AM
Funny thing about drives: the older ones seem much more reliable - I suppose the lower data density helps. I've got some really old 10 GB ones in a PII machine that I just use as a test bed and they just don't die.

The annoying thing about what has happened here is that nothing seems to have died per se. After a couple of restarts and writing fresh data over the entire array, everything is working fine. I've had my fair share of arguments with programmers over the years who seem to think any reported error, even ones that just said an error had been corrected, meant it was time to shut down.

I think if I ever do go down the RAID path again it'll be via a proper controller that comes with some form of support. The mobo uses the Promise chip and drivers. Needless to say, Gigabyte are asleep when it comes to support and Promise just refer you back to the mobo manufacturer. The damn mobo also has AC97 sound on it and that's even worse than the RAID controller - to date I'm lucky to get better than 30 dB S/N out of the line inputs. But I can convert it at 24/96K!

I wouldn't care so much about all that largely useless silicon on the mobo - I can switch it off and plug in something better - but you don't get much choice. From what I can see, every mobo manufacturer is trying to cram more substandard bits onto the board just to have a longer feature list. Even if I turn this stuff off in the BIOS, the power supply still has to power it.
Rom wrote on 8/9/2003, 12:12 PM
I am using RAID 0 (2x80 gig) and so far so good. My aim was the 160 gigs of combined space more than performance, for uncompressed video work. The one thing that most people may not appreciate is how hot these new drives get when operating. I would strongly recommend using cooling fans on all your hard drives. Remember that when the drive is hot on the outside, it's much hotter on the inside. This is compounded when a RAID array is built of two drives sandwiched together in a computer chassis (they both work and heat each other at the same time). I believe that active cooling should keep your hard drive's guts from cooking and extend its useful life.
Erk wrote on 8/11/2003, 1:19 PM
Since some RAID-knowledgeable folks are on this thread.... I've got an ASUS A7V333 mobo with two built-in "Promise Lite Fast Track Raid" connections. For about a year or so I've had 2-3 regular Maxtor IDE 7200 drives hooked up to these. I never set a RAID up with the Fast Track utility that shows up during boot-up. They just show up as regular drives in WinXP's Explorer. But if I plug a new drive in, during boot-up it will say something like "configuring array" and reboot, and everything works.

My questions: what exactly is happening here? Is this RAID 0? Am I indeed taking a CPU hit? My system works great, I don't want to rock the boat, but is there anything I need to know? (my mobo manual wasn't too helpful).

Thanks,

G
dvdude wrote on 8/11/2003, 1:45 PM
It used to be necessary for me to run a RAID setup just to get the sustained throughput required for DV. Back then, I was using 2 x 10GB IBM drives on a Promise Fastrak PCI card. Frankly, the Promise card was probably the easiest, most reliable and unobtrusive change I ever made to a PC - it just plain worked, it even set itself up! Anyway - I digress....

My machine today still has multiple drives; I even have a HighPoint RAID controller on the mobo. But I don't use it as RAID, since it's simply no longer necessary, so it's set up as another couple of primary IDE channels. I still use multiple spindles (connected to these additional ports) so that I can pull video from the capture drive, render the bits that need rendering during the edit, and then create a finished project file on a different drive. Same thing when rendering MPEG2 - read in the finished project file and write out the MPEG file on a different physical drive. I find that such an arrangement dramatically reduces head movement (particularly as my pagefile is on a third disk), which is more "friendly" to the drive than the constant thrashing I used to get with a single video drive. My current setup is a 34GB system drive with 2 x 80GB video drives.

Andy
JJKizak wrote on 8/11/2003, 2:01 PM
It is a good idea to keep the OS off your RAID IDE drives - one glitch and it's history. Win2K Pro doesn't like RAID drives and tends to lose the hive files, and once they are lost, your goose is cooked. But then again the drives are so fast now, why do you need RAID to capture DV at 3.43 MB per sec? The only reason would be to capture HD or uncompressed AVI. But I usually am missing something, so perhaps somebody could clue me in.

JJK
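
(For comparison: a single 7200 rpm IDE drive of this era sustains somewhere in the region of 30-50 MB/sec on reads - see the 42-48 MB/sec figure quoted earlier in the thread - so one plain drive already has roughly ten times the headroom that DV's data rate requires.)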
theigloo wrote on 8/11/2003, 8:12 PM

Remember that using two drives in RAID 0 will not buy you any speed. They will benchmark fast, but you are shooting yourself in the foot if you read and write to them at the same time. To do that right, you need four drives.

For those of you with RAID 0, try this experiment: copy a big file from one location on the RAID to another. Slow as hell, right? Try it again, but copy the file from the RAID to your C: drive. Fast.

This is because when you RAID 0 two HDs, logically you have created one disk. We all know the limitations of a single disk. All you have done for yourself is make one fast disk.

Let's say you create a project with two tracks and you do a picture-in-picture effect. The computer has to get the data for both video streams, but it will be fighting itself - constantly jumping from one stream to the other. The HD read heads will be going nuts.

If you do the same thing, but both sources are on separate, non-RAIDed drives, the results will be much better.

I too played with RAID cards and got burnt. It's a good learning experience. At the end of the day, what I hated the most was the lack of ACPI support in the controllers. My HDs would power up and down all the time for no reason - even when I wasn't using them. It drove me nuts, so I broke the RAID.

Matt.
BillyBoy wrote on 8/11/2003, 8:25 PM
The last couple of motherboards I bought had RAID built in. I tried it on each, and neither configuration lasted a full day. For the typical non-server configuration, RAID not only isn't needed, it's generally a BAD idea and places much more strain on the drives if you're just using a couple of drives.
Rom wrote on 8/11/2003, 10:37 PM
I have the same mobo and have configured two 80 gig drives as a RAID 0 array to form one large drive of 160 gigs. With that configuration it shows up as one drive, not two. My OS sits on a separate drive and I often use that drive as the destination for my render output files - this way I don't perform a read followed by a write to the same RAID drive when rendering. This approach is probably worth something in performance, but there is always the risk inherent in a RAID 0 setup.
At this point in time I would rather use two large independent drives (one source and one destination) and wouldn't bother with RAID.
In your case you probably should use the standard IDE connectors for your hard drives, not the RAID hardware, to avoid weird boot-ups. You can then disable RAID in the BIOS. Right now it seems you have a confused RAID controller trying to figure out why you have unequal drives sitting there; the controller then figures things out and goes from there. Again, use RAID if you mean it; otherwise use the normal IDE connectors and disable the RAID controller.
I don't know if this helps you out in any way, but good luck...
farss wrote on 8/12/2003, 6:02 PM
I'd like to thank everyone for their input on this.

You've pretty much confirmed that what I've just done is the right thing.

RAID is dead, well at least in my setup.

I guess it started with a dream about uncompressed HD, but since I'll never be able to afford a way to output it - and by the time I can, I'll also be able to afford SCSI RAID - that idea is relegated to the lessons-learnt-the-hard-way basket.

As an engineer I should have known better; the one thing I didn't really consider is how much more RAID-0 stresses the drives. My next machine (after I pay for taking the wife to IBC) will be SATA - now that looks promising - but I'm going to wait until the dust settles a bit more.
bobojones wrote on 8/12/2003, 6:58 PM
Now I'm really confused.....just how much more stress and/or strain does RAID 0 put on a drive?

What is the effect of this stress/strain? e.g. does it reduce the drive's operational period?
farss wrote on 8/12/2003, 9:08 PM
The issue with RAID-0 is that every access to the array means both disks are being accessed and both sets of heads moved. I don't imagine it's a big issue, but add to that the reduced data security and the fact that Win2K seems to lose the plot (so I'm told), and it doesn't add up to a good way to go.

Where the stress thing could become significant is if the drives are mounted too close together or don't have adequate cooling. I wouldn't take that alone as reason enough to ditch RAID-0 though.

What really annoys me is that drives today just don't seem to be as reliable as they used to be. Between me and my son we've had quite a few die, yet I've got a PII machine with a 20 GB drive in it that has run for maybe 8 years without a hiccup. Sure, it's not exactly stressed, but years ago I had a Netware server with two 200 MB SCSI drives in it. One day the fan died and the only way I found out was the burning smell - you could have easily fried eggs on them, yet they never dropped a byte and continued to run for years after.
bobojones wrote on 8/13/2003, 12:46 PM
I understand that Raid-0 reduces system reliability and may not offer substantial performance advantages for video editing.

What I don't understand is BB's claim that it "places much more strain on the drives if you're just using a couple drives"

What does he mean by 'strained' drives? Can this strain be quantified? What is its effect on drive lifetime?

I don't want to do anything that strains my drives or my processor.