OT: several single drives? or a RAID?

FrigidNDEditing wrote on 5/31/2006, 10:34 AM
OK, I know the benefits of a RAID array, but here's what I'm asking: if you're running several video feeds at once, do you guys think it's better to have each one on a separate drive, or to have them all running off of an array? I don't do a ton of multicam, and I don't do a ton of high-bitrate footage currently either, but with HD in a higher-bitrate codec it may be useful to me. I'm in the midst of deciding; any suggestions?

Dave

Comments

Chienworks wrote on 5/31/2006, 10:49 AM
The benefits of RAID are security/redundancy, size, and speed. In your case the size benefit can be achieved equally well either with a RAID or with several drives. I don't believe security is an issue in your situation, or you would be suggesting several smaller RAIDs instead of one larger one. That leaves speed. RAID increases speed by striping data across several drives, so each drive only has to carry part of the load. However, once you are accessing more than one file at a time, there is no guarantee that the next part needed won't come from the same drive. That could partially or completely nullify the speed advantage of RAID. In that case, multiple individual drives would perform better.

Most of the benefits of RAID are obviated by today's huge, high-speed drives. Probably the only real benefit left is redundancy.
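To make that "same drive" point concrete, here is a toy Python sketch. It is an illustration only, not a benchmark: real controllers, caches, and queue depths are ignored. It just models RAID 0's round-robin block placement and counts how often two lockstep streams land on the same spindle.

```python
# Toy model of the "same drive" problem: RAID 0 places consecutive
# stripe-sized blocks round-robin across the drives, so whether two
# concurrent streams collide depends entirely on their relative offsets.
# Illustration only -- controllers, caches, and queueing are ignored.

def drive_for_block(block_index: int, num_drives: int) -> int:
    """RAID 0 round-robin: block k lives on drive k mod num_drives."""
    return block_index % num_drives

def collision_rate(num_drives: int, start_a: int, start_b: int, reads: int) -> float:
    """Fraction of lockstep reads where both streams need the same drive."""
    collisions = sum(
        drive_for_block(start_a + i, num_drives) == drive_for_block(start_b + i, num_drives)
        for i in range(reads)
    )
    return collisions / reads

# On a 2-drive stripe, two files whose start blocks share parity collide
# on every read; shift one file by a single block and they never collide.
print(collision_rate(num_drives=2, start_a=0, start_b=4, reads=1000))  # 1.0
print(collision_rate(num_drives=2, start_a=0, start_b=5, reads=1000))  # 0.0
```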
Yoyodyne wrote on 5/31/2006, 10:52 AM
Good question. I think it would be simpler to have them running off of a RAID, and I'd bet the CPU load would be lower. I guess it depends on the drives, how they're hooked up to the mobo, and what type of file they're playing back.

I'm running a four-Raptor RAID for media that has plenty of throughput, a little over 100 MB/s. Right now my biggest preview bottleneck is Vegas itself, unfortunately.
Jayster wrote on 5/31/2006, 11:48 AM
I ran some tests on my system. My 2-drive SATA RAID 0 reads about 110 MB/s and my single SATA drive does about 50 MB/s, though these tests probably only read one large file. For seeks, the RAID 0 and the single disks have almost exactly the same seek time in ms. I suppose the ideal two-file situation would be two RAID 0 pairs (i.e. four disks).

The CPU is such a bottleneck, though, that I suppose it's a question of tradeoffs (i.e. the cost and increased chance of disk failure with RAID 0 vs. the lower speed of individual disks). There are lots of threads on the general topic of I/O speed. It's a much bigger deal for uncompressed HD.
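For anyone who wants to reproduce that kind of number, a minimal sequential-read benchmark might look like the sketch below. The path is a placeholder; point it at a large file (several GB) on the drive or array under test, and make sure the file isn't already in the OS file cache, or the result will be inflated.

```python
# Minimal sequential-read benchmark along the lines of Jayster's numbers.
# Assumption: 'D:/testfiles/big_clip.avi' is a placeholder path, not a
# real file -- substitute a large, uncached file on the drive under test.
import time

def sequential_read_mb_per_s(path: str, chunk_size: int = 8 * 1024 * 1024) -> float:
    """Read the whole file in large chunks and return throughput in MB/s."""
    total_bytes = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            total_bytes += len(chunk)
    elapsed = time.perf_counter() - start
    return (total_bytes / (1024 * 1024)) / elapsed

print(f"{sequential_read_mb_per_s('D:/testfiles/big_clip.avi'):.1f} MB/s")
```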
farss wrote on 5/31/2006, 5:36 PM
A pretty vexed question really.
It depends on how you implement the RAID. I'm assuming we're talking RAID 0; in that case the MTBF is halved, and that might scare you off by itself.
If you use mobo-based RAID then, from what I know, that of itself can impose a CPU hit. Worse still, if the drivers/code aren't up to spec you can lose the lot; that happened to me several times before I gave up on RAID 0 on one system.

However, with a dedicated RAID controller things seem much sweeter. I'm running SATA RAID 0 over two 200 GB drives off a HighPoint controller, and so far it's fast and I haven't lost a bit of data.
It's still not fast enough, though: the CPUs never get over 75%, so the drives appear to be the bottleneck. SCSI RAID would be the next logical step up, but that's a HUGE increase in cost. Adding more drives to the RAID 0 array might give me the speedup to handle HDCAM; some say it'll work.
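As a back-of-envelope check on whether an array can keep up with a given codec, something like the sketch below works. The 140 Mbit/s stream rate and 50 MB/s per-drive sustained rate are assumed placeholder figures, not measurements; substitute your own.

```python
# Back-of-envelope: how many striped drives to sustain a given stream?
# The 140 Mbit/s and 50 MB/s figures below are illustrative assumptions.
import math

def drives_needed(stream_mbit_s: float, per_drive_mb_s: float, streams: int = 1) -> int:
    required_mb_s = streams * stream_mbit_s / 8   # Mbit/s -> MB/s
    return math.ceil(required_mb_s / per_drive_mb_s)

print(drives_needed(stream_mbit_s=140, per_drive_mb_s=50))             # 1
print(drives_needed(stream_mbit_s=140, per_drive_mb_s=50, streams=4))  # 2
```

Seek-heavy access (like multicam) cuts effective throughput well below the sequential figures, so treat the answer as a floor on drive count, not a guarantee.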

Bob.
epirb wrote on 5/31/2006, 5:44 PM
Can anyone chime in on the possible pluses and minuses of using an external (or possibly internal) 4-disk SATA RAID (as discussed in another thread), but in a RAID 5 config?
I'm considering this primarily for HDV CFDI use.
The combination of striping and parity seems to make sense, at least theoretically.
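For what it's worth, the parity math itself is just XOR: the parity block is the XOR of the data blocks in a stripe, so any single lost block can be rebuilt from the survivors. A miniature illustration (toy code only; it says nothing about controller overhead, which is where the CPU-hit question below comes in):

```python
# RAID 5 parity in miniature: parity = XOR of the data blocks in a stripe,
# and any one lost block is recovered by XOR-ing the surviving blocks.

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR together equal-length byte blocks."""
    result = bytearray(blocks[0])
    for block in blocks[1:]:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"   # data blocks on three drives
parity = xor_blocks(d0, d1, d2)          # parity block on the fourth drive

# The drive holding d1 dies; rebuild its block from the survivors + parity.
rebuilt = xor_blocks(d0, d2, parity)
assert rebuilt == d1
```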
FrigidNDEditing wrote on 5/31/2006, 9:38 PM
OK, well, here's the deal: I'm going to be dropping about $2500 or so into a new machine come late July/August; I'm going dual dual-core with whatever server procs are available at that time. What I would like to do (ideally) for the RAID setup would be a RAID 1+0, or 10 (however you write/say it), and thus have the redundancy of RAID 1 with the striped speed of RAID 0. Mind you, if the speed for multicam will be higher with two pairs of RAID 1s not striped, that may in fact be how I go. Either way, processor bottlenecking will not be as high as with some machines.

Thanks for all the input guys.

Dave
GlennChan wrote on 5/31/2006, 10:08 PM
It's still not fast enough, though: the CPUs never get over 75%, so the drives appear to be the bottleneck. SCSI RAID would be the next logical step up, but that's a HUGE increase in cost.
It might be that the work can't be split onto two CPUs/cores...?

I know with old versions of Vegas, even if you had 1 core the processor wouldn't always "fill up".

If you're really bored, you can try setting up a RAM disk to see if hard drive speed is bottlenecking you. There are various RAM disk programs out there that let you use RAM as a hard drive.
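A rough stand-in, if you'd rather not install anything: read the same file twice and compare times, as in the sketch below. The second pass is usually served from the OS file cache, i.e. effectively from RAM, so a big gap between the two suggests the drive (not the rest of the pipeline) is the bottleneck. The path is a placeholder; the file must not already be cached for pass one, and it must fit in RAM for pass two to be fully cached.

```python
# Cold read vs. cached read as a crude proxy for a RAM disk test.
# Assumption: 'D:/testfiles/clip.avi' is a placeholder path.
import time

def timed_read(path: str) -> float:
    """Time a full sequential read of the file, in seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(8 * 1024 * 1024):
            pass
    return time.perf_counter() - start

path = "D:/testfiles/clip.avi"
print(f"cold read:   {timed_read(path):.2f} s")   # served from disk
print(f"cached read: {timed_read(path):.2f} s")   # usually served from RAM
```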

OK, well, here's the deal: I'm going to be dropping about $2500 or so into a new machine come late July/August; I'm going dual dual-core with whatever server procs are available at that time. What I would like to do (ideally) for the RAID setup would be a RAID 1+0, or 10 (however you write/say it), and thus have the redundancy of RAID 1 with the striped speed of RAID 0. Mind you, if the speed for multicam will be higher with two pairs of RAID 1s not striped, that may in fact be how I go.
The speed for multicam should be higher with two pairs of RAID 1s. When you have two things accessing a hard drive at once, the drive slows down because the heads are trying to grab two different sets of data, and moving the heads (seek time is the measurement) is very slow.
I could be wrong, but most tests show something like this. To be absolutely sure, you could of course run a test.
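One crude way to run that test: time two large files read back-to-back, then again alternating between them in small chunks (which forces the heads to jump between the two). On a single drive the interleaved pass is typically much slower; split the files across two drives and the gap should mostly close. The paths below are placeholders, and the cache caveat above applies here too (use fresh files or files larger than RAM).

```python
# Back-to-back vs. interleaved reads of two files: a rough measure of the
# seek penalty when two streams share one drive. Paths are placeholders.
import time

CHUNK = 1 * 1024 * 1024  # 1 MB per read

def read_back_to_back(path_a: str, path_b: str) -> float:
    """Read file A completely, then file B; return total seconds."""
    start = time.perf_counter()
    for path in (path_a, path_b):
        with open(path, "rb") as f:
            while f.read(CHUNK):
                pass
    return time.perf_counter() - start

def read_interleaved(path_a: str, path_b: str) -> float:
    """Alternate 1 MB reads between the two files; return total seconds."""
    start = time.perf_counter()
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        done_a = done_b = False
        while not (done_a and done_b):
            done_a = done_a or not fa.read(CHUNK)
            done_b = done_b or not fb.read(CHUNK)
    return time.perf_counter() - start

print(f"back-to-back: {read_back_to_back('D:/cam1.avi', 'D:/cam2.avi'):.2f} s")
print(f"interleaved:  {read_interleaved('D:/cam1.avi', 'D:/cam2.avi'):.2f} s")
```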
farss wrote on 5/31/2006, 11:42 PM
Glenn,
I've got 4 cores, and all show roughly the same load encoding MPEG-2, never over 75%. One day when I have some downtime I'll really try to get to the bottom of this.
Perhaps what I should do is try encoding from different sources with higher data rates; that may give some idea of what's going on.
It's not that big an issue, as the system is mostly left to run doing large batches of encodes, but it'd still be nice to squeeze every possible bit of performance out of it, particularly given that it cost more than my car!

Bob.
FrigidNDEditing wrote on 6/1/2006, 9:25 AM
I think my main problem is not being certain of my workload. I do some multicam, but if I start transcoding HDV into a higher-bitrate codec I might want that extra read speed. Thanks for the input, guys.

Dave
Jayster wrote on 6/1/2006, 3:04 PM
If you use mobo-based RAID then, from what I know, that of itself can impose a CPU hit. Worse still, if the drivers/code aren't up to spec you can lose the lot; that happened to me several times before I gave up on RAID 0 on one system.

I think almost all RAID controllers impose a CPU hit. According to a review on TomsHardware.com, the HighPoint RAID controllers are software-based, with the CPU doing all the parity calculations for the RAID array. This might be just as true (I don't know) for mobo controllers. In fact, very few RAID controllers are purely hardware-based solutions. The review cites AMCC, Areca, and LSI Logic as hardware-based, and they are quite expensive. I know of one vendor, Netcell, that makes a very inexpensive SATA RAID controller card that is hardware-only, but it currently uses the slower PCI slot as its interface.

Perhaps the surest way to get a purely hardware (no CPU hit) RAID 0 or 1 solution is to buy an external enclosure with its own RAID hardware and an eSATA connection. A company called Thecus makes a 2-disk RAID 0/1 eSATA external enclosure which sells for $179 at NewEgg.com. I would assume there are (or will be) lots of competitor products that do this too.


Check the following hardware reviews for more detail: