OT: Access multiple drives on fileserver

DavidMcKnight wrote on 5/25/2006, 11:56 AM
I'm reposting this as a new topic, as my question may have gotten lost in the old thread.
I have two Vegas edit stations, each working off of their own drive within a fileserver.
The fileserver is a 2.4 GHz P4, 512 MB RAM.
XP Pro
Vegas on C drive
4 removable drive bays, each on its own channel using two add-in IDE cards.
Each project gets its own drive.
No SATA.
Renders write back to the same drive as the project is on.

Now, the problem. I thought for sure that having projects on separate drives, and those drives on separate channels, would let one editing station write to one drive without affecting playback or editing on the other station (which is using a different drive). But it does affect it. Is it because the single CPU is processing requests for both streaming reads and writes to the different drives?

Would any of these help with this problem:
- use a hyperthread or dual core CPU
- use a true Windows server OS (instead of XP Pro)
- use Linux/Samba?
- Replace IDE drives with SATA drives

Any thoughts would be appreciated.
David

Comments

ScottW wrote on 5/25/2006, 12:27 PM
How are your systems networked together?
DavidMcKnight wrote on 5/25/2006, 12:49 PM
100 Mbit switch, wired Ethernet.

Individual performance on either editing station is great; even editing and playback at the same time is fine. It's only when one station is writing to (for example) drive F on the fileserver that the other edit station gets sputters and glitches when editing and reading from drive H.
ScottW wrote on 5/25/2006, 1:21 PM
Well, my suggestion would be to start by moving to GB ethernet - it's the cheapest thing to try. Even if your machines don't have a GB ethernet port, getting a few PCI cards along with the GB switch is probably going to be cheaper than moving to a hyperthreaded or dual core CPU.

All of the machines I have are networked with GB ethernet, and while I only edit from one machine across the network, frequently the other machines are doing file copies, encoding or some such, and it's never bothered my editing machine. Though admittedly the boxes I have doing file serving are hyperthreaded, usually with RAID 0 disks (one exception is my new server, which is RAID 1 and an AMD chip, but it's more of a safe storage & short-term archive place than something I edit from).

--Scott
apit34356 wrote on 5/25/2006, 1:29 PM
David, you need to add a lot more memory. Remember, rewriting large files requires a lot of system resources, and ethernet connections consume resources too.
apit34356 wrote on 5/25/2006, 1:37 PM
Also, David, make sure file indexing is turned off for the drives. Your logic about drives and files is on the right track, but each additional drive requires additional memory resources. So if you have 6 files on a two-drive system vs. 6 files (one on each drive) on a six-drive system, the six-drive system will require more memory; if memory is tight, performance will take a hit.
OdieInAz wrote on 5/25/2006, 3:58 PM
I think the cheapest thing will be to do some short tests of copying files around and look for the bottleneck.

You could have bottlenecks in the Ethernet. One PC reading from the server while the other PC is writing to the server probably causes more “collisions” on the Ethernet than both PCs reading at the same time. When you have an Ethernet collision, I believe the response is to wait a random time and then retry, which has the overall effect of slowing down network traffic.
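Roughly, here's a sketch of that backoff behaviour; the slot time and window sizes below are the textbook half-duplex Ethernet values, not anything measured on this network:

    import random

    SLOT_TIME_US = 5.12  # 512 bit times at 100 Mbit/s, the classic half-duplex slot time

    def backoff_slots(attempt):
        """Truncated binary exponential backoff: after the nth collision in a row,
        wait a random number of slot times in the range [0, 2**min(n, 10) - 1]."""
        return random.randrange(2 ** min(attempt, 10))

    # The delay window doubles with each repeated collision, which is why a busy
    # half-duplex segment "feels" much slower than its nominal 100 Mbit/s rate.
    for n in range(1, 6):
        sampled_us = backoff_slots(n) * SLOT_TIME_US
        worst_us = (2 ** min(n, 10) - 1) * SLOT_TIME_US
        print(f"collision #{n}: sampled backoff {sampled_us:.1f} us (worst case {worst_us:.1f} us)")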

Here is a test I propose to check out your network. Use large files, something that takes 30 seconds to a minute to copy. Turn on Task Manager and watch the performance while running the tests. Here's what I suggest might give insight into the problem:

1. Copy a large file from the server to PC-A, then copy it from PC-A back to the server, using the same drives in both instances. The times should be similar. Now you have exercised read/write in both directions. Do the copy from PC-A with Windows Explorer (or script the timing as in the sketch after this list).
2. Repeat with PC-B. You should get similar results.
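If you'd rather script the timing than sit with a stopwatch, a rough Python sketch along these lines would do it; the share and folder names are just placeholders, not your actual drive letters or shares:

    import os
    import shutil
    import time

    def timed_copy(src, dst):
        """Copy one file and report the effective throughput in MB/s."""
        start = time.time()
        shutil.copyfile(src, dst)
        elapsed = time.time() - start
        size_mb = os.path.getsize(dst) / (1024 * 1024)
        print(f"{src} -> {dst}: {size_mb:.0f} MB in {elapsed:.1f} s = {size_mb / elapsed:.1f} MB/s")

    # Step 1: server -> PC-A, then the same file back, so both directions get exercised.
    timed_copy(r"\\fileserver\driveF\bigclip.avi", r"D:\testing\bigclip.avi")
    timed_copy(r"D:\testing\bigclip.avi", r"\\fileserver\driveF\bigclip_back.avi")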

Now let’s kick it up a notch.

1. Copy a big file from the server to PC-A and, at the same time, copy a 2nd big file from the server to PC-B. Note any change in copy time. It should be pretty good - this mimics the case of both PCs reading.

2. Now copy a big file from the server to PC-A and a 2nd big file from PC-B to the server. This should cause maximum collisions, and Ethernet MIGHT be the bottleneck; gigabit Ethernet might help. This mimics the case where one PC is reading and the other is writing (the threaded sketch after this list approximates it from a single box).

3. Uber collisions: copy a big file from PC-A to PC-B; at the same time, copy a 2nd big file from PC-B to the server; at the same time, copy a 3rd big file from the server to PC-A. Now you have three machines fighting over the same network.
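Here's a rough threaded variant of the earlier sketch that approximates test 2 from a single box; the real test still runs the two copies from PC-A and PC-B at the same time, and the paths are placeholders again:

    import threading

    # One thread pulls a big file from the server while another pushes a different
    # big file back to it, reusing timed_copy from the earlier sketch.
    pull = threading.Thread(target=timed_copy,
                            args=(r"\\fileserver\driveF\bigclip1.avi", r"D:\testing\bigclip1.avi"))
    push = threading.Thread(target=timed_copy,
                            args=(r"D:\testing\bigclip2.avi", r"\\fileserver\driveH\bigclip2.avi"))
    pull.start(); push.start()
    pull.join(); push.join()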

If this doesn't turn up anything, then the next suspect bottleneck might be in the server PC, perhaps the PCI bus, since that probably routes both the Ethernet and the disk traffic. The processor comes into play in all these scenarios, so maybe that is a bottleneck as well. But from what you describe, the problem sounds like collisions on the Ethernet when the server and a PC are fighting for the same resource.
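As a back-of-the-envelope check on the 100 Mbit link (the DV data rate is the standard ~3.6 MB/s figure; the 80% link efficiency is just a guess):

    link_mbits = 100                  # the 100 Mbit switch
    ceiling_mb_s = link_mbits / 8     # ~12.5 MB/s theoretical maximum
    usable_mb_s = ceiling_mb_s * 0.8  # assume ~80% after protocol overhead

    dv_stream_mb_s = 3.6              # one DV25 stream: video + audio + AVI wrapper

    # A bulk Explorer copy grabs whatever the wire allows, so it can easily eat
    # most of the ~10 MB/s and leave the other station's DV stream starving.
    print(f"usable link ~{usable_mb_s:.1f} MB/s, one DV stream {dv_stream_mb_s} MB/s, "
          f"leftover for a bulk copy ~{usable_mb_s - dv_stream_mb_s:.1f} MB/s")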
fldave wrote on 5/25/2006, 4:38 PM
Absolutely try the network tests first. I'm thinking of going to Gigabit soon.

But don't underestimate the power of two CPUs. My trusty dual 1 GHz PIII continues to be my non-video-editing machine of choice. It has 4 physical drives in it, and I was copying files back and forth to my video editing machine (including burning a DVD on the video machine from my PIII) while I had 4 Ameritrade IE Java windows open, plus 2 Firefox windows, one with 8 tabs and the other with 6, all updating frequently. And Outlook off and on.

I never knew the files were being copied. For smooth server operation, having more than one CPU really helps with load balancing.

I can't do what I described above with my wife's P4 2.2 GHz without it stalling.
farss wrote on 5/25/2006, 5:07 PM
I believe you can pull something like 8 D1 streams from a purpose-designed server.
The thing is, to avoid the problems with network contention you run multiple network cards in the server(s) to GBit switches. This gives each user a dedicated pipeline between their machine and the server.
Costs of doing this can escalate, but at nothing like the rate of Fibre Channel.
DavidMcKnight wrote on 5/25/2006, 6:00 PM
GREAT replies, thanks y'all. I'll try the network file copy tests tonight.

farss, is the multiple GBit card thing as simple as installing two GBit PCI cards in the fileserver and running Cat5 from both to the switch? Is XP Pro OK for that?
farss wrote on 5/25/2006, 7:16 PM
As far as I know yes.
However, there might be a need for specialised drivers that ensure the load gets balanced between them.
Typical configs I've seen have, say, 4 ports from the server run to 4 ports on a switch.
If you're in Australia, XDT in Melbourne sell a range of SuperMicro boxes configured just for doing this. That's what paying a bit of a premium for a system gets you: the expertise of guys who do it all day.
If you're not in Oz then I'd guess you'll find a SuperMicro agent/system integrator close by.

One tidbit of advice I picked up from my integrator was that typical office 1Gb network gear is mostly perfectly adequate for small video workgroups if you're only doing SD; HD is where things get real expensive real quick. So any file server designed to feed bulk data efficiently over multiple ports should be fine. What you don't need to bust the bank on, as others have hinted, are systems designed as application servers, unless you're doing renders on the server. Application servers need lots of cores and RAM because of the amount of code they run; file servers don't need much CPU or RAM, they need fast buses and fast disk subsystems.
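A rough sketch of the arithmetic behind that advice; the stream rates are the usual textbook figures and the 80% usable-link number is just an assumption:

    gige_usable_mb_s = 1000 / 8 * 0.8    # ~100 MB/s usable on gigabit Ethernet

    stream_rates_mb_s = {
        "DV25 (SD)":                   3.6,    # the usual ~3.6 MB/s figure
        "uncompressed SD 8-bit 4:2:2": 21.0,   # 720x486 x 2 bytes x 29.97 fps
        "uncompressed HD 1080 8-bit": 124.0,   # 1920x1080 x 2 bytes x 29.97 fps
    }

    for name, rate in stream_rates_mb_s.items():
        print(f"{name}: roughly {gige_usable_mb_s / rate:.1f} simultaneous streams per GigE link")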

Bob.