I just ran the Vegas Render test (the Sundance site) on my new system (Athlon 64 3200+, 1 gig of RAM), and I got a time of 1:33. I was astonished considering my old PC took 3 times as long. I would highly recommend the 64 if you want quick render speeds!
I'm just building my own 'cheap'n'cheerful' Athlon 64 system at the moment - 3000+, 512MB ram.
Just finished putting on the Zalman 7000 heatsink (lovely!) and will be able to power up once the PSU arrives, hopefully tomorrow.
Will run the rendertest and post back. Good to hear that these A64 chips are giving nice results on the test.
I have an Athlon 64 3200+ as well; the render test ran in 1:39 I believe (the system is rigged for gaming, not video). This was about 3 times faster than my old system as well. From what others have posted, the Intel P4/3.2GHz renders several seconds faster than the AMD, and the EE version of it is supposed to be lightning-fast.
So, if optimizing render time is your goal then the Intel P4 is better. But you can't complain about the AMD making things three times faster! Best of luck to you with your new systems.
Existing system:
Hard drive: Seagate 120GB 7200.2 IDE
Graphics card: Gainward GeForce 3 with Zalman ZM80 heatsink
SoundCard: M-audio Dio2448 (need to update this badly!)
OS: Windows XP
DVD drive: Optorite 203 4x +/- re-writer
Lucent based PCI FireWire card
Lian-Li PC60 case
Vegas version 4.0d
I don't think the drive has been de-fragged for ages, and the XP installation is fairly cluttered, but I don't have more than a couple of system tray items (just volume control and the nVidia control panel).
I went through the test with various overclocked settings, although I don't feel like pushing the system too hard just yet in case I damage it!
Ran the test twice each time, once with 'normal' thread priority and then again with Vegas set to 'above normal' thread priority.
I used the latest version of Sandra to check clock settings - it also gives an estimated PR rating.
I tried running the memory based on 333 settings to see what influence this had. My memory is rated at DDR400, which is the main reason I haven't yet tried the CPU at 2.21GHz with the mem set to 400, as it would be pushed to 440 and I think that's a bit risky... I'd estimate it would take another 1-2 seconds off the time though.
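For anyone wanting to sanity-check that arithmetic, here's a rough sketch. The 10x CPU multiplier is my assumption for this chip, and DDR transfers data twice per memory clock, so pushing the bus drags the memory with it:

```python
def ddr_speed(cpu_mhz, multiplier=10):
    # FSB = CPU clock / multiplier; DDR transfers data twice per clock
    fsb = cpu_mhz // multiplier
    return 2 * fsb

print(ddr_speed(2000))  # 400 -> memory at its rated DDR400
print(ddr_speed(2200))  # 440 -> DDR400 sticks pushed ~10% past spec
```

That ~10% over spec is exactly the kind of margin that makes stability a gamble, which is why the poster hadn't tried it yet.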
key:
x.xxghz -speed of chip
xxxbus - front side bus setting
xxxmem - ddr memory speed
prxxxx - Sandra PR estimate
(pr3000) (the default system speed/settings)
2.00ghz 200bus 400mem = 1:35 (priority normal)
2.00ghz 200bus 400mem = 1:34 (priority above normal)
Now the things I find interesting about these results:
Sandra's PR rating is obviously based on clock speed and not memory speed, as the pr3246 and pr3308 results have the same render times.
This definitely highlights the importance of memory speed relative to system performance.
I haven't done any long renders to check for stability, but the system currently idles between 36-39 degrees C (CPU) in a cold-to-warm room.
After playing 'Return to Castle Wolfenstein' for a couple of hours at 1280x1024 resolution, the CPU temp went up to 43 deg C, and this was when the machine was overclocked to 2.10GHz with mem at 420. The heatsink fan was at its lowest speed setting. None of the case fans were switched on.
It's not the perfect test of a system, but it didn't crash, so I guess that's a good sign!
Hopefully these figures are of some interest/use; sorry if this post is a bit long-winded.
PS it also runs very quietly thanks to the Zalman(s) and Nexus.
I don't understand all of the above, especially the tests.
Can someone please tell me whether having a system with two processors, presumably Athlon 64, will significantly speed up rendering & the DVD encoding?
Also... is the Opteron 64-bit processor especially geared towards video editing?
>> I don't understand all of the above, especially the tests
ibliss was testing the speed of his new AMD 64 system, while at the
same time he was adjusting the speed of the memory bus to see how
much faster he could make it. The only problem with using a faster
memory bus is that your system may not be entirely stable if the
bus speed is too high.
>> Can someone please tell me whether having a system with two processors,
>> presumably Athlon 64... will significantly speed up rendering & the DVD encoding
Currently Vegas can use a 2nd processor, but I have no idea how much of a difference
it makes to rendering times. (If you take a look at the Vegas preferences, under
General you will see a check-box marked "Disable Multi-processor AVI Rendering".)
>> is the Opteron 64-bit processor especially geared towards video editing?
Nope. The AMD 64 processor does almost everything fast, and will only get faster
once we have real 64 bit applications to run on it. Right now the AMD 64 is a big speed
champ at running 3D games. Here are a few reviews to check out:
You may notice that the Pentium 4 is faster at DivX video encoding, and my guess is
that the encoding software being used was not optimized for the AMD 64 CPU.
Has anyone tried using the AMD Opteron with Vegas? I'm building a new editing comp for my new job, and the Opteron has better specs than the AMD 64. I'm gambling that in about a year the 64-bit will pay off with 64-bit MPEG encoders, OS, etc.
I can't find benchmark comparisons between the two chips.
Hmm, something just struck me:
Spot's Rendertest is slightly stacked against Pentiums since it doesn't take advantage of hyperthreading. Because the render takes so long compared to the amount of footage (18:1 on the fastest machines), the "second" (fake) processor of a hyperthreading CPU doesn't get much of a workout. If hyperthreading were fully utilized, the performance boost (HT on versus off) would be around 15% compared to the 2-3% difference now. In real-world situations, the boost would probably be low (maybe in the 5% range) if you only render complicated sequences and get real-time on everything else. Vegas is pretty fast at color correction, and hyperthreading would give more of a boost there; I think it's somewhere around a 9% boost when just applying one layer of color correction.
Anyways, the Pentium should be a few percent faster than rendertest.veg indicates when using real world projects. It depends on the project though.
It may also turn out that less complicated renders (where the render time to footage ratio is more like 3:1, not 18:1) will have different results for reasons other than hyperthreading.
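To put rough numbers on that reasoning, here's a toy model. The 15% ceiling comes from the estimate above; the fractions of HT-friendly work are illustrative guesses, not measurements:

```python
def overall_boost(ht_fraction, ht_ceiling=0.15):
    # Blended speedup when only part of a render can keep the second
    # logical CPU busy; the rest sees no benefit from hyperthreading
    return ht_fraction * ht_ceiling

# rendertest.veg: heavy effects, little parallel-friendly work -> ~2-3%
print(round(overall_boost(0.15), 4))  # 0.0225
# a lighter real-world project -> roughly the 5% range
print(round(overall_boost(0.35), 4))  # 0.0525
```

The exact fractions don't matter much; the point is that the blended boost tracks how much of the render can actually use the second logical CPU.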
As far as buying a processor goes, you probably want a Pentium; they're generally faster at video tasks. AMD64 processors don't overclock that well right now since the AGP/PCI bus is unlocked on most motherboards, so you run into instability problems really quickly. Pentium processors overclock very well. The 2.4C can hit 3.4-3.6GHz on great air cooling (3.6GHz is probably a bit too ambitious, but people on the net report it's possible and their systems are stable at that speed).
Someone was asking about using an Opteron. I had a chance to try this on my work machine, which is a dual Opteron 246 (2GHz) system on a Tyan K8W motherboard, with 2 gigs of RAM, each processor connected directly to 1 gig.
I built this machine to compile code fast, and indeed it accomplishes this extremely well. I'm about twice as fast as any other developer at our site at building our product (they have dual 2.4GHz Xeons), and I installed Windows 2003 Adv. Server so that one other developer could also use the machine for his builds, so I'm a bit hampered even in this comparison.
Anyway, to what you guys care about: RENDERTEST. The answer - 1:30. This is using Best and NTSC (what rendertest has in it by default).
Does a second CPU make a difference in this test? NO.
In general Vegas does not multi-thread renders so dual procs / hyperthreading don't help.
Oh, speaking of this "wonderful" hyperthreading: the Xeons my work buddies have (above) support it, so in essence they can have 4 virtual processors. The time it took to build our product actually went up with hyperthreading enabled.
We build using Microsoft's build utility which allows it to take advantage of as many processors as you have. This is the same utility they use internally to build their windows family of products.
On the xeons, we tried hyperthreading on and off for 2, 3, and 4 separate processes. The best was hyperthreading off, 2 processes.
For any kind of work that is either memory intensive or computation intensive (I'm being really general here), hyperthreading works against you because you have twice the number of threads starving for the same resources inside the processor. And especially when it comes to the arithmetic units inside the P4, they are already very starved; that's why higher core speeds are necessary to compete.
Anyway, sure, hyperthreading has a place, but an app has to be designed to do very different types of calculations in each thread (one doing floating point, the other integer) and avoid large amounts of memory access in both threads to see much improvement. Perhaps this is what is happening for the color correction you mention. If so, that's awesome!
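A crude way to picture the starvation argument is a toy model where throughput is capped by execution units rather than thread count (this deliberately ignores scheduling overhead, which is what made the HT-enabled builds above actually slower):

```python
def run_time(work_units, threads, exec_units=1):
    # Extra hyperthreads just queue for the same hardware, so
    # throughput is limited by execution units, not thread count
    return work_units / min(threads, exec_units)

print(run_time(100, 1))                # 100.0 - one thread, one FP unit
print(run_time(100, 2))                # 100.0 - two HT threads, same unit
print(run_time(100, 2, exec_units=2))  # 50.0  - a true second processor
```

Two threads fighting over one FP unit get you nothing, while a real second processor halves the time, which matches the dual-Opteron vs. hyperthreaded-Xeon experience described above.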
I'm encouraged to see the Athlon64 3200+ turn in close numbers. I would place my bet on the newer 939-pin Athlon64 when they come out since they will have the fastest memory bandwidth, being dual channel DDR and not hampered by ECC registered memory as my Opteron or the Athlon FX is.
Hopefully I haven't shaken the Pentium hornets' nest too much here.
Vegas uses the 2nd (fake) processor to (decode)/encode DV and for processing audio. In a lot of renders the 2nd processor doesn't get much of a workout, so you don't see much benefit from hyperthreading. It really depends though.
2- I never thought about testing how hyperthreading affects various filters.
3- Some sites tested hyperthreading on vs. off. Performance differences range from -10% to 50%, but average somewhere around 15%-25%.
4- I feel the best test would be something that approximates real-world rendering. Benchmark a variety of real-world situations, like:
keying
color correction (curves + color corrector + saturation adjust)
making material broadcast safe
film look (unfortunately many combinations of this)
DVD encoding (lots of benchmarks for this- Pentiums are faster)
etc.
The footage used may also affect render speed. I never bothered testing that though.
Wouldn't that be like someone with an AMD 64 saying that it isn't optimized for them and that a new rendertest should be made that would make them look faster (i.e. Vegas 64-bit optimized, Windows 64-bit, and encoding to DivX with 64-bit Dr. DivX)?
But, I made one up that's 10 seconds, modifying the original. The first 5 seconds is the original run; the second 5 seconds is with the "real world" effects applied.
The original took me 4:05 with default NTSC DV settings.
The second 5 seconds took 5:06 with default NTSC DV.
Total time of 9:06 (rendering the whole 10s).
I have an AMD XP 1800 and 512MB DDR. I rendered to my desktop (but that didn't slow it down... it rendered slow enough).
I uploaded it to the Sundance site, so if anyone else wants to try it, feel free (rendertest_2.veg under tools/aids).
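For comparison with the 18:1 render-time-to-footage ratio mentioned earlier in the thread, the two halves of that test work out like this (the mm:ss times and 5-second footage lengths are from the post above):

```python
def ratio(render_mmss, footage_seconds):
    # Convert a "m:ss" render time to seconds, then divide by the
    # length of the footage rendered to get the ratio
    m, s = render_mmss.split(":")
    return (int(m) * 60 + int(s)) / footage_seconds

print(ratio("4:05", 5))  # 49.0 -> 49:1 for the original 5 seconds
print(ratio("5:06", 5))  # 61.2 -> ~61:1 with the extra effects
```

On this Athlon XP 1800 the test is far more intensive than rendertest.veg on the fast machines, so per the hyperthreading argument above it would likewise show little HT benefit.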
>> Wouldn't that be like someone with an AMD 64 saying that it isn't optimized for them and that a new rendertest should be made that would make them look faster (i.e. Vegas 64-bit optimized, Windows 64-bit, and encoding to DivX with 64-bit Dr. DivX)?
I'm saying that rendertest.veg is unfairly stacked towards AMD processors. In more real world situations, I'm guessing Pentiums should do better. Perhaps we can agree on real world test(s), and then compare AMD64 and Pentium setups. It will have much more real world relevance. We could pick footage that's in the Vegas sample projects, and distribute the Veg file on the Sundance site.
I believe the test normally runs faster on Intel chips though. How is it stacked against Intel? The AMD 64 wasn't even around when the test was made (VV3); there was the Athlon XP and the Pentium 4. If you have proof that the AMD 64 runs it better than the P4, it's news to everyone here! :) I believe even Spot (who made rendertest.veg) said Intels will run it better, and AMDs go faster in some areas while Intel goes faster in others (it was a discussion similar to what you're saying, but the guy was saying that his AMD can render still pics faster than an Intel, which was true in his situation).
Getting a happy medium will be impossible. If, for example, Vegas 5 included OpenGL render acceleration, some would say that the test is geared towards people with gaming video cards and against "pros" who don't use those cards (how many video editors buy a $500 ATI card with DirectX 9 & the latest OpenGL support to use with Vegas?).
I think the current render test is the best way. It's been around for a little while, SoFo used it as a benchmark (probably Sony will too), and until Vegas goes in a completely different direction (code wise) there's no need to change it. Then we've got a big problem if Vegas gets hardware support, 64-bit support, etc. etc. ... Then MPEG-2 might not be the best way to test. HD-WMV would be better, but if you don't ever render to HD-WMV it won't matter. But everyone renders to DV AVI.
Plus, what's a "real world situation"? I use Vegas for everything from cleaning up audio (the rendertest doesn't give me a clue how fast it can render audio on AMD & Intel) and simple slice & dice, all the way to chroma keying and motion blurs. But someone doing a wedding video will be in a completely different situation than someone making a movie, a music video, a photo collage, etc.
Oh, I'm STILL interested in how the rendertest works on an Opteron. They support up to 8 processors now. :) Plus Windows 64-bit for AMD is on the horizon and I'm drooling (my next big upgrade, yeah!)
The main problem I see with "rendertest.veg" is that it's not a real world test. It may not be indicative of real world performance.
Because the test is so intensive (render time to footage ratio), it doesn't see much of an advantage from hyperthreading. In real world situations, you usually are going to see more of an advantage from hyperthreading. As such, Intel processors are slightly worse than they should be. AMDs don't have hyperthreading, so they look better in comparison.
There may also be other things that are different from real world renders and rendertest.veg. For example, you are rarely going to use the combinations of filters and generators in rendertest.veg. Applying any filter to NTSC bars and tone doesn't happen often in real world situations. Things like this may cause rendertest.veg results to be an inaccurate reflection of real world performance.
As for real-world situations, there are many. One good test I'd suggest is for color correction/grading. Use real-world settings on the color curves, 3-way CC, and saturation adjust filters. Color curves are the best way to boost contrast. Saturation adjust can boost the saturation of dark areas more than lighter areas, which looks great and is more film-like. 3-way CC is often used to fix white balance, although you might want to use it to shift white balance towards warmer or cooler colors. Color curves can be more powerful than the 3-way CC, so maybe the test should just be color curves + saturation adjust.
There are many other varying real-world situations, which you have mentioned (simple slice & dice, all the way to chroma keying and motion blurs). There is the problem of how many real-world situations to test. One approach is to make a "one size fits all" test. I think we could make a test that is overall better than rendertest.veg, which is very contrived in my opinion. A different approach is to make multiple tests, or maybe test each filter individually. There are compromises in the various approaches, but I think real-world approaches are better than the one rendertest.veg takes. I'm not trying to pick on rendertest.veg; it's just that there are ways to improve it.
Did you download the rendertest_2 I uploaded to the Sundance site? I got the approval e-mail this morning saying it was submitted. It has the CC, chroma, and film effects.
I don't think HT would really show a benefit unless you had audio to process, which the 2nd thread normally handles. I don't think Vegas uses HT the way you're thinking (to process lots of effects).
I also tried using some different footage/generated media (ie clouds animated, etc) and the render times went down a little.
Also, have you checked out the stuff used in the render test? A gaussian blur was added, and 2 layer masks (the bottom 2 and upper 2 layers) and track motion were applied. I think these are more real world than excessive CC & chroma key (if the video is shot "correctly", lots of CC isn't necessary, just maybe a little).
This is with the same A64 3000+ system I detailed above, overclocked from 2.0GHz to 2.15GHz (215 FSB, 430 mem), which is the speed I always run the system at now, and it's been rock steady.