Rendertest REQUEST for VP 11

nedski wrote on 9/16/2011, 5:31 PM
Hey John Cline. I have a request for the next update of your famous Rendertest.
Could you possibly include a way to test for Vegas Pro 11 GPU acceleration?

I'd also like to know how much different video cards will affect the preview.

Later this year I'll be building a new PC. I don't have the money to budget for a thousand dollar video card. I usually budget around $200 - $300 for the video card.

I know it's a lot to ask, :-)

Sony really should publish these numbers, but I don't think they have your passion for knowledge. ;-)


John_Cline wrote on 9/16/2011, 6:28 PM
There are probably some filters used in Rendertest2010 that will be GPU accelerated, so the results of the current version should be an indicator. I'll look into it when I get a copy of Vegas 11.

As far as GPU acceleration in general, it's usually a factor of how many CUDA cores and the clock frequency at which they are run. More expensive cards have more cores and/or run at higher frequencies. The drivers for the consumer cards are essentially the same across the entire line, but the drivers for the professional (expensive) cards may be better optimized for certain tasks.
R0cky wrote on 10/20/2011, 5:37 PM
I ran the 2010 rendertest on my quad core i7-920 with a newly installed nvidia GTX 460 with 1 GB.

46 seconds

Preview in best full was 5-10 fps.

Stringer wrote on 10/20/2011, 6:30 PM
46 seconds!

That is almost half of the record time in the database for 24-thread, multi-CPU Xeons.

What was your render template ?
R0cky wrote on 10/20/2011, 8:44 PM
HDV 1080-60i - the standard for the test. I think it was something like 4 minutes with the older video card on Vegas 10.

Get this, my cpu load was only about 35% while it was doing this. Go GPU.

Steve Mann wrote on 10/20/2011, 10:29 PM
I'll get around to updating the database soon, but for now, enter your GPU info in the comments column.

Password is "vegasuser"
WillemT wrote on 10/21/2011, 3:37 AM
Lowly Q6600, 6GB memory, GTX460 graphics card. 2010 Rendertest, with preview RAM set to 1024MB.

GPU disabled in preferences:
6:36min. 100% CPU and 4% GPU.

GPU enabled:
1:33min. 60% CPU and 66% GPU.


Alex D. wrote on 10/21/2011, 5:01 AM
GPU OFF - 4:45 min
GPU ON - 1:32 min

Windows 7 Ultimate, 64-bit (Service Pack 1)
Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83GHz
Quadro 4000 2GB
Nvidia drivers 285.58
megabit wrote on 10/21/2011, 5:34 AM
M6600 i7 Extreme laptop with Quadro 4000m card:

- GPU off: 180 sec
- GPU on: 83 sec

Alexander - I realize your 4000 is not the mobile version like mine, but still I wonder how it performs in playback? My 6-camera track in multicamera edit mode plays back slower with GPU than without it :( Would love to hear about yours.



dingus wrote on 10/21/2011, 6:02 AM
i7 980X with GTX 470 card:
-GPU off 189 sec
-GPU on: 40 seconds

w/GPU it's almost 5 times faster, holy cow... I'd say it was worth the upgrade!

@Hulk re: preview performance at the Best (Full) setting;

GPU off: lowest- 0.555 fps, highest- 8.604. It was very choppy and unusable at this setting and impossible to give an average frame rate.

GPU on: lowest- 10fps, highest- 29.97. The average was around 15 fps. The average after the 12 second mark was typically around 12 fps. Frame rate varied with each pass. Overall it was fairly smooth, obviously not perfect, but definitely usable and a noticeable upgrade in performance.
paul_w wrote on 10/21/2011, 10:53 AM
i7 930 clocked to 3GHz with GTX570 card

GPU ON 51 seconds
GPU OFF 4mins 55 seconds (295 seconds)

GPU ON 13fps at best peaks, 10 fps at lowest.
GPU OFF almost zero, about 1 or 2 fps at best.

Previews at FULL BEST setting.

Summary: this is a massive increase in performance in both previews and renders!

EDIT: I adjusted the Preview RAM setting from just 200 MB to 2 GB and the GPU ON render time dropped further, to 44 seconds.

Rv6tc wrote on 10/21/2011, 11:49 AM
i7 2600K, OC to 4.4ghz GTX 560

GPU on: 0:50
GPU off: 2:33
dxdy wrote on 10/21/2011, 12:19 PM
i7-950, standard clock, 12 GB RAM, Win 7 64 Home Premium

PNY GTX-560TI, factory overclocked, with 1024 MB GDDR5

With GPU: 53 seconds
CPU only: 233 seconds (3:53)

Version 10 CPU only 200 seconds

Looks like V11 and a $279 US (Best Buy) card have breathed new life into the 950, now I can wait another year or two to upgrade hardware.


Preview performance:
With GPU at Best Full, it runs 29.97 for the first half, then starts declining, down to 8 by the end. At Best Half it stays at 29.97.

Jaums wrote on 10/21/2011, 1:18 PM
For me, if Preview worked well I wouldn't have to render after each fx tweak to see if I like the result. Can V11 preview HD well with the right hardware? If so, which hardware?
Hulk wrote on 10/21/2011, 1:21 PM
This is great info guys. Thanks for posting.
Would it be possible for you to test preview performance at the Best (Full) setting?
Specifically the frame rate when the text starts to zoom out around 12 seconds. My non GPU accelerated 2500k system previews this at about 1.4 fps.

If you could edit your original post it would help keep this post short. I even hate to make this post actually but I think having preview performance would help people with buying decisions. I know it will help me!

Wow, 8-10fps preview at Best/Full for the second half of the clip is pretty amazing, and so is full frame rate at Best/Half. My 4GHz 2500k can only do about 18fps at Best/Half. Seems like the GPU may make realtime preview at Best/Half a reality for just about any project. And let's face it, when you're editing 1080p, half-resolution previews are generally more than adequate to make most editing decisions.

Would love to know what a powerful ATI card can do with the preview on this?
LReavis wrote on 10/21/2011, 6:36 PM
I am indeed impressed. I may yet decide to cough up the $$ for V11 after all.

Could someone please confirm benefits when rendering from Cineform clips to Cineform final render? to Sony .MXF? I almost always render to one or both - rarely to anything else.

I'm presuming that previews w/Cineform clips also would benefit from GPU assist . . .
John_Cline wrote on 10/21/2011, 7:17 PM
To my knowledge, the Cineform codec is not GPU accelerated in any way, so the new Vegas will not necessarily encode it any faster. What will be faster are any of the new GPU-accelerated Vegas filters, transitions and media generators; if your videos are effects-heavy or processed, then you will certainly see a decrease in render time and faster preview speeds. Since MXF is MPEG2-based and MPEG2 encoding is now GPU accelerated, I would expect to see much faster rendering to MXF. With a few possible exceptions, the new Vegas is just faster than the old version, and not by an insignificant amount.
Ros wrote on 10/21/2011, 7:45 PM
i7 2600
16gb ram
Asus Nvidia GTS 450
Win 7 64 bit


V10: 3:21
V11 off: 4:20
V11 on: 1:29


hazydave wrote on 10/21/2011, 10:27 PM
Wow... glad I found this. I've been playing around with some basic benchmarks, and kind of bumming on the speedup. This was sports stuff, pretty simple edits, and then some artificial stuff, like a bunch of AVC clips composited. I saw some performance boosts, some losses, nothing dramatic.

Rendertest... dramatic. Thanks, I needed that:

           render    preview
V10:       279 sec   0.35 fps
V11 off:   315 sec   0.36 fps
V11 on:    48 sec    10.20 fps

6.5x faster render? I'll take that. Guess I just need to try some more complex projects.
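As a sanity check (not part of the benchmark itself), the speedup factors implied by those numbers can be computed directly; a minimal Python sketch, with the values copied from the table above:

```python
# Render times (seconds) and preview rates (fps) from the table above.
results = {
    "V10":     {"render": 279, "preview": 0.35},
    "V11 off": {"render": 315, "preview": 0.36},
    "V11 on":  {"render": 48,  "preview": 10.20},
}

baseline = results["V11 off"]  # Vegas 11 with GPU disabled
for name, r in results.items():
    render_speedup = baseline["render"] / r["render"]
    preview_speedup = r["preview"] / baseline["preview"]
    print(f"{name}: {render_speedup:.1f}x render, {preview_speedup:.1f}x preview")
```

The GPU-on render works out to about 6.6x the GPU-off time (and roughly 28x on preview), consistent with the ~6.5x quoted.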

Was hoping for a big AVC acceleration all by itself, but my machine's actually fast enough for full speed preview of 720p60 or 1080i60 in Vegas... and the GPU actually drops that a little. I guess it all comes back with plug-ins, rendering from less intensive media, etc. Have not tried anything with 1080p yet (have that on two cameras, but nothing on-line at the moment).

This is a home-built machine with AMD "Phenom II" at 3.2GHz, 6 cores, 16GB DDR3, and a bright shiny new Radeon HD 6970 2GB card. Things were not looking good for that card's long-term health until just now.
hazydave wrote on 10/21/2011, 10:30 PM
Interesting thing with video preview I just discovered.. your preview buffer tries to act as a cache, inconsistently. At least on my system, the very choppy preview behavior was with my usual 2GB preview buffer. This also meant the value changed from run to run... dandy in actual use if that makes it faster, bad for benchmarking. Set it to zero and it gets really consistent -- obviously, still dependent on what you're doing on-screen.
John_Cline wrote on 10/21/2011, 11:44 PM
Interesting, it looks like the render speed is now tied almost directly to the speed of the graphics card. Many here, including myself, are running nVidia GTS-450 video cards with widely different speed CPUs, but all of the renders using the GPU on seem to be within seconds of each other. I have a 6-core 980x processor with 12 GB of RAM running Win7-64bit which certainly produced some fast render times before Vegas v11 but now that it appears to be almost completely dependent on GPU speed, I guess it's time to go to NewEgg and order a high-end nVidia 500-series card to go with my 980x processor.

This is pretty good news for people running older Core-2 processors with a reasonably fast video card.
im.away wrote on 10/22/2011, 7:36 AM
John's assertion seems to be right on the money. My own experience is that the rendertest time has dropped from 125 seconds to 58 seconds. When I spec'd my PC not so long ago, the graphics card was probably the only thing I underspent on. I rationalised that the PC was only going to be used for editing in Vegas and Vegas (at the time) didn't really take advantage of high-end graphics cards.

Well, the World has turned and now it seems that the graphic card is right up there with the other serious considerations one should make when scoping a PC. In my case I find myself wondering what would be gained by having a GTX580 instead of the GTX460 as I do now. I'll be keeping my eye on the rendertest results to see what times are coming out of i7 2600K boxes fitted with the GTX580, but I don't think I'll stump up the cash for a new card until I see what the new generation of cards from ATI and nVidia shape up like.

Edit: Just reviewed this thread and noticed another system with the same CPU, clock speed and system RAM. The difference was the other poster has a GTX580 graphics card and mine, as stated, is a GTX460. The difference in the rendertest was 8 seconds, or more importantly, 13% faster. Probably not enough to make me race out and buy the upspec'd card, but enough to make me look forward to what the new generation of cards will do.
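For what it's worth, that percentage is easy to double-check; a quick Python sketch (the 58-second time is from this post, and the 50-second figure for the other box is an assumption derived from the 8-second gap described):

```python
t_gtx460 = 58            # seconds, rendertest time from this post
t_gtx580 = t_gtx460 - 8  # assumed: 8 seconds quicker, per the comparison above

pct_faster = (t_gtx460 - t_gtx580) / t_gtx460 * 100
print(f"GTX580 box is {pct_faster:.1f}% faster")  # prints "GTX580 box is 13.8% faster"
```

Close to the ~13% quoted; whether that gap justifies the price difference is the real question.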

hazydave wrote on 10/22/2011, 10:09 AM
I think rendertest, while initially formulated as an easy-to-download comparison of Vegas render speed, is skewed toward GPU performance in very unnatural ways.

I was messing around with my system (yeah, the one that pulled a 48sec on rendertest) prior to this, with tests based on real projects.

First project was just a cut and render thing. Brightness/contrast applied to AVCHD video, a few titles... it's a soccer game, nothing terribly amazing to apply, FX-wise. Being sports, it's shot in 720p60, and without the GPU, I pretty much get full-speed preview in Vegas 10 or 11. In fact, Vegas 11 was slightly faster at preview.

So, results. In preview, I was actually 22% slower, GPU to Vegas 10, and 30% slower, Vegas 11 to GPU. Render out to MainConcept MPEG-2, 25Mb/s Blu-ray specs, ran 32% slower, Vegas 11 vs. GPU 11. Render out to MXF, 50MB/s 4:2:2 ran 8.7% slower. Better on AVC: it did 14.5% faster on GPU vs. Vegas 11 on Sony AVC (16Mb/s CBR), but lost almost 2% on Main Concept. In fact, Vegas 11 vs. Vegas 10 was a decent improvement, from 6.5% to 13%, depending on the CODEC, with GPU shut off.

I tried a compositing test... half of four MXF segments starting out full-screen, moving to quadrants, and then four layered segments, with transparency. In this case, I found Vegas 11 easily twice as fast on preview, versus either 10 or 11 without GPU. Rendering to MainConcept MPEG-2 was 43% faster than Vegas 10, 79% faster than non-GPU Vegas 11. Sony AVC was better on Vegas 11, and 22% faster with the GPU on. Main Concept AVC ran 26% faster than Vegas 10, 38% faster than Vegas 11. That was floating point pixels.

I switched to 8-bit pixels, "good" quality, and the GPU results got worse... 86% slower with GPU than Vegas 10 on MainConcept MPEG-2, though still faster (11%) on MainConcept AVC vs. Vegas 11. In short, all over the map. GPU load averaged about twice as high in the 32-bit case versus the 8-bit case; CPU was a bit higher for 8-bit.

Final test... I had a complex animation, some video, mostly moving PNGs and plug-ins, and figured this was a test closer to the rendertest setup. On previews, things are still nice... I got an average of 5.1fps on the CPU alone in Vegas 11, 14fps with the GPU... 2.75x improvement. On renders, I saw 25% improvement to Sony AVC, 37% improvement to MainConcept AVC.

So it's there, it's real... 'cept when it isn't. But not quite the rock-yer-socks improvement you see with Rendertest. I guess my main point is... don't use Rendertest as the sole criterion for an upgrade.

I'm on the edge of ordering a similarly priced nVidia card, testing how it does, and sending back the loser. Tie goes to nVidia, which has more support in other tools. I'm not sure there's a better way to know, at this point.

Hulk wrote on 10/22/2011, 11:39 PM
I have a feeling the GPU may have a harder time with actual video and the filters used in normal day-to-day editing than with the generated media and color/sizing manipulations in the rendertest. Although the demo project is 2.5GB, it probably should be used as a test project for VP11.
hazydave wrote on 10/30/2011, 12:55 AM
I ran the VP11 Benchmark on my system, on two GPUs. This is otherwise an AMD 1090T (6 cores, 3.2GHz) with 16GB DDR3. Only Vegas 11 was tested.

Without GPU, I see 8.0fps average on preview. On the HD6970, I saw 28.5fps, on the GTX570, I saw 27.5fps.

For rendering, the Sony AVC render (1080i60, Blu-ray, 16Mb/s) ran in 5.95 minutes without GPU. On the HD6970, it did 2.73 minutes (58% CPU, 30% GPU average), while on the GTX570 is did 2.87 minutes (75% CPU, 50% GPU average).

On the XDCAM EX 1080i60 MPEG-2 render, I saw 5.37 minutes no-GPU. With the HD6970, I saw 1.73 minutes (60% CPU, 37% GPU average), while with the GTX570, I saw 1.95 minutes (85% CPU, 45% GPU average).

So clearly, Sony chose well on their benchmark.
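To put those times in speedup terms (a minimal Python sketch, not part of the benchmark itself; the times in minutes are copied from the post above, and the labels are just for illustration):

```python
# Render times in minutes, from the benchmark results above.
times = {
    "Sony AVC":        {"no GPU": 5.95, "HD6970": 2.73, "GTX570": 2.87},
    "XDCAM EX MPEG-2": {"no GPU": 5.37, "HD6970": 1.73, "GTX570": 1.95},
}

for render, t in times.items():
    for gpu in ("HD6970", "GTX570"):
        speedup = t["no GPU"] / t[gpu]
        print(f"{render} on {gpu}: {speedup:.2f}x faster than CPU-only")
```

Roughly 2x on the AVC render and around 3x on the MPEG-2 render, with the HD6970 slightly ahead of the GTX570 in both.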