GPU, The Processor, CPU, The Driver and Vegas


Zeitgeist wrote on 1/12/2013, 8:11 PM
Does it really matter? I'm noticing that many of the problems I thought were GPU related are actually related to the RAM preview setting. A setting of 0 in V12 works magic for the preview rate, while a setting of 1 cuts the frame rate in half. This is not the case for V11.

V12 has some kind of bug/change that causes the preview rate to drop by half, while the same settings in V11 do not. The RAM preview setting is useless in V12. Some of the problems caused by this are being blamed on the video card & GPU.

I now have my GPU on & RAM preview set to 0, and V12 works beautifully except for some of the other known bugs. I am using a GTX 550 Ti with the latest drivers.
farss wrote on 1/13/2013, 4:02 AM
Seeing as how my painting was put on hold by rain I spent a few hours reading a lot and then thinking a lot more about all this.

Executive Summary: JR's advice makes more sense now, but for more complex reasons. In theory, a GTX 570 should perform reliably in Vegas and better than a Quadro 4000; however, because Vegas uses OpenCL, not CUDA, I feel the fewer variables thrown into the mix the better if you want dependability and support. There's still no certainty, of course: you could spend your money and end up no better off, but you do have a better chance of someone hearing you.

In Detail:
CUDA is pretty much all nVidia's baby. If it doesn't work, it's between the code that uses it and what's running on nVidia's hardware. Adobe are not the only ones using CUDA; there are lots more, including scientific apps. nVidia have guaranteed code compatibility, i.e. the application developer needs to do nothing for his application to just work with any CUDA-enabled hardware, now or in the future.

OpenCL is a very different beast. It can run on, well, anything: GPUs, CPUs, non-core parts of a CPU, even Field Programmable Gate Arrays. It runs on both AMD and nVidia GPUs; in fact, for certain OpenCL tasks AMD smokes nVidia.
The complication, as far as I can see, is that the application developer doesn't just work against an interface (API); he also has to write the code (the kernel) that runs on the target hardware. How well and how fast it runs, though, is in the lap of the hardware vendor. Also, the kernel code is compiled at run time.
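To make "compiled at run time" concrete, here's a schematic host-side sketch in C of the standard OpenCL pattern (illustrative only and untested here: the kernel name and body are made up, and it assumes an OpenCL SDK is installed; the API calls shown are the standard Khronos ones):

```c
/* Schematic OpenCL host code: the kernel ships as a SOURCE STRING
 * and is compiled by the vendor's driver on the user's machine. */
#include <CL/cl.h>
#include <stdio.h>

/* Hypothetical kernel, just for illustration. */
static const char *kernel_src =
    "__kernel void brighten(__global float *px, float gain) {"
    "    size_t i = get_global_id(0);"
    "    px[i] *= gain;"
    "}";

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);

    /* The installed driver compiles the source right here, at run time,
     * so the result depends on the driver/card, not on the application. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
    cl_int err = clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    if (err != CL_SUCCESS)
        fprintf(stderr, "kernel failed to build on this driver\n");
    return 0;
}
```

The contrast with CUDA is the point: CUDA kernels are normally compiled ahead of time by the application developer, and nVidia's driver guarantees that the shipped binary/PTX keeps working on newer hardware. With OpenCL, every user's driver re-does the compile, which is one way a driver update or a new card generation can change an app's behaviour without the app changing at all.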

This seems to play out with the GTX 6xx cards, which many are complaining don't work nicely at all with Vegas. We are not alone. nVidia seem to have clobbered this series of cards' OpenCL capability in favour of better CUDA performance. Plus the architecture is different, and that may require the kernel code to be altered.

Now, some troubling information that I came upon.
nVidia have not updated their OpenCL tools in years; they still don't appear to support OpenCL 1.2. There's some talk amongst the development community of them dropping it altogether. Given the troubling state of AMD's finances, one does have to wonder where we could be left. There are ways to make OpenCL run on nVidia hardware even if nVidia drop it, but....

dxdy wrote on 1/13/2013, 1:08 PM

I have the opposite condition with the Dynamic RAM preview - with it set to zero, preview runs at 1/3 the speed of when it is set to 8 or 16. I am totally baffled.

mark-woollard wrote on 1/23/2013, 12:28 PM

Does your GTX 570 give you a stable VP12 system, i.e. does it let you leave GPU acceleration on and have RAM preview set to a reasonable amount, e.g. 2048 MB?

My GTX 560 Ti lets me keep GPU acceleration on, but RAM preview needs to be set at 0 MB to avoid crashes.

Zeitgeist wrote on 1/23/2013, 7:37 PM
dxdy, I guess our computers are polar opposites in some way when it comes to V12.

V12 is still working here with preview ram set to 0 & gpu on. 3 cam edits at preview full are running full frame rate! This is better than V11.

When I need preview ram I change the settings in preferences. It would be nice if the setting could be changed right from the toolbar. Then again, it would be even nicer if other preferences options could be docked on the toolbar for ease of use like fade length etc.
SuperG wrote on 1/23/2013, 10:23 PM
The issue seems to be with the changes introduced in nVidia's Kepler architecture.

According to the article, OpenCL performance is worse on these cards - apparently nVidia dropped some compute features in order to reduce power. Whether or not CUDA operation on these cards is comparable to OpenCL operation with the same number of older Fermi-architecture stream processors is an open question. So far the AMD 7970 looks like it might work - if you have the power supply to support it.
Pete Siamidis wrote on 1/24/2013, 12:11 AM
Thanks for the RAM preview tip! I have a 560 Ti and I've been unable to get proper playback on the timeline with Vegas 12 with the GPU on, and it ran really slowly with just the CPU. Setting RAM preview to 0 fixed it! Now with the GPU on and properties set at Best (Full) I get full 60 fps timeline playback, even with color corrector and sharpen filters on a 28 Mbps HD source. Thanks! Oh, and as another side bonus of RAM preview = 0 and GPU on, my render times are twice as fast now. Awesome :)
Zeitgeist wrote on 1/24/2013, 4:09 PM
Sweet, glad to have some company in this experience. It is very nice when V12 has an accurate preview rate most of the time.