GPU Preview? Where is the benefit?

chudson wrote on 3/4/2012, 8:39 PM
Downloaded the trial of 11 and opened a project in both the trial and 10 Pro. The project has AVCHD video, so there should be a decent amount of decoding going on. 11 sees my graphics card as supported (Nvidia GeForce 310M), and it is selected in 11's preferences. However, I see absolutely no difference in playback in 11 compared to 10. The 310M certainly is not the most powerful graphics chip, but it does support OpenCL and CUDA. Shouldn't one be able to see some difference when utilizing the extra processing power of the graphics processor on top of the CPU?

Using Vegas on a Sony Vaio F series with i7 CPU, 8GB RAM, 7200 rpm HDD


NicolSD wrote on 3/4/2012, 10:55 PM
The 310M may support OpenCL, but it is not powerful enough to make a difference for you. A Pentium 4 is a CPU and so is the i7, but the gap between the two of them is enormous. Your 310M is just too weak for the demands of playback.
chudson wrote on 3/5/2012, 5:57 AM
So why does it even show up?
If a GPU is, as you say, too weak, it shouldn't even be available as an option. The application tests the GPU on first run; if there were no benefit, why would it allow it as an option?
The GPU shouldn't have to be several times more powerful than the CPU to provide SOME benefit. Vegas has always had the weakest AVCHD preview of any NLE. I had just hoped they would catch up by using some GPU processing power in parallel with the CPU.
Looks like we're still stuck converting AVCHD files in order to use them in Vegas.
Not a bad thing per se; I had just hoped adding a couple of extra processing threads might alleviate this...

paul_w wrote on 3/5/2012, 6:16 AM
It shows up because it has a compute value high enough to even 'try' to help the process. But because your GPU power is far lower than your CPU's, there will be no apparent gain in performance in your case. That's the sad truth of the matter, I'm afraid. Some users end up just switching OFF GPU acceleration to get better performance because of this. The only way to gain performance is to use a faster, more capable GPU card.
One misconception about GPU acceleration is that it somehow 'adds' cores to your existing CPU power. That's not the case in Vegas. It's more like a switch: either CPU or GPU. That said, even with GPU enabled the CPU does get used for other internal Vegas processes, but they are doing different jobs.

There's much debate about this on other threads, but the bottom line is, you would need a faster GPU to see any gain.
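[Editor's note] The "switch, not extra cores" behavior described above can be sketched as follows. The function names are hypothetical illustrations, not real Vegas internals; the point is only the shape of an either/or dispatch.

```python
# Hypothetical sketch of either/or dispatch between engines.
# These names are illustrative only -- they are not Vegas internals.

def render_on_cpu(frame):
    # Stand-in for the CPU code path: the whole effects chain runs here.
    return [px * 2 for px in frame]

def render_on_gpu(frame):
    # Stand-in for the GPU code path: a separate implementation of the
    # same work, not extra workers bolted onto the CPU path.
    return [px * 2 for px in frame]

def render_frame(frame, use_gpu):
    # One engine handles the entire frame; the two paths never split the job.
    return render_on_gpu(frame) if use_gpu else render_on_cpu(frame)
```

The shape also fits the observation later in the thread that changing the preference requires restarting the application: the setting picks one whole code path, it doesn't rebalance a shared one.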

farss wrote on 3/5/2012, 6:20 AM
What you're not understanding is that there's a cost to using the GPU.
Firstly, the Vegas code has to move whatever processing would have been done on the CPU over to the GPU, and this takes time because the data moves over a bus external to the CPU. Then the GPU has to crunch the numbers, which can happen while the CPU keeps running, but then the results have to be moved back over the PCI bus.

To further compound the conundrum, the cost/benefit ratio is dynamic: it will change depending on what the CPU has to do, and that will depend on your project. All in all, this is way too complex a problem for any startup code to determine, with a high degree of reliability, whether or not using the GPU is going to be of benefit.
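[Editor's note] A back-of-envelope version of this trade-off might look like the sketch below. Every figure is an illustrative assumption, not a measurement, and the function is not anything Vegas actually runs.

```python
# Toy model of the GPU offload trade-off: offloading only pays when the
# GPU's compute saving exceeds the round-trip bus-transfer overhead.
# All figures here are assumed for illustration, not measured.

def gpu_worth_it(cpu_ms, gpu_ms, frame_mb, bus_mb_per_ms=8.0):
    # 8 GB/s is roughly 8 MB per millisecond, a rough PCIe-class
    # effective-bandwidth figure (an assumption).
    transfer_ms = 2 * frame_mb / bus_mb_per_ms  # copy to the GPU and back
    return gpu_ms + transfer_ms < cpu_ms

# A heavy effect (30 ms on CPU, 5 ms on GPU, 8 MB frame): offload wins.
# A light effect (3 ms on CPU, 1 ms on GPU): the transfer eats the gain.
```

And since cpu_ms itself moves with project load, the inputs to this decision change from moment to moment, which is exactly why a one-time startup check can't settle the question reliably.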

paul_w wrote on 3/5/2012, 6:30 AM
Another point: "The application tests the GPU on first run"...

Well, Vegas does not do this at all. All it does is check the compute capability of the card and/or check that it's on the 'GPU card' list.
It could be better, agreed. But it's very difficult to profile a system with this many variables.

Grazie wrote on 3/5/2012, 6:33 AM
Bob, that's what I meant about CPU<>GPU management.


Hulk wrote on 3/5/2012, 10:27 AM
Warning: This is simply my opinion and is in no way meant to offend anyone. I love Vegas and marvel at it every time I open it. I'm a Vegas "lifer" and like to speculate, hopefully intelligently.

The CPU and GPU should be able to share the workload in a balanced manner, regardless of their respective compute abilities. I think the real problem is that Vegas's use of the GPU is somewhat "inflexible." By that I mean it assigns certain tasks to the CPU and certain tasks to the GPU no matter what each is capable of. There appears to be no balancing of the workload between the two based on capability.

Now, I don't have V11 with a GPU, so this opinion is based on what I have been reading over the past few months, my admittedly limited programming ability/knowledge, and the fact that many people have reported that neither CPU nor GPU is maxed out and yet timeline playback is still not full frame rate (or during a render, when everything with any compute ability should be floored).

From what I can tell, we have no idea how Vegas is programmed regarding GPU and CPU dependencies. It is my opinion that in a first effort to get GPU acceleration working, the Sony engineers took the most direct route to making it work in the most stable manner. That would mean one "path" for the GPU and CPU to operate on. It reduces complexity in coding and therefore the possibility of problems. I would expect this to be fine-tuned in future revisions once the other (stability) aspects are under control. The Sony guys would most likely be dealing with even more issues had they tried to develop a "smarter" CPU/GPU sharing brain on the first attempt.

Well, I guess technically this is the second attempt, since the first was in Vegas 10, when they just dipped their toes in the water and tapped the GPU a bit to help with rendering. So this all seems to be proceeding in a rather logical manner, I think.

- Mark
paul_w wrote on 3/5/2012, 11:04 AM
"There appears to be no balancing of the workload between the two based on their capability"


It's selected as one way or the other (I called it switching): CPU or GPU. That's why we have to restart the Vegas application when changing this setting in preferences. It really is executing different code depending on the mode. As you said, it's the simplest way to do it.


dxdy wrote on 3/5/2012, 12:10 PM
I have been using TMPGEnc for renders to DVD from HD source (via Debug Frameserver). When you start it, TMPGEnc does a GPU/CPU analysis and displays what percentage of the work is being done by each during the render. On my i7-950 with an Nvidia 560 Ti, 38% of the work is on the CPU and the rest on the GPU when rendering to MP4 or MPEG-2.
Hulk wrote on 3/5/2012, 12:34 PM
In fairness to Sony, it's a lot more complicated with Vegas. When you are simply transcoding video, it's just input decoding and output encoding. But with, say, timeline preview, there can be any number of effects or transforms (crop, for example) to deal with, each requiring a different number of CPU or GPU cycles. Still, there must be some way to dynamically allocate resources to the CPU and GPU. Basically we are talking about fine vs. coarse threading for parallel processing. Imagine having one big job to do: you can have either the CPU or the GPU do it. Now imagine breaking that one big job into 10 smaller ones, each of which can be done by either the CPU or the GPU; now you can balance the two to run at full load. That's a really simple example, because there are lots of housekeeping things (and more, I'm sure) that the CPU does even with a strong GPU in Vegas.

Fine threading takes a lot more brain power to program!
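[Editor's note] The "one big job versus ten smaller ones" idea can be sketched with two generic worker threads standing in for the two engines. This is purely an illustration of fine-grained balancing, not a description of how Vegas works.

```python
# Split one frame into tiles and let two workers (standing in for CPU and
# GPU) pull tiles as they free up, so neither engine sits idle.
# Purely illustrative -- not Vegas internals.
from concurrent.futures import ThreadPoolExecutor

def process_tile(tile):
    # Stand-in for an effect applied to one tile of the frame.
    return [px * 2 for px in tile]

def render_balanced(frame, tiles=10):
    size = max(1, len(frame) // tiles)
    chunks = [frame[i:i + size] for i in range(0, len(frame), size)]
    with ThreadPoolExecutor(max_workers=2) as pool:  # two "engines"
        results = pool.map(process_tile, chunks)     # order is preserved
    return [px for tile in results for px in tile]
```

The scheduling logic here is trivial because both workers are identical; the hard part the thread is debating is deciding tile sizes and placement when the two engines have very different speeds and a transfer cost sits between them.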

Perhaps Sony could give us a little more information, in an overview sort of way, about how Vegas handles this load balancing of the CPU and GPU during render and preview?
chudson wrote on 3/5/2012, 2:00 PM
" What you're nor understanding is there's a cost to using the GPU.
Firstly the Vegas code has to move whatever processing would have been done on the CPU to the GPU..."
You're right, I did not understand this. I thought the processing would have been in parallel with the CPU, not either/or.
This kind of makes the "feature" useless to me.
"You can upgrade your graphics card to get excellent GPU performance"
While true, I can also upgrade my CPU. Nice to use the GPU, but it would be much better if you could use both in parallel. That might not be possible, though, because of the different code. I'm not a code ninja, so I don't know.

As it stands, this feature as it is implemented will keep me on 10.

Golfer wrote on 3/5/2012, 2:32 PM
An added note: I have a fairly modest video card (GTS 450). That said, my preview with GPU enabled in build 595 is superior to CPU-only preview on my 2600K.
farss wrote on 3/5/2012, 3:17 PM
"Fine threading takes a lot more brain power to program!"

As I understand it that gets close to the crux of the problem.
One could write code to optimise fine threading, but that too is going to take up computational resources. Very likely the effort to optimise how the process is done will cost more than the saving from optimising the process. On top of that, because all this processing is done through pipelines, predicting just how long anything will take would be very difficult.
Beyond that, the way Vegas works makes the task close to impossible. Other NLEs require FX to be prerendered, and that makes it far simpler to code optimally.

Hulk wrote on 3/5/2012, 3:45 PM

I meant "brain power" as in human brains during programming. Smarter, better code is difficult to write. A few examples: the 16-bit kernel in Win98 was hand-coded, optimized assembly. That's why, for a long time, 32-bit XP couldn't run as fast as 98 in many applications. As long as 98 wasn't memory-strapped and page-swapping like mad, it was generally faster. Another example is Bob Lentini's Software Audio Workshop from the early '90s. That was one beautiful piece of software, all 2 MB of it! It ran straight from the executable and could do multitrack audio with very good quality EQ, compression, etc. on a Pentium 90. Other non-assembly editors of the day couldn't get anywhere near that level of performance.

Vegas could use finer threading and other optimizations to greatly improve performance with CPU and CPU/GPU. The time it takes to send data to the GPU can't be that significant given the enormous bandwidth of today's computers. In addition, there are many, many compute-intensive games that extract maximum performance from both CPU and GPU, and a game is much, much harder to parallelize than video.
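[Editor's note] The bandwidth claim is easy to sanity-check with rough arithmetic. The 8 GB/s figure below is an assumed effective rate in PCIe 2.0 x16 territory, not a measurement of any particular machine.

```python
# One 1080p frame at 4 bytes per pixel, moved over an assumed 8 GB/s bus.
frame_bytes = 1920 * 1080 * 4              # 32-bit RGBA frame, ~8 MB
bus_bytes_per_s = 8 * 1024**3              # assumed effective bandwidth
one_way_ms = frame_bytes / bus_bytes_per_s * 1000
round_trip_ms = 2 * one_way_ms             # copy out and back
frame_budget_ms = 1000 / 30                # ~33 ms per frame at 30 fps
```

That works out to roughly 2 ms of round-trip transfer against a 33 ms frame budget, so under these assumptions the transfers alone shouldn't be the bottleneck.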

Just about everything people ask for in this forum could be achieved in Vegas if the money for development were there. And that is where the rubber meets the road: Sony has to work within the constraints of the real world and try to earn a profit, which is, after all, why they are doing this.

Spectralis wrote on 3/6/2012, 11:43 PM
Don't forget that even with GPU acceleration switched off in Vegas, CUDA-enabled FX still use the GPU for rendering. If you have a powerful CPU, perhaps the best balance is Vegas using the CPU and the FX using the GPU.
farss wrote on 3/7/2012, 12:55 AM
"The 16bit kernel in Win98 was hand coded and optimized assembly."

It was revealed a long time ago that some parts of Vegas are written in assembler.

It's been a long time since I wrote any assembler, but indeed it can be much faster; at least 1,000x faster than Pascal in one instance. However, that compiler was anything but optimised for the CPU, whereas I think today's x86 compilers are written around the CPU architecture. Still, as you say, there's always a trade-off between how fast the code runs and how easy it is to write and debug.

arenel wrote on 3/9/2012, 10:26 AM

I am currently moving my wife from SAW to Vegas for audio editing. She has been using it for about 17 years (SAW 6.4 came on a floppy). She has been doing skating programs and ice shows on it. I am not having much fun!

Remember Superbase and Superscript? Multitasking on a Commodore 128!

You are so right about assembly language.

Hulk wrote on 3/9/2012, 1:14 PM
A long time ago, in a galaxy far, far away, I used to do some Assembly programming on my old Atari 800. In the 160x192 resolution mode, with two bits per pixel, you could only get 4 colors. But if you changed the color registers during the vertical blank interrupt, after the electron beam finished one scan but before it started the next, you could have 4 different colors again. So as long as you divided the screen into horizontal sections, you could "fake" a larger color palette.

The point here is that even on a CPU running at well under 2 MHz, Assembly programming left plenty of time to do things during the short (in human terms) vertical blank interrupt. That is an example of the power of Assembly: you control exactly how something gets done and when it gets done, and you "speak" directly to the CPU.

I wonder if those hand-coded Assembly parts of Vegas made it to the 64-bit port?

It is my opinion that there are literally only a handful of super-genius programmers out there. The rest are just guys who learned to program in college and do their best. I would be an example of this if I were a computer engineer instead of a mechanical one. I'm average. Sad to admit, but true. My friend who taught me Assembly in high school and sold games to Broderbund when he was 15 is a super genius. He is a chief engineer at nVidia now ;)

Sometimes all of the power at the fingertips of today's programmers is not a good thing. When there was no alternative on that old Atari 800, you struggled to develop new routines simply to make things work. Now you just use a relatively easy high-level language, flow-chart it out, and crank it out. The result is often mediocre performance and instability, since each module of code isn't really vetted.
John_Cline wrote on 3/9/2012, 3:30 PM
The Geoworks graphical, multitasking OS running on a Commodore C128 was an amazing thing; it romped all over the Mac and Windows offerings of the time. Too bad it never caught on...
Hulk wrote on 3/9/2012, 9:56 PM
My friend had a Commodore 64 when I had the Atari 800. While I argued to him that the Atari was better, in truth I knew the Commodore was a far superior machine.
Adam L. wrote on 3/11/2012, 6:30 PM
You guys are talking about times when a compiler wasn't that smart. These days compilers are very, very smart. There is still room to hand optimize potential bottlenecks and eyeball critical code paths, but most of the time it's not worth the effort.
NicolSD wrote on 3/13/2012, 9:43 AM
Geoworks? I had that... and a 3 1/2 inch drive. But going back even further, I remember my first modem. It had a switch on its side so I could select between send and receive mode.