CPU utilization low?

cliff_622 wrote on 10/14/2013, 12:04 PM
I'm running a new i7 Haswell with 16 GB of RAM, RAID drives, and Vegas 12.

In render testing, AVCHD and XDCAM to DNxHD, I have noticed that CPU utilization across all cores stays below 50% with short spikes here and there to 60-70%. (This is with GPU acceleration turned off). With GPU acceleration turned on, CPU drops down to about 30% across all cores.


1.) Why is Vegas not driving CPU usage much higher? (I'm running on liquid cooling and I'd like to see Vegas peg the meter!)

2.) Is "GPU acceleration" an addition to CPU rendering or a replacement for it? (it certainly seems to lower CPU utilization quite a bit)

Note, changing the processor priority in Windows Task Manager from "normal" to "high" or even "real time" makes no difference.


musicvid10 wrote on 10/14/2013, 12:21 PM
The perception that an encoder should run at 100% CPU in order to be at full efficiency is widespread (especially with gamers), but incorrect. The only case where this holds is when the CPU itself is the bottleneck in the encoding chain, which in your case it obviously is not. You have a ton of stuff working upstream, including but not limited to filters and pagefile throughput, any of which can pinch the pipe before the data stream reaches crunch time at the CPU.
cliff_622 wrote on 10/14/2013, 2:58 PM
So you think it's a plugin that is not utilizing the full CPU ability?

Is a poorly written or poorly threaded plugin bottlenecking the processing chain? (something like that?)

I'm not using any 3rd party plugins, just the native Sony color corrector, Gaussian blur and unsharp mask. (again, this is just a test)

Where is the bottleneck coming from? On the hardware side, I'm running OC'd at 4.5 GHz. The processor is just kicking back and laughing at the render, not even breaking a sweat.

musicvid10 wrote on 10/14/2013, 3:34 PM
My friend, there are MANY filters in the chain that are not "plugins." However that is one of the many possibilities. Even if you were able to track down the proverbial needle in the haystack, it is very unlikely you would be able to do anything about it.

I hear this same question over and over again, most often from gamers who are used to the CPU being the bottleneck in transport stream playback and can't imagine anything else being possible. However, in the encoding realm, which MUST process raw bits from beginning to end, nothing could be farther from the truth; most modern CPUs are NOT the bottleneck in the chain! Understanding that alone is critical to understanding that ENCODING can be "different."

Please use the internet to learn about nonlinear encoding. There is also a "rendertest" thread on this forum for benchmarking if you are curious, although I'm not sure it has been updated recently.

I've encountered this exact same question on three forums this week alone, from multiple users, one of whom resorted to immature responses. Best.
Steve Mann wrote on 10/14/2013, 5:59 PM
CPU Utilization is an extremely unreliable indicator of program efficiency. The only time it's accurate is when it reads 100% - indicating that the processor is the bottleneck in the process.
cliff_622 wrote on 10/14/2013, 7:59 PM
There is no question that the CPU is not being taxed. Monitoring the temperature, it runs cool at about 50-60 °C through the whole render.

So what I'm gathering here is that there is somewhat of a fixed rendering "speed limit". I'm running an i7 4770K that is only 50% utilized, and upgrading to a higher chip won't increase the render speed (much).

There are essentially software bottlenecks in the pipe that simply can't go any faster even with faster hardware? (unlike a benchmark utility that can slam ANY CPU to 100% capacity and raise your temperatures into the 90's)

Oddly enough Premiere CS6 seems to drive my CPU much harder than Vegas does. Not sure why but I know their "engine" is completely different.

musicvid10 wrote on 10/14/2013, 8:20 PM

Think of a big fat pipe that is capable of delivering 900 gallons per minute. Think of sufficient pressure at the other end to push 900 gallons per minute. Now think of a narrow piece of pipe inserted in the middle that is capable of passing only 600 gallons per minute. The maximum delivery the fat pipe is capable of is now 600 gallons per minute, not a drop more.
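The analogy reduces to one line of arithmetic: end-to-end throughput of a serial pipeline is the minimum over its stages. A quick sketch, with the gallon figures from the analogy above (illustrative numbers only):

```python
# End-to-end throughput of a serial pipeline is capped by its slowest
# stage, no matter how fast the other stages are.
def pipeline_throughput(stage_rates):
    """stage_rates: units per minute each stage can pass (illustrative)."""
    return min(stage_rates)

# Fat pipe (900 gal/min), narrow insert (600), pressure at the source (900):
print(pipeline_throughput([900, 600, 900]))  # 600
```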

People get so used to thinking in terms of the CPU being the only system bottleneck, that they completely ignore the fact that many other scenarios are possible. You are one of the rare ones that seems at least willing to learn rather than just arguing the narrow case . . .
Geoff_Wood wrote on 10/14/2013, 8:45 PM
So what is that narrow pipe? Surely it would be great to be able to optimise (or eliminate) it, and get the opportunity to use all the CPU and RAM we've paid for?!!!

musicvid10 wrote on 10/14/2013, 9:01 PM
Could be anything under the hood that changes bits, Geoff.
Took me two years to track down a bad module in my car, I'm not about to take on this one . . .
cliff_622 wrote on 10/14/2013, 9:39 PM
Anything that changes bits is a simple way to put it.

If a complex calculation becomes a bottleneck that can't exploit the full speed of the CPU, then I think something is wrong with the code. (or it can be written better)

Bottlenecks are inherent in video editing, otherwise we'd all have real time (or faster) rendering anytime we want.

If software A runs a complex calculation and completes it in X time, and software B runs a similar calculation on the same CPU and completes it faster, I'd chalk it up to a "problem" in software A's calculation engine.

Where the bottleneck is, I have no idea. But it does bother me to see my CPU running really cool at 50% utilization.

Sony Vegas 12 runs on the old VFW (Video for Windows) platform, no?

Are there any limitations and/or bottlenecks inherent inside VFW that might cause this phenomenon? I believe that other NLEs have moved away from VFW for certain reasons.

I know very little about VFW. I'm only asking the question to anybody here that might know more.

Low CPU usage, I think, indicates a "problem" or "inefficiency" or "beaver dam" somewhere in the river, and I'd love to know why. (and why some engines are better than others with the same CPU)

Am I crazy for questioning this?

NormanPCN wrote on 10/14/2013, 9:48 PM
Surely would be great to be able to optimise (or eliminate) it, and get the opportunity to use all the CPU and RAM we've paid for ?!!!

Your CPU has four cores with hyperthreading, so Windows shows you 8 logical cores in the task manager. So 12.5% is one logical core fully utilized. The extra logical cores only give you a 0-20% boost.

To use all cores, the algorithm must run on all cores and fully saturate each one. For this argument, the algorithm is encoding a file.

The encoding algorithm must be split into separate functions that can run in parallel to fully saturate your 8 logical cores. Suppose one of these functions finishes before the others, and it gets its next piece of work from another function running on another core. It must then wait for that other function to finish before it can start again.

This is just one example of a great many as to why the CPU will not be 100% loaded as is your desire.
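The wait scenario above can be sketched as a toy schedule; the durations and the dependency are invented purely for illustration:

```python
# Toy model: each worker encodes one slice of work. A worker with a
# dependency cannot start until the worker it depends on has finished.
# Utilization is total busy time divided by (worker count * makespan),
# which is roughly what the Task Manager average shows.
def utilization(durations, deps):
    """durations[i]: work time for worker i; deps[i]: index it waits on, or None."""
    finish = {}
    for i, d in enumerate(durations):
        start = finish[deps[i]] if deps[i] is not None else 0
        finish[i] = start + d
    makespan = max(finish.values())
    return sum(durations) / (len(durations) * makespan)

# Four equal slices, but worker 3 must wait for worker 0's output:
print(utilization([10, 10, 10, 10], [None, None, None, 0]))  # 0.5
```

One stalled dependency halves the meter reading even though every core is capable of full speed.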
musicvid10 wrote on 10/14/2013, 9:56 PM

You're not crazy, but realize a lot of the core filtering code hasn't yet been ported to 64-bit, or even multithreading for that matter. It's the slow driver at the front of the traffic that holds everyone else up.

Vegas is not exclusively vfw, but legacy vfw engines and filters in Windows run on a mishmash of internal 16-bit and 32-bit single-threaded architecture from the 1990's. Don't expect that to change unless you are interested in writing the code yourself.

That being said, x264 is now about as good as it gets in the speed vs. quality dept., and the gamers still complain that their CPUs aren't running 100% during encoding. It just may be that CPUs, with all the available hardware enhancements, are no longer at the top of the dogpile.

Play with x264 using various filters in AviSynth and other CLI implementations, where every possible tweak and trick is exposed and abundantly documented, see where the state of today's art of interframe encoding lies, and post back with your thoughts about what can be improved in Vegas. Again, it is unlikely that legacy encoders will be touched in the future. Since I'm not interested in reinventing the wheel at my relatively advanced age, I will remain interested, but not be able to contribute much further to the discussion. Here's a really good place for you to start:
VidMus wrote on 10/14/2013, 11:35 PM
One example of a bottleneck is a hard drive not being able to deliver the bits fast enough to the processor from an uncompressed video. Simply too many bits for the speed of the drive in the time needed.

An uncompressed 60p AVI video will not even play on my system without delays. The CPU is plenty fast enough to play it but it has to wait for the hard drive to deliver it.

Compressed videos do not have this problem.

In testing with HandBrake, if I use a compressed video such as an MPG file for input the Frames Per Second will be high and the CPU will be around 90 to 100%. If I use an AVI Sony YUV codec video the FPS will be much lower and the CPU will be around 50% or less.

I think the number one problem is that people think the CPU should always be at 100% or they are not getting the speed they think they should have. They are not considering other things that can get in the way.

It is a system thing not a CPU thing.

What happens when the fastest engine has to work with a transmission stuck in first gear? So much for the race...
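The uncompressed-60p case is easy to put numbers on. Assuming 8-bit 4:2:2 1080p60 (about 2 bytes per pixel; these assumptions are mine, not VidMus's exact file), the sustained read rate already exceeds what a typical single spinning disk delivers:

```python
# Sustained data rate needed to play uncompressed 8-bit 4:2:2 1080p60.
width, height, fps = 1920, 1080, 60
bytes_per_pixel = 2                      # 4:2:2 packs ~2 bytes per pixel
rate_mb_s = width * height * bytes_per_pixel * fps / 1e6
print(round(rate_mb_s))  # 249 (MB/s), more than most single spinning disks sustain
```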

MSmart wrote on 10/14/2013, 11:46 PM
Oddly enough Premiere CS6 seems to drive my CPU much harder than Vegas does.

Maybe Vegas is more efficient at encoding.

You never say that you performed any time tests to see what the overall render speed was. To simply state one pegs the CPU more than the other is meaningless. You gotta back it up with elapsed render time.
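One way to make that comparison concrete is to express each render as a realtime multiple, so wall-clock numbers from different encoders are directly comparable. The example figures are the ones dxdy reports later in this thread:

```python
# Wall-clock render speed as a realtime multiple: seconds of source
# footage encoded per second of rendering. Higher is faster.
def realtime_multiple(source_seconds, render_seconds):
    return source_seconds / render_seconds

print(realtime_multiple(60, 60))            # 1.0  (TMPGEnc: 1 min of video in 1 min)
print(round(realtime_multiple(60, 23), 2))  # 2.61 (Vegas: same clip in 23 s)
```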
FilmingPhotoGuy wrote on 10/15/2013, 2:12 AM
@ Cliff_622

I'm not sure of your CMOS settings, but if you have set the temp limit to 60° then the CPU will back off when that limit is reached. Set the temp limit higher and tell us what happens. I'd like to know the result.

Also, there's a RENDERTEST.veg on the forum somewhere with no plugins which forum members use to benchmark their rigs. When I render it my system hits 80-90% CPU usage.

Waiting for your feedback
CJB wrote on 10/15/2013, 9:50 AM
I have experienced this myself, where the rendering pipeline seems limited to one core. As a programmer, this looks poorly optimized for parallelization. On the other hand, FFmbc shows no better parallelization. Is this just inherent, or is it relying too much on legacy code?
musicvid10 wrote on 10/15/2013, 10:04 AM
Of course, DNxHD is an Avid product using the inefficient QuickTime 32 lib and renderer to produce an intraframe file, which is inherently less demanding to encode. Differences with other editors aside, it is very likely Sony will not be able to do anything about it, short of writing their own VC3 libraries.
VidMus wrote on 10/15/2013, 10:29 AM
"Of course, DNxHD is an Avid product using the inefficient Quicktime 32 lib and renderer, to produce an intraframe file,"

That is probably why it is so crazy slow. That is why I use Sony YUV AVI instead.
rmack350 wrote on 10/15/2013, 12:27 PM
You never say that you performed any time tests to see what the overall render speed was. To simply state one pegs the CPU more than the other is meaningless. You gotta back it up with elapsed render time.


I suspect that PPro actually is faster, but time is a much better gauge of performance than CPU load.

Another thing to look at is that some codecs will peg your CPU while others won't. It's not always Vegas that's the problem. That said, if you're just using Vegas and have nothing to make comparisons with I can see how one would look at CPU load and think "this could be faster".

Lovelight wrote on 10/16/2013, 1:20 AM
TMPGEnc renders like a bear, 100% on all cores most of the time. Speed is amazing.
Warper wrote on 10/22/2013, 12:13 PM
As far as I remember, Vegas has settings for number of cores to use and last time I was worried about it I think default was at 4 cores.

AFAIK VFW interface is not multithreaded. Some codecs can be multithreaded, but plain old codecs don't have such optimizations.

Some filters in Vegas are not multithreaded. Some were probably missed, but some can't really be sped up because they already run lightning fast. For example, you won't get much out of 8 cores applying simple math like a 50% transparency filter; the cores will just compete for memory access.
Some filters were ported to the GPU. They use the GPU instead of the CPU for routine operations, but for the sake of synchronization the CPU has to upload the data, wait for the operation to finish, and only then download the processed data back.

If you use GPU acceleration for the encoder, some of the CPU burden is simply transferred to the GPU. While the GPU does its work, the CPU is sleeping. The two do not compete; each unit does its dedicated part of the work. So your CPU will be used less, while the GPU can effectively work slower than the CPU could (in extreme cases).
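The upload/compute/download sequence can be put into a toy cost model; all numbers here are made up for illustration:

```python
# If a GPU filter is fully serialized (upload -> compute -> download) and
# the CPU only works during the transfers, the CPU meter drops even
# though the frame takes just as long to process.
def cpu_busy_fraction(upload, compute, download):
    total = upload + compute + download   # no CPU/GPU overlap assumed
    return (upload + download) / total    # CPU idles while the GPU computes

print(cpu_busy_fraction(2, 6, 2))  # 0.4
```

This matches the observation at the top of the thread: turning GPU acceleration on dropped CPU load from ~50% to ~30% without the CPU doing anything "wrong".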

musicvid10 wrote on 10/22/2013, 1:03 PM
Great information, except that the OP is using the outdated qt32 libs for encoding, which are even a bit less efficient than Vegas' internal engine.
Warper wrote on 10/22/2013, 2:07 PM
It's only one of the possible causes. Nevertheless, MPEG-2 material is decoded many times faster than H.264 material is encoded.
dxdy wrote on 10/22/2013, 2:46 PM
I use TMPGEnc to render to MPG2 for DVD, either via frameserving from VP12 or straight from a camera's MTS AVCHD file.

My system is i7-3770k, with GeForce 660ti, 16 GB RAM, render to SSD.

TMPGEnc rendering from the MTS file, CPU shows 60-70% utilization, 16-26% GPU utilization, and TMPGEnc reports 98% of the work is done in the GPU and 2% in the CPU. The output is 8,900 kbps average, with audio. It renders 1 minute of video in 1 minute.

Interestingly, Vegas 12 renders the same file with the same settings in 23 seconds, showing GPU running 41-45% and the CPU running 61-68%. Video quality looks about the same (low light, little fast motion).
musicvid10 wrote on 10/23/2013, 10:17 AM
"Nevertheless mpeg-2 material is decoded times faster than h264 material is encoded."

You should "probably" re-read the whole discussion from the top.

OP is encoding to neither mpeg-2 nor h264 nor vfw.
OP is encoding DNxHD, a VC3 codec from Avid in the MOV container that uses qt32 libs in Vegas, and has decided something is wrong because it is not slamming his cores. The many fallacies and irrelevancies in that thinking have already been pointed out, so there is no benefit in continuing the discussion further, nor in indulging in apples vs. oranges speculation.

All this talk of TMPGENC is frivolous and has nothing to do with the topic of the discussion. I don't know why it was ever brought up.