Rendering time for VMS 9.0b and VMS 10

DT9b wrote on 6/11/2010, 9:19 AM
First my apologies to those who have been doing this awhile and seen a similar question(s) and answer(s) but I'm new to this game.

My questions:

1. I have VMS 9.0b and it takes about 12 hours to render a 1 hr 8 min movie to burn to a Blu-ray disc on my HP running Windows XP Home Edition. Is that normal? I use the default settings in the "Make Movie" wizard.

2. Does VMS 10 render movies for Blu-ray disc faster than 9.0b?

Thank you!

Comments

Markk655 wrote on 6/11/2010, 10:26 AM
What GPU (video card) does your PC have?

V10 offers: Support for GPU-accelerated AVC rendering using the Sony AVC plug-in.

If you have a CUDA-enabled NVIDIA video card, Vegas Movie Studio can use your GPU to improve AVC rendering performance.

GPU-accelerated AVC rendering requires NVIDIA driver 185.xx or later. We recommend using a GeForce 9 Series or newer GPU. GPU-accelerated rendering performance will vary depending on your specific hardware configuration. If you have an older CPU and a newer NVIDIA GPU, rendering using the GPU may improve render times.
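
If you want a quick way to check whether a given card even shows up as CUDA-capable (completely outside of Vegas), here is a rough Python sketch that queries the NVIDIA driver DLL directly. It assumes an NVIDIA driver is installed on Windows (nvcuda.dll) and is just an illustration, not anything Sony provides:

import ctypes

# Load the NVIDIA CUDA driver library; this raises an error if no NVIDIA driver is installed.
cuda = ctypes.windll.LoadLibrary("nvcuda.dll")

if cuda.cuInit(0) != 0:
    raise SystemExit("NVIDIA driver found, but CUDA failed to initialize")

count = ctypes.c_int()
cuda.cuDeviceGetCount(ctypes.byref(count))
if count.value == 0:
    raise SystemExit("No CUDA-capable GPU detected")

for i in range(count.value):
    dev = ctypes.c_int()
    cuda.cuDeviceGet(ctypes.byref(dev), i)
    name = ctypes.create_string_buffer(100)
    cuda.cuDeviceGetName(name, 100, dev)
    major, minor = ctypes.c_int(), ctypes.c_int()
    cuda.cuDeviceComputeCapability(ctypes.byref(major), ctypes.byref(minor), dev)
    print("GPU %d: %s (compute capability %d.%d)" %
          (i, name.value.decode(), major.value, minor.value))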
DT9b wrote on 6/11/2010, 11:40 AM
Thank you for your quick reply, and again I apologize because I'm not too familiar with GPU stuff, but I basically understand what you're saying.

From My Computer, under Display Adapters, I found that I have a NVIDIA GeForce 7300LE.

Some more questions:

1. Is a render time of 12 hr for a 1 hr movie typical for what I have (an HP m7674n with Windows XP and the GeForce 7300)?

2. In your opinion, how much faster (approximately) would the rendering time be if I upgrade the driver to the GeForce 9 series?

3. Would the render time improve with VMS 10?

4. Would the rendering time improve with a GeForce 9 driver and VMS 10?

Thank you!
MSmart wrote on 6/11/2010, 12:34 PM
Your 7300 is not CUDA enabled so upgrading the driver won't help.

You'd have to replace your graphics card with one listed here:
http://www.nvidia.com/object/cuda_gpus.html
Markk655 wrote on 6/11/2010, 8:43 PM
Essentially, these days the video card can take over some of the graphics processing that the CPU used to do, which frees up the CPU for other things. The most recent graphics cards can actually work on video files too, so a newer video card would speed up rendering in V10.

Looks like your PC is of similar vintage to mine (Intel Core 2 Duo E6400, 2.13 GHz). Upgrading the graphics card won't help that much. Also, if your PC is like mine (also an HP), the newer graphics cards all require more power than the power supply HP put in, so a number of upgrades would be needed.

Since you are burning to Blu-ray, I am going to assume that you are using 1440x1080 footage or 1920x1080 .m2ts/.m2t or .mp4 clips. If that is the case, then 12 hr for a 1 hr movie is about right on that system. At least that is my experience with mine (C2D, 2.66 GHz), where rendering to 1920x1080 from an AVCHD clip takes roughly 10x the clip's length.
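
As a rough sanity check, you can turn that 10x figure into an estimate; the multiplier below is just my own ballpark and will vary with project complexity, effects, and render settings:

# Back-of-the-envelope render-time estimate (assumed numbers, not a benchmark).
movie_minutes = 68           # the 1 hr 8 min timeline from the original post
realtime_multiplier = 10     # ~10 minutes of rendering per minute of video on a C2D
estimated_hours = movie_minutes * realtime_multiplier / 60.0
print("Estimated render time: about %.1f hours" % estimated_hours)   # ~11.3 hours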
DT9b wrote on 6/12/2010, 9:14 AM
Thank you for the information and the link to the NVIDIA site. Very helpful.

Thank you!!
DT9b wrote on 6/12/2010, 9:30 AM
Thank you for sharing the information and for the support. All very helpful. We have very similar systems (Intel Core 2 Duo E6400), and your assumptions are correct. Your response really addressed one of my concerns, so it's good to know that the system is at least functioning up to its capabilities.

Thank you!!
david_f_knight wrote on 6/13/2010, 7:54 AM
I don't have VMS HD 10 Platinum (and I don't have a CUDA-capable NVIDIA graphics card), so I haven't tested this, but something Markk655 wrote above needs to be corrected. He advised that, due to the original poster's aged CPU, upgrading the graphics card in his computer wouldn't improve rendering speed much.

That misses the entire point of GPU-accelerated rendering.

If you are rendering to the AVC format (apparently the only format that is GPU-accelerated in VMS HD Platinum 10), and you have a suitable GPU in your computer, then the GPU will be (or can be) used to perform the rendering. So GPU acceleration should give a larger relative improvement in rendering performance the slower your CPU is. On the other hand, if you have a super fast CPU and only a very modest CUDA-capable GPU, GPU acceleration may actually slow down your rendering. It all boils down to which is faster, your CPU or your GPU (and it's not easy to tell except by testing).

Keep in mind that you can buy CUDA-capable graphics cards for anywhere from about $25 to about $3000, so you shouldn't expect all of them to deliver the same level of performance; there are orders of magnitude of difference. Likewise, you can spend from about $10 to about $1000 on a CPU, and you shouldn't expect all of those to deliver the same level of performance either. A $25 graphics card coupled with a $1000 CPU will render slower on the GPU than on the CPU. Nobody should expect a dramatic improvement in rendering speed just because they have a CUDA-capable graphics card; it depends on which CUDA-capable graphics card they have vs. which CPU they have.

So far as I know, no one has published any comparative tests, so at this point no one has any idea at all of what change in rendering performance can be expected by rendering on various CUDA-capable GPUs vs. various CPUs. Until that testing has been performed and people have a reason to know exactly what to expect, the promise of GPU-acceleration is nothing but a bunch of hype. Realistically, people tend to couple cheap CPUs with cheap GPUs, and expensive CPUs with modest to expensive GPUs, so the relative performance of GPU to CPU in most computers may be fairly matched and GPU-acceleration in those situations may not result in much improvement, if any, at all.
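
Until real benchmarks appear, the only practical way to decide is the test mentioned above: render the same short clip twice (CPU-only, then with GPU acceleration enabled) and time both runs. A trivial sketch of that comparison, with placeholder numbers rather than measurements:

# Compare two timed test renders of the same clip; the minutes below are
# placeholders for your own measured times, not real benchmarks.
test_clip_minutes = 2.0
cpu_render_minutes = 20.0    # measured wall-clock time for the CPU-only render
gpu_render_minutes = 14.0    # measured wall-clock time with GPU acceleration on

print("CPU-only:     %.1fx realtime" % (cpu_render_minutes / test_clip_minutes))
print("GPU-assisted: %.1fx realtime" % (gpu_render_minutes / test_clip_minutes))
print("Faster path:  %s" % ("GPU" if gpu_render_minutes < cpu_render_minutes else "CPU"))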
Markk655 wrote on 6/13/2010, 12:25 PM
I do agree in some sense with what David is saying. If the CPU is the bottleneck, as it is in our case, then any rendering done by the GPU should be a bonus and hence speed up the process. Likewise, with a faster CPU, one would expect to see less of an enhancement because the CPU is already fast enough, so the relative increase won't be as much.

However, in absolute terms, the clock speeds of standard (not top-of-the-line) CUDA-enabled cards are only around 1 GHz, and some of that processing power is still needed for your display. So while GPU-enhanced rendering will speed things up, I doubt there will be a monstrous enhancement in rendering speeds - even when the CPU is the bottleneck. Even if the GPU took over 1/5 (20%) of the rendering work, is 9.6 hours much better than 12? Compared to a faster PC, if it took 4 hours, then 20% faster is 3.2 hrs. A smaller difference, as expected. So, yes, it will help, but not as much as either you or I want it to - unless that number is 50% faster or higher. If that is the case, my CUDA card and a new power supply will be showing up pretty soon!
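
Here is the arithmetic behind those numbers written out; the offload fractions are pure guesses for illustration:

# If the GPU takes over some fraction of the rendering work, the remaining
# time is roughly original_time * (1 - fraction_offloaded).
def gpu_assisted_hours(original_hours, fraction_offloaded):
    return original_hours * (1.0 - fraction_offloaded)

print(gpu_assisted_hours(12, 0.20))   # our 12 hr render -> 9.6 hr
print(gpu_assisted_hours(4, 0.20))    # a faster PC's 4 hr render -> 3.2 hr
print(gpu_assisted_hours(12, 0.50))   # the 50% case that would tempt me -> 6.0 hr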

Again, as David points out, we won't know just how much extra rendering speed we will see until someone does some testing.
DT9b wrote on 6/13/2010, 2:02 PM
David, MarkK655,

Thank you both for the great discussion and information. I'll definitely be doing some research, and your information really gives me some direction on things to consider. I've actually been amazed that my PC is still able to handle some of the newer software, but I've been contemplating either upgrading or buying a new computer. I can live with the 12 hrs of rendering time since most of it's done while I sleep, but I'm a bit impatient, so it would be nice to render more quickly. My big problem right now is an error I get, after the 12 hrs of rendering, during the compilation; it gets past 50+% and then gives me the error. I've posted this as a separate issue on the forum. If you can help with it, that would be very much appreciated.

Again, thank you both; you've been very helpful!!
david_f_knight wrote on 6/14/2010, 7:56 AM
Markk655, we're mostly in agreement. As I wrote, though, it isn't easy to meaningfully compare rendering performance simply by comparing the specs of GPUs vs. CPUs. The main reason GPUs promise greater rendering speed than CPUs isn't their raw clock speed (the 1 GHz figure you mentioned), but their parallel circuitry. The NVIDIA GeForce GTX 480, for example, has 480 CUDA cores. And if that's not enough, you can link up to three of them in computers with appropriate motherboards, providing up to 1440 CUDA cores that could potentially render video simultaneously (I'm not sure whether they can all be used when linked that way, though). Compare that to any of Intel's quad-core processors: one would hope that 1440 processors could beat four. On the other hand, the GeForce 8400 GS has just eight CUDA processing cores. But compared to a single-core CPU, that might still offer a significant improvement in rendering speed. You can buy a GeForce 8400 GS for about $25, which is much cheaper than buying a whole new higher-end computer. For comparison purposes, NVIDIA says that the maximum power required by the GeForce 8400 GS is 71 watts (which, as you pointed out, is substantial), while a single GeForce GTX 480 consumes 250 watts (combine three in a system and you're talking 750 watts just for your GPUs!).

Most mass-produced PCs come with the smallest power supplies possible (maybe around 300 watts) for just the components installed in the computer at the factory. If you add a graphics card that draws 71 watts, you will almost certainly need to replace the power supply, as Markk655 pointed out.
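
A rough power-budget check before adding a card (these wattages are illustrative assumptions; check your actual power supply label and the card's spec sheet):

# Rough power-supply headroom check; all wattages are assumptions for illustration.
psu_watts = 300              # typical small OEM power supply
existing_load_watts = 250    # rough guess for CPU, motherboard, drives, fans
gpu_max_watts = 71           # the GeForce 8400 GS figure quoted above

headroom = psu_watts - existing_load_watts
if gpu_max_watts > headroom:
    print("Power supply upgrade likely needed: card may need %d W, only %d W of headroom"
          % (gpu_max_watts, headroom))
else:
    print("The new card should fit within the existing power supply")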
DT9b wrote on 6/14/2010, 8:53 AM
Great information. Thank you!