I'm thinking about adding an internal HD. Someone mentioned the POWER SUPPLY to me. Should I upgrade it also (current is, I think, 400 watts)? How should I determine what wattage I need? Will it affect my performance?
Here are the specs of a GTX 650. You can see that the power consumption is very low: 75W versus up to 300W for the last fully supported card, the GTX 580.
Have you rendered the SCS Benchmark project with this card? How well does it do when rendering to MC AVC or Sony AVC?
You hit the nail on the head. It does, and I was careful in the above post to always caveat statements with "in my system," which uses both the 650 and Intel HD 4000. I also stated that it does NOT improve Sony AVC or MC AVC render performance, except by virtue of its effects processing. At the moment, I've been using the frameserve-to-Handbrake method and usually use the Quick Sync option (unless it's something for YT), since it's a lot faster than CPU only. BTW, I recently tried the new TMPGEnc Mastering Works 6 and found that its implementation of Quick Sync is even faster than Handbrake's, although CPU only was slower--presumably the same quality, since it uses x264. I do wish I had better preview performance at times and will probably upgrade in the future, but as you stated in this thread:
"The only newer cards that work well are AMD/ATI R9 2xx series with the R9 290 or 290X being the top models. However, those are not supported by the MC AVC and Sony AVC render codecs and will be as fast as CPU only during rendering."
Unless I'm missing something (probably), the bottom line is that a newer high-end card will give better preview performance and will improve rendering only to the extent that it speeds up effects processing--but at a cost in power consumption, which ultimately raises the question of an adequate PS.
No, I haven't run the SCS Benchmark test. Do you mean the V11 test? Downloading it at the moment. I'm pretty sure it will not fare well--it's a low-end card, but it does support 3 monitors, which was my original concern. Wasn't there an earlier thread with test results? If so, I'll post there.
One should not judge a system's power draw based on Vegas rendering alone, as it only uses a fraction of the resources that are available on a given machine. A better render test might be from applications such as After Effects, which uses as much CPU, GPU and memory as it can. During an AE render, a view of the Task Manager performance graphs shows every resource being "pegged" at max, and as such, my computer can get pretty hot.
[I]One should not judge a system's power draw based on Vegas rendering alone, as it only uses a fraction of the resources that are available on a given machine.[/I]
Not quite true. It really depends on how well your system and the codecs used in the project are supported. On my system, rendering XAVC-Intra files with FXs applied and NeatVideo will use the CPU and GPU to their fullest, and yes, the system does get hot. XAVC-S is a totally different story: it doesn't seem to use the GPU much but still puts quite a load on the CPU.
OK guys, here are the numbers. This is from the PressReleaseProject which touted the value of GPU in V11 over V10. Hopefully, this was the render test you had in mind, not the 4K test. In the accompanying PDF, render times using the XDCAM EX HQ 1920x1080-60i, 35 Mbps render template were:
The bottom line is that the low-end 650 increased rendering speed by a factor of roughly 3 over CPU only, the same as the GTX 570. Preview performance also seemed to be pretty much the same. Likewise, the GPU on the HD 4000 increased rendering speed by a factor of almost two and a half.
My conclusion is that GPU acceleration DOES work in V13, by a sizable factor, even with a low-end video card or the GPU that's part of most Intel processors. Presumably, the higher-end cards would be even better. For me at least, the fastest renders use the 650 for effects processing inside Vegas and Intel Quick Sync inside Handbrake for encoding--again, the caveat: my system.
The 83s number you referenced here is not quite correct. I don't remember the exact number, but it should be lower. With the HD 6970 I got a 42s render with the same template you reference here. My R9 290X does it in 24s with this template as well.
@Bruce USA The number 83s you referenced here are not quite correct.
I copied the numbers directly from the accompanying PDF. Just re-checked, and they hadn't changed. That's great that your renders are significantly faster--they should be, given the pretty awesome system specs you have. If you want to find out the effect of GPU, simply do renders with CPU only and compare. In your case, you would see an almost 10-to-1 increase in rendering speed compared with my CPU-only times. Since your CPU-only times should be better anyway, the ratio would be somewhat lower. In any case, the only point I've been trying to make is that GPU helps--even with low-end cards--and you have shown that the assist is even greater with high-end cards.
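For anyone who wants to measure the assist on their own machine, the comparison is just a ratio of two render times for the same project and template. A minimal sketch--the times below are made-up placeholders, not figures from the PDF:

```python
def speedup(cpu_only_seconds, gpu_assist_seconds):
    """How many times faster the GPU-assisted render finished."""
    return cpu_only_seconds / gpu_assist_seconds

# Hypothetical example: substitute your own two renders of the
# same project, one with GPU acceleration off, one with it on.
print(speedup(300.0, 100.0))  # 3.0 -- a 3x assist
```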
Peter is correct that regular apps don't provide accurate workstation max power consumption. The best way to get more accurate maximum consumption is to run a CPU stress test program like Prime95 AND a GPU stress test like MSI's Kombustor.
My Kill-A-Watt meter power consumption readings for my rig are as follows:
150W - Typing this response no other apps running
325W - GPU Stress test Kombustor
435W - GPU + CPU Stress test
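One thing to keep in mind when relating wall readings like these to a PSU rating: the Kill-A-Watt measures input (AC) power, while the PSU is rated on its DC output, which is lower by the supply's efficiency. A rough sketch, assuming a hypothetical 85%-efficient supply and a 600W unit (check your own PSU's figures):

```python
def dc_load_watts(wall_watts, efficiency=0.85):
    """Approximate DC power delivered to components from an AC wall reading."""
    return wall_watts * efficiency

def headroom_watts(wall_watts, psu_rating_watts, efficiency=0.85):
    """How much of the PSU's rated output is left unused at this load."""
    return psu_rating_watts - dc_load_watts(wall_watts, efficiency)

# Using the 435W stress-test reading above against an assumed 600W unit:
print(dc_load_watts(435))        # 369.75 -- roughly 370W delivered
print(headroom_watts(435, 600))  # 230.25 -- roughly 230W to spare
```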
Bruce USA. CPU only, my rig ran MC MP4 in 105s @ 5.0GHz and 120s @ 4.6GHz.
Sounds like you're using the MainConcept AVC template and not the XDCAM template. The nice thing about XDCAM is that there is no GPU assist in encoding, only in effects processing. Your numbers should be even lower with this template.
WWAAG. No. What I posted earlier about render times was with the XDCAM template. The CPU-only render in MC MP4 was a separate test that I ran. PS: XDCAM does indeed use GPU assist.
OP - If the PC was an off-the-shelf model, then the manufacturer probably specified a power supply that was just enough for the PC as shipped. Add a second HDD, optical drive, more powerful GPU, even more memory and other peripherals and you could shorten the life of the PSU.
Do not confuse the input power with the power supply specs - they are not related.
Power supply ratings are the maximum combined draw for the 3.3V, 5V and 12V rails. Exceed any of the PSU output ratings and you will stress the supply, even if your total draw as measured on the input side is less than the PSU's overall rating. The rule of thumb is that you should never run a PC with less than a 600W PSU, and if you are going to load it with drives and dual GPUs, then go to 1,000W.
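To illustrate the per-rail point: a check like the one below has to pass for every rail individually, not just for the combined total. The rail limits here are hypothetical; the real ones are printed on the PSU's label.

```python
# Hypothetical per-rail current limits in amps -- read yours off the PSU label.
RAIL_LIMITS_AMPS = {"3.3V": 24.0, "5V": 20.0, "12V": 46.0}

def psu_ok(draw_amps):
    """True only if every rail's draw is within its individual rating."""
    return all(draw_amps.get(rail, 0.0) <= limit
               for rail, limit in RAIL_LIMITS_AMPS.items())

print(psu_ok({"12V": 40.0, "5V": 10.0, "3.3V": 5.0}))  # True: all rails in spec
print(psu_ok({"12V": 50.0, "5V": 2.0}))                # False: 12V rail overloaded
```

A modest total draw can still overload a single rail -- which is why the per-rail spec matters more than the headline wattage.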
Steve Mann makes a very good point. It is very important to check how much power the PSU can supply on each rail and only the better manufacturers will have it stated in their specs.
I also agree with Steve Mann. A good-quality, high-power PS does not cost a great deal more, but it gives you room to move. Having an 800W PS does not mean it is going to use that amount; if the PC draws 400W, the power supply is only running at 50% of its capacity and should run quite cool.
Also, many of us run other programs that require high-power GPUs, like Blackmagic's Resolve, which will not even install unless you have a sufficient number of CUDA cores available.