Comments

OldSmoke wrote on 6/4/2015, 11:13 AM
@wwaag

I am glad it works for you.

Here are the specs of a GTX650. You can see that the power consumption is very low: 75W, versus up to 300W for the last fully supported card, the GTX580.
Have you rendered the SCS Benchmark project with this card? How well does it do with it when rendering to MC AVC or Sony AVC?

Proud owner of Sony Vegas Pro 7, 8, 9, 10, 11, 12 & 13 and now Magix VP15&16.

System Spec.:
Motherboard: ASUS X299 Prime-A

Ram: G.Skill 4x8GB DDR4 2666 XMP

CPU: i7-9800x @ 4.6GHz (custom water cooling system)
GPU: 1x AMD Vega Pro Frontier Edition (water cooled)
Hard drives: System Samsung 970Pro NVME, AV-Projects 1TB (4x Intel P7600 512GB VROC), 4x 2.5" Hotswap bays, 1x 3.5" Hotswap Bay, 1x LG BluRay Burner

PSU: Corsair 1200W
Monitor: 2x Dell Ultrasharp U2713HM (2560x1440)

wwaag wrote on 6/4/2015, 12:25 PM
@OldSmoke

I am glad it works for you.

You hit the nail on the head. It does, and I was careful in the above post to always caveat statements with "in my system", which uses both the 650 and the Intel HD4000. I also stated that it does NOT improve Sony AVC or MC AVC render performance, except by virtue of its effects processing. At the moment, I've been using the frameserve-to-Handbrake method and usually use the Quick Sync option (unless it's something for YT), since it's a lot faster than CPU only. BTW, I recently tried the new TMPGEnc Mastering Works 6 and found its implementation of Quick Sync to be even faster than Handbrake's, although CPU only was slower, presumably at the same quality since it uses x264. I do wish I had better preview performance at times and will probably upgrade in the future, but as you stated in this thread,

http://www.sonycreativesoftware.com/Forums/ShowMessage.asp?Forum=4&MessageID=917093

"The only newer cards that work well are AMD/ATI R9 2xx series with the R9 290 or 290X being the top models. However, those are not supported by the MC AVC and Sony AVC render codecs and will be as fast as CPU only during rendering."

Unless I'm missing something (probably), the bottom line is that a newer high-end card will give better preview performance and improve rendering only to the extent that it affects "effects processing", but at a cost in power consumption and, ultimately, the question of an adequate PSU.

No, I haven't run the SCS Benchmark test. Do you mean the V11 test? Downloading at the moment. I'm pretty sure it will not fare well--it is a low-end card--but it does support 3 monitors, which was my original concern. Wasn't there an earlier thread with test results? If so, I'll post there.

wwaag

AKA the HappyOtter at https://tools4vegas.com/. System 1: Intel i7-8700k with HD 630 graphics plus an Nvidia RTX4070 graphics card. System 2: Intel i7-3770k with HD 4000 graphics plus an AMD RX550 graphics card. System 3: Laptop. Dell Inspiron Plus 16. Intel i7-11800H, Intel Graphics. Current cameras include Panasonic FZ2500, GoPro Hero11 and Hero8 Black plus a myriad of smartphone, pocket, video, and film cameras going back to the original Nikon S.

Tech Diver wrote on 6/4/2015, 2:16 PM
One should not judge a system's power draw based on Vegas rendering alone, as it uses only a fraction of the resources available on a given machine. A better render test might be an application such as After Effects, which uses as much CPU, GPU, and memory as it can. During an AE render, the Task Manager performance graphs show every resource "pegged" at max, and my computer can get pretty hot.

Peter
OldSmoke wrote on 6/4/2015, 3:10 PM
[I]One should not judge a system's power draw based on Vegas rendering alone, as it only uses a fraction of the resources that are available on a given machine.[/I]

Not quite true. It really depends on how well your system and the codecs used in the project are supported. On my system, rendering XAVC-Intra files with FXs applied and NeatVideo will use the CPU and GPU to their fullest, and yes, the system does get hot. XAVC-S is a totally different story: it doesn't seem to use much of the GPU but still puts quite a load on the CPU.


wwaag wrote on 6/4/2015, 3:24 PM
@BruceUSA
@OldSmoke

OK guys, here are the numbers. This is from the PressReleaseProject, which touted the value of GPU acceleration in V11 over V10. Hopefully, this was the render test you had in mind, not the 4K test. In the accompanying PDF, render times using the XDCAM EX HQ 1920x1080-60i, 35 Mbps render template were:

Nvidia GTX 570 - 83 sec
AMD HD6870 - 100 sec
Nvidia Quadro 5000 - 98 sec
AMD FirePro - 97 sec

Here are my results:

Nvidia 650 (GPU on) - 83 sec
Nvidia 650 (GPU off) - 255 sec
Intel HD4000 (GPU on) - 105 sec

The bottom line is that the low-end 650 increased rendering speed by a factor of roughly 3 over CPU only, the same as the GTX 570. Preview performance also seemed to be pretty much the same. Likewise, the GPU on the HD4000 increased rendering speed by a factor of almost two and a half.
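Those speedup factors can be double-checked from the times quoted in this post (a quick sketch; speedup is just the CPU-only time divided by the accelerated time):

```python
# Render times in seconds from the PressReleaseProject test above.
cpu_only_s = 255   # Nvidia 650 with GPU acceleration turned off
gtx650_s = 83      # Nvidia 650, GPU on
hd4000_s = 105     # Intel HD4000, GPU on

# Speedup = CPU-only time / accelerated time.
print(f"GTX 650: {cpu_only_s / gtx650_s:.2f}x faster than CPU only")  # ~3.07x
print(f"HD4000:  {cpu_only_s / hd4000_s:.2f}x faster than CPU only")  # ~2.43x
```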

My conclusion is that GPU acceleration DOES work in V13, and by a sizable factor, even with a low-end video card or the GPU that's part of most Intel processors. Hopefully, the higher-end cards would be even better. For me at least, the fastest renders use the 650 for effects processing inside Vegas and Intel Quick Sync inside Handbrake for encoding--again, the caveat: my system.

wwaag


BruceUSA wrote on 6/4/2015, 3:37 PM
The 83s number you referenced here is not quite correct. I don't remember the exact number, but it should be lower. With the HD6970 I got a 42s render to the same template you reference here. My R9 290X does it in 24s for this template as well.

Intel i7 12700k @ 5.2GHz all P cores, 5.3GHz @ 6 cores, Turbo Boost 3 cores @ 5.4GHz, 4.1GHz all E cores.

MSI Z690 MPG Edge DDR5 WiFi

TEAMGROUP T-Force Delta RGB 32GB DDR5-6200

Samsung 980 Pro x4 NVMe M.2 1TB PCIe Gen 4

ASRock RX 6900XT Phantom 16GB

PSU: EVGA SuperNOVA G2 1300W

Black Ice GTX 480mm radiator, top mount, push/pull

MCP35X dual pump w/ dual pump housing

Corsair RGB water block, RGB fans throughout

Phanteks Enthoo full tower

Windows 11 Pro

wwaag wrote on 6/4/2015, 3:50 PM
@BruceUSA
The 83s number you referenced here is not quite correct.

I copied the numbers directly from the accompanying PDF. I just re-checked, and they haven't changed. That's great that your renders are significantly faster--they should be, with the pretty awesome system specs you have. If you want to find out the effect of GPU, simply do renders with CPU only and compare. In your case, you would have an almost 10-to-1 increase in rendering speed compared with my CPU-only times. Since your CPU-only times should be better anyway, the ratio would be somewhat lower. In any case, the only point I've been trying to make is that GPU helps--even with low-end cards--and you have shown that the assist is even greater with high-end cards.

wwaag


BruceUSA wrote on 6/4/2015, 4:06 PM
WWAAG. I think the last time OldSmoke ran his GTX570 he got about the same times as the HD6970. CPU only, my rig ran MC MP4 in 105s @ 5.0GHz and 120s @ 4.6GHz.


Byron K wrote on 6/4/2015, 4:07 PM
Peter is correct that regular apps don't reveal a workstation's true maximum power consumption. The best way to get a more accurate maximum is to run a CPU stress test program like Prime95 AND a GPU stress test like MSI's Kombustor.

My Kill-A-Watt meter power consumption readings for my rig are as follows:
150W - Typing this response no other apps running
325W - GPU Stress test Kombustor
435W - GPU + CPU Stress test

My power supply is a Corsair 750W.

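As a rough illustration, those readings can be turned into a headroom estimate. Note that the Kill-A-Watt measures wall-side (AC) draw, so the DC load on the PSU is lower by the conversion efficiency; the 90% figure below is an assumption, not a measurement:

```python
# Wall-socket readings (W) from the Kill-A-Watt, as listed above.
idle_w = 150
gpu_stress_w = 325
cpu_gpu_stress_w = 435
psu_rating_w = 750        # Corsair 750W: rated DC output, not wall draw

efficiency = 0.90         # assumed AC-to-DC conversion efficiency
dc_load_w = cpu_gpu_stress_w * efficiency

print(f"Estimated DC load under full stress: {dc_load_w:.0f} W")
print(f"PSU utilization: {dc_load_w / psu_rating_w:.0%}")
print(f"Headroom: {psu_rating_w - dc_load_w:.0f} W")
```

Even under a worst-case synthetic stress test, this rig sits at roughly half the PSU's rated output.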
wwaag wrote on 6/4/2015, 4:27 PM
@BruceUSA
CPU only, my rig ran MC MP4 in 105s @ 5.0GHz and 120s @ 4.6GHz.
Sounds like you're using the MainConcept AVC template and not the XDCAM template. The nice thing about XDCAM is that there is no GPU assist in encoding, only in effects processing. Your numbers should be even lower with this template.

wwaag


BruceUSA wrote on 6/4/2015, 4:39 PM
WWAAG. No. What I posted earlier about render times was with the XDCAM template. The CPU render in MC MP4 is another, separate test that I ran. PS: XDCAM does indeed use GPU assist.


Steve Mann wrote on 6/5/2015, 5:04 PM
OP - If the PC was an off-the-shelf model, then the manufacturer probably specified a power supply that was just enough for the PC as shipped. Add a second HDD, an optical drive, a more powerful GPU, more memory, and other peripherals, and you could shorten the life of the PSU.

Do not confuse the input power with the power supply's rated output - they are not the same thing.

Power supply ratings are the maximum combined draw across the 3.3V, 5V, and 12V rails. Exceed any of the PSU's per-rail output ratings and you will stress the supply, even if your total draw as measured on the input side is less than the PSU's overall rating. The rule of thumb is that you should never run a PC with less than a 600W PSU, and if you are going to load it with drives and dual GPUs, go to 1,000W.
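Steve's per-rail point can be sketched as a simple check. All ratings and draws below are made-up illustration values (real ones come from the PSU label and component specs); the point is that a single overloaded rail stresses the supply even when the total stays under the overall rating:

```python
# Hypothetical per-rail ratings (amps x volts = watts) and hypothetical draws.
rail_rating_w = {"3.3V": 25 * 3.3, "5V": 25 * 5.0, "12V": 45 * 12.0}
rail_draw_w   = {"3.3V": 40.0,     "5V": 60.0,     "12V": 560.0}
psu_rating_w  = 750  # overall rating (W), also hypothetical

total_draw = sum(rail_draw_w.values())
print(f"Total draw: {total_draw:.0f} W (overall rating {psu_rating_w} W)")
for rail, draw in rail_draw_w.items():
    limit = rail_rating_w[rail]
    flag = "OK" if draw <= limit else "OVERLOADED"
    print(f"  {rail:>4} rail: {draw:.0f} W of {limit:.0f} W -> {flag}")
# Here the total (660 W) is under the 750 W overall rating, but the 12V
# rail (560 W against a 540 W limit) is overloaded -- Steve's warning.
```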

OldSmoke wrote on 6/5/2015, 5:25 PM
Steve Mann makes a very good point. It is very important to check how much power the PSU can supply on each rail, and only the better manufacturers state it in their specs.


GeeBax wrote on 6/5/2015, 6:06 PM
I also agree with Steve Mann. A good quality, high-power PSU does not cost a great deal more, but it gives you room to move. Having an 800W PSU does not mean the PC is going to use that amount; if the PC draws 400W, the power supply is only running at 50% of its capacity and should handle it quite coolly.

Also, many of us run other programs that require high-power GPUs, like Blackmagic's Resolve, which will not even install unless you have a sufficient number of CUDA cores available.