In praise of GPU rendering

Cottonwood wrote on 5/18/2014, 7:18 PM
So, for a while now, I've been using a first-generation i3 system as my editing rig (your condolences are accepted). I stumbled on an Nvidia 610 card that was being thrown out at work, so I popped it into my system. Some improvement, but my system would crash shortly after the start of rendering. So it was back to CPU-only, where a two-hour movie without FX might take all night to render.

Currently, I'm having CyberPowerPC assemble me a new rig with an AMD FX-8350 processor and an AMD Radeon 280-something-or-other. While I'm waiting, I found a tip for GPU rendering that suggested changing the dynamic RAM preview setting in Preferences to zero. I did so.

Bam! It now takes 3.5 hours to render the film that used to take overnight. CPU usage rarely maxes out. All of this with a ~$60 card.

Now, I realize that when I add sharpening, color correction, etc., the render time will lengthen, but now I'm thinking I should have spent the $1,000 on a boss GPU and foregone the pumped-up new CPU setup and middle-of-the-road GPU.

C'est la vie...

Comments

Spectralis wrote on 5/19/2014, 4:44 AM
While I agree that in some cases GPU rendering speeds up the process, that's not always been my experience, especially with nested files. It's also worth considering that some users have experienced problems with GPU rendering that don't occur when it's switched off. So buying a high-end GPU could pay off, or it might not. I think the main factor here is Sony's integration of GPU rendering in Vegas. In my experience of GPU rendering over the years since it was introduced in Vegas, Sony needs to pay far more attention to ensuring it works seamlessly in every situation. I don't think we're there yet.
musicvid10 wrote on 5/19/2014, 8:27 AM
Any hardware-accelerated rendering scheme has its biggest impact on modest-to-underpowered systems. The effect can range from some to almost none on high-powered systems. For that reason, and for quality considerations, CPU trumps GPU in most setups, and that is where I would invest my $.
larry-peter wrote on 5/19/2014, 9:14 AM
+1
And on my two systems (both with Nvidia GPUs) I have found subtle differences in GPU-accelerated renders that I don't care for. I'll gladly trade the small additional time involved for reliable CPU renders.
musicvid10 wrote on 5/19/2014, 9:22 AM
Yes, hardware rendering works at the expense of quality; there are plenty of deniers out there.
Spectralis wrote on 5/19/2014, 10:26 AM
I have an Nvidia GTX 780, and I'm not seeing any time difference rendering my latest project using nested files with GPU on or off. GPU makes no difference in preview either.

In contrast, when I use Octane Render to render 3D-modelled scenes there is definitely an observable difference between different GPUs. A GTX 780 always renders twice as fast as a GTX 760, which is entirely predictable based on their specifications.

Which is why I think Sony has some way to go before GPU rendering in Vegas will have the predictability and consistency it has in 3D modelling.
Marc S wrote on 5/19/2014, 11:08 AM
I have a 570 card and it does help with render speed. Anything important I render without it, as I also see a quality difference: there is a subtle strobing effect that goes away when I turn off GPU rendering.
OldSmoke wrote on 5/19/2014, 11:39 AM
Spectralis

Have you tried my tweak with your GTX 780? http://www.sonycreativesoftware.com/forums/ShowMessage.asp?Forum=4&MessageID=885329 It works best with driver 334.89!

Proud owner of Sony Vegas Pro 7, 8, 9, 10, 11, 12 & 13 and now Magix VP15&16.

System Spec.:
Motherboard: ASUS X299 Prime-A
RAM: G.Skill 4x8GB DDR4 2666 XMP
CPU: i7-9800X @ 4.6GHz (custom water-cooling loop)
GPU: 1x AMD Vega Pro Frontier Edition (water-cooled)
Hard drives: System: Samsung 970 Pro NVMe; AV/Projects: 1TB (4x Intel P7600 512GB VROC); 4x 2.5" hot-swap bays, 1x 3.5" hot-swap bay, 1x LG Blu-ray burner
PSU: Corsair 1200W
Monitors: 2x Dell UltraSharp U2713HM (2560x1440)

NormanPCN wrote on 5/19/2014, 11:57 AM
"Which is why I think Sony has some way to go before GPU rendering in Vegas will have the predictability and consistency it has in 3D modelling."

This is to be expected. The consumer Nvidia Kepler architecture is optimized for 3D rendering: games, or anything else that renders 3D scenes.

Vegas uses OpenCL for general-purpose GPU compute, and in that role the current Nvidia architecture does not do as well as it does with 3D rendering, nor as well as certain AMD architectures.
Spectralis wrote on 5/19/2014, 6:20 PM
Thanks OldSmoke, I'll give that a try and report back.

Concerning Vegas rendering, it would be good if Sony could optimise VP for CUDA as well as ATI. It would probably make a difference to rendering speeds.

Octane Render is coming to After Effects, but it would be great if they developed it for HitFilm as well, especially as HF integrates with Vegas.
OldSmoke wrote on 5/19/2014, 6:32 PM
Well, it is sort of "optimized" for CUDA, but only for certain codecs, and older cards like the 500 series just work better; the same goes for OpenCL.

Pete Siamidis wrote on 5/23/2014, 2:10 PM
Just stumbled across this thread... so after all these years, are Nvidia 5xx cards still the best solution for faster encoding? I'm using Vegas Pro 13 and have started filming in 4K with a Sony AX100, hence why faster encoding performance has just shot to the top of my priority list. I normally render my shoots overnight and they're done in the morning, but now that I shoot in 4K that's no longer the case, and I find my render PC still grinding away when I wake up :( I checked Sony's Vegas Pro 13 website and they say:

"...and crank encoding speeds up by as much as six times over previous versions to popular formats like AVC."

...implying it's possible to speed up encode times, but they don't go on to say which video card to buy to get that magical 6x encode-speed increase, nor which codec to use, though I presume they must mean their own Sony AVC codec.

Anyway... so with Vegas Pro 13, is an Nvidia 570/580 still the card to get for the fastest possible encodes, or are the new AMD cards faster when set to use OpenCL? I notice Sony's Vegas Pro website seems to mention OpenCL a lot, hence why I wonder whether AMD cards are now better for encoding.
OldSmoke wrote on 5/23/2014, 3:49 PM
It really depends on your codec for the final render. MC AVC with a GTX 580 is the fastest you can get; Sony AVC is a bit slower. AMD and OpenCL seem to have a slight advantage when it comes to timeline preview, and the HD 6970 is a great card too. I don't have my FDR-AX100 yet, but if you have 2-3 short clips somewhere to download, I'll be happy to test them. I currently have 2x GTX 580 in my system. It also depends on the rest of your system, but you don't have it listed.

Pete Siamidis wrote on 5/23/2014, 7:17 PM
I use H.264, so either Sony AVC or MC AVC worked fine for me in the past. I switched to Sony AVC about a year ago because it was faster, but unfortunately, on my machines I can't render anything in 4K with it. No matter what setting I change (even disabling the GPU), be it Vegas 12 or 13, if I try to render anything at 3840x2160 it fails immediately with some kind of error, even if I take a default template and change nothing other than the output resolution. So I have to go back to MC AVC; Sony AVC is dead to me now, since it doesn't support 4K rendering on my various machines. Seems kind of odd that Sony's own codec can't render 4K, but who knows.

I'll try getting you a sample project with some 4K videos, and then we can compare render speeds. MC AVC seems to work on my machine, but it's savagely slow; in my render test it managed to render a 4K video but then failed after Windows said my 16GB machine ran out of memory.
OldSmoke wrote on 5/23/2014, 7:47 PM
As I said, it really depends on your other hardware too, not just the GPU. Maybe you can update your system specs?

Pete Siamidis wrote on 5/23/2014, 11:19 PM
Oh yeah, forgot about that: it's a Haswell i7-4770K CPU, 16GB RAM, Windows 8.1, and two Nvidia 670 GPUs. I used to use a Sandy Bridge i7-2600K CPU along with an Nvidia 580, and that worked well, no problems, but I eventually upgraded to twin 670s because my render box is also my game-playing box. It worked perfectly in Vegas Pro 12 with the Haswell CPU and 670s, no issues with either MC AVC or Sony AVC, except that renders got slower because the encoder didn't make much use of the 670, whereas it made good use of the 580. But I didn't mind, since I just did my renders overnight and they were done in the morning anyway. It's only an issue now because my guesstimate is that my previous overnight renders will take around 30 to 35 hours to complete because of the transition to 4K. The only options for 4K rendering in Vegas Pro 13 seem to be MC AVC, Sony AVC, and XAVC. XAVC makes monstrously large files and isn't configurable, so that's out, and Sony AVC fails to render anything at 4K (it works perfectly at all other resolutions), so that's out; hence that leaves MC AVC. I'm starting to think I may need to build a dedicated Vegas Pro render box with a 580 in it.
Pete Siamidis wrote on 5/24/2014, 2:48 AM
In case you're curious about render times, I just did some tests where I rendered 10 seconds of 4K footage out to the various formats I need. The footage is stock with a color corrector and watermark added. Rendering those 10 seconds took:

2:26 rendering to 3840 x 2160 MC AVC
0:28 rendering to 1280 x 720 MC AVC
0:52 rendering to 320 x 180 MC AVC
0:25 rendering to 1280 x 720 WMV

...which puts my estimate for rendering one of my typical video shoots at around 32.5 hours. That's with a Haswell 4770K CPU and an Nvidia 670. I don't film every day, so I guess it's still doable; I just have to put the render box on a battery backup and let it grind away for two days. Or I guess I can build another box with a 580; not sure yet what to do.
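That back-of-the-envelope estimate can be sketched in a few lines. The per-format test times are the ones listed above; the shoot length is a hypothetical value chosen purely to illustrate the ~32.5-hour figure:

```python
# Extrapolate full-shoot render time from the 10-second test renders.
test_clip_sec = 10

# Test render times in seconds, per output format (from the list above).
test_times = {
    "3840x2160 MC AVC": 2 * 60 + 26,
    "1280x720 MC AVC": 28,
    "320x180 MC AVC": 52,
    "1280x720 WMV": 25,
}

shoot_minutes = 78  # hypothetical shoot length, an assumption
speed_factor = sum(test_times.values()) / test_clip_sec  # render sec per footage sec
total_hours = speed_factor * shoot_minutes * 60 / 3600
print(f"{total_hours:.1f} hours")  # ~32.6 hours for all four formats
```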
OldSmoke wrote on 5/24/2014, 5:32 AM
Have you tried my tweak? It's somewhere in this forum, but basically it's using driver 334.89 and changing the OpenCL Memory Filter Size from the default 384 to 1024. Also, with two GPUs, make sure SLI isn't fully enabled; set it to Activate All Displays. I have Win 8.1 on a second partition just to play around with it, and with my tweak I got the same render times as with driver 296.10 under Win7.

JohnnyRoy wrote on 5/24/2014, 6:27 AM
> "...which puts my estimate to render one of my typical video shoots at around ~32.5 hours. That's with a Haswell 4770k cpu and Nvidia 670."

You should check the minimum requirements for 4K on the Sony site. They recommend an 8-core CPU, 16GB of memory, and RAID 0 as the minimum! That means you probably need a 12-core with 24GB of memory. No GPU is going to help you when you only have a 4-core; that's only half the power Sony recommends as the minimum for 4K.

I'll make you the same offer that OldSmoke did: if you can upload some 4K footage somewhere (maybe Dropbox?), I'd be happy to benchmark the renders on my 8-core 2.8GHz Xeons with 16GB of memory and an ATI Radeon HD 5870 GPU.

~jr
Pete Siamidis wrote on 5/24/2014, 11:55 AM
"Have you tried my tweak? It's somewhere in this forum but basically it is using driver 334.89 and changing the OpenCL Memory Filter Size from the default 384 to 1024. Also with two GPU's, make sure SLI isn't fully enabled, set it to Activate All Displays."

Yeah I tried your tweak but it didn't change the render times. I tried with SLI both enabled and disabled, no difference.


"You should check the minimum requirements for 4K on the Sony site. They recommend an 8-core, 16GB memory, and RAID 0 as minimum! That means you probably need a 12-core with 24GB memory. No GPU is going to help you when you only have a 4-core; that's only half the power Sony recommends as minimum for 4K."

Well, I know I'm not I/O bound; I only use SSDs, but even an old mechanical drive wouldn't be holding me back in this case. For example, timeline preview plays at full speed, so Vegas can read the data off disk with ease. My memory use peaks at 44%, so I know I'm not memory bound either. I'm definitely compute bound, as my CPU meter maxes out. For CPU-only rendering I would definitely need more, but I have used GPU rendering for years with great results; it was 4x faster in my case when I used a 560 back in the day. I had only stopped because I decided to make my gaming PC and my Vegas render box one and the same, hence the twin 670s in there now, but I'm starting to realize that will likely have to come to an end now that I've switched to 4K. I have almost enough spare parts to build a pure render box anyway; I just need the CPU and GPU.

Anyways I did make a test project, it can be downloaded here:

https://onedrive.live.com/redir?resid=8FF40136629EDA72!5716&authkey=!ACtsPRxNNk8kAs8&ithint=file%2c.zip

It's very basic: a 23-second clip with a color corrector (only 10% saturation added) and a legacy watermark. I just recorded that clip quickly off my AX100 at 4K because I need to head out to a shoot pronto. I also included some JPGs showing my project properties and the three different render settings, with 180 being for mobile networks, 720 for typical HD, and 2160 for best quality. Rendering that project on my machine with MC AVC takes:

180 render: 2:02
720 render: 0:56
2160 render: 4:10

I'll be back from my shoot late tonight, but if you guys can grab that project and render it on your 580 or 8-core machines, that would be great; then I can decide whether I'll live with two-day renders or just build a dedicated Vegas render box.
OldSmoke wrote on 5/24/2014, 12:46 PM
One thing I must say: 32-bit floating point (full range) is absolute overkill, because your camera doesn't give you that much information. If you really want to work with those project settings, there is no way around a dual 8-core or higher setup.

Pete Siamidis wrote on 5/24/2014, 1:05 PM
I had tried 8-bit as well, and for encoding it was a little quicker, but not substantially so; not enough to help me out much anyway, as a shoot's encode would still not finish overnight. The 670 can play the timeline with 32-bit Best/Full 4K video at full speed, so I'm good there.
OldSmoke wrote on 5/24/2014, 1:11 PM
It really doesn't help to go to 32-bit, especially when you set your render quality down to Good. I have done renders at 8-bit, and the times are:

320 = 00:10sec
720 = 00:11sec
4K = 00:33sec

This is with render quality set to Best. There are a couple of other things I have changed, and I will post screenshots shortly. One more thing you should try for better quality: generated media such as text always looks better if you set the media properties to the final render size. Why, I don't know, but it does.

Pete Siamidis wrote on 5/24/2014, 1:34 PM
Well, the talent is late to the shoot (shocker), so I figured I'd check back here again, and wow, your render times are so much faster! The reason I think 8-bit vs. 32-bit isn't a big difference with GPU assist, at least from my recollection from when I used to be a video game programmer, is that on the GPU side the register file is typically made up of 128-bit registers split into four 32-bit numbers. You load up the registers as needed and do your math operations in parallel. Since each value is 32-bit already, there isn't a huge saving in doing the operations on 8-bit values.
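The register-lane point above can be shown with a toy instruction-count model (purely illustrative; the register and lane widths are the ones mentioned in the post, and real GPU ISAs vary):

```python
# Toy model: a 128-bit SIMD register holds four 32-bit lanes, and an
# 8-bit value still occupies a full 32-bit lane once loaded, so the
# instruction count is the same regardless of source bit depth.
REGISTER_BITS = 128
LANE_BITS = 32
LANES = REGISTER_BITS // LANE_BITS  # 4 values per register operation

def register_ops(num_values):
    """Register operations needed to process num_values, one lane each."""
    return -(-num_values // LANES)  # ceiling division

# One 3840x2160 frame, 4 channels per pixel: identical op count
# whether the source samples were 8-bit or 32-bit.
frame_values = 3840 * 2160 * 4
print(register_ops(frame_values))
```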

I'm curious to see what settings tweaks you made as well. I know, for example, that my 180 render must have some wrong setting, because it should render far quicker than it does.
OldSmoke wrote on 5/24/2014, 2:12 PM
So here is what I did with your FDR-AX100 project. First straight from VP13:
Project Properties:

320p:

720p:

Results:
http://dl.dropboxusercontent.com/u/39278380/4K/vp13-320p-2pass-CUDA.mp4
http://dl.dropboxusercontent.com/u/39278380/4K/vp13-720p-2pass-CUDA.mp4
As you noticed, there is no MC AVC 4K. Whatever I did, it just looked bad, and it doesn't even play in WMP; it does in VLC.

But here is what I propose: render the project out as XAVC-S and run it through HandBrake. Not only does HB do a much better job of resizing 4K down to 320 and 720, it also converts the 4K XAVC-S to 4K MP4 at around 10Mbps.
Here are the results (the RF setting in HB is noted in the file name):
320p:
http://dl.dropboxusercontent.com/u/39278380/4K/xavc-hb-320p-rf19.mp4
720p:
http://dl.dropboxusercontent.com/u/39278380/4K/xavc-hb-720p-rf21.mp4
4K:
http://dl.dropboxusercontent.com/u/39278380/4K/xavc-hb-4k-rf35.mp4
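The HandBrake step can be batched with a small script; a minimal sketch, assuming HandBrakeCLI is installed, with hypothetical file names (the RF values mirror the ones in the links above):

```python
# Sketch of batching the HandBrake downscales described above.
# File names are hypothetical; RF values match the ones in the links.

def handbrake_cmd(src, dst, width, height, rf):
    """Build a HandBrakeCLI command line for one x264 resize job."""
    return ["HandBrakeCLI", "-i", src, "-o", dst,
            "-e", "x264", "-q", str(rf),
            "-w", str(width), "-l", str(height)]

jobs = [
    ("master-xavcs-4k.mp4", "out-320p.mp4", 320, 180, 19),
    ("master-xavcs-4k.mp4", "out-720p.mp4", 1280, 720, 21),
    ("master-xavcs-4k.mp4", "out-4k.mp4", 3840, 2160, 35),
]
for job in jobs:
    cmd = handbrake_cmd(*job)
    print(" ".join(cmd))
    # To actually encode: subprocess.run(cmd, check=True)
```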

BTW: I rendered it out to XAVC-S with the same render settings; no need to go to 32-bit. If I switch to 32-bit, my GPU load drops significantly and the render takes as long as yours.

Edit:
here is the link to the 4K upload on YT.
