Will Vegas Pro use new Ryzen Threadripper effectively?

wilri001 wrote on 8/16/2017, 10:15 AM

I'm considering upgrading from an AMD 8350 (4 cores) to the new Ryzen Threadripper (16 cores), but it seems a risk that it won't be worth the money, because it isn't clear that Vegas Pro will take advantage of all the threads.

Part of the confusion is that the "max threads" option makes no difference at all with the 8350: setting it to 1 or 8 utilizes the processor the same (about 50%). Changing to 8 "slices" bumps utilization up to about 70%.

I could use a boost, since most of my projects are 4- to 6-camera multicam, and the preview drops frames unless I turn the preview quality down so far that it's hard to judge each camera.

I'm encouraged that Threadripper would be used effectively for preview, because the 8350 is already running at 98% there.

Any benchmarks with threadripper will be much appreciated.

Any comparisons of multicam preview with the GPU on/off would be appreciated, too. I have an Nvidia 750 Ti which isn't being used, but I'm not sure it would be worth the money to buy an AMD R9 Fury X. But perhaps with the Threadripper, I won't need any GPU?

Comments

Kinvermark wrote on 8/16/2017, 10:21 AM

May have to wait a little for reliable tests to be done. Will be interesting to see how Vegas 15 performs.

It seems to me the big opportunity with Threadripper will be the large number of PCIe lanes it has. Thus, for instance, one could load up three 16-lane AMD Vega GPUs and "rip" away. IF Vegas can utilize all these cards. I think there is some evidence that multiple cards can be used by Vegas; OldSmoke did some tests, IIRC.

 

astar wrote on 8/16/2017, 11:36 AM

"But perhaps with the Threadripper, I won't need any GPU " Threadripper will need a good or multiple GPUs.

System utilization above 50% is considered CPU bound in my book. You are working on a multi-threaded OS with other applications running in the background (whether you think they are there or not). Vegas uses 70+ threads to operate. How many can your current system handle at any given moment? The AVC encoder alone is something like 8 threads, a render another 16, and the rest are the timeline, preview display, VU meters, etc.
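
If you want to see how many threads Vegas is actually spinning up on your own box, here is a rough sketch using Python's psutil package. The "vegas" process-name prefix is just a guess on my part; check Task Manager for the real executable name.

```python
# Rough sketch: count the threads of any running process whose name starts with "vegas".
# The "vegas" prefix is an assumption; adjust it to your actual Vegas executable name.
import psutil

for p in psutil.process_iter(["name", "num_threads"]):
    name = p.info["name"] or ""
    if name.lower().startswith("vegas"):
        print(f"{name}: {p.info['num_threads']} threads")
```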

If your system CPU is not being maxed, it means there is wait time somewhere in your workflow (from source media to render engine). That wait time keeps the CPU from crunching numbers continuously. There are also software caps inside Windows that limit an application's CPU utilization. This keeps your background tasks operational and your mouse/keyboard responsive. Without these limiters, Vegas would halt all other functions on the system while you are rendering. So 70% utilization is actually good for the system as a whole.

Full load system utilization does not have to be at 100%. This would be like driving your car at redline all the time just to go to the mall.

Multiple GPUs in the system running at a full x16 will be beneficial from two standpoints. If Vegas 15+ implements later versions of OpenCL that allow teaming of GPUs, or the ability to compute with CPU+GPU+GPU, this will improve speed. On the other hand, if we remain with only the CPU+GPU compute unit, then one GPU could be dedicated to effect/timeline compute and the other dedicated to the Windows display. Currently, with only one GPU, your GPU is being asked to multitask all of these operations. That multitasking puts roughly 4x the traffic on the PCIe bus, which is why it is so important to have your GPU running at the full x16 and the latest PCIe version level.
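
As an aside, if you want to see which OpenCL compute devices your drivers actually expose (the CPU plus one or more GPUs), a quick sketch with the pyopencl package will list them. This only shows what is available to OpenCL programs in general; it says nothing about which devices Vegas will actually pick.

```python
# List every OpenCL platform/device the installed drivers expose (CPU and GPU compute units).
# This only shows what is available; it does not tell you what Vegas will use.
import pyopencl as cl

for platform in cl.get_platforms():
    for dev in platform.get_devices():
        kind = cl.device_type.to_string(dev.type)
        print(f"{platform.name}: {dev.name} ({kind}), "
              f"{dev.max_compute_units} compute units, {dev.global_mem_size // 2**20} MB")
```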

Kinvermark wrote on 8/16/2017, 11:49 AM

Excellent informative post Astar!

So, if I read between the lines, you are quite optimistic that Threadripper performance when loaded with GPUs will be very, very good.

 

wilri001 wrote on 8/16/2017, 12:37 PM

"But perhaps with the Threadripper, I won't need any GPU " Threadripper will need a good or multiple GPUs.

System utilization above 50% is considered CPU bound in my book. You are working on a multi threaded OS with other applications running in the background (whether you think they are there or not.) Vegas uses 70+ threads to operate. How many is your current system able to handle at one give moment in time. The AVC encoder is like 8 threads alone, render another 16, and the rest are timeline, preview display, VU meters running, ect.

If your system CPU is not being maxed, it means that there is wait time in your workflow (source media to render engine.) That wait time is not maintaining the CPU crunching numbers. There are software caps inside windows as well that limit the applications CPU utilization. This keeps your background tasks operational, and your mouse/keyboard responsive. Without these limiters, Vegas would halt all other functions on the system while you are rendering. So 70% utilization is actually good full system.

Full load system utilization does not have to be at 100%. This would be like driving your car at redline all the time just to go to the mall.

Multiple GPUs in the system running at full 16X will be beneficial from 2 stand points. If Vegas 15+ implements later versions of OpenCL that allows teaming of GPU or the ability to compute with the CPU+GPU+GPU this will improve speed. On the other hand, if we remain with only the CPU+GPU compute unit, then another GPU could be dedicated to effect/timeline compute, and the other one dedicated to Windows display. Currently with only one GPU, your GPU is being asked to multitask these operations. This multi tasking makes 4x the bandwidth on the PCIe bus, which is why it is so important to have your GPU running at the max 16x and latest PCIe version level.

My main concern is multicam editing. FX are ignored and there's no rendering going on then, so it's just decoding the source files and displaying them. I'm seeing a lot of processor load doing that.

I understand the OpenCL GPU is used for GPU FX on playback of the timeline, but is it used for decoding source files? I have an Nvidia 750 Ti, which Vegas doesn't use for OpenCL, so I can't experience that.

fr0sty wrote on 8/16/2017, 3:34 PM

I'm using an 1800X right now. Vegas doesn't come close to maxing it out, not while encoding and not while editing either. The performance gains are decent; encoding is at least 2x faster than with the FX 8350 I had previously. It's worth noting I use no GPU acceleration at all, since Vegas isn't modern enough to recognize my GTX 970.

All in all, yes, it does provide an advantage, but not as much as it could.

That said, Vegas Pro 15 launches in a few days, so come back to this thread after that happens and I will post some benchmarks from the new version.

Systems:

Desktop

AMD Ryzen 7 1800x 8 core 16 thread at stock speed

64GB 3000mhz DDR4

Geforce RTX 3090

Windows 10

Laptop:

ASUS Zenbook Pro Duo 32GB (9980HK CPU, RTX 2060 GPU, dual 4K touch screens, main one OLED HDR)

wilri001 wrote on 8/16/2017, 3:47 PM

How many multicam tracks are you editing? I need to handle 4 and sometimes 6. 4 works okay, but 5 and 6 have a lot of lag even at Preview (Quarter).

One camera is a GoPro at 1080 HD, but it has a 50 Mb/s bitrate, one is at 60 fps, and one is 4K at 72 Mb/s. Building a proxy for the 4K only helps a little.

But I think your GTX 970 is faster than my 750 Ti, so perhaps that's a factor.

 

fr0sty wrote on 8/16/2017, 8:45 PM

Vegas does not support my 970 at all, so it isn't contributing anything to editing or encoding.

Multicam is usually 3 cameras; I haven't done more than that since I upgraded. I have no issues editing on a 4K timeline with a mixture of 4K and 1080 content. It keeps a pretty solid frame rate with quality set to "Preview".

wilri001 wrote on 8/16/2017, 9:24 PM

Thanks, Fr0sty. I may just save some money and go with the 1800x.

astar wrote on 8/18/2017, 11:29 AM

Certain codecs will use the GPU for decoding, and in 32-bit FP modes even more.

"Vegas isn't modern and cannot recognize my GTX970." This is actually more of a VHS vs Beta issue. Vegas is programed with OpenCL which is an open standard, while nvidia chose to gimp their support of OpenCL in favor of their own proprietary CUDA technology. NVidia has failed at this, and is now starting to have better support in their latest cards for OpenCL. So in a way your NVcard is the problem and not Vegas.

If you are a pro looking to use Vegas as a pro tool, then you should build your system specced for Vegas, not around gaming performance or a marketing deal.

fr0sty wrote on 8/18/2017, 11:34 AM

Right, which is why all the other NLEs support it... Blame the hardware, not the software. I get the reasons why Vegas doesn't support my GPU; I've been through those threads and read the expert opinions on the matter. That said, literally all of the other media-creation software on my system supports my GPU, including the other NLEs I have. So I'm not excusing the omission just because I need a GPU that doesn't suck for the other tasks I do with my media server/editing system (such as 3D rendering, video projection mapping, live video mixing with effects, etc.). AMD sucks for what I do. I've had them ruin too many gigs with driver crashes that shut me down. Not only do I get more video outputs on a cheaper card with Nvidia, but I also have far more stability when stressing the card with a complex video-mapping setup. Since I made the switch, not a single crash during a gig. Rock-solid stability. For the record, I'd been through multiple AMD cards prior; this wasn't just an isolated issue with one card. Their drivers always give me headaches.

 

All of my other software can effectively utilize my GPU, and I can see performance gains as a result. I even have plugins within Vegas (Neat Video) that can use my GPU and get great gains from it. Vegas itself should be able to as well.

astar wrote on 8/18/2017, 12:12 PM

Yes, you are right. Nvidia just needs to support OpenCL better in the case of Vegas 11-14, and they do with their latest high-end cards. Hopefully Vegas 15 will change this support.

Based on other posts, you seem to have a habit of running AMD chipsets with Nvidia cards. Do you really think that was a tested configuration? Why not run the best Intel chipsets/boards with Nvidia or AMD GPUs if you are looking for stability? Just like the Spanish Inquisition, no one expected stability from AMD motherboards and CPUs; they were just cheap. Ryzen and Threadripper are a new story, so AMD on AMD might make the most sense in terms of stability, since AMD probably does not test older Nvidia cards on their latest boards.

bitman wrote on 8/19/2017, 3:33 AM

One can also ask whether Vegas Pro will use Intel's answer to AMD Ryzen/Threadripper effectively, the new Intel Core X-series: e.g., there is an i9 variant with 18 cores and 36 threads...

That one will surely be expensive, but there are cheaper Intel X-series variants with fewer cores (but more than 4) which, price-wise, come close to AMD...

APPS: VIDEO: VP 365 suite (VP 22 build 194) VP 21 build 315, VP 365 20, VP 19 post (latest build -651), (uninstalled VP 12,13,14,15,16 Suite,17, VP18 post), Vegasaur, a lot of NEWBLUE plugins, Mercalli 6.0, Respeedr, Vasco Da Gamma 17 HDpro XXL, Boris Continuum 2025, Davinci Resolve Studio 18, SOUND: RX 10 advanced Audio Editor, Sound Forge Pro 18, Spectral Layers Pro 10, Audacity, FOTO: Zoner studio X, DXO photolab (8), Luminar, Topaz...

  • OS: Windows 11 Pro 64, version 24H2 (since October 2024)
  • CPU: i9-13900K (upgraded my former CPU i9-12900K),
  • Air Cooler: Noctua NH-D15 G2 HBC (September 2024 upgrade from Noctua NH-D15s)
  • RAM: DDR5 Corsair 64GB (5600-40 Vengeance)
  • Graphics card: ASUS GeForce RTX 3090 TUF OC GAMING (24GB) 
  • Monitor: LG 38 inch ultra-wide (21x9) - Resolution: 3840x1600
  • C-drive: Corsair MP600 PRO XT NVMe SSD 4TB (PCIe Gen. 4)
  • Video drives: Samsung NVMe SSD 2TB (980 pro and 970 EVO plus) each 2TB
  • Mass Data storage & Backup: WD gold 6TB + WD Yellow 4TB
  • MOBO: Gigabyte Z690 AORUS MASTER
  • PSU: Corsair HX1500i, Case: Fractal Design Define 7 (PCGH edition)
  • Misc.: Logitech G915, Evoluent Vertical Mouse, shuttlePROv2

 

 

wilri001 wrote on 10/31/2017, 9:53 PM

Just wanted to pass on some 4K multicam info. An edit of two 4K and two HD 1080 tracks in multicam mode is running at 12 fps. This is with a 1950X Threadripper and an R9 Fury X GPU. So I'm building proxies for the two 4K tracks; each build is taking about 30% CPU and not using the GPU at all. That means one could build up to 3 proxies of 4K files at a time with this configuration. (I just start multiple Vegas Pro 15 instances to generate proxies at the same time.)

I've read that other products use the GPU better to avoid proxies, but at least with all those Threadripper cores, building the proxies doesn't take so long. Sorry, I haven't timed how long a proxy takes to build, but I think it's about 3/4 of the video's running time, or 0.75x.
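
For what it's worth, the "run about three at a time" idea can be scripted. Below is a rough sketch that uses ffmpeg as a stand-in proxy builder (it is not Vegas's own proxy generator, and the folder path and encode settings are just placeholders):

```python
# Rough sketch: build proxies about 3 at a time, since each build only used ~30% CPU here.
# ffmpeg is a stand-in for whatever actually builds your proxies; the folder path and
# encode settings below are placeholders, not what Vegas itself does.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SOURCE_DIR = Path(r"D:\footage")  # hypothetical folder of 4K source clips

def build_proxy(src: Path) -> int:
    dst = src.with_name(src.stem + "_proxy.mp4")
    cmd = ["ffmpeg", "-y", "-i", str(src),
           "-vf", "scale=1280:-2",                      # quarter-ish resolution proxy
           "-c:v", "libx264", "-preset", "fast", "-crf", "23",
           "-c:a", "copy", str(dst)]
    return subprocess.run(cmd).returncode

clips = sorted(SOURCE_DIR.glob("*.mp4"))
with ThreadPoolExecutor(max_workers=3) as pool:         # ~3 parallel builds before the CPU saturates
    results = list(pool.map(build_proxy, clips))
print(results)
```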

Another interesting thing: after building those two proxies, fps is still only 21. So I backed out the S04x .dll, and it didn't make any difference. Then I deselected the Fury X in Preferences, and now the fps is rock solid at 29.97 WITHOUT the GPU. With the GPU, the GPU was pegged. (The Fury X has a row of lights to indicate its load.)

So clearly it would be really nice, and I guess this is my highest-priority suggestion, to work on using the GPU better in multicam edit mode: either eliminate the need for proxies, or at least play the proxies faster!

OldSmoke wrote on 11/1/2017, 10:07 AM

Interesting findings. I did test my system, but in VP13, since my VP15 trial expired. I have the same Fury X GPU paired with a 3930K @ 4.3 GHz, and it can run a 4-cam edit (2x 4K 30p XAVC-I, 2x 1080 30p XAVC-I) with the GPU ON at almost full fps in Best/Full; it jumps between 27 and 28 fps. This is in a 1080 30p project.

CPU only gets me 21-22 fps. I would expect there are some resource-sharing issues in your system that prevent the Fury X from running at its full potential. It is rather strange considering the 1950X has far more PCIe lanes than my 3930K. I researched it, and it seems those lanes are not all available for expansion cards, but I have to look deeper into that. The X399 chipset also only supports up to 16/16/8/8 across all four slots (and only on some boards) and 16/16/8 on three slots; 16/16/16 is not possible either. While the Threadripper has more PCIe lanes, they seem to be allocated differently compared to X299 for Intel CPUs.

Proud owner of Sony Vegas Pro 7, 8, 9, 10, 11, 12 & 13 and now Magix VP15&16.

System Spec.:
Motherboard: ASUS X299 Prime-A

Ram: G.Skill 4x8GB DDR4 2666 XMP

CPU: i7-9800x @ 4.6GHz (custom water cooling system)
GPU: 1x AMD Vega Pro Frontier Edition (water cooled)
Hard drives: System Samsung 970Pro NVME, AV-Projects 1TB (4x Intel P7600 512GB VROC), 4x 2.5" Hotswap bays, 1x 3.5" Hotswap Bay, 1x LG BluRay Burner

PSU: Corsair 1200W
Monitor: 2x Dell Ultrasharp U2713HM (2560x1440)

wilri001 wrote on 11/1/2017, 4:53 PM

@OldSmoke: I re-installed my previous version (13) and ran several tests of multicam display with two HD 1080 and two 4K tracks, both with versions 13 and 15, and with version 15 with and without the new s04... .dll. The only combination that runs at almost 30 fps is version 13 with the Fury X GPU selected in Preferences.

So I am able to reproduce your fps with version 13, but version 15 is much slower. Version 15 fps is between 11 and 14; except that without the GPU, when a 4K track is selected, fps is less than 1.

So I'm going back to v13. I'll run any test Magix asks for with 15, but I'm just tired of working around its performance problems.

OldSmoke wrote on 11/1/2017, 5:47 PM

@wilri001 Maybe you have different settings somewhere between the two? Preview RAM maybe? You could also try resetting VP15 to its default settings by holding CTRL+SHIFT during launch.

wilri001 wrote on 11/1/2017, 5:54 PM

@OldSmoke, why would Preview RAM make a difference? Isn't that just used when you select a range and do an in-memory preview? I never do that. The default is 200 MB, but I set it to zero.

 

And I just uninstalled and reinstalled as Administrator for Magix support, although they haven't requested any diagnostics yet.

OldSmoke wrote on 11/1/2017, 6:05 PM

Honestly, I don't think anyone knows in detail what Preview RAM really does. For some users only 0 works; for me, the default works best. The only change made since VP14 is that you can set it to a higher value without impacting render times.

astar wrote on 11/2/2017, 12:16 AM

Definitely keep the RAM preview at the default, unless you are using RAM preview while editing. Then reset it back before rendering.

Disabling the Fury X and leaning on the Threadripper does make some sense when doing multicam with large media sources. The way I see multicam working in Vegas is something like this:

4K video streams x2 = 8.90 Gbps x 2 = 17.8 Gbps, plus 1080p streams x2 = 2.23 Gbps x 2 = 4.46 Gbps, giving 22.26 Gbps total, x2 for the trip back and forth, plus the screen display load.

All of that needs to be fed from CPU/memory across the Gen3 PCIe x16 interface. The x16 Gen3 interface is only able to do about 15 GB/s. Overall it seems like about 5-6 GB/s on the PCIe x16 bus.
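
Just to make the arithmetic explicit (the per-stream rates are the rough estimates above, not measurements):

```python
# Back-of-the-envelope version of the numbers above. The per-stream rates are the
# rough estimates quoted in this post, not measured values.
uhd_gbps = 8.90   # one 4K stream (estimate)
hd_gbps  = 2.23   # one 1080p stream (estimate)

one_way_gbps    = 2 * uhd_gbps + 2 * hd_gbps   # 22.26 Gb/s toward the GPU
round_trip_gbps = 2 * one_way_gbps             # back and forth
pcie3_x16_gbs   = 15.75                        # approx. GB/s per direction on a Gen3 x16 slot

print(f"one way:    {one_way_gbps:.2f} Gb/s = {one_way_gbps / 8:.2f} GB/s")
print(f"round trip: {round_trip_gbps:.2f} Gb/s = {round_trip_gbps / 8:.2f} GB/s")
print(f"PCIe 3.0 x16 budget: ~{pcie3_x16_gbs} GB/s per direction")
```

Under those assumptions the round trip works out to roughly 5.6 GB/s, which is where the 5-6 GB/s figure above comes from, well under what an x16 Gen3 slot can move but not trivial either.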

I am not sure what type of bandwidth distribution is happening with half the OpenCL compute units on the CPU and others on the GPU either.

 

There must be some bottlenecking issue with that rig. I would have thought the Fury X was better than that, since the Fury X was all about memory speed. Of course, he did say the GPU was reading 100%.

Disabling the GPU puts the video back onto the CPU and main memory, running at about 60 GB/s. Then the GPU is only passing the screen display, and OpenCL is only using the compute units on the CPU.

 

Did either of you try systematically swapping your monitors around to the different GPU outputs? I'm thinking maybe there is something in the way the display chips are rigged, or maybe GPU scaling is adding that much overhead?

 

I posted in OffTopic how the latest preview of Windows 10's Task Manager now shows GPU utilization. The GPU utilization is broken down by 3D, compute, video encode, memory utilization, overall, etc. It's pretty sweet.

CPU-side PCIe utilization by slot, and DMI utilization, would be nice to have as well, even if they are only added to Perfmon. It would be nice to see what is actually happening on the x16 GPU interface.