Threadripper 1950X + RTX 2080: Poor Render Time on Vegas Pro Plugins

Comments

BruceUSA wrote on 4/22/2019, 9:08 AM

Marc. Everything you need to know is listed in my profile.

Intel i7-12700K @ 5.2GHz all P-cores, 5.3GHz @ 6 cores, turbo boost 3 cores @ 5.4GHz, 4.1GHz all E-cores

MSI Z690 MPG Edge DDR5 WiFi

TEAMGROUP T-Force Delta RGB 32GB DDR5-6200

Samsung 980 Pro x4 NVMe M.2 1TB PCIe Gen 4

ASRock RX 6900XT Phantom 16GB

EVGA SuperNOVA G2 1300W PSU

Black Ice GTX 480mm radiator, top mount, push/pull

MCP35X dual pump w/ dual pump housing

Corsair RGB water block; RGB fans throughout

Phanteks Enthoo full tower

Windows 11 Pro

Chief24 wrote on 4/22/2019, 9:53 AM

+1 BruceUSA

Yeah! I did decide to "toss aside my past issues with AMD graphics" and should be receiving my Radeon VII later this week. I did have to change out my GTX 1080 over the weekend, because the coolant in that "original" Enermax Liqtech cooler "won out" over the transistors and such on the card...boo-hoo! So I replaced it temporarily with an RTX 2070 I had just purchased for a pseudo gaming/limited editing machine. It improved transcoding of some 4K60p gameplay footage to Grass Valley HQ 1080p60 at medium settings, and kept data flowing more steadily to my TR 1950X for processing. I use Task Manager to check things, with the "monitor" portion open to see all the physical/virtual cores working; with the GTX 1080 there was a lot of up/down, like it was constantly struggling to feed data to the CPU. Though that could have been from the leak, since the GPU was already on its downhill decline!

Still not sure how to "overclock" this MSI Gaming Pro Carbon AC to get better performance out of the TR 1950X, though I can't complain about either temperatures or performance; even watching a transcode shows the CPU being utilized. Not everything needs to be at 100%; from my past as a "hardware" guy, constant 100% usually means an issue/problem somewhere.

But Kudos to that awesome "Background Picture"!

Self Build: #1 MSI TRX40 Pro Wi-Fi w/3960X (be Quiet! Dark Rock Pro TR4) @ stock; 128GB Team Group 3200 MHz; OS/Apps - WDSN850X PCI-e 4.0x4 4TB, Documents/Extras - WDSN850X PCI-e 4.0x4 4TB; XFX AMD Radeon 7900XTX (24.1.1); Samsung 32 Inch UHD 3840x2160; Windows 11 Pro 64-Bit (23H2 22631.3155); (2) Inland Performance 2TB/(2) PNY 3040 4TB PCI-e on Asus Quad M.2x16; (2) WD RED 4TB; ProGrade USB CFExpress/SD card Reader; LG 16X Blu-Ray Burner; 32 inch Samsung UHD 3840x2160.

VEGAS Pro 20 Edit (411); VEGAS Pro 21 Suite (315); VEGAS Pro 22 Suite (93) & HOS (Happy Otter Scripts); DVD Architect 7.0 (100);

Sound Forge Audio Studio 15; ACID Music Studio 11; SonicFire Pro 6.6.9 (with Vegas Pro/Movie Studio Plug-in); DaVinci Resolve (Free) 18.6.6

#2: Gigabyte TRX50 Aero D w/7960x (Noctua NH-U14S TR5-SP6) @ stock; 128GB Kingston Fury Beast RDIMM @4800 MHz; OS/Apps - Seagate Firecuda 540 2TB PCI-e 5.0x4; Documents/Extras/Source/Transcodes - 4TB WDSN850X PCI-e 4.0x4; 4TB Inland Performance PCI-e 3.0x4; 2TB Inland Performance PCI-e 4.0x4; BlackMagic PCI-e Decklink 4K Mini-Recorder; ProGrade USB SD & Micro SD card readers; LG 32 Inch UHD 3840x2160; PowerColor Hellhound Radeon RX 7900XT (24.1.1); Windows 11 Pro 64-Bit (22631.3155)

VEGAS Pro 20 Edit (411); VEGAS Pro 21 Suite (315); VEGAS Pro 22 Suite (93) & HOS; DVD Architect 7.0 (100); Sound Forge Audio Studio 15; ACID Music Studio 11

Canon EOS R6 MkII, Canon EOS R6, Canon EOS R7 (All three set for 4K 24/30/60 Cinema Gamut/CLog3); GoPro Hero 5+ & 6 Black & (2) 7 Black & 9 Black & 10 Black & 11 Black & 12 Black (All set at highest settings - 4K, 5K, & 5.3K mostly at 29.970); Sony FDR AX-53 HandyCam (4K 100Mbps XAVC-S 23.976/29.970)

eikira wrote on 4/22/2019, 9:54 AM

The combination of a high end Threadripper with a high end AMD card is certainly worth it depending on your projects.

Sure. But if money is no issue, there is also not much to think about; one might just as well consider an Intel Extreme.

If you do a lot of 4K60p or 4K multicam and so on, you will certainly appreciate the extra performance in the timeline as well as rendering.

Well, if one does a lot of multicam, there is no real point expecting 4K to be smooth with, let's say, 3+ angles anyway. Also, depending on the format, the SSD/HDD may be a bottleneck in delivering all the data to screen fast enough. I don't have those projects, but I would probably work with proxies regardless, simply because otherwise there is much more data to pull from storage.

For HD work you may not see much difference, though again it depends on the complexity of your project. However, a high end AMD or Intel HEDT chip does not offer QuickSync, if that is something you are interested in.

Sure. But I try to compare as closely as possible. And as you bring Intel's core count closer to a Threadripper's, the price difference and the performance difference both shrink.
I would probably tend toward a TR if I needed a good price/performance ratio for a workstation doing data management and multitasking, meaning working with multiple instances or VMs (for example, if I were a software developer). But so far I don't see a significant benefit for video production in the TR 1950X vs. 9900K comparison mentioned.

And to be fair, you would have to compare prices between a socket 2066 chip and a Threadripper; the 9900K is a socket 1151 CPU with limited PCIe lanes.

Right now I have a 40 PCIe lane CPU, and I will see if that makes a real difference versus my 9900K. But I highly doubt there will be any, because so far I have only one GPU and the rest should be fast enough on the Z390 chipset. Come to think of it, I will run some tests with my two NVMe drives installed to see whether performance gets better or worse later.
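For scale, here is some rough arithmetic on the multicam storage point above (a sketch; the bitrates are illustrative assumptions, not measurements from anyone's project):

```python
# Rough multicam storage-bandwidth check (illustrative bitrates assumed).
def multicam_read_mb_s(angles: int, stream_mbps: float) -> float:
    """Sustained read rate (MB/s) needed to play all angles at once."""
    return angles * stream_mbps / 8  # megabits -> megabytes

for codec, mbps in [("4K60 AVC (~100 Mbps)", 100),
                    ("ProRes 422 HQ UHD (~700 Mbps)", 700)]:
    need = multicam_read_mb_s(3, mbps)
    print(f"3 angles of {codec}: ~{need:.0f} MB/s sustained")
# ~38 MB/s is easy even for a hard disk; ~263 MB/s plus the seeks between
# three separate files is where a single SATA drive starts to struggle.
```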

BruceUSA wrote on 4/22/2019, 10:15 AM


Chief24, you should really at least try to overclock the beast. I mean, if you can't get it stable at 4GHz, then 3.8-3.9GHz is pretty normal on this chip. Heck, all 16 cores running at 3.8GHz is way better than 3.4GHz at stock. Paired with your new card, that's a sweet combo. I always overclock my computers and never have any issues. My other system, an Intel 6-core 4930K, has been running at 4.5GHz with custom water cooling for almost 7 years and is still going strong, with no degradation in performance that I've noticed. Overclocking is free performance that people should take advantage of.
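For a rough idea of what that overclock buys (a sketch; it assumes a render that scales with all-core clock, and note the 1950X does boost individual cores above base under light loads):

```python
# Clock uplift from the all-core overclock described above.
stock_ghz, oc_ghz = 3.4, 3.8  # 1950X base clock vs. suggested all-core OC

uplift = oc_ghz / stock_ghz - 1
print(f"Per-core uplift: {uplift:.1%}")  # ~11.8%
# Best case, a fully CPU-bound render finishes in 1/(1 + uplift) of the
# stock time; real-world gains are smaller whenever the GPU, encoder or
# storage is the limiting factor instead.
print(f"Ideal render time vs. stock: {1 / (1 + uplift):.1%}")
```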


TheRhino wrote on 4/23/2019, 7:40 PM

A LOT of (current) programs are not optimized for Threadripper, so a 9900K performs better for nearly all the apps I use daily, including Vegas.

Bold statement, and it sure doesn't reflect the truth either. There are quite a few users in here who use Threadripper successfully, and their systems are super fast; just ask @BruceUSA.

As far as Vegas is concerned, my system will absolutely decimate your 5GHz 8-core 9900K in TL and rendering. Don't believe it? All you have to do is some searching here. You will find many samples and screenshots showing Vegas performance that your 9900K can't touch.

I'm not arguing that the 9900K is better than a 1950X for all apps, but the 1950X does not "decimate" an OC'd 9900K on the right MB loaded with M.2 drives in RAID0... Guru3D shows a stock 1950X rendering one specific codec 15% faster in Vegas than a stock 9900K... Vegas results vary greatly based on source video, effects, codecs, GPU, etc. In comparison, the 9900K is about 20% faster completing batch processes in Photoshop... On my proprietary single-core app, the 9900K is about 25% faster than an OC'd 1950X, based on comparisons to a colleague's workstation...

Amazon has the 2950X for $679 currently, so my 2nd workstation may get that or a Zen 2 w/PCIe 4.0 later this year... I will eventually upgrade all (3) workstations in my studio, but one needs to be Intel due to proprietary software that crashes on AMD... Legacy gear, but must-have, so no way around it; that's why I did the 9900K first... Swapped out MB/CPU/RAM/cooler for <$1000, though I could have done it for ~$850 had I kept my Xeon's Noctua NH-D14 cooler:

  • $200 open-box ASUS Z390 WS MB with PLX chip that shares PCIe lanes = no bottlenecks...
  • Fits existing 15-bay ATX case. AMD's E-ATX MBs require a new $200+ case...
  • $500 9900K
  • $150 2x16GB DDR4 3000 (AMD needs 4 faster sticks...)
  • $140 Corsair H150i for a quieter studio, though the existing Noctua NH-D14 cooled it to 5.0GHz...
  • For now, using the existing PSU & onboard video until new AMD GPUs arrive...

Last changed by TheRhino on 4/23/2019, 11:23 PM, changed a total of 3 times.

Workstation C with $600 USD of upgrades in April, 2021
--$360 11700K @ 5.0ghz
--$200 ASRock W480 Creator (onboard 10G net, TB3, etc.)
Borrowed from my 9900K until prices drop:
--32GB of G.Skill DDR4 3200 ($100 on Black Friday...)
Reused from same Tower Case that housed the Xeon:
--Used VEGA 56 GPU ($200 on eBay before mining craze...)
--Noctua Cooler, 750W PSU, OS SSD, LSI RAID Controller, SATAs, etc.

Performs VERY close to my overclocked 9900K (below), but at stock settings with no tweaking...

Workstation D with $1,350 USD of upgrades in April, 2019
--$500 9900K @ 5.0ghz
--$140 Corsair H150i liquid cooling with 360mm radiator (3 fans)
--$200 open box Asus Z390 WS (PLX chip manages 4/5 PCIe slots)
--$160 32GB of G.Skill DDR4 3000 (added another 32GB later...)
--$350 refurbished, but like-new Radeon Vega 64 LQ (liquid cooled)

Renders the Vegas 11 "Red Car Test" (AMD VCE) in 13s when clocked at 4.9GHz
(note: BOTH onboard Intel & Vega64 show utilization during QSV & VCE renders...)

Source Video1 = 4TB RAID0--(2) 2TB M.2 on motherboard in RAID0
Source Video2 = 4TB RAID0--(2) 2TB M.2 (1) via U.2 adapter & (1) on separate PCIe card
Target Video1 = 32TB RAID0--(4) 8TB SATA hot-swap drives on PCIe RAID card with backups elsewhere

10G Network using used $30 Mellanox2 Adapters & Qnap QSW-M408-2C 10G Switch
Copy of Work Files, Source & Output Video, OS Images on QNAP 653b NAS with (6) 14TB WD RED
Blackmagic Decklink PCIe card for capturing from tape, etc.
(2) internal BR Burners connected via USB 3.0 to SATA adapters
Old Cooler Master CM Stacker ATX case with (13) 5.25" front drive-bays holds & cools everything.

Workstations A & B are the 2 remaining 6-core 4.0ghz Xeon 5660 or I7 980x on Asus P6T6 motherboards.

$999 Walmart Evoo 17 Laptop with I7-9750H 6-core CPU, RTX 2060, (2) M.2 bays & (1) SSD bay...

Musicvid wrote on 4/23/2019, 10:12 PM

https://www.vegascreativesoftware.info/us/forum/faq-how-can-i-make-my-video-preview-play-smoothly-in-vegas-pro--104624/

Former user wrote on 4/23/2019, 11:59 PM

My system will absolutely decimate your 5GHz 8-core 9900K in TL and rendering...

I'm not arguing that the 9900K is better than a 1950X, but the 1950X does not "decimate" an OC'd 9900K... Vegas results vary greatly based on source video, effects, codecs, GPU, etc.

I would prefer better timeline performance over the increased rendering speed of the 1950X. The 16 cores of Threadripper can't help there, but the increased frequency of the 9900K plus QuickSync helps it handsomely beat the Threadrippers. There have been a number of unhappy Threadripper people here complaining about timeline performance with Vegas.

OldSmoke wrote on 4/24/2019, 6:44 AM

I'm not arguing that the 9900K is better than a 1950X for all apps, but the 1950X does not "decimate" an OC'd 9900K on the right MB loaded with M.2 drives in RAID0... Vegas results vary greatly based on source video, effects, codecs, GPU, etc.

Would be nice to see a performance comparison between your new build and @BruceUSA's system, for both timeline performance and rendering.

Proud owner of Sony Vegas Pro 7, 8, 9, 10, 11, 12 & 13 and now Magix VP15&16.

System Spec.:
Motherboard: ASUS X299 Prime-A

Ram: G.Skill 4x8GB DDR4 2666 XMP

CPU: i7-9800x @ 4.6GHz (custom water cooling system)
GPU: 1x AMD Vega Pro Frontier Edition (water cooled)
Hard drives: System Samsung 970Pro NVME, AV-Projects 1TB (4x Intel P7600 512GB VROC), 4x 2.5" Hotswap bays, 1x 3.5" Hotswap Bay, 1x LG BluRay Burner

PSU: Corsair 1200W
Monitor: 2x Dell Ultrasharp U2713HM (2560x1440)

BruceUSA wrote on 4/24/2019, 7:14 AM


Would be nice to see a performance comparison between your new build and @BruceUSA's system, for both timeline performance and rendering.

A+. Bring it on. I am more than happy to run rendering and TL performance tests against a 9900K, any way or form you wish, and I will do it. I really want to see what other systems out there can do better than mine. Intel vs. mine is preferred.


BruceUSA wrote on 4/24/2019, 7:20 AM

...the increased frequency of the 9900K plus QuickSync helps it handsomely beat the Threadrippers. There have been a number of unhappy Threadripper people here complaining about timeline performance with Vegas.

Please see my post above. Bring it, let's find out, shall we?


Chief24 wrote on 4/24/2019, 8:00 AM

And so it seems that a simple question has now turned into a typical Web Forum "Fanboy War".

Please, for those who come to this site: this is a user-to-user help forum (with the occasional stop-by from members of the Magix teams), not technical support; and this is MAGIX, not ADOBE! If you like Photoshop, Premiere, Lightroom, or Media Encoder, please just mention it in passing. I am really getting tired of seeing the constant "bashing" of Vegas Pro/Movie Studio and the other originals once managed by Sony (Acid Pro/Studio, Sound Forge Pro/Studio).

Yep, went and looked at the "referenced" review by Guru3D, and yep, Threadripper beats out the 9900K. But per usual, like with some YouTube tech tutorials, that is not good enough for our "testing" and does not equal what we want from our testing; so it's "AMD may be slower, but look at the price difference" or "But the price to performance..." blah, friggin', blah!

I've used both platforms, Intel and AMD. Guess what? I use the one I can afford, and the issues I do have are more with the hardware than the software, because software is constantly changing. No, I was not using the original Video Factory by Sonic Foundry; I was still "floating around on the big blue pond" at that time. And I originally started out trying to use Pinnacle Studio, which I could never get to run properly. My fault or the software's? Did I put enough time into learning how to use it properly, the correct way to import media/files, or to use only ones listed as compatible? Who cares?

There is constant "regurgitation" of the-whole-Internet-is-correct MISINFORMATION being reconstituted as "the truth/gospel". I'm from Missouri: Show Me. Oh, and a "mainstream" Intel processor does not have 40 PCI-e lanes; the motherboard possibly, but not the processor. The same currently goes for AMD "mainstream" processors. Both Intel and AMD "High End Desktop" (HEDT) processors have more PCI-e lanes within the processor (28, 40, or 44 on Intel; 60 on AMD). This does not include actual "server" processors like Xeon or Epyc. But since we want to take this forum thread to a different level: what is the actual reason, on a board with at least four memory DIMM slots (could be dual, triple, or quad channel for Intel; dual or quad channel for AMD), that the outer sets of slots away from the processor socket are used as primary and the closer ones as secondary? Let the fanboy flaming begin here!
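Since lane counts keep coming up, here is the raw arithmetic behind them (a rough sketch; ~0.985 GB/s of usable bandwidth per PCIe 3.0 lane after encoding overhead):

```python
# Usable PCIe 3.0 bandwidth for the lane counts mentioned above.
GB_S_PER_LANE = 0.985  # ~8 GT/s minus 128b/130b encoding overhead

for platform, lanes in [("Mainstream Intel (CPU lanes)", 16),
                        ("Intel HEDT (top SKUs)", 44),
                        ("AMD Threadripper", 60)]:
    print(f"{platform}: {lanes} lanes = ~{lanes * GB_S_PER_LANE:.1f} GB/s")
# A single PCIe 3.0 x4 NVMe drive can claim ~3.9 GB/s of that budget,
# which is why lane count matters once a GPU, a capture card and several
# M.2 drives all want full-speed connections at the same time.
```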

So, if I go back to an earlier reference concerning the 9900K and its performance in Vegas Pro compared to Threadripper, the same could be said of the review done by TechGage (Rob Williams, E.I.C.), though his had more to do with utilizing the GPU for rendering. Guess what? He stated there was a problem with the NVIDIA coding producing unexpected results. But according to the "Great YouTube Mantra", NVIDIA cards just decimate AMD cards in, wait for it... Premiere Pro. Again, this is a Magix forum. Go complain on Adobe's forum, which I do visit at times, and there is no difference.

Like all the Celebrities who "threaten" to go to a different Country if so and so is elected... PLEASE GO! I'LL HELP PAY THE AIR FARE!

***note for BruceUSA; finally got my rig to overclock stably at 3.9. Oh yeah! Friday should be Radeon Day!***


Marc-Gauvin wrote on 4/24/2019, 8:37 AM

I'm not here to start a war between AMD fans and Nvidia fans; I'm here to ask Magix to update their software to leverage the Nvidia RTX Tensor Cores and other last-gen hardware. I'm tired of seeing software weaponized for commercial purposes to create a war between AMD GPUs and Nvidia GPUs. At least say up front that your software is only optimized for AMD GPUs and that Nvidia GPUs are victims of driver sabotage. I receive publicity in my email to upgrade to Vegas Pro 16, but what's the point if Vegas Pro 16 doesn't leverage my $7,000 workstation in the end? Cool new features and plugins, but not updated for 2019 hardware? Will I spend hundreds on an update without any performance boost from my RTX 2080 and my Threadripper 1950X, because most old plugins use only 1 core at a time? The answer is no. I will buy $700 of Unity 2019 assets, or I will buy Premiere Pro or DaVinci to leverage my hardware.

Former user wrote on 4/24/2019, 8:57 AM

@Chief24 and others: there is a genuine, non-fanboy interest in establishing which of the two main systems, Intel or AMD, to choose. The same applies to the choice of GPU.

Last year, when I was about to start a new build, I thought long and hard about both systems, and I chose the Intel chip. One of the main reasons was that the perceived wisdom on the forum was that frequency, on balance, trumped core count. So while there will always be a bit of "mine is bigger than yours" among the adolescents, comparisons of timeline playback and render times within Vegas serve a very useful purpose for other users considering a new build or an upgrade.

Things change, and maybe in the future, with better multi-core optimisation in Vegas, the perceived wisdom will change too.

OldSmoke wrote on 4/24/2019, 9:03 AM

@Marc-Gauvin I am very different: I bought my hardware to suit my software, Sony Vegas Pro 11 in my case. At that time, the GTX 580 was what the development team used to implement CUDA-supported rendering with the MC AVC encoder. Timeline performance was still OK with the footage of that era, mostly HDV.

CUDA rendering in Vegas went out the door when NVIDIA changed their hardware structure and API. With Full HD 1080 60p becoming more and more of what I use, the GTX 580 was no longer sufficient, and I switched to an R9 290, only to be surprised by how much faster it handled the timeline and FX processing thanks to its better OpenCL implementation. I can't blame Sony, and now Magix, for supporting an open platform like OpenCL over proprietary ones like NVENC and CUDA, as there is a cost to that. So for me, I will always buy the hardware that matches my software.

Last changed by OldSmoke on 4/24/2019, 9:04 AM, changed a total of 1 times.


fr0sty wrote on 4/24/2019, 7:29 PM

I'm not here to start a war between AMD fans and Nvidia fans; I'm here to ask Magix to update their software to leverage the Nvidia RTX Tensor Cores and other last-gen hardware... At least say up front that your software is only optimized for AMD GPUs and that Nvidia GPUs are victims of driver sabotage...

Neither Premiere Pro nor DaVinci Resolve utilizes the tensor cores in your RTX. A quote from one of the DaVinci Resolve devs:

"This is not true at all. You should get identical results on CUDA/OpenCL/Metal."

Premiere favors Nvidia cards and posts much poorer benchmark results with AMD cards, so it too favors one vendor over the other, much like Vegas does. Vegas happens to favor AMD due to their better OpenCL performance.

There is no perfect NLE. Resolve has tons of requirements and limitations (such as requiring dedicated hardware for full-screen preview) and has a cumbersome workflow; Premiere can be unstable and favors Nvidia systems; Vegas can be unstable and currently seems to favor AMD-based systems; Final Cut has weak coloring tools, extreme limitations when it comes to output formats, and in general tries to do too much hand-holding to guide you through the process, with not enough customization.

The key is to find the workflow and performance that works best for your project types and hardware. You may find you prefer some apps for some projects, and other apps for others.

Last changed by fr0sty on 4/30/2019, 12:45 AM, changed a total of 1 times.

Systems:

Desktop

AMD Ryzen 7 1800x 8 core 16 thread at stock speed

64GB 3000mhz DDR4

Geforce RTX 3090

Windows 10

Laptop:

ASUS Zenbook Pro Duo 32GB (9980HK CPU, RTX 2060 GPU, dual 4K touch screens, main one OLED HDR)

Kinvermark wrote on 4/24/2019, 8:59 PM

+1. You have to give the programmers a chance to do their jobs. Patience is necessary. The commitment to continue to improve is there.

As far as bias towards AMD, I doubt that there is any, especially given that NVENC was implemented many months before VCE was made available in Vegas.

TheRhino wrote on 4/24/2019, 10:03 PM

@Chief24 and others. There is a genuine, non fanboy interest in establishing which of the two main systems to choose from, Intel/AMD. The same applies to choice of GPU...

Things change, and maybe in the future with better multi core optimisation in Vegas? the perceived wisdom may change.

I've been editing paid work in Vegas for 17 years, since Vegas Video 3.0, using (3) workstations at the same time to optimize workflow. I have built dozens of Intel & AMD workstations designed to optimize Vegas' performance yet remain 100% stable 24/7... My work does NOT require a ton of FX, so benchmarks applying FX to still pictures do not reflect my real-world results... A couple of legacy 32-bit single-core apps/devices I use weekly will NOT run with stability on AMD, so at least one workstation has to have an Intel CPU. I also do Photoshop work, which currently performs better on a 5.0GHz Intel vs. a 4.0GHz AMD.

With demand for 4K editing increasing, I chose to invest $1000 to upgrade my oldest workstation to a 9900K OC'd to 5.0GHz. The 9900K runs on an ASUS Z390 WS, a standard-ATX workstation-class motherboard using a PLX chip to enable all (4) PCIe x16 slots to run electrically at x8. These handle the GPU, a Blackmagic capture card, M.2 RAID0, and hardware RAID10, while a 5th PCIe x4 slot accepts a 10G network card... There are also (2) M.2 in RAID0 onboard.

I'm still researching GPUs but am getting good results with Intel's onboard video... Threadripper would have required a $1500+ CPU/MB/RAM investment and an E-ATX case (+$200), plus using one of my older GPUs or investing in a new one. I'm not knocking Threadripper & will likely choose one when upgrading a 2nd workstation, especially if new software better utilizes the 2990WX's 32 cores, etc. The 9900K upgrade was more of a best-bang-for-$1000 choice vs. messing with a whole new case, PSU, etc.

Last changed by TheRhino on 4/24/2019, 10:06 PM, changed a total of 2 times.


OldSmoke wrote on 4/24/2019, 10:50 PM

The 9900K runs on an ASUS Z390 WS standard ATX size workstation-class motherboard using a PLX chip...

I had an ASUS WS board with the same PLX chip once but returned it. The CPU has 16 PCIe lanes; no chipset will give you more. Yes, it can make 4 PCIe slots "look" as if they are running x8, but the total bandwidth can't be increased, because the PLX chip is connected to the CPU with 16 lanes.

Have you done any Vegas benchmark testing on that machine?

It's a bit like recording video in 8bit and delivering in 10bit or higher.
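Put in numbers, the oversubscription looks like this (a sketch, assuming PCIe 3.0 and the x8/x8/x8/x8 slot layout discussed above):

```python
# PLX oversubscription: four x8 slots behind a switch with a 16-lane CPU link.
GB_S_PER_LANE = 0.985  # usable PCIe 3.0 bandwidth per lane

downstream = 4 * 8 * GB_S_PER_LANE  # what the four slots advertise together
upstream = 16 * GB_S_PER_LANE       # the CPU link behind the switch

print(f"Slots advertise: {downstream:.1f} GB/s")  # ~31.5 GB/s
print(f"CPU link caps at: {upstream:.1f} GB/s")   # ~15.8 GB/s
# The cap only bites when several slots move data to/from the CPU at the
# same time; peer-to-peer transfers between slots stay inside the switch.
```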

Last changed by OldSmoke on 4/24/2019, 11:00 PM, changed a total of 2 times.


BruceUSA wrote on 4/25/2019, 12:10 PM

This is why I love the X399 platform giving me 64 real PCIe lanes: I can run an additional (4) Samsung 970 EVO Plus NVMe M.2 drives on top of the 3 NVMe slots that come with the MB by default, for a total of 7 NVMe drives in my system. Here is a screenshot of an adapter with my Samsung 970 EVO Plus; the speed matches the drive's specs.
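A quick tally shows why those lanes disappear fast (a sketch; nominal electrical widths assumed, and board-level routing varies):

```python
# Lane budget for the X399 build described above.
devices = {
    "GPU (x16)": 16,
    "7x NVMe M.2 at x4 each": 7 * 4,
}

total = sum(devices.values())
for name, lanes in devices.items():
    print(f"{name}: {lanes} lanes")
print(f"Total: {total} of 64 lanes")
# 44 lanes is impossible on a 16-lane mainstream CPU without switches or
# lane sharing, but fits on Threadripper with room to spare for 10G
# networking or a capture card.
```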


TheRhino wrote on 4/25/2019, 12:11 PM

From my understanding, the PLX chip reduces flow to/from the PCIe slots not needed at that moment (nanosecond by nanosecond)... For instance, when I am capturing from the Blackmagic card, the data goes to (1) M.2 RAID0; the PCIe hardware 12-drive RAID10, 10G network, etc. are not being used... Yes, on PAPER it looks like sacrifices are being made, but it is hard to see any difference in real-world apps because they rarely saturate PCIe 3.0...

As noted, this is a budget upgrade, but it is working out nicely... The open-box ASUS Z390 WS was only $200. I found a refurbished water-cooled AMD Vega 64 for $350 & am installing it next week; I will do some before/after render tests. The V16 sample project does NOT reflect my workflow... Some benchmarks rely heavily on the GPU and do not max out the CPU... If you know of another real-world project I can download & test, I can post results. Again, Vegas is hard to benchmark apples-to-apples because of the hodge-podge of Magix & 3rd-party plugins utilized on any given project.

Currently I edit all projects from 4K source video, which requires color-correcting or matching source video, but not a lot of FX. I then provide customers with 4K, 2K, 1080p & 720p. They get an intermediate codec of their choice along with an MP4 for easy previewing. I get better results doing this in Vegas vs. 3rd-party apps, and I usually open 2 instances of Vegas at once, rendering in both while I switch to a 2nd workstation to work on an entirely different project. This also helps me remember whose project is on which workstation, so I know what step is next... The 9900K more than keeps up with me. I'm not waiting around for it to finish renders at the end of the day, so I can turn it off and save electricity.


Former user wrote on 4/25/2019, 12:46 PM

@TheRhino

"If you know of another real-world project I can download & test, I can post results"

Hi TheRhino, there is also the "Red Car" test (Nick Hope has a link), but maybe it also doesn't reflect your workflow: https://drive.google.com/uc?id=1t8uOIie5vgZD11wXyHAvQdwWf_OkvO_V&export=download

Given that you are about to install the Vega 64, and you have the same Intel CPU, it would be a great opportunity to compare AMD vs. Nvidia VP throughput once you have it installed.

I came across another user with a similar CPU/GPU config, but unfortunately that comparison never happened.

I use the ASUS Code motherboard. I looked seriously at the one you have, but at the time, late last year, it was very expensive and I was put off by the cost; no issue with the PLX capability, you did very well.

Former user wrote on 4/25/2019, 6:07 PM

From my understanding, the PLX chip reduces flow to/from the PCIe slots not needed at that moment... it is hard to see any difference in real-world apps because they rarely saturate PCIe 3.0...

When I was going to buy a PLX board a while ago, I tried researching the topic, and the only thing I recall is that there's no slowdown using 4 GPUs simultaneously for processing, and very low latency up to 8 GPUs. It sounds like in practice it's not much of a concern, but I can understand that people who enjoy benchmarking their computers would never want to use a board in a configuration that would cause a statistical slowdown when another board would not.

mintyslippers wrote on 4/28/2019, 12:17 PM

@Former user Thanks for the Red Car link. Couldn't find it.

For reference. Render times using the spec in my profile are:

HEVC using NVENC: 21 seconds

MAGIX AVC using NVENC: 21 seconds

MAGIX AVC using Mainconcept: 63 seconds

EDIT: Woah! @Former user, I just found your timings from your monster i9/2080 Ti setup. Are they still the same, or have you seen any improvements since then? My AVC and HEVC timings are pretty much the same, and my spec is a Ryzen 7 2700X and an RTX 2060.

Which is very interesting, as it could show that the bottlenecks in an editing system are not really the CPU and GPU, and that past a certain point you get diminishing returns on your investment. Your rig is at a minimum £1,000 more than mine.
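To sanity-check that: if NVENC finishes the Red Car test in the same ~21 seconds on both rigs, the fixed-function encoder block, rather than CPU or GPU shader power, is likely the limiter for this short test. A rough sketch using the times quoted in this thread:

```python
# Red Car render times quoted in this thread (seconds).
times = {
    "MAGIX AVC via NVENC (Ryzen 2700X + RTX 2060)": 21,
    "MAGIX AVC via NVENC (i9 + RTX 2080 Ti)": 21,
    "MAGIX AVC via Mainconcept CPU (Ryzen 2700X)": 63,
}

cpu_only = times["MAGIX AVC via Mainconcept CPU (Ryzen 2700X)"]
for name, secs in times.items():
    print(f"{name}: {secs}s ({cpu_only / secs:.1f}x vs. CPU-only)")
# Identical NVENC times on very different cards point to the encoder block,
# not overall system speed, as the bottleneck for this benchmark.
```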

Former user wrote on 4/28/2019, 2:10 PM

Hi @mintyslippers Excellent results, glad you found the Red Car test.

I haven't done any more testing, as what I already did was, I think, pretty comprehensive...

The following URL is the relevant page; the timings are in the 12th post down: https://www.vegascreativesoftware.info/us/forum/intel-i9-9900k-processor-review-using-vegas-pro--113429/?page=2

A small extract from the above post follows; results with a small overclock are shown in brackets (O/C).

HW Acceleration = Nvidia

Render with NVENC: 0:21 (O/C 0:19); HEVC: 0:23 (O/C 0:20)

Render with QSV: 0:18 (O/C 0:15)

Render with CPU only: 0:59 (O/C 0:53)

Yes, there is a substantial cost benefit to the AMD system; good luck with that.