AMD Threadripper 3970X w/ Vegas Pro Edit 17

CarlM wrote on 1/30/2020, 12:56 AM

Hey folks, I just recently finished a PC rebuild using the latest beast from AMD and rolled together a bunch of benchmarks comparing old (10-year-old 12-core Xeon system) and new (32 cores of Threadripping goodness):

Long story short, some of my primary workflows are seeing better than a 4x performance boost. Check out the video for more, and I'm happy to answer any more detailed questions.

If PC storage is what floats your boat, here's more detail on that subsystem in particular:

-c

Comments

Steve_Rhoden wrote on 1/30/2020, 2:31 AM

Impressive indeed!

JN- wrote on 1/30/2020, 3:49 AM

@CarlM Hi Carl, I enjoyed that, very impressive. There’s a benchmarking project located here; can you add your test results to it as well? Thanks ...

Last changed by JN- on 1/30/2020, 6:28 AM, changed a total of 2 times.

---------------------------------------------

VFR2CFR, Variable frame rate to Constant frame rate, link to ZIP here.

Copies Video, Converts Audio to AAC, link to ZIP here.

Convert 2 Lossless, link to ZIP here.

Convert Odd 2 Even (frame size), link to ZIP here.

Benchmarking Continued thread + link to ZIP here.

Codec Render Quality tables ZIP.

---------------------------------------------

PC ... Corsair case, own build ...

CPU .. i9 9900K, iGpu UHD 630

Memory .. 32GB DDR4

Graphics card .. MSI RTX 2080 ti

Graphics driver .. latest studio

PSU .. Corsair 850i

Mboard .. Asus Z390 Code

 

Laptop… XMG

i9-11900k, iGpu n/a

Memory 64GB DDR4

Graphics card … Laptop RTX 3080

BruceUSA wrote on 1/30/2020, 7:53 AM

Wow. But you know what? Around here, many Vegas users will tell you otherwise: high core counts don't matter, blah blah. Hey, Vegas users, please stay with your 8-core systems. No need for 4 GHz and 32 cores, Vegas won't use it. Hehehe.

PS. I think the problem is that people keep comparing it to old, dog-slow 2 GHz high-core-count Xeons. Those low-GHz, high-core-count chips are dog slow compared to a modern AMD TR. No competition.

Last changed by BruceUSA on 1/30/2020, 7:57 AM, changed a total of 1 times.

CPU: i9 Core Ultra 285K OC'd @ 5.6 GHz
MBO: MSI Z890 MEG ACE Gaming WiFi 7, 10G Super LAN, Thunderbolt 4
RAM: 48GB RGB DDR5 8200 MHz
GPU: Nvidia RTX 5080 16GB triple-fan, OC'd to 3100 MHz, bandwidth 1152 GB/s
NVMe: 2TB T705 Gen5 (OS), 4TB Gen4 (storage)
PSU: MSI 1250W. OS: Windows 11 Pro. Custom-built hard-tube watercooling

BruceUSA wrote on 1/30/2020, 8:05 AM

The 3970X will be my next system.


CarlM wrote on 1/30/2020, 9:12 AM

@CarlM Hi Carl, I enjoyed that, very impressive. There’s a benchmarking project located here; can you add your test results to it as well? Thanks ...

Just did the quick 'FPS' looping preview test here before work and got 5-10 fps, and that only used 5-10% of the available CPU capacity. I'll take a more careful look this weekend and upload results, and maybe do some desktop OBS recordings as well to show how the system resources load up while running...

That being said, obviously this question of how much the core count helps is highly project-specific. It seems the more layers and post-processing work you're asking for in a project, the more GPU-dependent and less CPU-dependent the results are. My somewhat click-baity "better than 4x improvement!" statement above is specific to the workflow I talk about at roughly the 15-minute mark: taking a multi-hour 4K gaming session recording, chopping out three 10-ish-minute sections, adding an outro for YouTube with few or no additional layers, and then rendering all three simultaneously with three copies of VP running at the same time (with GPU support turned off; VP breaks as soon as you try to render from multiple instances with the GPU enabled). My somewhat more complicated projects returned a more pedestrian 2-3x improvement, but again, it's so project-specific, YMMV.
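
For anyone who wants to see the shape of that "three simultaneous CPU renders" pattern, here's a minimal sketch using ffmpeg as a stand-in for the render jobs (not the actual Vegas workflow; the source file name and cut points are hypothetical):

```python
# Sketch: carve three sections out of one long recording and encode them
# concurrently, mirroring the "three copies of VP, GPU off" approach.
# ffmpeg stands in for the render jobs; paths and timestamps are made up.
import subprocess

SOURCE = "gaming_session_4k.mp4"  # hypothetical multi-hour recording
SECTIONS = [
    ("00:12:00", "00:22:30", "clip1.mp4"),
    ("01:05:00", "01:16:00", "clip2.mp4"),
    ("02:40:00", "02:50:45", "clip3.mp4"),
]

procs = [
    subprocess.Popen([
        "ffmpeg", "-y", "-ss", start, "-to", end, "-i", SOURCE,
        "-c:v", "libx264", "-preset", "medium", "-b:v", "40M",  # CPU encode
        "-c:a", "aac", out,
    ])
    for start, end, out in SECTIONS
]

for p in procs:   # all three encoders run in parallel;
    p.wait()      # block until every render finishes
```

With enough cores, three independent CPU encodes like this scale almost linearly, which is exactly where a 32-core part shines.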

PS: if anybody has a personal test project you'd like me to run, I'd be happy to do so if you can put it up somewhere I can download it.

BruceUSA wrote on 1/30/2020, 11:02 AM

@CarlM Hi Carl, I enjoyed that, very impressive. There’s a benchmarking project located here; can you add your test results to it as well? Thanks ...

Just did the quick 'FPS' looping preview test here before work and got 5-10 fps, and that only used 5-10% of the available CPU capacity. I'll take a more careful look this weekend and upload results, and maybe do some desktop OBS recordings as well to show how the system resources load up while running...

That being said, obviously this question of how much the core count helps is highly project-specific. It seems the more layers and post-processing work you're asking for in a project, the more GPU-dependent and less CPU-dependent the results are. My somewhat click-baity "better than 4x improvement!" statement above is specific to the workflow I talk about at roughly the 15-minute mark: taking a multi-hour 4K gaming session recording, chopping out three 10-ish-minute sections, adding an outro for YouTube with few or no additional layers, and then rendering all three simultaneously with three copies of VP running at the same time (with GPU support turned off; VP breaks as soon as you try to render from multiple instances with the GPU enabled). My somewhat more complicated projects returned a more pedestrian 2-3x improvement, but again, it's so project-specific, YMMV.

PS: if anybody has a personal test project you'd like me to run, I'd be happy to do so if you can put it up somewhere I can download it.

You will always get those bad-mouths claiming TR is no good for Vegas. You're better off disregarding them. Such specific benchmarks generally don't tell you how the system performs overall. Vegas works best in combination with the GPU. Go with an AMD GPU and don't let anyone else tell you otherwise. AMD GPUs perform better, unless ALL your footage is H.265, in which case go with an Nvidia card.


Kinvermark wrote on 1/30/2020, 11:36 AM

@CarlM

Thanks for posting! Great to see some Vegas performance improvements using "next gen" hardware.

If you have some time, it would be really great to test Vegas' timeline playback & scrubbing performance too, as this is what really impacts editing work. Particularly important is smooth playback through transitions, composited elements such as moving text & lower thirds, thumbnail generation in both the Explorer and Project Media windows, etc.

Looking forward to my next build with a Threadripper, but as it is an expensive option, it would be nice to quantify the benefit somewhat.

JN- wrote on 1/30/2020, 12:03 PM

@CarlM OK Carl, thanks for taking the time to do our “Benchmarking” project; looking forward to your posted render results in the “Benchmarking Continued” thread.

Last changed by JN- on 1/30/2020, 12:12 PM, changed a total of 3 times.


CarlM wrote on 1/31/2020, 3:17 PM

@CarlM

If you have some time, it would be really great to test Vegas' timeline playback & scrubbing performance too, as this is what really impacts editing work. Particularly important is smooth playback through transitions, composited elements such as moving text & lower thirds, thumbnail generation in both the Explorer and Project Media windows, etc.

Any suggestions on how best to quantify and repeatably test/benchmark these properties? I just posted some results over in the Benchmarking thread (https://www.vegascreativesoftware.info/us/forum/benchmarking-continued--118503/?page=3#ca740747), and that probably covers some of the bases with the FPS preview test. That being said, it seems the more complex your project and the more layers it has, the more GPU-dependent the results become; with that benchmarking test project my GTX 1080 was breathing hard while the CPU was idling at 5-10% load. Looking forward to the Big Navi coming from AMD this year; I'm thinking a faster, PCIe 4.0-capable GPU may really unlock the performance of these modern systems.

TheRhino wrote on 1/31/2020, 10:16 PM

Wow. But you know what? Around here, many Vegas users will tell you otherwise: high core counts don't matter, blah blah. Hey, Vegas users, please stay with your 8-core systems. No need for 4 GHz and 32 cores, Vegas won't use it. Hehehe.

@BruceUSA Yes, the 3970X is a $2000 beast & certainly worthwhile for apps that take full advantage of large core counts, like Handbrake, virtual servers, etc. However, @CarlM just posted his Vegas benchmark numbers & if his settings & data are correct, his $2000 3970X and $400 - $850 motherboard do not surpass a $500 9900K & $200 motherboard when it comes to rendering single instances of Vegas. This is not a 3970X limitation, but a Vegas one...

At some point the 3970X, etc. SHOULD do better at rendering multiple instances of Vegas, but since Vegas has GPU-assisted encoding, it's not that simple... Most renders now tie up the GPU as well, so, as CarlM noted, the GPU becomes the bottleneck... This is why I clone my source files across a 10G network & render them on another workstation while I start the next project on my primary one... That way my GPU can dedicate its resources to preview FPS, etc...

Last April I had a ton of 4K work on the way, so I couldn't wait for the 3950X to arrive in September (and it ended up delayed until nearly 2020...). So when others were still pushing Threadripper 1950X / 2950X systems, I chose the 9900K, despite its limited upgrade path, because I didn't want to pay extra for a Threadripper with an equally questionable upgrade path... I paid just $1350 for my 9900K, ASUS Z390 WS motherboard, CPU cooler, liquid-cooled VEGA 64, & 32GB of DDR4. Nearly a year later I could sell my system for what I paid, or even make a profit, since my 5.0 GHz (stable) system is popular among students/gamers...

In comparison, Threadripper 2950X components would have cost me $2000+ to achieve the same Vegas speed, and since then their resale value has plummeted... The $500 3900X & $750 3950X are faster on more affordable motherboards... Also, AMD appears to be abandoning the TR4 socket / X399 chipset now that the TRX40 platform is needed to support the next generation... My local Micro Center was selling the Threadripper 1950X for $350 before Christmas (1/3 of the intro price) & is selling the 2950X for $500 (1/2 of the intro price) right now, & X399 motherboard prices are also falling... Sometimes you pick the right horse & sometimes not...

That said, if I were choosing the best bang/buck today, I would consider the $750 AMD 3950X with a PCIe 4.0 motherboard. I'm not partial to Intel, just bang/buck...

Workstation C with $600 USD of upgrades in April, 2021
--$360 11700K @ 5.0 GHz
--$200 ASRock W480 Creator (onboard 10G net, TB3, etc.)
Borrowed from my 9900K until prices drop:
--32GB of G.Skill DDR4 3200 ($100 on Black Friday...)
Reused from same Tower Case that housed the Xeon:
--Used VEGA 56 GPU ($200 on eBay before mining craze...)
--Noctua Cooler, 750W PSU, OS SSD, LSI RAID Controller, SATAs, etc.

Performs VERY close to my overclocked 9900K (below), but at stock settings with no tweaking...

Workstation D with $1,350 USD of upgrades in April, 2019
--$500 9900K @ 5.0 GHz
--$140 Corsair H150i liquid cooling with 360mm radiator (3 fans)
--$200 open box Asus Z390 WS (PLX chip manages 4/5 PCIe slots)
--$160 32GB of G.Skill DDR4 3000 (added another 32GB later...)
--$350 refurbished, but like-new Radeon Vega 64 LQ (liquid cooled)

Renders the Vegas 11 "Red Car Test" (AMD VCE) in 13 s when clocked at 4.9 GHz
(note: BOTH onboard Intel & Vega64 show utilization during QSV & VCE renders...)

Source Video1 = 4TB RAID0--(2) 2TB M.2 on motherboard in RAID0
Source Video2 = 4TB RAID0--(2) 2TB M.2 (1) via U.2 adapter & (1) on separate PCIe card
Target Video1 = 32TB RAID0--(4) 8TB SATA hot-swap drives on PCIe RAID card with backups elsewhere

10G Network using used $30 Mellanox2 Adapters & Qnap QSW-M408-2C 10G Switch
Copy of Work Files, Source & Output Video, OS Images on QNAP 653b NAS with (6) 14TB WD RED
Blackmagic DeckLink PCIe card for capturing from tape, etc.
(2) internal BR Burners connected via USB 3.0 to SATA adapters
Old Cooler Master CM Stacker ATX case with (13) 5.25" front drive-bays holds & cools everything.

Workstations A & B are the 2 remaining 6-core 4.0 GHz Xeon 5660 or i7 980X systems on Asus P6T6 motherboards.

$999 Walmart Evoo 17 Laptop with I7-9750H 6-core CPU, RTX 2060, (2) M.2 bays & (1) SSD bay...

RealityStudio wrote on 2/2/2020, 1:01 AM

I guess this must depend on the workload. My typical workload is a 4K 100 Mbps video with the FilmConvert add-on and a watermark; in that instance my AMD 12-core 3900X is only about 40% used, and my Nvidia 1080 Ti isn't used to the fullest either. I've never been able to figure out why Vegas won't use all my hardware, but I long since gave up on trying to sort it out and just live with it as a quirk of Vegas. Other software like Handbrake will max out my 12-core processor. I've always found it kind of odd that free software like Handbrake maxes out my hardware whereas paid software like Vegas doesn't.

adis-a3097 wrote on 2/2/2020, 2:13 AM

Is your FilmConvert plugin set to use the CPU or the GPU? Also, Vegas can't use CUDA cores, or so I've heard...

RealityStudio wrote on 2/2/2020, 1:39 PM

Yeah, FilmConvert is set to use the GPU; I've been using it for many years. I'm doing a render right now and GPU use is bouncing between 24% and 41%, while CPU use is around 33%, so the machine mostly sleeps while Vegas renders. Even with no watermark and FilmConvert disabled, just rendering raw 100 Mbps 4K footage right from the camera using a Vegas 4K render preset, the numbers aren't much different: still mostly using just about 1/3 of the machine's ability.

The good news is that I can easily do other stuff with the computer and not even feel the effects of Vegas rendering, since it's only using about 1/3 of the machine, so that is kind of nice. But it would be nice to figure out one day why a professional program like Vegas sleeps so much whereas a free program like Handbrake pegs the machine at full tilt.

In any case, I've been using every version of Vegas since version 4, so I've long since come to accept it as a limitation of the program that it's somehow horribly inefficient in how it uses the PC. I had hoped that when Magix took over this mystery would be solved, but clearly that's not going to be the case. So it goes.

TheRhino wrote on 2/2/2020, 3:56 PM

I’m not a programmer, but from what I understand there is a threshold where splitting a job up takes longer than doing it on fewer cores... For apps like Handbrake, creating a ZIP, Cinebench, etc., it isn't a big deal: you have each core work on a chunk of data, then concatenate it all back together at the end.

In comparison, more complex but mostly linear apps like Vegas & Photoshop typically use a dedicated thread to manage & delegate work to all the other threads… Certain tasks have to be completed in order & error-checked, so some threads are waiting on others to finish their fairly linear tasks…

SO… unless you have server-type apps that can give a 32-core CPU lots of work to do, you are just paying extra to have cores sit around & do nothing… And this goes back to my original premise… I’d rather have (2) fast workstations vs. (1) extreme system for the same money. I can keep (2) workstations busy all day with their separate GPUs, I/O systems, hard drives, etc., without them slowing each other down or slowing me down… Plus, if one fails for any reason, I still have a 2nd fast system to finish the job...
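
To make that distinction concrete, here's a toy sketch (illustrative only; not the actual architecture of Vegas or Handbrake). The first job splits into independent chunks that scale with core count; the second is an ordered pipeline where each step depends on the previous result, so extra cores mostly wait:

```python
# Toy contrast between chunked-parallel work and an ordered pipeline.
# Illustrative only; not how Vegas or Handbrake are actually written.
from concurrent.futures import ProcessPoolExecutor

def encode_chunk(chunk):
    # Stand-in for independent per-chunk work (e.g., one slice of frames).
    return sum(x * x for x in chunk)

def split(data, n):
    size = len(data) // n  # assume len(data) divides evenly, for simplicity
    return [data[i * size:(i + 1) * size] for i in range(n)]

if __name__ == "__main__":
    frames = list(range(1_000_000))

    # Handbrake/ZIP/Cinebench style: each core takes a chunk, results are
    # concatenated at the end -- throughput scales with core count.
    with ProcessPoolExecutor() as pool:
        chunked = list(pool.map(encode_chunk, split(frames, 8)))

    # Mostly linear style: a coordinator walks the data in order because
    # each step needs the previous step's output, so it can't fan out.
    state = 0
    for f in frames:
        state = (state + f) % 2**32  # serial dependency chain
```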


adis-a3097 wrote on 2/2/2020, 7:32 PM

@RealityStudio

Quick test, just a transcode without any effect plugins whatsoever (CPU utilization):

1) MAGIX AVC 100%
2) MAGIX HEVC 100%
3) MAGIX INTERMEDIATE 90%
4) MPEG-2 (FAST AF) 100%
5) SONY AVC (INTERNET temp) 55%
6) SONY XAVC 100%
7) XDCAM (EVEN FASTER AF) 100%

IDK, really can't relate to what you're saying. Maybe it's the PC config, the drivers, groundwater, who knows...I don't.

RealityStudio wrote on 2/2/2020, 9:58 PM

Yeah @adis-a3097, I don't know why only Vegas behaves that way. As I mentioned, even a raw MP4 file right from the camera, dumped on the timeline with no effects and rendered with a MAGIX 4K render preset, won't max out the CPU cores. Any other software I use, be it games, OBS, free software like Handbrake, etc., will fully use the machine. Who knows, it's just something I've accepted as a quirk of the program.

TheRhino wrote on 2/3/2020, 7:24 PM

@RealityStudio @adis-a3097
If I run the 4K Rendertest on my i7-9750H laptop with RTX 2060 using NVENC encoding:
CPU = 50%
Intel GPU = 12%
RTX 2060 = 25%
UHD finishes in 1:41

If I run the 4K Rendertest using the CPU only:
CPU = 100%
Intel GPU = 8%
RTX 2060 = 10%
UHD finishes in 2:56.

That's 101 s vs. 176 s, so the NVENC render comes out roughly 1.7x faster here.

So maybe @CarlM can run a test using the CPU only to see if 32 cores surpass the benefit of GPU-assisted encoding... (choose the render template without NVENC, QSV, or VCE).


CarlM wrote on 2/4/2020, 9:30 AM

If we're talking about just the encoding part, i.e. Mainconcept vs. NVENC, the latter still delivers roughly 2x the throughput. This is with NVENC cranked up to full HQ settings (Profile = High, Preset = High quality, RC Mode = VBR - high quality, bit rate 40 Mb avg / 50 Mb max), because without that I get noticeable visual artifacts, which the MC encoder never has.
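
For anyone wanting to poke at comparable settings outside Vegas, here's a rough ffmpeg h264_nvenc approximation of those values (a sketch only: the flag mapping is approximate, the file names are placeholders, and this is not Vegas' actual render template):

```python
# Approximate ffmpeg equivalent of the NVENC settings described above.
# Requires an ffmpeg build with h264_nvenc; names are placeholders.
import subprocess

subprocess.run([
    "ffmpeg", "-y", "-i", "input.mp4",
    "-c:v", "h264_nvenc",
    "-profile:v", "high",   # Profile = High
    "-preset", "slow",      # Preset = High quality
    "-rc", "vbr",           # RC Mode = VBR
    "-b:v", "40M",          # 40 Mb average bit rate
    "-maxrate", "50M",      # 50 Mb maximum bit rate
    "-c:a", "copy",         # pass audio through untouched
    "output.mp4",
], check=True)
```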

On the Video Processing end of the rendering pipeline, I do not enable GPU support, because my relatively flat projects don't benefit much from it, and that way I can render multiple videos at the same time. Does enabling GPU support for video processing impact the MC vs. NVENC question above? I doubt it, but I haven't definitively tested it; I'll try it and report back...

fr0sty wrote on 2/4/2020, 11:14 AM

Yeah @adis-a3097, I don't know why only Vegas behaves that way. As I mentioned, even a raw MP4 file right from the camera, dumped on the timeline with no effects and rendered with a MAGIX 4K render preset, won't max out the CPU cores. Any other software I use, be it games, OBS, free software like Handbrake, etc., will fully use the machine. Who knows, it's just something I've accepted as a quirk of the program.

What are the render time comparisons? Comparing CPU/GPU usage percentages is not a valid way to compare performance.
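
A toy illustration of that point, with hypothetical numbers: utilization tells you how busy the machine was, not how soon the job finished.

```python
# Hypothetical numbers: lower CPU utilization can still mean a faster render.
jobs = {
    "CPU-only encode":   {"cpu_util": 1.00, "render_s": 296},
    "GPU-assist encode": {"cpu_util": 0.33, "render_s": 101},
}
for name, j in jobs.items():
    print(f"{name}: {j['render_s']} s at {j['cpu_util']:.0%} CPU")
# The GPU-assisted job finishes ~2.9x sooner despite a mostly idle CPU,
# which is why wall-clock render time is the number to compare.
```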

JN- wrote on 2/13/2020, 9:20 AM

@CarlM This AnandTech article sheds some light on Windows 10's limitations with greater-than-32-core CPUs, along with some workarounds.
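
Background on that limitation: Windows addresses logical processors in groups of at most 64, and a process that isn't group-aware keeps its threads in a single group, so a 64-core/128-thread 3990X looks like half a machine to such an app. A quick Windows-only sketch for inspecting the layout, calling the Win32 APIs through ctypes:

```python
# Print the Windows processor-group layout (the mechanism behind the
# >64-logical-processor behavior discussed in the article). Windows only.
import ctypes

kernel32 = ctypes.windll.kernel32
ALL_PROCESSOR_GROUPS = 0xFFFF  # per the GetActiveProcessorCount docs

groups = kernel32.GetActiveProcessorGroupCount()
total = kernel32.GetActiveProcessorCount(ALL_PROCESSOR_GROUPS)
print(f"{groups} processor group(s), {total} logical processors total")
for g in range(groups):
    print(f"  group {g}: {kernel32.GetActiveProcessorCount(g)} logical processors")
```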


CarlM wrote on 2/13/2020, 10:16 AM

Yeah, thanks JN-, I saw that as the 3990X embargo dropped. I intentionally didn't wait for the top-end 64-core part because I guessed there might be OS compatibility or NUMA issues like with the previous generation's top-end Threadripper parts, and sure enough, I'm happy to see I was right.

Still pondering how to do another video on benchmarking with the GPU enabled and disabled for both video processing and encoding. The complexity of the project and the fine points of the output format have such a huge impact on rendering performance that it's difficult to come up with data that would be useful as a generalized review.

JN- wrote on 2/13/2020, 11:35 AM

@CarlM Maybe wait for the next VP17 update?

It might just be a bit more stable. Thanks for your input already; it's great to have such an exotic system to add to the benchmarking.

Thing is, you get great value from it because of your use case of running multiple instances of VP.


TheRhino wrote on 2/16/2020, 8:30 AM

@CarlM Thing is, you get great value from it because of your use case of running multiple instances of VP.

However, all instances of Vegas are utilizing the same GPU, RAM, I/O resources, etc., so no matter how many cores the CPU has, multiple instances of Vegas eventually saturate the capabilities of the rest of the system, especially the GPU... Therefore, as noted earlier, for the money it is better to have (2+) fast workstations vs. (1) much pricier top-end workstation. This allows me to start a new project on my primary workstation without background renders, transfers to client USB or TB3 drives, etc. hogging GPU, RAM, I/O resources...

For instance, I like to drop the batch-rendered files onto new tracks on the timeline to make certain the file names match the project markers & that there are no glaring errors. For a 2-hour project rendered to (3) different codecs, when I drop them onto new tracks the system has to "build peaks" for 6 hours of video... While one system is doing this, I simply return to the other system & keep working on my new project. Mentally it also helps me distinguish how far along each project is. The final VEG only gets sent to the render workstation once it has been completed...

BTW, this is one reason I like Vegas so much vs. other NLEs... All I need to do is send a copy of the VEG & source files across my 10G network & I can render the project on any system... I keep the source, stock footage & music drives & directories all labeled the same so Vegas never has to hunt for the footage, which also saves time...


eikira wrote on 2/16/2020, 2:20 PM

If we're talking about just the encoding part, i.e. Mainconcept vs. NVENC, the latter still delivers roughly 2x the throughput. This is with NVENC cranked up to full HQ settings (Profile = High, Preset = High quality, RC Mode = VBR - high quality, bit rate 40 Mb avg / 50 Mb max), because without that I get noticeable visual artifacts, which the MC encoder never has.

On the Video Processing end of the rendering pipeline, I do not enable GPU support, because my relatively flat projects don't benefit much from it, and that way I can render multiple videos at the same time. Does enabling GPU support for video processing impact the MC vs. NVENC question above? I doubt it, but I haven't definitively tested it; I'll try it and report back...

I can't praise Voukoder enough. You can tweak NVENC across a broad range of settings in Voukoder to eliminate artifacts, etc. (setting a specific GOP and so on), and it utilizes the GPU much more heavily than Vegas' internal GPU profiles.

Have you checked it out? www.voukoder.org. It's a real time saver while still retaining picture quality. I wouldn't want to be without it anymore.