Impressions: Moving from VP14/4770K to VP19/12700K

Hulk wrote on 12/19/2021, 9:59 AM

I have been long overdue for updates to both Vegas and my computer. Last month I decided to upgrade and went with an Asus Z690m Prime motherboard, 32GB of G.Skill DDR4 CL16 memory, a WDC 1TB SN850 boot/program drive, a Samsung 2TB 970 EVO Plus NVMe work drive, an Alder Lake 12700K CPU, and a Noctua chromax.black cooler. The rest of my rig remains. I'm a somewhat experienced builder, having built my own systems for the past 30 years. My main concern during a build is stability, so I only use quality motherboards (I'm partial to Asus) with vendor-qualified RAM. I don't overclock. In fact, I'm running the P cores on this rig at 4.5GHz instead of the stock 4.7GHz. The reduction in performance is insignificant, but the system runs cooler and quieter, and I know any possible instability is not being caused by the CPU. I'm running Windows 11. Whenever I build a new rig the first thing I install and test is Vegas, so that as I add software or hardware, if an issue shows up in Vegas I know what caused it.

Obviously the increase in performance from my trusted 4770K to the 12700K is enormous. What used to require hours is now counted in minutes, and minutes in seconds. It's not an order of magnitude faster, but about half that (5x) in most compute-oriented workloads. If you are wondering why I am using the iGPU instead of a discrete card, there are two reasons. First, the last time I experimented with discrete cards, approximately 4 years ago, I couldn't get Vegas stability where I needed it. Second, I always render in software for the highest quality at a given file size, so a discrete GPU is only useful to me during preview, and for the most part the 12700K and UHD 770 iGPU are sufficient. I guess the third reason would be the ridiculous prices of GPUs these days. When supply catches up with (and surpasses) demand I'll reinvestigate a discrete GPU. Most likely AMD, since their OpenCL performance is very good and that's what matters most for preview.

I have found the transition from Vegas 14 to 19 to be pretty much drama free. I'm not in love with the black color scheme. It looks cool, but the grey in 14 made it easier for me to find things. I'm also not a fan of the various buttons on the track controls requiring a separate click to activate. But those are minor things, I admit.

As far as stability goes, I find 19 much more stable than 14. Part of this may be due to my new rig, part to 19. I didn't test 19 on my old rig, so unfortunately I don't have an apples-to-apples comparison. I do know that in the past I would limit projects to about 15 minutes, as Vegas seemed to get flaky with anything longer than that. Think 30-second clips. So far I have gone as high as 170 clips with the new rig and 19, with very good stability. I'm quite pleased.

Anyway I wanted to pass along my experience with Vegas 19 and Alder Lake. Continued thanks to everyone who has helped me with Vegas over the years! There is an insane amount of Vegas knowledge on this board. We are lucky to have you all!

Mark

Comments

FayFen wrote on 12/19/2021, 10:48 AM

Will you share your total cost $$$ for this new system?

Hulk wrote on 12/19/2021, 11:16 AM

Will you share your total cost $$$ for this new system?

Of course!

Motherboard - $190 (Amazon, could have saved $20 at Microcenter combined deal but out of stock)
12700K - $350 (Microcenter, now selling for $300, but I'm outside of return period)
WDC 1TB SN850 NVMe SSD - $160
Samsung 2TB 970 EVO Plus NVMe SSD - $220
32GB (2x16GB) G.Skill Kingston Fury DDR4 3600 CL 16 - $140 (Newegg)
Noctua NH U12-A Chromax.black cooler - $120 (Amazon)

$1190 plus NJ state sales tax. Except for the SN850 I didn't go for the best; I tried to hit the inflection point of the price vs. performance curve. For example, I couldn't see spending an extra $250 for the 12900K to get an additional Gracemont cluster (4 additional E cores), or literally hundreds more to go with a DDR5 motherboard and associated memory. But of course buying/building a system is a very subjective endeavor, so there are no right or wrong decisions; everybody has different priorities and workflows.

If anyone has questions or comments please feel free to ask. I don't have anywhere near the knowledge level with Vegas that many around here do, but I'm pretty good at building systems, which is why I'm contributing this in the hope that it helps people looking to buy or build with Vegas specifically in mind.

As I wrote in my original post, make sure you buy vendor-qualified RAM for your motherboard, and buy a higher-end motherboard; "Z" series boards for Intel generally have better power delivery, caps, etc., because they are capable of overclocking. Also, after you load the OS, immediately check for a BIOS update. After that I recommend doing Windows updates, then loading Vegas and running it for a few days. Once you have determined it is stable, image the drive. Now, as you begin to load additional software, continually make sure all is well with Vegas. If something goes haywire you have the drive image for a quick C drive restore. I generally image my C drive about once a month using an external SSD (actually an internal SATA SSD with a $10 SATA-to-USB converter cable; remember, I'm extremely frugal!).
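If anyone wants to script that monthly image, here is a minimal sketch in Python driving Windows' built-in wbadmin backup tool; the E: target letter and the choice of wbadmin (rather than whichever imaging tool Mark actually uses) are my assumptions, not his.

# Minimal sketch: image the C: drive to an external SSD using Windows' built-in
# wbadmin tool. Run from an elevated prompt; E: is an assumed target drive letter.
import subprocess

def image_system_drive(target_drive="E:"):
    subprocess.run([
        "wbadmin", "start", "backup",
        f"-backupTarget:{target_drive}",
        "-include:C:",
        "-allCritical",   # include everything needed to restore a bootable system
        "-quiet",         # run without prompting for confirmation
    ], check=True)

if __name__ == "__main__":
    image_system_drive()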

My old 4770K system was still quite usable, which goes to show what a great architecture Haswell was. But the 12700K is a beast, especially with "bursty" workloads like Vegas, Photoshop, DxO PureRaw, ... If my main focus were highly multithreaded apps that run for hours I would probably have gone with a 5950X, which is still a beast of a CPU of course. But most apps, like Vegas and Photoshop, only really use 8 or so cores, which is why Alder Lake competes well with higher-core-count AMD alternatives. Anyway, the 12700K, especially now at $300 at Microcenter, hit the sweet spot for me.

Former user wrote on 12/19/2021, 6:50 PM

Would you mind doing the benchmark project?

https://www.vegascreativesoftware.info/us/forum/benchmarking-results-continued--118503/

1080p and 4K QuickSync renders if you're limited for time. I'd like to see your times when using the iGPU as the processor. If it's dramatically slow, try turning off GPU processing (not the GPU decoder) and see if that's faster. This assumes hardware encoding still works when GPU processing is disabled; I think at one point it didn't.

Hulk wrote on 12/19/2021, 7:14 PM

Would you mind doing the benchmark project?

https://www.vegascreativesoftware.info/us/forum/benchmarking-results-continued--118503/

1080p and 4K QuickSync renders if you're limited for time. I'd like to see your times when using the iGPU as the processor. If it's dramatically slow, try turning off GPU processing (not the GPU decoder) and see if that's faster. This assumes hardware encoding still works when GPU processing is disabled; I think at one point it didn't.

Looking at the benchmark it's entirely GPU constrained. Running it without a discrete GPU would be like running a modern game on CPU only!

I differ from most people around here in that I have no interest in GPU driven renders. I have found their quality to be terrible at bitrates where RenderPlus or frameserving to Handbrake would provide MUCH better quality. I think the last time I used any render natively from Vegas was about 10 years ago.

My only use for a discrete GPU is for preview and honestly for what I do I can get along fine with the iGPU.

Former user wrote on 12/19/2021, 7:25 PM

Using hardware encoding isn't actually about the hardware encoding itself; it's to allow all of your CPU to go to rendering the frames instead of encoding. It can also help with more accurate GPU stats, as a person with a 16-core CPU isn't automatically going to beat someone with 8 cores.

These sorts of tests are actually very common in gaming due to the shortage/expense of discrete GPUs, comparing iGPUs. It would be interesting to see your results, but if you're not interested that's also fine.

Hulk wrote on 12/19/2021, 8:32 PM

I'm sorry but I'm not quite following your reply. I probably wasn't specific enough in my response. Sorry about that.

By hardware rendering I mean GPU encoding using non-general-purpose compute units, which is how hardware renders on GPUs work. The thousands of dedicated parallel processing units in a GPU are designed at the hardware level to perform just a few tasks/instructions very efficiently. To date, the GPU encoding available to us has been fairly low quality, as with the Nvidia NVENC or Intel QSV solutions. It's hard to implement high-quality GPU encoding, as attested by the fact that Handbrake is still a CPU-based solution. The general-purpose nature of a CPU lets the programmer build the application with best quality in mind, whereas with a GPU you have to program "around" what the GPU can do.

CPU-only tests in gaming are used to test the compute of CPUs, that is true. But the problem here is I don't see the benefit of comparing CPUs to GPUs. All of the test results seem to use discrete cards. Is there a results table using CPU only that I didn't see?

I ran quite a few NLE benchmark sites going back as far as 20 years. This script is beautifully written, but honestly the testing procedure and results are complex and somewhat redundant. The key when running benchmarks is to eliminate as many variables as possible, especially human-created ones. When estimating preview fps, for example, when it's varying all over the place you can only specify a range, and you have to keep significant digits in mind. So if you are seeing 10 to 18 fps, and it spends most of its time in the middle of that range, then you would write 14 ±4 fps. That means I can guarantee the average frame rate is within the range provided. Anything else is just guessing, and everybody guesses differently unless trained to the same standard.
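To make that concrete, here is the reporting rule as a tiny Python sketch; the 10-18 fps figures are just the example numbers from the paragraph above.

# Report a fluctuating preview frame rate as midpoint +/- half the observed range.
def report_fps_range(low_fps, high_fps):
    midpoint = (low_fps + high_fps) / 2
    half_range = (high_fps - low_fps) / 2
    return f"{midpoint:.0f} +/- {half_range:.0f} fps"

print(report_fps_range(10, 18))   # -> "14 +/- 4 fps"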

The rendering and preview results all scale in a proportional manner; what I mean by that is the systems that render quickly also preview well. The test could have been simplified into two renders: one with a GPU hardware render, which would also indicate preview ability, and one using only a CPU-based render, which would indicate CPU ability in Vegas.

 

Former user wrote on 12/19/2021, 8:56 PM

@Hulk For the Vegas benchmark, if you have Excel on your PC you can sort the columns if you wish; no idea if this is any use to you.

Here I sorted by Encode Mode (column Q), A-Z:

 

RogerS wrote on 12/19/2021, 8:57 PM

Looking at the benchmark it's entirely GPU constrained. Running it without a discrete GPU would be like running a modern game on CPU only!

Is it though? The 8th fastest result was a CPU-only encode. 10th fastest was QSV (iGPU). You can also just look at column P (encode mode) if you are interested in the render performance of CPUs.

Last changed by RogerS on 12/19/2021, 8:57 PM, changed a total of 1 times.

Custom PC (2022) Intel i5-13600K with UHD 770 iGPU with latest driver, MSI z690 Tomahawk motherboard, 64GB Corsair DDR5 5200 ram, NVIDIA 2080 Super (8GB) with latest studio driver, 2TB Hynix P41 SSD and 2TB Samsung 980 Pro cache drive, Windows 11 Pro 64 bit https://pcpartpicker.com/b/rZ9NnQ

ASUS Zenbook Pro 14 Intel i9-13900H with Intel graphics iGPU with latest ASUS driver, NVIDIA 4060 (8GB) with latest studio driver, 48GB system ram, Windows 11 Home, 1TB Samsung SSD.

VEGAS Pro 21.208
VEGAS Pro 22.239

Try the
VEGAS 4K "sample project" benchmark (works with VP 16+): https://forms.gle/ypyrrbUghEiaf2aC7
VEGAS Pro 20 "Ad" benchmark (works with VP 20+): https://forms.gle/eErJTR87K2bbJc4Q7

Hulk wrote on 12/19/2021, 10:50 PM

Looking at the benchmark it's entirely GPU constrained. Running it without a discrete GPU would be like running a modern game on CPU only!

Is it though? The 8th fastest result was a CPU-only encode. 10th fastest was QSV (iGPU). You can also just look at column P (encode mode) if you are interested in the render performance of CPUs.

I don't want to be the bearer of bad news, but the chart doesn't make sense. Below I sorted the CPU-only results from fastest to slowest.

9900K beating 11900K? Impossible.
9900K in 2nd place and another one like 15 places down and over a minute slower?
Sandy Bridge 2600Ks beating Rocket Lake, Zen 2, and other newer architectures with equal or more cores?
3700X beating 9900K? Highly unlikely.

I could point out so many inconsistencies but I think you get the idea.

None of this makes any sense. It's a mish-mosh of results because most of these people used GPU encoding, different encoding settings, and different CPUs with different GPUs; there is no consistency, especially when you consider that all variables should be eliminated except for the CPU.

I mean, really, all you have to do is look at the random placement of the CPUs and you know this isn't CPU-only encoding.

If you have CPU-only rendering then the CPUs line up nicely in blocks when the chart is sorted.

I'm sorry, but there are so many unaccounted-for variables here that it's pretty much useless. The only general trend I can see is that faster GPUs do better at whatever people think they are testing. Like I wrote above, the testing methodology is too complex and too multivariate. Variables must be isolated to achieve meaningful results.

CPU            | Cores | GPU                       | Res | Mode | Time
i9-10980XE     | 18    | AMD RX-6800XT             | FHD | CPU  | 00m:34s
Ryzen 9 5950X  | 16    | AMD RX-550 XT             | FHD | CPU  | 00m:41s
i9-9900K       | 8     | Nvidia RTX-2080 Ti        | FHD | CPU  | 00m:47s
i9-11900K      | 8     | Nvidia RTX-3080           | FHD | CPU  | 00m:48s
i9-10900X      | 10    | Nvidia RTX-3080 XC3 Ultra | FHD | CPU  | 00m:51s
Ryzen 7 3700X  | 8     | AMD RX-580                | FHD | CPU  | 00m:58s
Ryzen 7 3700X  | 8     | AMD RX-580                | FHD | CPU  | 01m:02s
i7-11700K      | 8     | AMD Vega Frontier         | FHD | CPU  | 01m:17s
i7-980X        | 6     | AMD RX-580                | FHD | CPU  | 01m:24s
Ryzen 7 3700X  | 8     | AMD RX-470                | FHD | CPU  | 01m:30s
Ryzen 7 3700X  | 8     | AMD RX-470                | FHD | CPU  | 01m:31s
i9-10980XE     | 18    | AMD RX-6800XT             | UHD | CPU  | 01m:32s
i7-2600K       | 4     | Nvidia GTX-1080           | FHD | CPU  | 01m:32s
i7-2600K       | 4     | Nvidia GTX-1080           | FHD | CPU  | 01m:34s
i9-10900X      | 10    | Nvidia RTX-3080 XC3 Ultra | UHD | CPU  | 01m:40s
Ryzen 9 5950X  | 16    | AMD RX-550 XT             | UHD | CPU  | 01m:40s
i9-9900K       | 8     | Nvidia RTX-2080 Ti        | UHD | CPU  | 01m:53s
Ryzen 7 3700X  | 8     | AMD RX-580                | UHD | CPU  | 02m:13s
TR-1950X       | 16    | AMD Radeon 7              | UHD | CPU  | 02m:16s
Ryzen 7 3700X  | 8     | AMD RX-580                | UHD | CPU  | 02m:21s
Ryzen 7 3700X  | 8     | AMD RX-470                | UHD | CPU  | 02m:37s
i7-7700HQ      | 4     | Nvidia GTX-1050           | FHD | CPU  | 02m:38s
Ryzen 7 3700X  | 8     | AMD RX-470                | UHD | CPU  | 02m:41s
i7-11700K      | 8     | AMD Vega Frontier         | UHD | CPU  | 03m:02s
i7-2600K       | 4     | Nvidia GTX-1080           | UHD | CPU  | 03m:35s
i7-980X        | 6     | AMD RX-580                | UHD | CPU  | 04m:51s
i7-2600K       | 4     | Nvidia GTX-1080           | UHD | CPU  | 04m:59s
i7-4770K       | 4     | AMD R9-390                | UHD | CPU  | 05m:01s
i7-7700HQ      | 4     | Nvidia GTX-1050           | UHD | CPU  | 05m:29s
i7-2600K       | 4     | AMD Radeon HD 6850        | FHD | CPU  | 06m:45s
i7-2600K       | 4     | AMD Radeon HD 6850        | UHD | CPU  | 15m:11s

 

Former user wrote on 12/20/2021, 12:51 AM

None of this makes any sense. It's a mish-mosh of results because most of these people used GPU encoding

I think the GPU encoding is good, because the GPU encoder will never be the bottleneck: the encode will never be slower than the time it takes to render the frames, so it's the combination of the CPU and GPU processing (or the CPU only, if the GPU was turned off) that determines the speed.

At least that was the intention, but now we know that NVENC encoding in Vegas pauses every 60 frames, unlike the hardware encoders from Intel and AMD, so that makes the numbers less useful for determining how well a CPU and GPU combination would work on a CPU encode, since systems with Nvidia GPUs will no longer be pausing every 60 frames.

 

RogerS wrote on 12/20/2021, 1:38 AM

As with any user-submitted benchmark, the best way to weed out noise is to have more results. The previous caretaker of this data did challenge people who had clearly questionable results, and I've tried to keep that up by helping people who are unsure of their settings. The benchmark also began with VP 16 and goes through VP 19, so the version of Vegas is another consideration.

It's a bit complex in that you can render either FHD or UHD, and then Magix HEVC or AVC (though almost all responses are AVC).

So in your chart there should be a break after the worst performer, the i7-2600K / GTX-1080 / FHD / CPU at 01m:34s, where it switches to UHD and the ranking essentially restarts (though it's so slow it actually gets lapped by the i9-10980XE, which beats that FHD time in UHD).

The 9900K in question was at 4.9GHz. Vegas likes high clock speeds over core count, so it outperforming faster CPUs is plausible.
The i9-11900K was a laptop with the iGPU unavailable.

As far as the GPU role goes, even in CPU-only renders it still has a role in decoding which will also equalize things a bit with lesser CPUs that have an iGPU (Intel).

If anyone more knowledgeable in Google Sheets would like to help me create filters it could make comparing like with like easier.

If you want benchmarks totally controlled by the testmaker, try techgage.com/ which has a few looking at CPU and GPU for VP 18, though on a less diverse set of hardware.

Last changed by RogerS on 12/20/2021, 3:41 AM, changed a total of 1 times.


Hulk wrote on 12/20/2021, 8:01 AM

Managing a benchmark site is hard. I know that. As I wrote I've done it many times. I'm actually still updating this one: https://forums.anandtech.com/threads/handbrake-1-3-3-benchmark-your-system-new-benchmark-criteria.2588294/?view=date

The thing with Vegas and other video editors is that you have to separate the operations. At a high level, basically two things are going on. First, the assembly of the timeline: this is simply processing all of the events on the timeline and "mixing" them down to one video stream. Think "Best/Full" preview. Second, that stream is transcoded to a delivery format. We tend to use the term "render" to encompass both of these tasks.

If you render to a format that is not GPU-accelerated, the assembly of the timeline will still be GPU-accelerated (as long as the GPU is selected in preferences) but the transcoding will not be. In this example project, the assembly portion requires extremely high compute; it is at one end of the assembly-compute spectrum. At the other end would be a couple of short clips with a few crossfades, whose assembly would require very little compute.

So, what is going on with this particular test project, as I wrote above, is that since it is so assembly-heavy, as witnessed by the amount of GPU compute required to preview it, ANY render is going to be GPU-dependent. That is why the results are all over the place even when sorted by CPU. It is still primarily GPU-dependent: since none of the systems can even preview the project in real time, it would be impossible to render it in real time.

If you look only at UHD or FHD results, you will notice that, with the exception of a few outliers (they probably used the wrong settings), preview fps lines up with render time. This makes sense if you think about what I wrote above: in order to transcode you must first assemble (full-quality preview) the timeline. Whether you do a CPU or GPU transcode doesn't matter much with this clip, since the transcode portion of the render is light and the assembly portion is heavy.
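Here is a toy model of that argument in Python; the per-frame costs are invented for illustration only and do not come from the benchmark data.

# Toy model: per-frame render time is set by the slower of the two stages when
# they overlap. For an assembly-heavy project, swapping the encoder barely matters.
def render_time_per_frame(assembly_ms, transcode_ms, pipelined=True):
    return max(assembly_ms, transcode_ms) if pipelined else assembly_ms + transcode_ms

assembly_ms = 120      # hypothetical: heavy timeline, roughly 8 fps at Best/Full
cpu_encode_ms = 25     # hypothetical software (CPU) transcode cost per frame
gpu_encode_ms = 5      # hypothetical hardware (GPU) transcode cost per frame

print(render_time_per_frame(assembly_ms, cpu_encode_ms))   # 120 ms/frame
print(render_time_per_frame(assembly_ms, gpu_encode_ms))   # 120 ms/frame, same bottleneck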

Old Smoke will remember that we went through this with various Vegas benchmarks as far back as 15 years ago. Before that I ran a benchmark site for Ulead's MediaStudio Pro way back in the day. Did some work with Spot for them as well, way back. ;)

Anyway, this is a beautiful little project and would make for a good benchmark once the parameters of the test, and what is actually being tested, are clarified.

This is what I would suggest:

First, clearly explain that the render process consists of assembly and transcode. Furthermore, this test project is extremely assembly-heavy, so it primarily tests your computer's ability to preview a complex Vegas timeline.

Second, have the user render the project to a low-compute, non-GPU-enabled format, one that uses only intra-frame compression rather than temporally compressed inter-frame. This basically takes the transcode compute out of the test, so what the render time represents is how well the tested system can preview a complex timeline, but now you have a definite number based on the time to render. The fps can be calculated simply by knowing the number of frames in the test project (a small sketch of that calculation follows below). This will provide an accurate, level playing field to assess the relative GPU performance of all benchmarked systems. And I would bet that with a low-overhead delivery format the render fps would only be slightly higher than the preview fps for a given system, again because the assembly load on this test project is so high.

Third, to actually test the (non-GPU) transcoding ability of the system, I would create a test project that consists of the output of the first test, with a couple of transitions, looped a few times. Then I would have the user render this to one of the inter-frame codecs.

Two times would be recorded: the first being an indication of the test system's preview capability, and the second an indication of the system's CPU capability during transcode.

Make both tests UHD to simplify the results table, reduce testing time, and limit confusion for the tester.
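The fps calculation mentioned in the second suggestion is trivial; here is a sketch, assuming roughly 50 seconds of timeline at 29.97 fps (the real frame count of the benchmark project should be used instead).

# Convert a measured render time back into an effective frames-per-second figure.
def effective_fps(frame_count, render_seconds):
    return frame_count / render_seconds

frames = round(50 * 29.97)                       # assumed project length and frame rate
print(f"{effective_fps(frames, 94):.1f} fps")    # a 1m34s render works out to about 15.9 fps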

Anyway, that's my two cents on this.

Former user wrote on 12/20/2021, 3:15 PM

So, what is going on with this particular test project, as I wrote above, is that since it is so assembly-heavy, as witnessed by the amount of GPU compute required to preview it, ANY render is going to be GPU-dependent. That is why the results are all over the place even when sorted by CPU. It is still primarily GPU-dependent: since none of the systems can even preview the project in real time, it would be impossible to render it in real time.

It's a benchmark of the combination of the CPU and GPU rendering the frames; that's what most people who look at the table are interested in. That's the reason for the hardware-encoding version. First, we're not interested in CPU encoding speed, because it's not a benchmark of CPU encoding; it's a benchmark of CPU + GPU frame rendering, without the encoding portion. Second, as I mentioned, it's to standardize the test and give more meaningful results for what this benchmark is about (rendering frames using GPU + CPU). It's not helpful for the CPU to be wasted on encoding, which may bottleneck the rendering of the frames we're actually trying to benchmark.

Then there was your suggestion to not use hardware encoding but instead a CPU-friendly codec. At 4K I don't think there are many to choose from, but maybe ProRes 422, and on my computer that would work: I get an identical time using NVENC (via Voukoder, due to Vegas NVENC being broken) as I do with ProRes 422, but my CPU use is now in the 80s and I'm close to becoming CPU-bound. Those with 2-4 fewer cores would probably be CPU-bound, so encoding on the CPU would then start to affect the frame rendering speed.

I'm still not understanding your problem with hardware encoding. It's being used because it will accept the rendered frames as fast as they can be generated; it does not create a choke point. Also, it is hoped people do the tests with both GPU and CPU encoding (4 tests total).

I have nothing to do with this project, just my observations

 

 

Hulk wrote on 12/20/2021, 3:56 PM

Hi Todd,

I totally understand your points, and they are valid. To summarize mine: this benchmark project very heavily stresses the assembly of the timeline, which, like preview, is GPU-dependent. I don't remember offhand, but I think it's only 50 or so seconds long. Just as with games, the bottleneck with this benchmark project is going to be the GPU first and then the CPU. That is why, if you sort the rendering times, you will see most of the faster GPUs at the top of the list. CPUs are important, but in this bench they play a secondary role unless you have a really old, outdated CPU.

We could discuss how much CPU vs. GPU you need to balance this project, but what's the point, right? That balance point changes for each project.

My second point is that it is hard to compare systems when some people are encoding FHD and some UHD, some with AMD, Nvidia, or Intel GPU encoders, and with other variables within those encoders.

Simply having everyone encode to UHD using a non-GPU-accelerated codec would level the playing field and make apples-to-apples comparisons possible. The GPU would still be an important part of the test, but the results would be comparable across all systems.

As for why I'm not a fan of GPU encoding, it's pretty simple: I find the results to be terrible. Encode some UHD video at a bitrate of 5 Mbps using Handbrake (or RenderPlus in Vegas) and then do the same with one of the GPU-accelerated encoders. The GPU one will show all sorts of compression artifacts, macroblocking, blurriness, etc. The high-quality software-encoded version will look great.
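For anyone who wants to try that A/B comparison outside Vegas, here is a rough Python sketch driving ffmpeg; the filenames, the 5 Mbps target, and the choice of x264 vs. Quick Sync (h264_qsv) are illustrative assumptions, and Handbrake or RenderPlus as Mark describes would serve the same purpose.

# Rough A/B sketch: encode the same source at the same average bitrate with a
# software encoder (x264) and a hardware encoder (Quick Sync), then compare the
# outputs visually. Assumes an ffmpeg build with QSV support is on the PATH.
import subprocess

SOURCE = "timeline_master.mov"   # hypothetical mixed-down master of the timeline

def encode(video_codec, output, bitrate="5M"):
    subprocess.run([
        "ffmpeg", "-y", "-i", SOURCE,
        "-c:v", video_codec, "-b:v", bitrate,
        "-c:a", "copy",
        output,
    ], check=True)

encode("libx264", "out_software_5mbps.mp4")    # CPU/software encode
encode("h264_qsv", "out_quicksync_5mbps.mp4")  # GPU/hardware encode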

But to each his/her own! Everybody has their own preferences and workflow.

 

TheRhino wrote on 12/20/2021, 9:38 PM

last time I experimented with discrete cards, approximately 4 years ago...
I have found the transition to Vegas 19 from 14 to be pretty much drama free...
But to each his/her own! Everybody has their own preferences and workflow....

Encode some UHD video at a bitrate of 5Mbps using Handbrake (or RenderPlus in Vegas) and then do the same with one of the GPU accelerated codecs. The GPU one will shows all sorts of compression artifacts, macroblocks, blurriness, etc...

@Hulk A LOT has changed concerning how Vegas utilizes PCIe GPUs from V14 to V19, so IMO when prices settle you are going to want to add a good bang/buck GPU even if you continue to use non-GPU accelerated codecs for encoding. With modern versions of Vegas, the CPU/GPU are tightly inter-woven...

For instance, I removed the VEGA 56 GPU from my 11700K system to resell it so I can replace it with a 6800 XT. Once removed, before installing the new GPU, I re-ran some of my AVID DNxHR and ProRes render templates. Although these intermediate codecs are not GPU-assisted, not having the VEGA there to handle some of the workload causes my renders to take 2X to 3X as long...

Also worth noting, if I have the AMD VEGA installed, but choose QSV renders to utilize the Intel iGPU for encoding, the QSV MP4 renders only take a tad longer than my AMD-assisted VCE MP4 renders. Windows Task manager shows BOTH the AMD GPU & Intel iGPU actively engaged, but of course the iGPU is working harder now since it has to decode and encode QSV... HOWEVER, if I disable the AMD GPU in device manager and/or remove it from the system altogether, my QSV renders also take 2X to 3X as long...

You noted that you are not happy with GPU-assisted codecs rendering UHD video at 5 Mbps... Is there a reason you need to use such low bitrates? Some of the default Vegas GPU-assisted templates are 20 Mbps, but you can create custom templates at higher bitrates. With today's fast WiFi, fast Internet, large USB sticks & faster devices there is no need to keep the bitrates so low anymore...

IMO the whole Vegas2HandBrake route became a popular workaround due to V13-V14's poor MP4 quality. That was before Magix made a bunch of improvements...

BTW, the new AMD 6800 XT makes my preview window silky smooth... It cost $1300, so I am selling my VEGA 56 for around $750 and my VEGA 64 for around $850 to pay for it... That leaves my 2nd workstation without a GPU, but IMO it is a dog without its PCIe GPU... I'm working on the systems literally side-by-side and there is no comparison...

Workstation C with $600 USD of upgrades in April, 2021
--$360 11700K @ 5.0ghz
--$200 ASRock W480 Creator (onboard 10G net, TB3, etc.)
Borrowed from my 9900K until prices drop:
--32GB of G.Skill DDR4 3200 ($100 on Black Friday...)
Reused from same Tower Case that housed the Xeon:
--Used VEGA 56 GPU ($200 on eBay before mining craze...)
--Noctua Cooler, 750W PSU, OS SSD, LSI RAID Controller, SATAs, etc.

Performs VERY close to my overclocked 9900K (below), but at stock settings with no tweaking...

Workstation D with $1,350 USD of upgrades in April, 2019
--$500 9900K @ 5.0ghz
--$140 Corsair H150i liquid cooling with 360mm radiator (3 fans)
--$200 open box Asus Z390 WS (PLX chip manages 4/5 PCIe slots)
--$160 32GB of G.Skill DDR4 3000 (added another 32GB later...)
--$350 refurbished, but like-new Radeon Vega 64 LQ (liquid cooled)

Renders Vegas11 "Red Car Test" (AMD VCE) in 13s when clocked at 4.9 ghz
(note: BOTH onboard Intel & Vega64 show utilization during QSV & VCE renders...)

Source Video1 = 4TB RAID0--(2) 2TB M.2 on motherboard in RAID0
Source Video2 = 4TB RAID0--(2) 2TB M.2 (1) via U.2 adapter & (1) on separate PCIe card
Target Video1 = 32TB RAID0--(4) 8TB SATA hot-swap drives on PCIe RAID card with backups elsewhere

10G Network using used $30 Mellanox2 Adapters & Qnap QSW-M408-2C 10G Switch
Copy of Work Files, Source & Output Video, OS Images on QNAP 653b NAS with (6) 14TB WD RED
Blackmagic Decklink PCie card for capturing from tape, etc.
(2) internal BR Burners connected via USB 3.0 to SATA adapters
Old Cooler Master CM Stacker ATX case with (13) 5.25" front drive-bays holds & cools everything.

Workstations A & B are the 2 remaining 6-core 4.0ghz Xeon 5660 or I7 980x on Asus P6T6 motherboards.

$999 Walmart Evoo 17 Laptop with I7-9750H 6-core CPU, RTX 2060, (2) M.2 bays & (1) SSD bay...

Hulk wrote on 12/20/2021, 10:40 PM

 

last time I experimented with discrete cards, approximately 4 years ago...
I have found the transition to Vegas 19 from 14 to be pretty much drama free...
But to each his/her own! Everybody has their own preferences and workflow....

Encode some UHD video at a bitrate of 5Mbps using Handbrake (or RenderPlus in Vegas) and then do the same with one of the GPU accelerated codecs. The GPU one will shows all sorts of compression artifacts, macroblocks, blurriness, etc...

@Hulk A LOT has changed concerning how Vegas utilizes PCIe GPUs from V14 to V19, so IMO when prices settle you are going to want to add a good bang/buck GPU even if you continue to use non-GPU accelerated codecs for encoding. With modern versions of Vegas, the CPU/GPU are tightly inter-woven...

For instance, I removed the VEGA 56 GPU from my 11700K system to resell it so I can replace it with a 6800 XT. Once removed, before installing the new GPU, I re-ran some of my AVID DNxHR and ProRes render templates. Although these intermediate codecs are not GPU-assisted, not having the VEGA there to handle some of the workload causes my renders to take 2X to 3X as long...

Also worth noting, if I have the AMD VEGA installed, but choose QSV renders to utilize the Intel iGPU for encoding, the QSV MP4 renders only take a tad longer than my AMD-assisted VCE MP4 renders. Windows Task manager shows BOTH the AMD GPU & Intel iGPU actively engaged, but of course the iGPU is working harder now since it has to decode and encode QSV... HOWEVER, if I disable the AMD GPU in device manager and/or remove it from the system altogether, my QSV renders also take 2X to 3X as long...

You noted that you are not happy with GPU-assisted codecs rendering UHD video at 5 Mbps... Is there a reason you need to use such low bitrates? Some of the default Vegas GPU-assisted templates are 20 Mbps, but you can create custom templates at higher bitrates. With today's fast WiFi, fast Internet, large USB sticks & faster devices there is no need to keep the bitrates so low anymore...

IMO the whole Vegas2HandBrake route became a popular workaround due to V13-V14's poor MP4 quality. That was before Magix made a bunch of improvements...

BTW, the new AMD 6800 XT makes my preview window silky smooth... It cost $1300, so I am selling my VEGA 56 for around $750 and my VEGA 64 for around $850 to pay for it... That leaves my 2nd workstation without a GPU, but IMO it is a dog without its PCIe GPU... I'm working on the systems literally side-by-side and there is no comparison...

I appreciate the advice and comments. I have hope for the upcoming Intel ARC cards as far as pricing and availability. Also Intel claims they will work in conjunction with the Intel iGPU.

The 5,000 kbps was just an example of how much better software encoding is than hardware at any given bitrate. RenderPlus in Happy Otter provides excellent results and the ability to batch render, so I guess I simply prefer smaller, more efficiently encoded files over larger, less efficiently encoded ones of equal quality. There is a certain elegance to well-encoded video that I appreciate; hardware encodes seem clunky and wasteful to me. Yes, it's totally subjective!

Again thanks for the advice. I will take it to heart.

Mark

Former user wrote on 12/20/2021, 11:14 PM
 

We could discuss how much CPU vs. GPU you need to balance this project, but what's the point, right? That balance point changes for each project.

With the hardware render I see between about 35% and 70% CPU, so that should be enough to differentiate CPUs quite a bit based on how much work they can do per cycle. There is a constant synchronous relationship between the CPU and GPU, and I'm not looking at the figures right now, but I think, as an example, the fast Intels even with fewer cores beat the fast AMDs with more cores in hardware encoding using the same GPUs, and that's interesting.

My second point is that it is hard to compare systems when some people are encoding FHD and some UHD, some with AMD, Nvidia, or Intel GPU encoders, and with other variables within those encoders.

The hardware encoder should not be a variable as long as it's fast enough for the rendered-frame benchmark, and at 1080p and 4K Intel's, AMD's, and Nvidia's encoders are all faster than the rate at which frames can be rendered. So in theory there are no variables: all the hardware encoders will absorb and encode at the same rate, because that's not where the latency is; it's in the render stage. There will be quality differences, but that's not of interest, as it's not a test of the encoder.

Unfortunately this actually isn't the case, so you bring up a good point: NVENC is broken in Vegas and pauses every 60 frames. This makes a huge difference in comparisons of direct transcodes, but with GPU-intensive benchmarks like this one, in the case of the 4K hardware encode I see a difference of 8 seconds between using Vegas NVENC and Voukoder NVENC.

 

 

As for why I'm not a fan of GPU encoding, it's pretty simple: I find the results to be terrible. Encode some UHD video at a bitrate of 5 Mbps using Handbrake (or RenderPlus in Vegas) and then do the same with one of the GPU-accelerated encoders. The GPU one will show all sorts of compression artifacts, macroblocking, blurriness, etc. The high-quality software-encoded version will look great.

But to each his/her own! Everybody has their own preferences and workflow.

 

At 1440p I stick with software encoding; at 4K it's over to HEVC hardware encoding. If the video is short enough, 4K AVC is still reasonable; I just don't normally have the time to wait.

TheRhino wrote on 12/22/2021, 7:55 AM

if I disable the AMD GPU in device manager and/or remove it from the system altogether, my QSV renders also take 2X to 3X as long...

I'm going to elaborate on this a little more because I'm literally sitting here waiting for a lengthy paid job to finish rendering so the client gets it before Christmas...

Currently my 9900K / VEGA 64 LQ system has (3) open instances of Vegas all batch-rendering at once. The first is using about 70% of the CPU and very little GPU to render 4K DNxHR 444 as requested by the client. The 2nd is using 100% of the GPU & about 15% more of the CPU to render to 4K HEVC. The 3rd is using any remaining resources to render to AVC MP4, which now pegs BOTH my CPU & GPU usage at 100% solid with no drops until at least one of the three instances of Vegas has finished batch processing. Both the HEVC & AVC tasks are rendering at about 45 fps average each and the DNxHR is only about 15 fps, so I'll switch one of the others to finish the DNxHR when it is done with AVC, etc...

For really big jobs, or to start my next job, I like having a 2nd best bang/buck system vs. paying top dollar for a single best/fastest PC, so I appreciate when Vegas users like @Hulk post affordable system specs with the latest Intel 12xxx CPUs, etc. But like others, I am interested in how the Intel 12xxx CPUs perform when paired with a capable GPU, so hopefully we'll see some others post when they complete their upgrades - especially those buying hardware right now to write it off on their 2021 taxes...

Last changed by TheRhino on 12/22/2021, 7:56 AM, changed a total of 1 times.


Hulk wrote on 12/22/2021, 8:05 AM

I'm going to get a GPU eventually, but these prices are literally insane. I will provide an update when I do. I'm waiting patiently for Intel's ARC to be released. Early performance leaks are promising, but who knows if they'll be widely available and affordable. I'm hoping Intel sees the huge opening in the market for affordable GPUs and has moved one of its 14nm fabs to cranking out GPUs. One can hope, right? On a whim I bought some Intel stock.

Howard-Vigorita wrote on 12/23/2021, 1:28 AM

@Hulk Curious if the iGPU in the 12th gen runs faster than the UHD 750 in my 11th gen. I just ran that benchmark with VP19, selecting my 11900K's Intel UHD 750 iGPU on the Vegas video preferences tab, and got these results: QSV FHD: 2 min 17 sec; QSV UHD: 5 min 8 sec. All settings as specified in the benchmarking thread, and any other Vegas settings at defaults.

RogerS wrote on 12/23/2021, 2:04 AM

Thanks Howard, I'd also be interested in seeing 12th gen. iGPU results. Hopefully we get someone to run the benchmark and upload results.

To the results Google Sheet I also added a few simple filters to view the data by encode mode, and color-coded HD and UHD so it's clear which results go with which. That's about the extent of my sophistication with data reporting.
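For what it's worth, the same slicing is easy once the sheet is exported to CSV; here is a sketch in pandas, with column names ("CPU", "GPU", "Resolution", "Encode Mode", "Time") that are guesses at the sheet's layout rather than its actual headers.

# Sketch: filter exported benchmark results so only one encode mode and one
# resolution are compared at a time, then sort by render time. Column names assumed.
import pandas as pd

df = pd.read_csv("vegas_benchmark_results.csv")   # hypothetical export of the sheet

cpu_only_uhd = (
    df[(df["Encode Mode"] == "CPU") & (df["Resolution"] == "UHD")]
      .sort_values("Time")
)
print(cpu_only_uhd[["CPU", "GPU", "Time"]].to_string(index=False))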

 

Hulk wrote on 12/23/2021, 7:12 AM

@Hulk Curious if the iGPU in the 12th gen runs faster than the UHD 750 in my 11th gen. I just ran that benchmark with VP19, selecting my 11900K's Intel UHD 750 iGPU on the Vegas video preferences tab, and got these results: QSV FHD: 2 min 17 sec; QSV UHD: 5 min 8 sec. All settings as specified in the benchmarking thread, and any other Vegas settings at defaults.

Sure, I'll run it. What encode quality did you use, as that will affect the result? I don't see anything regarding Intel QSV settings in the instructions. Could you post a screenshot of your Intel QSV encode settings so we're using equal settings?

RogerS wrote on 12/23/2021, 7:17 AM

Hi Hulk, thank you. For NVENC and VCE it's just "default." Is that an option for QSV? If not, could you share what the options are? I no longer have QSV available for MagixAVC with my own system to check. Feel free to share a screenshot of what the QSV render options currently look like and we can confirm or define the parameters together and then add guidance for future benchmarks.

RogerS wrote on 12/23/2021, 7:31 AM

What about the following (please check QSV and if someone can confirm AMD default settings that would be helpful)

For Mainconcept use RC Mode: h264_VBR
For NV Encoder use preset: default and RC Mode: VBR
For QSV use preset: Balanced and RC Mode: VBR
For VCE use ?