Techgage includes Vegas Pro in its hardware benchmarks. This time it's looking at rendering with different high-end AMD and Intel CPUs, comparing many cores at lower clock speeds with fewer cores at faster speeds. There is also a test using GPUs and one of the Vegas AI options.
One odd thing here is AMD CPUs winning over Intel on the AI Fx benchmarks. I wonder if the iGPU is disabled and if that's part of why "each time we revisit, we seem to encounter odd performance scaling, or just odd behavior in general. This is the reason why there is no CPU+GPU test above; the end results are just unpredictable."
The article does not specifically mention an Intel graphics driver. If no driver were installed, Windows would attach its generic Microsoft Basic Display Adapter driver and Vegas would ignore it as a gpu or igpu. The article also talks about disabling disruptive services but does not mention Windows Update or internet connectivity... Windows Update might pull down an Intel driver on its own, but Vegas would not be able to see the igpu until after a Vegas restart.
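For anyone who wants to check what Windows actually attached, something like this (just a sketch, assuming Python 3 and PowerShell are on the box) lists the display adapters and their drivers; "Microsoft Basic Display Adapter" there would mean no Intel driver is installed:

```python
# Sketch: list display adapters and their drivers via WMI (Windows only).
import json
import subprocess

ps = (
    "Get-CimInstance Win32_VideoController | "
    "Select-Object Name, DriverVersion | ConvertTo-Json"
)
out = subprocess.run(
    ["powershell", "-NoProfile", "-Command", ps],
    capture_output=True, text=True, check=True,
).stdout

adapters = json.loads(out)
if isinstance(adapters, dict):  # a single adapter serializes as one object
    adapters = [adapters]

for a in adapters:
    # "Microsoft Basic Display Adapter" means no vendor driver is installed,
    # so Vegas would not see that igpu at all.
    print(f"{a['Name']}: driver {a['DriverVersion']}")
```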
Disabling the Intel igpu in bios would make test results more consistent from run to run and level the playing field when comparing to a cpu that does not have an on-board igpu, but it would be less representative of the comparative performance that real-world users might experience.
Former user
wrote on 8/13/2022, 10:16 PM
One odd thing here is AMD CPUs winning over Intel on the AI Fx benchmarks. I wonder if the iGPU is disabled and if that's part of why "each time we revisit, we seem to encounter odd performance scaling, or just odd behavior in general. This is the reason why there is no CPU+GPU test above; the end results are just unpredictable."
@RogerS It is strange. Vegas uses OpenVINO, an Intel AI accelerator that's not available on AMD; we know that much, but I guess we don't know exactly what it's used for. As a comparison, and this is old information since it's probably 1+ years since I looked into it, Topaz AI apps use OpenVINO but only for face recognition, face tracking, and face enhancement, yet that's enough to make Intel CPUs faster even though recognizing faces is only a small part of their upscaling and denoising.
It doesn't seem to make sense that OpenVINO on Intel CPUs is so slow in comparison to AMD unless that's also a bug he's discovered, the same way he is responsible for discovering the slow NVENC encode bug. 6th-10th gen Intel CPUs can't use their iGPUs for OpenVINO, so the iGPU isn't necessary and the acceleration should still be active even if the iGPU is unavailable because it's turned off in bios or lacks drivers.
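Not from the article, but as a quick sanity check, the standalone openvino Python package can list which devices the toolkit sees on a given machine (the assumption being that it roughly mirrors whatever build Vegas bundles):

```python
# Sketch: enumerate the devices OpenVINO can target on this machine.
# Uses the pip "openvino" package, not Vegas's own build, so it only shows
# what the hardware/driver exposes, not what Vegas actually uses.
from openvino.runtime import Core

core = Core()
for device in core.available_devices:  # e.g. ['CPU', 'GPU']
    name = core.get_property(device, "FULL_DEVICE_NAME")
    print(f"{device}: {name}")
# On an 11th-gen system with Intel drivers installed you'd expect a "GPU"
# entry for the iGPU; with no Intel driver, or the iGPU disabled in bios,
# only "CPU" shows up, which is the scenario discussed above.
```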
I would love to see some Intel users with 11th-gen+ CPUs and active iGPUs test these Fx directly. Maybe I will be able to do so myself if the price of new computers is right later this autumn.
@RogerS Took a shot at it. I was going to put the 3 Vegas fx that Techgage used for their Vegas/gpu ratings into a short gpu test project but noticed that Median FX was the only one classified in Vegas as gpu accelerated. Colorize and Style Transfer are listed as 32-bit floating point. So I went through the gpu-list and just picked a bunch of others to add to Median and applied them (8 total fx) to 10 seconds of the same generated media used in the Sample Project. Here's what I came up with:
And these are the render times I got from vp20 standard default Magix avc 4k 29.97 templates with the 6900xt selected in Video Properties running in an 11900k system:
Nvenc: 0:31; Vce: 0:27; Qsv: 0:31
Selected igpu doesn't matter here because it's all generated media and fx.
Also ran it with the Nvidia 1660ti selected in video properties in the same 11900k system:
Here's how my machines came out. Note that I replaced a few fx in the test to get it to run on Vegas as far back as version 16 (so more folks can try it) so the numbers are different from this morning. The link above will get the revised version now.
@Howard-Vigorita the discrepancy I was talking about with @RogerS is not related to GPUs. Look at the following to understand:
Everything is slower on VP19, and that's most likely true since every version is the same engine weighted down with more bloat/features, but how do the Intel results make sense?
I would predict your GPU results will be much better and your system AVC will probably be similar to mine, but AI Tagging on the CPU uses OpenVINO, so your result should be noticeably better than mine. For some reason, though, Vegas's results with 12th-gen Intel CPUs using AI don't show any dramatic performance increase.
@Former user You referring to this? If so, the gpu-fx investigation I did would not shed any light on that.
One odd thing here is AMD CPUs winning over Intel on the AI Fx benchmarks. I wonder if the iGPU is disabled and if that's part of why "each time we revisit, we seem to encounter odd performance scaling, or just odd behavior in general. This is the reason why there is no CPU+GPU test above; the end results are just unpredictable."
I think you'd need to put something together with just ai-assisted fx and investigate behavior on amd-cpu systems compared to systems with intel cpu+igpus. I suspect you'd find that with Intel cpu+igpus there is substantial 3D utilization showing on the igpu, which would suggest their libs use the igpu if it's available. That would only be interesting to me, however, if the fx actually performed better.
Cpus that lack igpus often trade igpu real estate for more cores... maybe that trade-off is a better deal for ai-fx. I have a few Intel cpus that lack igpus (xeon and 980x) but they don't have more cores, so I wouldn't be able to do the complete analysis myself. The newer x-series Intels are more like the Ryzens: no igpu but higher core counts.
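If anyone wants to watch that 3D utilization while a render runs, a rough sketch like this (assuming Windows 10+ where the GPU Engine perf counters exist) samples it from a second console:

```python
# Sketch: sample the Windows "GPU Engine" 3D-utilization counters while an
# ai-fx render runs, to see whether the Intel igpu is actually being used.
import subprocess

# 30 one-second samples; the counters cover every GPU in the system, so
# look for the Intel adapter's rows in the output.
subprocess.run([
    "typeperf",
    r"\GPU Engine(*engtype_3D)\Utilization Percentage",
    "-si", "1",
    "-sc", "30",
])
```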
.... someone can put together a "benchmark" and we can test? I have an all AMD computer.
That would be great. Found that the Style Transfer crashes my 11900k/6900xt and reported it on the crash screen. Here's the link to download the ai_test.veg project:
@Howard-Vigorita That's the question: do Intel CPUs (6th gen+) and Intel CPUs with iGPUs (11th gen+, Intel Iris Xe only? not sure) actually see an advantage in Vegas Pro? If we know they should be advantaged, we can possibly tell whether the Techgage guy did something wrong with his VP19 test.
If he did try to standardize tests and use only the Nvidia RTX 3070's GPU decoder for all tests, I don't think that interferes with OpenVINO using the iGPU, but what would affect OpenVINO is if he didn't have Intel drivers installed. If that were true, file I/O set to auto would choose the Nvidia decoder anyway (if it wasn't already set to Nvidia), so the system wouldn't be disadvantaged on GPU decoding, but it would lose any AI acceleration from the iGPU. The importance of the iGPU is that, unlike the CPU, the iGPU may be doing very little processing-wise while the CPU is very busy.
Here's a benchmark of a popular face tracker
Which raises more questions: why is the 11th gen more efficient than the 12th gen? At a guess it's probably because 12th gen lacks AVX-512, which OpenVINO uses. The toolkit has been updated since this benchmark, so I would expect 12th gen to be much better now.
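For anyone curious whether their own chip reports AVX-512, a small sketch like this (assuming the third-party py-cpuinfo package) prints whatever AVX-512 flags the CPU exposes:

```python
# Sketch: check whether the CPU reports AVX-512, which OpenVINO can use on
# 11th-gen parts but which consumer 12th-gen parts dropped.
import cpuinfo  # pip install py-cpuinfo

info = cpuinfo.get_cpu_info()
flags = set(info.get("flags", []))
avx512 = sorted(f for f in flags if f.startswith("avx512"))

print(info.get("brand_raw", "unknown CPU"))
print("AVX-512 flags:", ", ".join(avx512) if avx512 else "none reported")
```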
@Former user I have a partial answer. On the 11900k, Intel igpu utilization is very high with the ai-fx while cpu utilization is moderate. When I disabled the igpu in bios, cpu utilization jumped to 100%. But the run with the igpu enabled was slightly slower. However, it might be beneficial in a real-world render where other things would be going on and the ai-fx wouldn't be hogging the cpu.
My machines with Intel hd630s in them showed no igpu utilization and their cpus ran at 100%. The laptop showed a little igpu utilization, probably because its display is wired into it.
That's the good news. Now the bad news. The Style Transfer part of my test project crashes on every one of my systems except for the xeon and the laptop, igpus disabled in bios or not. "I have no response to that." But I did fill in all the crash screen info.
Former user
wrote on 8/15/2022, 7:53 PM
@Howard-Vigorita Your benchmark doesn't work with Voukoder (for me), and Vegas is also highly unstable using it. I got 35 seconds, 55 seconds, and 65 seconds for encode time using Magix AVC NVENC, and twice it crashed. Those results don't make any sense. Maybe make a benchmark where only a single AI effect is used, or just leave Style Transfer out if that looks to be the only problem.
That's good news about you confirming OpenVINO operating with the iGPU; the only thing to do now is confirm it actually makes a difference.
I guess the Style Transfer I did is more of a torture test than a benchmark... or maybe an abstract art generator.
I think there are problems with Style Transfer. If you send detailed crash reports, maybe they'll get fixed someday. I haven't had any crashes on Colorize and Upscale so you can just disable Style Transfer and gauge without it for now... I just updated it so future downloaders will get it with Style Transfer already unchecked in track fx.
No crash here at all. I set the timeline to loop and watched with horror as RAM went to 18GB. Playback was between 1 and 4 fps.
First screenshot is timeline playback. Second screenshot is while exporting in MAGIX AVC using AMD VCE at the correct frame rate. Third screenshot is when rendering is complete with over 21GB of RAM used. OUCH!!!!
@fr0sty you mention "style transfer is enabled", how?
@Former user I was able to render with Voukoder AMD AMF "stock" with no crashes. Screenshot 4 is Voukoder during rendering, and screenshot 5 is the finished rendered file, which took longer than MAGIX AVC with AMD VCE.
@Howard-Vigorita that was one ugly mean test! Geez... I'll never use that in any video! 😬
Overall, Vegas performed marginally, even though CPU usage was high and RAM use was exceedingly high. But again, no crashes.
Former user
wrote on 8/16/2022, 5:38 AM
@Reyfox your 18.5-second encode time seems way too fast. Is it somehow possible that all the FX are not being used, that it wasn't encoded to 4K, or that it isn't fully encoded? It's really strange. After testing again, 52.5 seconds is repeatable for me. I mentioned before getting a 35-second result; unfortunately I did not check that encode, but I remember thinking it wasn't correct. I get about 18GB of RAM used by Vegas, which sounds very similar to you.
Your GPU should not be a factor here for processing or encoding, and you have an AMD CPU, so there's no CPU or iGPU OpenVINO acceleration. It's also interesting that Voukoder works for you; Vegas doesn't crash for me, it instantly goes to 100% and does nothing. This is my rendered benchmark video, is yours identical?
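One quick way to check that, just a sketch assuming ffprobe is on the PATH and using a placeholder file name, is to probe the rendered file's resolution, frame rate, and duration:

```python
# Sketch: sanity-check a rendered benchmark file with ffprobe to confirm it
# really came out as 4K at the expected frame rate and length.
import json
import subprocess

def probe(path):
    # Pull the video stream's dimensions/frame rate plus container duration.
    out = subprocess.run(
        [
            "ffprobe", "-v", "error",
            "-select_streams", "v:0",
            "-show_entries", "stream=width,height,avg_frame_rate",
            "-show_entries", "format=duration",
            "-of", "json", path,
        ],
        capture_output=True, text=True, check=True,
    ).stdout
    data = json.loads(out)
    s = data["streams"][0]
    print(f"{s['width']}x{s['height']} @ {s['avg_frame_rate']}, "
          f"{float(data['format']['duration']):.2f} s")

probe("ai_test_render.mp4")  # placeholder name for the rendered benchmark
```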
Hey, guys. Thanks for trying it out and verifying the crash on Style Transfer isn't just me. @Reyfox, it's enabled by just pulling up the track fx and checking the box to enable the Style Transfer fx.
Gonna dig into Style Transfer a little more today. All it does is cycle through all the supplied presets. Maybe it's as simple as one of them being messed up. Odd that it only blows up on some cpus.
@Howard-Vigorita and @Former user, something must be wrong with my brain. @Former user, once I ticked on Style Transfer, it was even slower, and just as ugly as yours.
So... new numbers again.....
First screenshot, while rendering. Second screenshot, rendering complete.