Initial render benchmarks with Intel A750/A770

RogerS wrote on 10/7/2022, 11:11 AM

Techgage unveiled its latest benchmarks of the new Intel GPUs, including VEGAS performance using VP 19.

It's interesting to see the higher-end Intel cards outperform medium-spec NVIDIA and AMD cards for transcodes, and also on GPU-heavy FX like Median (whereas the less powerful A380 struggled there).

https://techgage.com/article/intel-arc-a750-a770-workstation-review/

There are also tests with Lightroom (though a bit marred by yet another AMD driver bug) and other creative software.

Comments

Former user wrote on 10/7/2022, 6:55 PM

He says:

  • As much as we’ve loved testing VEGAS Pro over the years, we admit that the current state of the software (at least version 19) has made benchmarking difficult. When the software first launched, we encountered such bizarre scaling, that we gave up on testing, and never ended up returning to it. With VP20 out, we’re planning to dig into that soon, and hope to see more realistic scaling across-the-board.

He's in for a disappointment unless this new decoder gets here soon.

RogerS wrote on 10/7/2022, 7:48 PM

I suggested he wait an update or two as I don't see much change from 19 in this regard. I think he wants to test it on new hardware sooner though and that would be useful (thinking about a new system myself).

Howard-Vigorita wrote on 10/8/2022, 12:51 AM

I tried to figure out what he was talking about in his last article and found no issues with any of the FX listed as GPU accelerated. The only FX I found to be screwy was the Style Transfer AI FX, which should work with or without GPU assistance. I have the feeling that the problem there is an Intel issue only they can resolve.

Reyfox wrote on 10/8/2022, 4:53 AM

I do think it's important for Vegas to be included in the testing of hardware over at Techgage. It gives Vegas some much needed exposure.

I wonder if @Deathspawner (Rob Williams) is in touch with the VEGAS programming team about his issues.


Former user wrote on 10/14/2022, 10:54 PM

H.264 decoding/encoding benchmark. With these tests, Resolve most likely causes no bottlenecks, so the results are the maximum decoding/encoding performance of the hardware. Arc decoding is worse than the iGPU, which I think is what @Howard-Vigorita concluded also.

https://www.pugetsystems.com/labs/articles/Intel-Arc-A750-A770-Content-Creation-Review-2376/#DaVinciResolveStudio-H_264DecodingandEncoding
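A quick way to reproduce that kind of "maximum hardware performance" number outside any NLE is to let ffmpeg drive the decoder directly. This is just a sketch (the filename is a placeholder, and it assumes an ffmpeg build with Quick Sync support):

```shell
# Decode-only: frames are discarded, so the hardware decoder is the
# only thing being timed (input.mp4 is a placeholder test clip)
ffmpeg -hwaccel qsv -i input.mp4 -benchmark -f null -

# Decode + re-encode, closer to what the Puget transcode test measures
ffmpeg -hwaccel qsv -i input.mp4 -c:v h264_qsv -benchmark -f null -
```

`-benchmark` prints elapsed time at the end, and the `null` muxer discards the output so disk speed never enters into it.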

Deathspawner wrote on 10/15/2022, 7:03 PM

I do think it's important for Vegas to be included in the testing of hardware over at Techgage. It gives Vegas some much needed exposure.

I wonder if @Deathspawner (Rob Williams) is in touch with the VEGAS programming team about his issues.

I am in touch, and they have been hugely accommodating in the past. Once I get some initial GPU testing done, I'll reach out to them with some details - unless I get lucky and everything happens to scale fine.

GPU aside, I'll have some VP20 CPU performance numbers next week with AMD Zen 4 and Intel 13th-gen Core. I don't think that's as interesting as GPU, but it's something (and probably safer than GPU right now, by the sounds of it, haha).

Also, thank you for your continued support.

RogerS wrote on 10/15/2022, 7:43 PM

I think that's very interesting (I'm considering building a 13th-gen system myself), so I look forward to it!

Howard-Vigorita wrote on 10/16/2022, 12:04 AM

H.264 decoding/encoding benchmark. With these tests, Resolve most likely causes no bottlenecks, so the results are the maximum decoding/encoding performance of the hardware. Arc decoding is worse than the iGPU, which I think is what @Howard-Vigorita concluded also.

@Former user I don't think I looked too closely at avc/h.264 decoding before. But you got me wondering so I concocted a near-lossless avc clip from the lossless hevc clip I usually test with. Used ffmpeg to transcode it to 8-bit avc so it could be decoded in hardware. Made it with this powershell script:

$IN = "hevc-lossless-an.mov"    # 10-bit lossless HEVC source clip
$OUT = "nearLL-avc8bit.mp4"     # near-lossless 8-bit AVC transcode
# crf 1 keeps it near-lossless; yuv420p + High profile makes it 8-bit 4:2:0,
# which the hardware decoders can handle
ffmpeg -i "$IN" -c:v libx264 -preset ultrafast -crf 1 -pix_fmt yuv420p -profile:v high -level:v 4.0 -y "$OUT"

I've now got Arc A380 cards in my 9900K and 11900K systems with the iGPUs disabled in the BIOS. I ran the Arc times that way, then activated the iGPUs to time them while disabling the Arc boards in Device Manager. Here's what I got:

9900k xdcam ex 30p 1080p: arc: 0:19; igpu: 0:17

9900k MainConcept 30p 2160p: arc: 1:34; igpu: 1:34

9900k Qsv 30p 2160p: arc: 0:23; igpu: 0:25

11900k xdcam ex 30p 1080p: arc: 0:14; igpu: 0:15

11900k MainConcept 30p 2160p: arc: 1:23; igpu: 1:23

11900k Qsv 30p 2160p: arc: 0:19; igpu: 0:26

They all came out pretty close on decoding, with the 9900K/UHD 630 only edging out the Arc on HD XDCAM. On combined QSV encoding/decoding, the 11900K/UHD 750 was significantly slower than the Arc, and no faster than the 9900K/UHD 630.
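To put those m:ss pairs on a common footing, a throwaway shell helper (purely illustrative, not from the original tests) converts them to seconds and a speed ratio:

```shell
# Convert an m:ss render time like "1:34" to whole seconds
# (10# forces base-10 so values like "09" aren't read as octal)
to_seconds() { echo $(( 10#${1%%:*} * 60 + 10#${1##*:} )); }

# Example from the table above: 11900k QSV render, Arc 0:19 vs iGPU 0:26
a=$(to_seconds 0:19); b=$(to_seconds 0:26)
awk "BEGIN { printf \"Arc is %.2fx faster\n\", $b / $a }"   # prints "Arc is 1.37x faster"
```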

I also tested Arc hardware AV1 encoding using Voukoder. It rendered at the same speed as hevc-qsv, and YouTube took it without issue.
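For anyone without Voukoder, the same Arc media engine is exposed in recent ffmpeg builds (5.1 and later) as the av1_qsv encoder, so a rough command-line equivalent would be the following. The filename and bitrate are placeholders, and it assumes an ffmpeg build with QSV support and an Arc card present:

```shell
# Hardware AV1 encode on an Arc card via Quick Sync (av1_qsv);
# source.mov and the 20 Mbps target bitrate are placeholders
ffmpeg -i source.mov -c:v av1_qsv -b:v 20M -y out-av1.mp4
```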

Btw, I could see Resolve Studio doing a little Hyper Encode action when I had both the Arc and the UHD 750 active at the same time... there was utilization in both the Arc and the iGPU. Vegas just uses the iGPU and ignores the Arc for QSV rendering and legacy-hevc decoding, which is why I disable the iGPU; the Arc is significantly faster at both of those things. I also tested Vegas with two Arcs and it ignores the 2nd one too, so I pulled it. If Vegas implements Intel Hyper Encode, I'll put the other Arc board I have back in.

Reyfox wrote on 10/16/2022, 11:31 AM

@Deathspawner looking forward to your results with all this new hardware that you will be testing!

Newbie😁

Vegas Pro 22 (VP18-21 also installed)

Win 11 Pro always updated

AMD Ryzen 9 5950X 16 cores / 32 threads

32GB DDR4 3200

Sapphire RX6700XT 12GB Driver: 25.3.1

Gigabyte X570 Elite Motherboard

Panasonic G9, G7, FZ300

Former user wrote on 10/16/2022, 8:35 PM

@Howard-Vigorita Is that test file you're using 920 frames?

One thing I've noticed when I compare Vegas and Resolve with 4K AVC source files: Vegas's GPU decoder (via NVDEC) can only be driven at around 60% utilization, with a bit of pulsing above that, whereas Resolve averages 90%, pulsing to 99%. This puts a cap on Vegas encoding speeds at around 100 fps; with the decoder turned off it's about 140 fps, but then I was limited by the CPU.

Resolve as a comparison did 180-190 fps using the decoder, and 250 fps with the decoder turned off. Vegas needs to improve both the render engine and the decoder; maybe that will come as one package when they do the update. It's good having all these figures with different cards to make comparisons with future updates.

Howard-Vigorita wrote on 10/17/2022, 1:53 PM

921 frames. But I wouldn't lose any sleep over it. I've noticed some of my Vegas installations only process 920 frames, but it doesn't seem to make a difference when I run ffmetrics; the numbers come up exactly the same. I'm guessing ffmpeg stops comparing when either the source or destination clip ends. I think it's a Windows media-write thing that sometimes leaves off the last packet. If it dropped the 1st packet from the render instead, that would make a really big difference in the metrics because none of the frames would line up. My laptop was rendering 920 frames under Win10 and changed to 921 frames after I upgraded to Win11. I've sometimes seen it change when upgrading from one Vegas build to another.

I did compare the 10-bit 921-frame hevc source clip to the 8-bit 921-frame avc transcode and got this with ffmetrics:

psnr: 58.9782; ssim: .9989; vmaf: 100.0000

That was using ffmpeg v4 with ffmetrics... though I expect v5, which yields slightly higher vmaf numbers, would be the same in this case.
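ffmetrics is a front end for ffmpeg's own quality filters, so the same three numbers can be pulled directly from the command line. A sketch, assuming an ffmpeg build compiled with libvmaf (the filenames follow the transcode script above):

```shell
# VMAF: the distorted clip is the first input, the reference the second
ffmpeg -i nearLL-avc8bit.mp4 -i hevc-lossless-an.mov \
    -lavfi "[0:v][1:v]libvmaf" -f null -

# PSNR and SSIM use the same two-input pattern
ffmpeg -i nearLL-avc8bit.mp4 -i hevc-lossless-an.mov \
    -lavfi "[0:v][1:v]psnr" -f null -
ffmpeg -i nearLL-avc8bit.mp4 -i hevc-lossless-an.mov \
    -lavfi "[0:v][1:v]ssim" -f null -
```

Each command prints its score summary to stderr when the comparison finishes.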