HEVC 11900k decoding

Howard-Vigorita wrote on 8/25/2022, 4:20 PM

Did a series of render-time measurements on my 11900k system comparing decoding on the onboard Intel uhd750 igpu against an add-on Nvidia 1660ti:

These renders were all done with the original RedCar project, using media substitution with my own transcodes and test clips that I shot with a Canon xf605.

The uhd750 actually doesn't look that bad doing single-rate 4:2:2 hevc, although the Nvidia is a bit faster if I shoot 10-bit 4:2:0 instead. Note that when the legacy decoder detects an Intel igpu, substantial 3d utilization shows up on the igpu; when no Intel igpu is detected, cpu utilization climbs into the 100% range instead.

Strange results with double-rate hevc using legacy decoding in Vegas, but I think it might be a camera anomaly... something about the Canon 4:2:0 59.94fps metadata that the legacy decoder doesn't like. I'll have to put together a similar clip set with my zcam e2 and see how that compares.
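
For anyone who wants to sanity-check raw decode throughput outside of Vegas, a rough sketch of that kind of comparison looks like this; it assumes an ffmpeg build with both QSV and CUDA/NVDEC support on the PATH, and clip.mp4 is just a placeholder for one of the test clips:

```python
import subprocess, time

def time_decode(hwaccel, clip="clip.mp4"):
    # decode the clip with the given hardware accelerator and discard the frames
    start = time.perf_counter()
    subprocess.run(
        ["ffmpeg", "-v", "error", "-hwaccel", hwaccel, "-i", clip,
         "-f", "null", "-"],
        check=True,
    )
    return time.perf_counter() - start

for hw in ("qsv", "cuda"):  # Intel decode vs Nvidia NVDEC
    print(hw, round(time_decode(hw), 1), "seconds")
```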

Comments

eikira wrote on 8/25/2022, 4:38 PM

In my eyes, the quality seems to be slightly better with Intel. Have you noticed this too, or do you think differently?

Howard-Vigorita wrote on 8/25/2022, 5:12 PM

Yes, I agree. But the differences between encoders aren't that great. When I check with ffmetrics, Nvidia encoding actually comes out a bit higher. The biggest jump in quality measurements, though, comes from using legacy-hevc decoding, which, after seeing the igpu utilization, has to be powered by an Intel hybrid runtime.
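
As far as I know, ffmetrics just drives ffmpeg's quality filters, so something like this sketch gets comparable ssim/psnr numbers straight from the command line; ffmpeg on the PATH is assumed, and render.mp4 / reference.mp4 are placeholder names for the rendered file and the source clip:

```python
import subprocess

# render.mp4 = the file Vegas rendered, reference.mp4 = the source it came from
for metric in ("ssim", "psnr"):
    subprocess.run(
        ["ffmpeg", "-i", "render.mp4", "-i", "reference.mp4",
         "-lavfi", f"[0:v][1:v]{metric}", "-f", "null", "-"],
        check=True,
    )  # the average score for each metric is printed to stderr
```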

Howard-Vigorita wrote on 8/25/2022, 7:34 PM

Here's an update that adds 2 new xf605 xavc intra codecs that just arrived with the camera's v1.0.1.1 firmware update:

These codecs are apparently a response to reported user dissatisfaction with the previous Canon xf605 xavc codecs, which were all Long-GOP. Looks like Vegas is handling them quite well.

Former user wrote on 8/25/2022, 7:45 PM

The biggest jump in quality measurements, though, comes from using legacy-hevc decoding, which, after seeing the igpu utilization, has to be powered by an Intel hybrid runtime.

Vegas will force the SO4 decoder with some 10-bit formats. You aren't using legacy just because you set legacy; always check which decoder is actually in use.

Howard-Vigorita wrote on 8/25/2022, 9:03 PM

The biggest jump in quality measurements, though, comes from using legacy-hevc decoding, which, after seeing the igpu utilization, has to be powered by an Intel hybrid runtime.

Vegas will force the SO4 decoder with some 10-bit formats. You aren't using legacy just because you set legacy; always check which decoder is actually in use.

@Former user I rely pretty much exclusively on render times vs. ffmetrics quality measurements in making my workflow choices. And I already know how the quality analysis shakes out after a few spot checks... no changes there.

Legacy-avc decoding I don't ever use in practice, because it sometimes takes longer and I've never seen an ffmetrics improvement from it. Thought I'd try it again for the xavc just to see if the render time was any better on the faster cpu with the new Canon xavc/mxf intra codec... seems about the same as I remember.

Former user wrote on 8/25/2022, 10:06 PM

@Howard-Vigorita With the 10bit 4:2:0 59.94fps clips it looks like you really are using the legacy decoder, but with 10bit 4:2:2 59.94fps legacy you're getting forced SO4 GPU decoding, not the legacy decoder. In my opinion it therefore shouldn't be in the column designated as legacy decoding, because that's not what's happening.

Maybe you could put a * next to the result if you prefer your results being relative to the actual decoder setting in Vegas rather than the actual decoder in use.

 

Howard-Vigorita wrote on 8/26/2022, 5:02 PM

@Howard-Vigorita With the 10bit 4:2:0 59.94fps clips it looks like you really are using the legacy decoder, but with 10bit 4:2:2 59.94fps legacy you're getting forced SO4 GPU decoding, not the legacy decoder. In my opinion it therefore shouldn't be in the column designated as legacy decoding, because that's not what's happening.

Maybe you could put a * next to the result if you prefer your results being relative to the actual decoder setting in Vegas rather than the actual decoder in use.

 

@Former user I don't ever mess with so4 internal settings; that's beyond me. The legacy column reflects render time with one of the legacy decoding boxes checked in i/o preferences. I think the only reason the 4:2:0 60p legacy-hevc result is so much higher than the 4:2:2 one is what the Canon camera codec puts in the format-profile metadata... the two are different, and the legacy decoder likes one better than the other:

Canon codec 4:2:0 60p format profile: Main 10@L5.1@High   
Canon codec 4:2:2 60p format profile: Format Range@L5.1@High

Fwiw, the 4:2:0 profile looks closer to what I'm used to seeing.
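
If anyone wants to pull the same fields from their own clips, ffprobe will dump them directly from the video stream; just a sketch, with the two file names as placeholders for the Canon clips:

```python
import subprocess

# placeholder names for the two Canon clips being compared
for clip in ("canon_420_60p.mp4", "canon_422_60p.mp4"):
    print(clip)
    subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=codec_name,profile,level,pix_fmt,r_frame_rate",
         "-of", "default=noprint_wrappers=1", clip],
        check=True,
    )
```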

Howard-Vigorita wrote on 8/29/2022, 6:25 PM

Got a new toy. And I think I like it...

It seems to excel over the uhd750 at decoding everything, and over the 1660ti at most everything, though the 1660ti edges it out slightly on a few things. But it totally kills the 1660ti on hevc 4:2:2. The only place it falls down is on legacy-hevc 4:2:0; I suspect the lib in Vegas needs an update to recognize 11th & 12th gen Intel gpus.

Btw, I had to disable the igpu to test legacy-hevc and qsv rendering, because there's no way I can see in Vegas to specify which Intel gpu to use and it seems to select the igpu if both are visible. The a380 looks so good, I see no reason to re-enable the uhd750.
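
For anyone doing the same dance, a quick sanity check of which adapters Windows is still exposing after the igpu is disabled in Device Manager is to query WMI; just a sketch, assuming a Windows box with PowerShell available:

```python
import subprocess

# list the display adapters Windows currently exposes (names + driver versions)
subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-CimInstance Win32_VideoController | Select-Object Name, DriverVersion"],
    check=True,
)
```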

Former user wrote on 8/29/2022, 7:12 PM

@Howard-Vigorita The legacy results continue to be interesting and unexpected, even more so with the new GPU. Are you able to try hyper encoding? Not sure if there's an option you have to tick or if it works automatically. Resolve has it, and Vegas should have it too, or maybe it's not implemented yet?

Also, if you try your AI benchmarks again, that should show a dramatic improvement if Vegas is able to use your A380.

Howard-Vigorita wrote on 8/29/2022, 7:40 PM

I see no mention of hyper encoding anywhere in the Intel Arc Control app or in Resolve Studio 18.0.1, which says it's the latest. But Resolve has stopped throwing gpu failure messages since I replaced the 1660ti+uhd750 with the Arc, and it seems to like it, letting me check it off for decoding. Vegas seems to like it too. Will run my benches on it overnight and see how qsv does compared to vce... qsv is usually a slight bit slower. I'll report on that in its own thread.

Former user wrote on 8/29/2022, 8:23 PM

It was in the February release notes for Resolve: https://forum.blackmagicdesign.com/viewtopic.php?f=21&t=155441

Looking forward to hearing more about this GPU. In an ideal world where your primary use for the A380 is as a GPU decoder, it would still be possible to activate hyper encode while using your AMD GPU for processing. Most likely the A380 would have to be the main processing GPU to get any benefit from GPU-accelerated OpenVINO AI.

Howard-Vigorita wrote on 8/29/2022, 8:49 PM

The Resolve release note says it's "for multiple Intel GPU systems". Sounds like they're referring to a mining rig. I got one x4 slot left in my box. But there's supposed to be a bigger Arc on the way... maybe it'll have multiple gpus on the same board.

Former user wrote on 8/29/2022, 9:08 PM

Haha, perhaps. Intel is showing it used in a laptop with an Arc + iGPU combo... up to a 1.6x speed increase, they say.

nonam3 wrote on 10/27/2023, 7:39 PM

@Howard-Vigorita

Hi,

Can you share your experience with Vegas responsiveness when you use an Intel Arc vs an Nvidia gpu?

I'm interested in general timeline responsiveness (not encode).

Would you say there is no difference, or is the NV faster?

And maybe you have experience with Amd and Vegas too?

 

What I'm looking for is a speed-up of the UI / preview. Currently entertaining two options that would speed up decoding and playback / app responsiveness: Intel CPU + iGPU + dGPU, or a new TR + dGPU + dGPU (be it Intel Arc or Amd). I've already got an RTX 4090, paired with a Zen2 TR 3960x.

Any thoughts / suggestions are more than welcome.

 

Howard-Vigorita wrote on 10/27/2023, 9:42 PM

@nonam3 I only use Arc for decoding, as though it were an igpu. I use Amds in my desktops as the main gpus. My laptop is the only one I have with an Nvidia (3060) as the main gpu plus an Intel Iris as the igpu. The laptop is reasonably decent for the road, but not as high powered as my desktop Amds. I like having the Arcs for superior decoding of hevc, which is my exclusive 4k camera format. I'm also seeing av1 and vp9 qsv rendering as a future possibility, but I'm not actually doing any of that yet.
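
Fwiw, a quick way to confirm an Arc's av1 encoder is reachable through qsv outside of Vegas would be something like this sketch; it assumes an ffmpeg build compiled with QSV (libvpl) support, and the file names are placeholders:

```python
import subprocess

# source.mp4 is a placeholder input; av1_qsv is ffmpeg's QSV AV1 encoder
subprocess.run(
    ["ffmpeg", "-y", "-i", "source.mp4",
     "-c:v", "av1_qsv", "-b:v", "8M",
     "-c:a", "copy", "av1_test.mp4"],
    check=True,
)
```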

The Amd 6900xt is my main workhorse and an excellent performer, but I'm finding it not so great at multicam cuts, which require all cameras on screen at the same time. Might be because I use a single hdmi through a kvm so I can switch between multiple computers... my kvm only switches one hdmi. I find moving that hdmi to the 11900k's Arc a770 is much better for multicam editing if I also swap the roles of the Amd and Arc while doing it. I don't think the Arc can handle being the main gpu for grading, fx, and ai, but for multicam editing with no fx in play, it's a very smooth performer. I've been switching them back before I get to color grading. I might be able to have the best of both worlds with another monitor and a dual-hdmi switch, one monitor fed by the Arc and the other by the Amd. Don't have the hardware to try that yet, so I'm speculating.

The only Nvidia board I have is a 1660 in my Xeon, which I use for decoding, and it's reasonably good at that. It's not in the same class as the 5700xt in that machine, but it's nice being able to render nvenc on occasion. I have a spare Arc a380 I've been meaning to swap in for the 1660, but I never seem to get around to it. I tested the a380 against a 1660ti in another machine once and they were similar performers, though the Arc handles more render and decode formats... just not nvenc.

nonam3 wrote on 10/27/2023, 9:52 PM

@Howard-Vigorita Thank you, appreciate the input!