I generally expect to get the fastest renders of hevc projects from my newer 11900k system with its water-cooled 6900xt. But this is what I got from the RedCar project using three 4k hevc source clips and rendering them to 4k hevc to simulate a multicam project:
Oddly, the faster 11900k system runs over 60% slower than the 9900k if legacy-hevc decoding is selected. But with legacy-hevc unchecked, the faster 11900k system outperforms the older system as expected. Not sure why this is, but whatever library legacy-hevc uses doesn't seem to like something about the UHD 750 igpu in the 11900k.
To see the quality differences from these decoding choices, I had to stick to a straight-up transcode in order to have a solid input-to-output comparison to feed to the ffmetrics utility. For this I used a single 30-second clip I shot with a zcam e2 in zraw, which their converter app let me save as lossless hevc; Vegas seems to handle it pretty well. The zcam can do what it calls "partial debayering," meaning it only reorders the chroma data while preserving the 4:2:0 subsampling characteristic of the camera's Sony Exmor sensor, writing it to media at a bit depth of 10 bits. Once the footage is exported out of the camera, the app can also do a full debayer with an upscale to 4:2:2 or 4:4:4, but I didn't do that.

For this test I measured the elapsed time for Vegas to do the transcode from hevc 4:2:0 to a render with Magix Hevc using the MainConcept encoder preset configured for 10-bit maximum-quality output at a maximum bitrate of 240 mbps. It was pretty slow, but I think it approaches the upper limit of the quality Vegas can produce.
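For anyone who wants to reproduce the input-to-output comparison from the command line, as far as I know ffmetrics wraps the same ffmpeg quality filters, so something like the sketch below gets you the same vmaf number. This is only an illustration, not my exact workflow: the file names are placeholders and it assumes an ffmpeg build that includes libvmaf.

```python
# Rough sketch of the comparison ffmetrics automates: run ffmpeg's libvmaf
# filter against the source clip and the Vegas render, then read the score.
# Assumptions: ffmpeg is on PATH and was built with libvmaf; file names are
# placeholders, not the actual clips from this test.
import json
import subprocess

SOURCE = "zcam_clip_hevc_lossless.mov"   # hypothetical reference (source) clip
RENDER = "vegas_mainconcept_10bit.mp4"   # hypothetical Vegas render
LOG = "vmaf_log.json"

# libvmaf expects the distorted video as the first input, the reference second.
cmd = [
    "ffmpeg", "-hide_banner",
    "-i", RENDER,
    "-i", SOURCE,
    "-lavfi", f"[0:v][1:v]libvmaf=log_fmt=json:log_path={LOG}",
    "-f", "null", "-",
]
subprocess.run(cmd, check=True)

# Field layout here matches libvmaf v2 JSON logs.
with open(LOG) as f:
    result = json.load(f)
print("VMAF (pooled mean):", result["pooled_metrics"]["vmaf"]["mean"])
```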
Curiously, the 11900k system outperforms the 9900k in this single-clip transcode. All my previous tests like this show a similarly dramatic quality improvement for hevc-legacy over igpu decoding going as far back as v17, so that's nothing new. Based on prior tests I did with v16 of Vegas, the current hevc-legacy option seems to have been its default. I've been doing my recent final renders for deliverables with hevc-legacy on my 9900k since I first burned in the 11900k, so I guess I'll keep doing that.
Out of curiosity, I also ran my little quality test on a Xeon system, which has no igpu but uses two gpus instead. At the moment I have an AMD Vega 64 and an Nvidia 1660 in it, and I normally set the AMD as the main gpu and the Nvidia for decoding. For this test I rendered with the Nvidia so I could compare its 10-bit output, which the AMD encoder doesn't support. I tend to go by the vmaf quality, which, although not quite as high as MainConcept's, looks pretty good.
The interesting thing here is that I got identical quality from both the AMD and Nvidia decoders... suggesting the onboard decoding blocks might be identical. I should also note that in every case the igpu selection made no difference to either speed or quality when hevc-legacy was selected, yet there was definitely activity in the secondary gpu even when none was selected... I didn't try pulling the Nvidia board or disabling the Intel igpu in the BIOS, but it seems that's what it would take.
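If anyone wants to poke at the identical-decoder hunch outside of Vegas, one way would be to decode the same clip through each vendor's hardware path in ffmpeg and hash every frame; identical hash lists would mean bit-identical decoded output. This is just a sketch along those lines, not what Vegas does internally: the clip path is a placeholder, and the hwaccel names are the standard ffmpeg ones for Nvidia and a Windows AMD card.

```python
# Decode one clip via two hardware decoders and compare per-frame MD5 hashes.
# Assumptions: ffmpeg on PATH with cuda and d3d11va hwaccels available;
# CLIP is a placeholder path, not the actual test footage.
import subprocess

CLIP = "zcam_clip_hevc_lossless.mov"  # hypothetical test clip

def frame_hashes(hwaccel: str) -> list[str]:
    """Decode CLIP with the given hwaccel and return per-frame MD5 hashes."""
    out = subprocess.run(
        ["ffmpeg", "-hide_banner", "-hwaccel", hwaccel, "-i", CLIP,
         # normalize the output pixel format so the comparison is apples-to-apples
         "-pix_fmt", "yuv420p10le",
         "-f", "framemd5", "-"],
        capture_output=True, text=True, check=True,
    ).stdout
    # framemd5 lines end with the hash; lines starting with '#' are comments
    return [line.split()[-1] for line in out.splitlines()
            if line and not line.startswith("#")]

nvidia = frame_hashes("cuda")      # NVDEC path on the 1660
amd = frame_hashes("d3d11va")      # D3D11VA path on the Vega 64 (Windows)
print("bit-identical decode:", nvidia == amd)
```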