Did a series of render-time measurements on my 11900K system comparing decoding on the onboard Intel UHD 750 iGPU versus an add-in Nvidia 1660 Ti:
These renders were all done with the original RedCar project, using media substitution to swap in my own transcodes and test clips shot with a Canon XF605.
The UHD 750 actually doesn't look that bad decoding single-rate 4:2:2 HEVC, although the Nvidia card is a bit faster if I shoot 10-bit 4:2:0 instead. Note that when the legacy decoder detects an Intel iGPU, substantial 3D utilization shows up on the iGPU; when no Intel iGPU is detected, CPU utilization climbs toward 100% instead.
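If anyone wants to sanity-check which side is doing the decode work, here's a rough sketch (assuming Python and nvidia-smi on the PATH) that polls the 1660 Ti's NVDEC utilization while a render runs; the Intel iGPU's video/3D engines are easier to watch in Task Manager's GPU view:

```python
import subprocess

# Poll NVDEC utilization roughly once per second during a render.
# "nvidia-smi dmon -s u" reports SM / memory / encoder / decoder usage per GPU.
proc = subprocess.Popen(
    ["nvidia-smi", "dmon", "-s", "u", "-d", "1"],
    stdout=subprocess.PIPE, text=True,
)

dec_col = None
try:
    for line in proc.stdout:
        fields = line.split()
        if line.startswith("#") and "dec" in fields:
            # Header row: locate the "dec" column (skip the leading "#").
            dec_col = fields.index("dec") - 1
        elif dec_col is not None and fields and not line.startswith("#"):
            print(f"GPU {fields[0]}: NVDEC {fields[dec_col]}%")
except KeyboardInterrupt:
    proc.terminate()
```

If NVDEC sits near 0% while render times stay the same, the decode is happening on the iGPU or the CPU rather than the Nvidia card.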
I got strange results with double-rate HEVC using legacy decoding in Vegas, but I think it might be a camera anomaly... something about the Canon 4:2:0 59.94fps metadata that the legacy decoder doesn't like. I'll have to put together a similar clip set with my Z CAM E2 and see how that compares.
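For anyone curious what metadata I mean: assuming ffprobe is installed (the clip path below is just a placeholder), something like this dumps the video-stream fields the Canon and Z CAM clips could be diffed on:

```python
import json
import subprocess
import sys

# Print the video-stream fields most likely to differ between camera clips
# (pixel format, frame rate, field order, color metadata).
def video_stream_info(path: str) -> dict:
    out = subprocess.run(
        [
            "ffprobe", "-v", "error", "-select_streams", "v:0",
            "-show_entries",
            "stream=codec_name,profile,pix_fmt,r_frame_rate,avg_frame_rate,"
            "field_order,color_range,color_space,color_transfer,color_primaries",
            "-of", "json", path,
        ],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)["streams"][0]

if __name__ == "__main__":
    for key, value in video_stream_info(sys.argv[1]).items():
        print(f"{key}: {value}")
```

Running that against a Canon 59.94p clip and a Z CAM clip side by side should make it obvious whether there's a field the legacy decoder objects to.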