Comments

alifftudm95 wrote on 10/25/2024, 10:08 AM

If I recall correctly, no GPU at the moment can hardware-decode HEVC 4:2:2 10-bit files, not even the latest ones from Nvidia and AMD.

Only Apple M1/M2/M3 chips and Intel 13th/14th-gen CPUs are able to do so.
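A quick way to check what a clip actually contains, before worrying about which GPU can decode it, is to read its pixel format with ffprobe. A minimal Python sketch, assuming ffprobe is on the PATH and using a placeholder filename; 'yuv422p10le' is 4:2:2 10-bit, 'yuv420p10le' is 4:2:0 10-bit:

```python
# Minimal sketch: report codec and pixel format of the first video stream.
# Assumes ffprobe is installed and on PATH; "clip.mov" is a placeholder path.
import json
import subprocess

def probe_video(path):
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=codec_name,profile,pix_fmt",
         "-of", "json", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)["streams"][0]

info = probe_video("clip.mov")
print(info["codec_name"], info["pix_fmt"])
# 'hevc yuv422p10le' -> HEVC 4:2:2 10-bit, the format most GPUs cannot hardware-decode
# 'hevc yuv420p10le' -> HEVC 4:2:0 10-bit, which NVDEC / AMD VCN handle fine
```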

Editor and Colorist (Kinda) from Malaysia

MYPOST Member

Laptop

MacBook Pro M4 Max

16 Core CPU and 40 Core GPU

64GB Memory

2TB Internal SSD Storage

Anti-Glare 4K HDR Screen

 

PC DESKTOP

CPU: Ryzen 9 5900x

GPU: RTX3090 24GB

RAM: 64GB 3200MHz

MOBO: X570-E

Storage:

C DRIVE NVME M.2 1TB SSD GEN 4

D DRIVE NVME M.2 2TB SSD GEN 4

E DRIVE SATA SSD 2TB

F DRIVE SATA SSD 2TB

G DRIVE HDD 1TB

Monitor: Asus ProArt PA279CV 4K HDR (Bought on 30 August 2023)

Monitor: BenQ PD2700U 4K HDR (RIP on 30 August 2023)


Howard-Vigorita wrote on 10/25/2024, 3:32 PM

Decoding HEVC 4:2:2 is Intel's claim to fame for its 11th-gen+ desktop iGPUs and Arc GPUs; I believe they're the only processors that can do it. I don't recommend shooting 4:2:2 HEVC, however, unless you have a camera sensor whose physical geometry can actually capture the extra color samples that 4:2:2 carries. I have two 3-CCD HD cameras like that, which were my bread and butter for about a decade. Unfortunately they went out of production when 4K sensors came along, because the tradeoff in chip density wasn't worth it: more pixels look better to earthlings than fewer pixels with more color detail than they can discern. I don't think any 3-CCD sensors have been made since then by anyone, except maybe for TV and film studios whose budgets are way beyond the means of ordinary mortals.
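If you want to confirm whether a particular Intel iGPU or Arc board really takes a clip in hardware, one rough check is to ask ffmpeg for a Quick Sync (QSV) decode into the null muxer and see whether it succeeds. A sketch, assuming an ffmpeg build with QSV support and a placeholder filename:

```python
# Minimal sketch: attempt a QSV hardware decode with no output file written.
# Assumes ffmpeg was built with Quick Sync (QSV) support; "clip.mov" is a placeholder.
import subprocess

def qsv_decode_ok(path):
    result = subprocess.run(
        ["ffmpeg", "-v", "error", "-hwaccel", "qsv", "-c:v", "hevc_qsv",
         "-i", path, "-f", "null", "-"],
        capture_output=True, text=True,
    )
    return result.returncode == 0, result.stderr

ok, log = qsv_decode_ok("clip.mov")
print("hardware decode succeeded" if ok else f"hardware decode failed:\n{log}")
```
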

Btw, I have an Arc a770 and my 4k cameras give me the option to shoot hevc 4:2:0 or 4:2:2, so I've tried both. Ultimate quality ends up the same after edit and render because those cameras all use sensors that only capture the elements used by 4:2:0. They synthetically upscale missing elements needed to output 4:2:2. Doing the upscale later in the edit chain is apparently just as good, if not better. The result was expected and intuitively obvious. With no possible offsetting quality gain, the larger storage space required by 4:2:2 footage adds insult to injury. As does the Arc decoding performance; although it's faster than cpu decoding, it's noticeably slower compared to 4:2:0 for both playback and rendering.