Question: 8 vs 10-bit HEVC Renders?

ALO wrote on 4/1/2025, 11:22 AM

Working within an 8-bit project (i.e., 8-bit full range or legacy), with 10-bit footage on the timeline, does it make any difference whether the render is set to 8- or 10-bit color depth?

I would think no...but who knows?

 

Comments

RogerS wrote on 4/1/2025, 8:08 PM

I think no, because the project bit depth puts everything through an 8-bit pipeline.

Steve_Rhoden wrote on 4/1/2025, 8:13 PM

@ALO No, it wouldn't make a difference.

Howard-Vigorita wrote on 4/1/2025, 10:03 PM

@ALO When I render Nvidia HEVC, I always go with 10-bit when the source footage is 10-bit HEVC. If the source footage is 8-bit, I'd render 8-bit if all I was doing was trimming, but if there's FX processing that adds detail or content, I'd go 10-bit.

Note that the 8-bit project setting refers to back-end data storage and/or math processing precision, not a single-stream pipe width that puts a cap on render quality. Projects with FX constitute multiple parallel processing streams, and measurements I've done suggest they must be getting delivered in parallel to the CPU or GPU, which performs both the mux and the render.

However, I have measured a pipe-width constraint when Vegas feeds input to third-party render plugins, suggesting Vegas does a pre-mux and funnels it into a limited-bandwidth single pipe. That only seems to affect 4K projects, though, because the third-party pipe was probably designed back in the HD days and never upscaled for 4K. Comparing 8-bit 4K projects with 10-bit HEVC media against 32-bit projects, I've noticed that measured output quality from the internal Vegas presets decreases only slightly, and the difference is easily offset by a minimal increase in render bitrate/file size.
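
As a rough illustration of the precision point, here is a minimal NumPy sketch (my own toy example, not anything VEGAS actually does internally) that pushes a full 10-bit ramp through an 8-bit intermediate and counts how many distinct levels survive:

```python
import numpy as np

# A full 10-bit ramp: every code value from 0 to 1023.
ramp_10bit = np.arange(1024, dtype=np.uint16)

# Simulate an 8-bit intermediate: scale to 0-255, round, store as uint8.
as_8bit = np.round(ramp_10bit / 1023 * 255).astype(np.uint8)

# Scale back to the 10-bit range, as a 10-bit render stage would receive it.
back_to_10bit = np.round(as_8bit / 255 * 1023).astype(np.uint16)

print("levels in the original 10-bit ramp:", np.unique(ramp_10bit).size)    # 1024
print("levels after the 8-bit round trip:", np.unique(back_to_10bit).size)  # 256
```

Whether a real project ever hits this worst case depends on where in the chain the 8-bit conversion happens, which is exactly the question being debated here.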

ALO wrote on 4/1/2025, 11:50 PM

Here's something interesting:

Claude says yes, it can make sense to render 8-bit projects at 10 bits per pixel (instead of 8) when encoding to HEVC, because HEVC compression algorithms are optimized for 10-bit color depth and so tend to give better visual results, i.e., fewer compression artifacts.

In any case, at the same bitrate I assume you end up with the same file size, so I guess if it doesn't cause problems with render times or crashes, why not?
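
For anyone who wants to reproduce the comparison outside VEGAS, here's a rough sketch of the idea (assuming Python plus an ffmpeg build with 10-bit-capable libx265 on the PATH; the filenames and bitrate are placeholders):

```python
import subprocess

SOURCE = "source_clip.mov"  # placeholder: any 10-bit master clip

# 8-bit HEVC at a fixed bitrate.
subprocess.run([
    "ffmpeg", "-y", "-i", SOURCE,
    "-c:v", "libx265", "-b:v", "20M", "-pix_fmt", "yuv420p",
    "hevc_8bit.mp4",
], check=True)

# 10-bit HEVC at the same bitrate (requires a main10-capable libx265 build).
subprocess.run([
    "ffmpeg", "-y", "-i", SOURCE,
    "-c:v", "libx265", "-b:v", "20M", "-pix_fmt", "yuv420p10le",
    "hevc_10bit.mp4",
], check=True)
```

At the same bitrate the two files should come out at roughly the same size, so any visible difference comes down to how the encoder spends those bits, not how many of them there are.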

RogerS wrote on 4/2/2025, 12:13 AM

Who is Claude and is Claude familiar with VEGAS?

Try using a tool to measure the results both ways and see if it's any different.

Ultimately I don't know if the metrics catch artifacts, so I think the point is to avoid banding, posterization and other color artifacts. Try comparing the 8-bit project renders (to both 8-bit and 10-bit) against 32-bit project mode as well, especially if you are doing log transforms, as the 32-bit pipeline seems to make a significant difference there. Visually, does the final render look acceptable?
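
One hedged way to "use a tool to measure the results both ways" is ffmpeg's built-in ssim and psnr filters, roughly like this (assuming Python and ffmpeg are available; the filenames are placeholders for the actual renders):

```python
import subprocess

REFERENCE = "prores_xq_reference.mov"  # placeholder: the render treated as reference

for render in ("hevc_8bit.mp4", "hevc_10bit.mp4"):  # placeholder test renders
    for metric in ("ssim", "psnr"):
        # Convert both streams to a common 10-bit pixel format so the comparison
        # filter accepts them; the score is printed in ffmpeg's log output.
        graph = f"[0:v]format=yuv420p10le[d];[1:v]format=yuv420p10le[r];[d][r]{metric}"
        subprocess.run(
            ["ffmpeg", "-i", render, "-i", REFERENCE,
             "-lavfi", graph, "-f", "null", "-"],
            check=True,
        )
```

As noted, neither score is guaranteed to catch banding or posterization, so the visual check still matters.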

ALO wrote on 4/3/2025, 6:05 PM

https://www.anthropic.com/

RogerS wrote on 4/3/2025, 8:59 PM

Does this chatbot cite a source? What if you ask about the quality impact of maintaining the same bitrate while increasing the bit depth from 8 to 10 bits of precision (more tonal values with fewer bits to cover each)?
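
For reference, the raw arithmetic behind that question (just the numbers, nothing encoder-specific):

```python
# Levels per channel, total colors and uncompressed bits per pixel at 8 vs 10 bits.
for bits in (8, 10):
    levels = 2 ** bits
    colors = levels ** 3           # three channels
    raw_bits_per_pixel = bits * 3  # uncompressed 4:4:4, before chroma subsampling
    print(f"{bits}-bit: {levels} levels/channel, {colors:,} colors, "
          f"{raw_bits_per_pixel} raw bits/pixel")
```

That prints 256 levels and roughly 16.7 million colors at 24 bits/pixel versus 1,024 levels and roughly 1.07 billion colors at 30 bits/pixel. Uncompressed, 10-bit frames carry 25% more data per pixel; what happens once the encoder has to hit the same bitrate target is the part that needs measurement rather than arithmetic.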

RogerS wrote on 4/3/2025, 11:42 PM

Since I'm not seeing data here, let me start with 32-bit vs 8-bit project settings, with a conversion LUT for S-Log3 added via CGP at the media level.

I picked media that had banding issues before.
Then I did renders to ProRes XQ in 32-bit full, 8-bit full and then MagixHEVC with 32-bit full and 8-bit full projects.

The 32-bit variants are somewhat higher-performing than the 8-bit project ones, but whether the difference is meaningful I'm not sure. When I have more time I'll try other test files and also do a visual check (using images with smooth gradients to judge banding).

And then again with two more MainConcept MagixHEVC renders at the same bitrate, but this time with 10-bit checked in the render dialog.

So the most significant quality difference in this test appears to be the 8- vs 10-bit render setting, not the project bit depth.

How the render quality metrics compare to real-world visual quality is something I'll look at later, and I'm curious what others see as well.
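
For the gradient-based visual check, one way to make a clean test asset is to write a 16-bit ramp to a PNG so the source itself contributes no banding (a sketch assuming NumPy and Pillow; the resolution and filename are arbitrary):

```python
import numpy as np
from PIL import Image

# A horizontal luminance ramp across a UHD frame, stored at 16-bit precision.
width, height = 3840, 2160
ramp = np.linspace(0, 65535, width)
frame = np.tile(ramp, (height, 1)).astype(np.uint16)

# 16-bit grayscale PNG; drop it on the timeline and render it both ways.
Image.fromarray(frame).save("ramp_16bit.png")
```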



 


RogerS wrote on 4/4/2025, 10:50 AM

I had a chance to review the renders, and there is a banding issue with the 8-bit project ones, even with ProRes XQ (which I believe is 12-bit). The 10-bit HEVC MainConcept one has banding.

So while the scores may look good, the files are not acceptable in my opinion.

This is with S-Log3 and a conversion LUT, using the 32-bit ProRes XQ render as the reference. A straight transcode may yield different results, but I think the normal use case requiring more than 8 bits of precision involves log conversions.
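
To see why log conversions are the stress case, here's a toy sketch (a made-up contrast-expanding curve, not the real S-Log3 math) showing how stretching the shadows of an 8-bit signal leaves gaps between output levels:

```python
import numpy as np

# 8-bit code values for the darker part of a log-encoded ramp.
shadows_8bit = np.arange(0, 64, dtype=np.float64)  # 64 input levels

# A toy "log to display" transform that expands shadow contrast 4x
# (a stand-in for what a conversion LUT does, not the actual curve).
expanded = np.clip(shadows_8bit * 4.0, 0, 255)

# After re-quantizing for display, neighbouring inputs land 4 code values
# apart, so the gradient steps instead of ramping smoothly.
steps = np.diff(np.round(expanded))
print("output step sizes:", np.unique(steps))  # [4.] instead of [1.]
```

A 10-bit source feeding the same curve has four times as many input steps to spread across that range, which is consistent with banding showing up where the precision is lowest.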

ALO wrote on 4/5/2025, 9:17 AM

Do we get a 10-bits-per-pixel option in AV1 in the Vegas beta? I don't see that in the current build of VP22.

Here's more (from ChatGPT) on the subject:

https://chatgpt.com/share/67f13b63-b86c-8002-a1e9-bf1ec14f2785

RogerS wrote on 4/5/2025, 11:42 AM

AV1 encoding is broken in my testing; there are posts in the forum about it.

ALO wrote on 4/5/2025, 8:09 PM

Oh. Thanks, by the way, for the Shutter Encoder recommendation (on an unrelated thread) -- it's a great tool and clearly better for AV1 encodes!

RogerS wrote on 4/5/2025, 9:38 PM

VEGAS AV1 would be fine with regard to encode quality except for frequent glitched frames in my testing (which don't show up in render metrics). There are also decoding issues with AV1, though only when the CPU decoder is used. Hopefully development will continue and these issues will be sorted out.

The ChatGPT "conversation" is interesting, though in some places inconsistent and in others counterintuitive. I don't know enough to say if it's wrong, but we could use an expert to validate or dispute the findings. Why would 16.7 million possible colors/shades require more data to represent than 1 billion? Are the answers about 10-bit delivery files relevant to both HDR and SDR workflows? Is it conflating precision for internal calculations with that of the encoded files?

The answers are delivered with great confidence, but as these tools are not capable of cognition, I wonder if they make sense. I'd request sources and interrogate them to see what they actually say, whether they're applicable to the situation in question, and whether these tools have represented them accurately or not.

ALO wrote on 4/6/2025, 7:04 PM

I can tell you anecdotally that my own testing says to stay away from HEVC and AV1 for SD renders of old DV source. Those codecs are optimized for high-resolution captures and end up discarding lots of data in SD because the algorithm mistakes it for noise. Believe it or not, I think MPEG-2 is the best choice -- which is basically the exact opposite of what I was expecting.