Color Processing and Bit Depth in Vegas

karma17 wrote on 12/16/2018, 10:39 PM

I was trying to wrap my mind around what Vegas Pro does when you set the bit depth to 32 bit and, maybe like some others, found myself confused and probably misinformed about what is going on under the hood. I found this article, and even though it references a software program I will not mention by name, I think it explains it well, and I assume a lot of the same processes apply to Vegas. Initially, the scenario I was attempting to fully understand is what happens when you import 10-bit source footage such as Sony's own AVC-Intra codec, set the bit depth to 32 bit before render, and then render out to Magix Intermediate XQ.

My question was: Is the output on the rendered video file 8 bit or 10 bit? Would it ever be 10 bit, for instance, if you rendered to an MXF wrapper? When I check MediaInfo for Magix Intermediate XQ, it doesn't list Bit Depth, whereas the other non-Magix-Intermediate codecs show Bit Depth as 8. In the article, the scenario I'm asking about is most closely related to Scenario #5 at the bottom.

So I am not sure of the answer but I do wish to understand. I want to thank those on this forum for sharing their knowledge in taking the time to explain these things. I do try to figure things out on my own before I post here and don't mind trying to research and test myself.

http://blogs.adobe.com/VideoRoad/2010/06/understanding_color_processing.html

Comments

Marco. wrote on 12/17/2018, 3:58 AM

It is confusing to name that thing "32 bit". It is "32 bit floating point", and it is not the 32 bits that make the difference but the floating point processing. This is totally different from any kind of integer pixel processing.
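
To make that concrete, here is a tiny Python sketch with made-up values (nothing Vegas-specific): an edit that overshoots the 8 bit integer ceiling destroys data permanently, while the same edit in floating point keeps the "overbright" values and can undo them.

```python
# Hypothetical brightness push/pull, contrasting the two pipelines.

def push_and_pull_int8(value, offset):
    # Integer pipeline: every intermediate result is clamped to [0, 255].
    pushed = min(255, max(0, value + offset))      # 300 becomes 255 -- data gone
    return min(255, max(0, pushed - offset))

def push_and_pull_float(value, offset):
    # Float pipeline: intermediates may exceed 1.0, nothing is clamped.
    pushed = value / 255 + offset / 255            # ~1.18, preserved as-is
    return round((pushed - offset / 255) * 255)

print(push_and_pull_int8(200, 100))    # 155 -- highlight detail destroyed
print(push_and_pull_float(200, 100))   # 200 -- round trip is lossless
```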

And rendering is different from a project's internal video processing. If the renderer doesn't support more than 8 bit, you're done, which is the case when using Magix Intermediate. All of the Magix renderer versions of ProRes are 8 bit only. You need to use a 10 bit video codec instead (or an image sequence codec at 10 bit or higher, or floating point).

Musicvid wrote on 12/17/2018, 9:03 AM

@karma17,

Again, a thoughtful and intelligent posting, and very much on-topic. As Marco will tell you, it is a wide area of competing maths and confusing nomenclature, and even old math teachers get it wrong with some regularity.

You've got 90% of the concept, so I'll try not to muddy the waters with my usual overkill. A couple of points may help you, though.

Normal integer math uses locked endpoints such as [0, 255]. Anything meeting or exceeding those boundaries is forever locked (clipped) at [0, 255] within that space. So things with wide dynamic range and flat scans, for instance, become very problematic in post. Think of a rubber band stretched between two pegs. You can do almost anything with [1, 254], but 0 and 255 are forever.

Look at all those locked pixels in perfectly exposed 8 bit HD source.

The classic approach in the graphics arts industry has been to work in [2,253] or even [10, 245] (Epson default), accepting even a little more flatness in an already confined working space. We've lived with it for decades.

Enter float processing to the rescue. Take higher (10) bit source and "float" it representatively in a space that is so large that for practical purposes, endpoints can never be reached. 10 bit, 12 bit, even 16 bit would look like a sardine in a 32 bit ocean. Voila! (Or viola!).
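
To put rough numbers on the sardine-in-an-ocean picture, a back-of-envelope Python sketch (a toy model, not anything Vegas does internally): a 10 bit code value normalized into float is just a number in [0, 1], and even an extreme grading move never touches an endpoint.

```python
# Push a bright 10 bit sample five stops up and back down.
# It goes far past "white", yet nothing clips, because the float
# working space has no practical endpoints.

pixel = 900 / 1023                 # a bright 10 bit sample, ~0.88
pushed = pixel * 2 ** 5            # ~28.2 -- five stops over, still a valid float
recovered = pushed / 2 ** 5
print(recovered == pixel)          # True -- exact, since 2**5 is a power of two
```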

Of course, if you render that masterpiece with an 8 bit codec, you run the risk of locked endpoint pixels again. This can look great on a screen if no further changes are made, but it can sure play hell with printers and subsequent generations, as my old friend and mentor johnmeyer would tell you. Likewise, float processing does not reduce banding in 8 bit renders.

Also, downsampling from the pipeline to 8 bit output introduces another factor called "dither." It's a holdover from the printing industry, and it actually adds noise (blur) to soften the jaggedness around pixels that were thrown away during downsampling. You can see why I get steamed when someone says 8-32-8 processing does something "beneficial." Did I tell you I invented a perpetual motion machine when I was 12? Ran for half a second.
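
For the curious, the mechanics of dither can be sketched in a few lines of Python (a toy model, not the actual resampler in any NLE): noise added before rounding turns hard quantization steps into grain, trading band edges for noise.

```python
import random
random.seed(1)  # fixed seed so the run is repeatable

def quantize(x):
    # Round a [0.0, 1.0] float sample to the nearest of 256 levels.
    return min(255, max(0, round(x * 255)))

def quantize_with_dither(x):
    # Add up to half a level of noise BEFORE rounding: the hard band
    # edge becomes randomized grain instead of a visible step.
    return quantize(x + (random.random() - 0.5) / 255)

true_value = 100.4 / 255                 # falls between two 8 bit levels
plain = [quantize(true_value) for _ in range(10_000)]
dithered = [quantize_with_dither(true_value) for _ in range(10_000)]
print(sum(plain) / len(plain))           # exactly 100.0 -- sub-level detail gone
print(sum(dithered) / len(dithered))     # ~100.4 -- preserved, but as noise
```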

Now, the output bit depth potential is limited entirely by the encoder. Most codecs are 8 bit, with a handful of 10 bit encoders available in Vegas, too. I say "potential" because a ten bit encoder will not increase the number of bits of 8 bit source. I call it "adding air." A gallon of water is still a gallon, even inside a 5 gallon bucket (you should have heard the internet alchemists scream when I said that!).
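
The gallon-in-a-bucket point is easy to verify in Python (a hypothetical helper, with a plain bit shift standing in for whatever a real encoder does when it promotes bit depth):

```python
# Promoting 8 bit source into a 10 bit container scales the code
# values but cannot invent new ones: 256 distinct levels in, 256 out.

def promote_8bit_to_10bit(samples):
    # The common promotion: shift left by 2 (multiply by 4), mapping
    # [0, 255] onto [0, 1020] within the 10 bit [0, 1023] range.
    return [s << 2 for s in samples]

source = list(range(256))                     # every level 8 bit can hold
promoted = promote_8bit_to_10bit(source)
print(len(set(promoted)))                     # 256 -- still a gallon
print(max(promoted))                          # 1020 -- inside a 1023 bucket
```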

So we've got a bunch of stuff called "32 bit" in editing, much of it unrelated. Marco is correct, and what follows is the most important part: 32 bit RGBA is not 32 bit pipeline math, is not 32 bit colour depth, is not 32 bit architecture, is not..., well, you get it.

karma17, you are a gem in an ocean of reluctant learners. Maybe the best a teacher can do is try to be a little less wrong than the resistance. For example, Marco is usually less wrong than me.

Musicvid wrote on 12/17/2018, 11:11 AM

If the renderer doesn't support more than 8 bit, you're done, 

I like that. So much misinformation on the street.

karma17 wrote on 12/17/2018, 12:15 PM

Thank you so much for the detailed response. I really appreciate it. Slowly getting it.

One last thing if I can ask.

In the Magix V16 manual, on page 46, there is this statement:

"When using 8-bit input/output, the 32-bit floating point (video levels) setting can prevent banding from compositing that contains fades, feathered edges, or gradients."

Is this what's adding to the confusion, leading people to think "8-32-8" is beneficial? I notice the statement refers only to compositing and not video per se, but it still suggests an anti-banding benefit, which I have seen stated in other posts and places.

http://forums.creativecow.net/docs/forums/post.php?forumid=24&postid=982291&univpostid=982291&pview=t

Musicvid wrote on 12/17/2018, 12:52 PM

"When using 8-bit input/output, the 32-bit floating point (video levels) setting can prevent banding from compositing that contains fades, feathered edges, or gradients."

I can say with some confidence that I disagree with that statement, and removing the word "output" would correct it to my satisfaction. In fact, hours spent testing just that theory failed to reveal any difference. Never got a speck on the radar. Same with 8 bit input.

8 bit banding is 8 bit banding. Inflated grading precision is reversed by simple decimation, plus a bit of low grade shadow noise "may" be introduced (untested).

If the renderer doesn't support more than 8 bit, you're done.

https://www.vegascreativesoftware.info/us/forum/10-bit-vs-8-bit-grading-the-musical--111748/

That said, there is no harm either in doing it that way, except it takes longer.

Thanks again for keeping an open mind and considering all points of view along your journey.

Someone asking all the right questions makes me a bit nervous, if one must know.

And to those who truly see a difference with their untrained eyes, go right ahead and do what you are doing. Grading in outer space can really do no harm, unless you're working under a production deadline, that is...

Musicvid wrote on 12/17/2018, 2:45 PM

There is one exception I ran across years back.

IF you are working with 8 bit uncompressed 4:4:4 source, AND you plan on YUV intermediate, go with the float grading and 10 bit output. Reason: the extra bits do negate some of the effects of degrading to 4:2:2 chroma subsampling, at least in theory. And my earliest tests showed a clear improvement in 10-8 dither noise in x264 over Mainconcept in Vegas. Again, Vegas has undergone a number of revisions since then, so I welcome newer input.

karma17 wrote on 12/17/2018, 10:05 PM

Thanks again for taking the time to provide such a detailed response. I shoot in 10-bit AVC-intra, so I'm just trying to be sure I understand and this really helps.

Have a great Holiday!!

Musicvid wrote on 12/17/2018, 11:10 PM

I shoot in 10-bit AVC-intra, so I'm just trying to be sure I understand and this really helps.

I would edit, grade, and master 10 bit door to door. Then I would render 8 bit for delivery, keeping my masters or project for future HDR prints.

Thanks again for all the right questions. Listen to everything that's been said, then go and do the right thing. Happy Holidays too!

fr0sty wrote on 12/17/2018, 11:58 PM

Wouldn't that make Vegas' reference to the project bit depth under the preview monitor false as well? In the preview monitor it lists the resolution followed by bit depth, so for instance an 8 bit project at 4K is listed under the preview monitor as "3840x2160x32", which adds up to 8 bits per channel in RGBA. Put it into 32 bit mode and this number jumps to 128... but technically it is only processing bit depth values at whatever bit depth the source video is encoded at, correct? So would the proper way to list it be "3840x2160x40" for a project using 10 bit video?
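
The arithmetic behind those preview labels, spelled out (just the multiplication; the label strings are from the post above, not anything Vegas documents):

```python
channels = 4                       # R, G, B, A

print(8 * channels)                # 32  -> the "3840x2160x32" label (8 bit integer)
print(32 * channels)               # 128 -> the label in 32 bit float mode
print(10 * channels)               # 40  -> what true 10 bit integer RGBA would read
```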

Last changed by fr0sty on 12/18/2018, 12:00 AM, changed a total of 2 times.

Systems:

Desktop

AMD Ryzen 7 1800x 8 core 16 thread at stock speed

64GB 3000mhz DDR4

Geforce RTX 3090

Windows 10

Laptop:

ASUS Zenbook Pro Duo 32GB (9980HK CPU, RTX 2060 GPU, dual 4K touch screens, main one OLED HDR)

Marco. wrote on 12/18/2018, 2:30 AM

No matter which (integer) bit depth the source media uses, when the project property is set to floating point, any video processing inside that project (except for some FX which don't support floating point math) is done with floating point math.
So I think the indication in the preview window isn't totally wrong, but imho it would be more precise to label that thing "floating point" instead of "128".

max8 wrote on 4/7/2019, 11:45 AM

And rendering is different from project's internal video processing. If the renderer doesn't support more than 8 bit, you're done, which is the case when using Magix Intermediate. All of the Magix renderer versions of ProRes are 8 bit only. You need to use a 10 bit video codec instead (or a 10 bit or higher (or a float point) image sequence codec).

Hello,

I almost started a ProRes "10 bit" render process before reading this.

I made a mistake while recording a ProRes 422 HQ file. I created a Vegas project to correct it (the process creates heavy CPU load) and want to re-render it so I can use the corrected version as if nothing happened. Now I've learned that a ProRes-to-ProRes render at 10 bit isn't possible. Which intermediate codec will render in 10 bit? (Uncompressed etc. is not an option because of the file size.) I've read about a problem with Sony XAVC-I (or was it Sony AVC?). I'm using Vegas Pro 15 (build 321, since later builds displayed green frames with P2 footage).

If I understood correctly, it's that simple: import the ProRes file into a 32 bit float project and re-render using a 10 bit capable codec. (?)

Marco. wrote on 4/7/2019, 12:09 PM

The video codecs which support 10 bit export from Vegas Pro are:

  • Sony YUV (uncompressed)
  • XAVC-I
  • XAVC_L
  • HEVC
  • XDCAM SR
  • Cineform (needs install of 3rd party codec)

And also there are some choices of image sequence export with at least 10 bit:

  • DPX: 10 Bit
  • TIFF: 16 Bit
  • EXR: Float Point (32 Bit)

max8 wrote on 4/7/2019, 2:48 PM

OK, thanks.

But the imported ProRes is decoded as 10 bit (with 32 bit float project settings) and an XAVC-I export will contain the "real" 10 bit?

Eagle Six wrote on 4/7/2019, 2:56 PM

The video codecs which support 10 bit export from Vegas Pro are:

  • Sony YUV (uncompressed)
  • XAVC-I
  • XAVC_L
  • HEVC
  • XDCAM SR
  • Cineform (needs install of 3rd party codec)

And also there are some choices of image sequence export with at least 10 bit:

  • DPX: 10 Bit
  • TIFF: 16 Bit
  • EXR: Float Point (32 Bit)

@Marco. there are no 10 bit templates for MAGIX Intermediate?

System Specs......
Corsair Obsidian Series 450D ATX Mid Tower
Asus X99-A II LGA 2011-v3, Intel X99 SATA 6 Gb/s USB 3.1/3.0 ATX Intel Motherboard
Intel Core i7-6800K 15M Broadwell-E, 6 core 3.4 GHz LGA 2011-v3 (overclocked 20%)
64GB Corsair Vengeance LPX DDR4 3200
Corsair Hydro Series H110i GTX 280mm Extreme Performance Liquid CPU Cooler
MSI Radeon R9 390 DirectX 12 8GB Video Card
Corsair RMx Series RM750X 740W 80 Plus Gold power pack
Samsung 970 EVO NVMe M.2 boot drive
Corsair Neutron XT 2.5 480GB SATA III SSD - video work drive
Western Digitial 1TB 7200 RPM SATA - video work drive
Western Digital Black 6TB 7200 RPM SATA 6Bb/s 128MB Cache 3.5 data drive

Bluray Disc burner drive
2x 1080p monitors
Microsoft Window 10 Pro
DaVinci Resolve Studio 16 pb2
SVP13, MVP15, MVP16, SMSP13, MVMS15, MVMSP15, MVMSP16

max8 wrote on 4/7/2019, 5:16 PM

I did some testing:

I imported ProRes 10 bit footage (containing a slight brightness gradient) into a 32 bit float project, applied a levels filter with gamma set to 5, and rendered to ProRes HQ. Then I reimported that and did the same. When I imported this clip (two render generations) and applied four levels filters with gamma 0.45 (almost the opposite effect of two passes at 5.0), there was almost no banding visible, just a slightly rough histogram. (With XAVC-I there were slightly more banding/artifacts, and with DNxHD set to 10 bit, clearly visible banding.)

When I did the same (again with ProRes) but with project settings set to 8 bit (only for rendering), there was considerable banding visible.

Doesn't that indicate that ProRes is being rendered in 10 bit? (And that DNxHD behaved like it "should"?)
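
That test can be roughly simulated in Python (a toy model of the pipeline, not Vegas itself; it assumes the levels filter's gamma behaves as out = in^(1/gamma), under which two passes of 5.0 and four of 0.45 very nearly cancel):

```python
# Push a gradient through gamma 5.0 twice and gamma 0.45 four times.
# Quantizing to 8 bit after every "render generation" destroys levels
# (banding); staying in float until one final quantize keeps them.

def apply_gamma(samples, gamma):
    return [s ** (1 / gamma) for s in samples]

def quantize_8bit(samples):
    return [round(s * 255) / 255 for s in samples]

gradient = [i / 1023 for i in range(1024)]    # a smooth 10 bit ramp
generations = (5.0, 5.0, 0.45, 0.45, 0.45, 0.45)

eight_bit = gradient
for g in generations:                         # quantize between generations
    eight_bit = quantize_8bit(apply_gamma(eight_bit, g))

floating = gradient
for g in generations:                         # quantize only once, at the end
    floating = apply_gamma(floating, g)
floating = quantize_8bit(floating)

print(len(set(eight_bit)), len(set(floating)))  # far fewer levels survive 8 bit
```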

AVsupport wrote on 4/7/2019, 8:37 PM

Some might know I've recently fallen in love with the Grass Valley HQX intermediate (via the Great Otter Scripts), as it 'Resolves' all my XAVC-S playback issues in VP, which doesn't handle Long-GOP codecs very well.

Interestingly, this is indeed a 10-Bit intermediate:

https://www.grassvalley.com/docs/WhitePapers/professional/GV-4097M_HQX_Whitepaper.pdf

Whilst I do not know what it does with a 10-bit source (because I cannot check), I understand that it uses 10 bit for an 8-bit source, and viewed in Properties it looks like this:

note the 'x24'.

Playback via aviplug.dll. Works well and fast. But all this might not answer your question ;-)

my current Win10/64 system (latest drivers, water cooled) :

Intel Coffee Lake i5 Hexacore (unlocked, but not overclocked) 4.0 GHz on Z370 chipset board,

32GB (4x8GB Corsair Dual Channel DDR4-2133) XMP-3000 RAM,

Intel 600series 512GB M.2 SSD system drive running Win10/64 home automatic driver updates,

Crucial BX500 1TB EDIT 3D NAND SATA 2.5-inch SSD

2x 4TB 7200RPM NAS HGST data drive,

Intel HD630 iGPU - currently disabled in Bios,

nVidia GTX1060 6GB, always on latest [creator] drivers. nVidia HW acceleration enabled.

main screen 4K/50p 1ms scaled @175%, second screen 1920x1080/50p 1ms.

fr0sty wrote on 4/7/2019, 8:42 PM

Regarding Magix Intermediate, I know I was once trying to track down a bug that prevented Rec. 2020 color space from rendering to that codec; it would default back to Rec. 709 instead. I don't know if this also meant it was only encoding 8 bit, but I do know there was a color space limitation at some point. It could have been fixed in a more recent update, as this was months back. I did report it to the team.

fan-boy wrote on 4/7/2019, 9:14 PM

@Marco.

I just did a sample render with Vegas project settings set to 32 bit. The big deal is this:

I used Magix Pro-Res 422LT. Vegas says that the imported 422LT video is 32 bit. But when I drop it into DaVinci Resolve, it says it is 10 bit in the Metadata tab. So which is it, a 32 bit render or a 10 bit render?

If Magix Pro-Res 422LT really is a 10 bit render, then that is pretty cool, at 1 billion colors.

One note on 32 bit projects in Vegas: I always notice, when importing 8 bit, that the viewer looks more vibrant as soon as I switch to 32 bit with gamma 2.222 and Transform off (no ACES). Using 32 bit does seem to improve imported timeline media, even if the final render goes out to 8 bit (render-to-render should retain more quality using a 32 bit project).

Vegas Help says, to improve timeline performance, to edit in 8 bit and then switch the project to 32 bit when ready to render. I was doing that until I ran into an issue, as you mentioned above: a gradient on the timeline looked OK after adjustment in 8 bit, but switching to 32 bit for the render made those gradient portions look different in the viewer. So it seems best to do it all in 32 bit, or do it all in 8 bit. (In Vegas Pro, do it all in 32 bit.)

Marco. wrote on 4/8/2019, 4:59 AM

"But the imported ProRes is decoded as 10 bit (with 32 bit float project settings) and an XAVC-I export will contain the "real" 10 bit?"

Yes.

"there are no 10 bit templates for MAGIX Intermediate?"

That's right, there are no 10 bit presets and there is no 10 bit setting.

"Interestingly, this is indeed a 10-Bit intermediate"

Yes and no. HQX offers "real" 10 bit encoding, but Vegas Pro would always decode HQX as 8 bit. The only AVI codecs which Vegas Pro can read as 10 bit are CineForm and Sony YUV.

Marco. wrote on 4/8/2019, 5:06 AM

"Vegas says that 422LT imported video is 32 bit."

Vegas Pro reports the total number of bits across four channels, while "8 bit" or "10 bit" means bits per channel, so this actually is 8 bit (per channel).

"when I drop it into DaVinci Resolve, it says it is 10 bit in the Metadata tab."

This confuses me. Maybe I'm wrong about the missing 10 bit ProRes export from Vegas Pro. I'll try to check again.

"I always notice, when importing 8 bit, the viewer always looks more vibrant, as soon as I switch to 32 bit with gamma 2.222 with Transform OFF (no ACES)."

This depends on the floating point mode selected. Usually you should use "Video Levels" only.

Edit:
I re-checked the bit depth of Magix Intermediate exports out of floating point Vegas Pro projects and this really is 10 bit, not 8 bit like I said before.

Eagle Six wrote on 4/8/2019, 10:28 AM
Edit:

I re-checked the bit depth of Magix Intermediate exports out of floating point Vegas Pro projects and this really is 10 bit, not 8 bit like I said before.

@Marco. Thank You.
