"Dithering" in 32-bit floating point projects

larry-peter wrote on 8/4/2013, 4:44 PM
I had posted a bit on this topic in another thread, and rather than take that thread further off track, I wanted to say that I think there are several ways to minimize the limitations of 8-bit source video when color correcting in 32-bit floating point, color banding in particular.

I’m still experimenting to find methods that work best, but I started out thinking about dithering as it applies to a digitally sampled source. This is the original test I did that showed me there must be ways to improve the look (at least as it applies to banding) of 8 bit source footage.

I made a very dark 8-bit gray-scale gradient .png that displayed from 0-10 in Vegas’ waveform. That’s the top waveform display in the image below. Stairstepping is obvious. My goal was to see if I could stretch the luminance of that image to 0-100 and reduce the stairstepping.

I put that .png in a 32-bit FP project. Stretching the luminance with a Levels plugin just spread the stairsteps further apart, as you would expect. Then I created a full 0-255 gray-scale gradient on a video track above the .png, set the compositing mode to Lighten and set the track opacity to around 4%. Although it was hard to see any difference, there was a slight lift on the left side of the waveform; the peak level almost reached 11. I thought of this as adding the dithering “noise”. Then I put a chain of several Levels plugins on the output and used extreme levels and gamma adjustments until I got the range from 0-100 covered. The middle waveform display shows what this created: many additional “steps” of luminance that weren’t there before.
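For anyone who wants to poke at the math outside of Vegas, here's a rough numpy sketch of what I think is going on. The numbers and the Lighten-composite formula are my own assumptions, not the actual Vegas internals, so treat it as an illustration only:

```python
import numpy as np

width = 1920
x = np.linspace(0.0, 1.0, width)

# 8-bit source: a dark gradient occupying only the bottom ~10% of the range,
# already quantized to integer code values (the "stairsteps").
dark_8bit = np.round(x * 0.10 * 255.0) / 255.0

# Full-range gradient on the track above, Lighten composite at ~4% opacity.
# My guess at the math: out = base + opacity * max(overlay - base, 0)
overlay = x
opacity = 0.04
dithered = dark_8bit + opacity * np.maximum(overlay - dark_8bit, 0.0)

def stretch(img, gain=10.0):
    """Crude stand-in for the Levels chain: gain the 0-10% range up toward 0-100%."""
    return np.clip(img * gain, 0.0, 1.0)

for name, img in [("plain 8-bit", dark_8bit), ("with gradient overlay", dithered)]:
    out = np.round(stretch(img) * 255.0)          # re-quantize for display
    print(name, "-> distinct levels after stretch:", len(np.unique(out)))
```

The plain 8-bit ramp keeps its original handful of levels no matter how hard you stretch it; the version with the faint overlay picks up many in-between values.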

The bottom waveform display shows the results of duplicating this process in an 8 bit project.

Since that initial test, I've been experimenting with similar methods when color correcting in 32-bit floating point: things like using color gradients the way you would a glass filter on the lens. It seems to take only a tiny amount of "new" floating-point information added in to get around some of the 8-bit barriers. You can argue whether the result is "better" or "worse", but it can allow more extreme processing of 8-bit source material without artifacts.

This is just a personal exercise I thought I would share. With the time/money I’ve spent playing with this I probably could have bought a 10bit camera by now.

Would love to hear any comments, suggestions.


Comments

musicvid10 wrote on 8/4/2013, 5:34 PM
Nice that you're doing some deeper inquiry into this.
I ran some less involved tests a while back, using gradients created in 32-bit float projects in Vegas. Starting with this best-case scenario, the critical point for introducing banding appeared to be the conversion to 8-bit (in the render), a delivery medium we're unfortunately stuck with for some time to come. I haven't tested my theory that a little Gaussian noise might reduce, or at least mask, some of the banding.
larry-peter wrote on 8/4/2013, 6:31 PM
It seems like it could. Or perhaps a script could introduce some random values into the 32-bit math stream. (I'm not familiar enough with Gaussian equations to know how "random" the noise elements would be.)
wwjd wrote on 8/4/2013, 6:53 PM
how does adding film grain affect this kinda thing?
larry-peter wrote on 8/4/2013, 9:32 PM
Any pixel-sized element, such as grain, would defeat the purpose, at least for what I'm looking for. I'm nowhere near an expert in the math behind image processing, but I'm using what I know about dithering in the audio realm to see how it can apply to video.

The idea in audio dithering is to introduce an inaudible noise signal that has just enough power to cause some random "flipping" of the last couple of zeros and ones in the digital word. In the case of an audio fade, the idea is that you won't hear the quantization or "stairstepping" of the final part of the fade to silence; those random flipped bits keep it from being noticed. (Drastic simplification, but close.)

When you take a video signal from an 8-bit environment to a floating-point math environment, my idea is that you can introduce low-level floating-point "noise" to give you in-between-8-bit values to work with that weren't originally there. Chunks of film grain are new visible elements; the concept of dithering works by adding (supposedly) imperceptible signals.
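To put the audio analogy in rough code terms (a toy illustration only, nothing Vegas actually does): a value that falls between two integer codes is destroyed by plain rounding, but survives as an average once a little sub-LSB noise is mixed in before the quantize.

```python
import numpy as np

rng = np.random.default_rng(0)
true_level = 100.4              # a "real" level sitting between codes 100 and 101
n = 100_000                     # think of these as successive samples or pixels

plain = np.round(np.full(n, true_level))                        # no dither
tpdf = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)    # triangular (TPDF) dither noise
dithered = np.round(true_level + tpdf)

print("plain rounding:", plain.mean())      # 100.0 -- the 0.4 is simply gone
print("with dither   :", dithered.mean())   # ~100.4 -- preserved as an average
```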
wwjd wrote on 8/4/2013, 10:27 PM
I love your idea here but I'm not too techie so I didn't grasp how you did it - even though the words made sense as I read them.... have you applied it to real video yet? And what did it do? Could you apply a grain layer and see what that does? I'd heard grain works in a sort of similar way, adding in additional pixel confusors - sort of a poor man's dither.
I've also upscaled 8-bit to 4K, which seems to add additional color levels in between the pixels that were already there. It's not exact science, but if you grab 4 pixels and stretch them across 16, the upscaler might be adding "relative" colors, closest-match gradients, effectively giving in-between levels.
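Something like this, I think (assuming the upscaler does a plain linear interpolation, which is just a guess on my part):

```python
import numpy as np

row = np.array([100, 110, 120, 130], dtype=float)    # 4 source pixels
x_new = np.linspace(0, len(row) - 1, 16)             # stretch them across 16 pixels
upscaled = np.interp(x_new, np.arange(len(row)), row)

print(np.round(upscaled, 1))   # interpolated values land between the original levels
```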
Or maybe transmogrifying into CINEFORM or another format might fill in some of the blanks for 8-bit files.

Could you try the grain thing in your test to graph the reaction?
Gorilla Grain and Holy Grain both offer free grain:
http://gorillagrain.com/features
http://holygrain.com/downloads.html
farss wrote on 8/5/2013, 3:47 AM
I'm very much inclined to agree with Musicvid's comments above.
Just some random additional thoughts in no particular order:

1) Vegas does very little to help us work beyond 8 bpc. The scopes only display 8-bit values, and the media generators are also only 8-bit. If you want to end up with something beyond 8 bpc in Vegas, you need to use divide compositing.

2) Dithering is used when down-sampling. All cameras' internals work at greater than 8 bits, typically 12 bits and above. The down-sampling stage in the camera would seem the appropriate place for dithering.

3) Deliberately adding noise may appear to cure one problem and could introduce another, e.g. blocking when using lossy codecs.

4) Many HDTVs and monitors use dithering. To really see what's going on with the image during these experiments you need good tools. I've been misled by this issue.

Bob.
larry-peter wrote on 8/5/2013, 7:49 AM
Bob, you're right on all counts. My use of the term dithering was just to connect with the understanding users already have of the term. What I'm playing around with is not dithering in the sense of the last stage of down-sampling; I'm just adding low-level information to see if it can be used transparently to eliminate some artifacts of working with 8-bit sources.

Since noise removal is usually one stage in transcoding to a low bit rate codec, adding noise would probably introduce problems with low bit rate lossy formats.

To answer wwjd, I have used a version of this to solve problems with real-world footage. One example: in the documentary I just finished, there was an underexposed shot of a child visiting with Santa that I wanted to include in a dreamy montage. Even working in 32-bit, I couldn't eliminate the banding I saw when I added enough diffusion to match the other shots. I duplicated the track, added a heavy glow to the new track, and used an Add compositing mode with around 4% opacity. After adjusting levels and gamma to match the surrounding scenes, I could no longer see the banding, even in the 8-bit render to MP4. When I have time to restore that project archive, I'll post stills of that scene. I have no delusions that this is making anything BETTER, but if it's perceived as better I'll take it.
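For what it's worth, my mental model of that Add composite at low opacity is roughly this (values 0-1; the formula is my assumption about how the opacity mix works, not something I've verified against Vegas):

```python
import numpy as np

def add_composite(base, glow, opacity=0.04):
    """Additive composite of a glow layer at low opacity, clipped to legal range."""
    return np.clip(base + glow * opacity, 0.0, 1.0)

# e.g. three identical banded shadow values get nudged apart by a smooth, blurred glow layer
print(add_composite(np.array([0.10, 0.10, 0.10]), np.array([0.20, 0.35, 0.50])))
```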

Back in the days when I was working with Jaleo, it was an 8-bit I/O NLE but did 12-bit internal math. It had a "Dither" node that was meant to be applied before render, and it made a big difference with color banding in the output.

wwjd wrote on 8/6/2013, 10:51 AM
atom, any chance you could post up an idiot proof, 5+(?) step procedure for dithering? I kinda followed you in the first post, but wasn't sure exactly how to try it myself. My play footage in my other post would be perfect to try this on with that gradient blue sky
larry-peter wrote on 8/6/2013, 2:21 PM
Wwjd, if you're looking for idiot-proof, you're talkin' to the wrong guy. ;-) This is a chimp playing with firecrackers. I'm still trying to convince myself that it works at all. If you want to join in, be prepared to spend a lot of time that could be used more productively, and you'll be constantly wondering how much is subjective. There are no good comparisons to the original footage, because you're adding something that wasn't originally there in hopes it will allow more color correction to be done. (At least that's my goal.)

Here is a quick example I threw together, grabbing the first footage I found with a gray sky. It's out of focus and badly exposed, but I'm going to take this to the extreme anyway.
Here’s a grab of the original footage.



Let's say I want to make the sky blue. I added New Blue ColorFast as a video output effect, so everything I do on the timeline is prior to the correction. (I set ColorFast's mask blend and spread very low because I'm purposely trying to make this look as bad as possible and show as many artifacts as possible.) I chose a deep blue tint, cranked it to the max and also took Saturation to the max. Here it is. Ugly. Not only is banding apparent, you can even see the macroblocks showing up from the AVCHD codec.



So let me see what I can do to make the sky more realistic, perhaps stretch the levels a bit to add some gradation. I duplicate the track, mask the sky and apply Color Curves as an event effect. Below is the result. Better, but macroblocks are still visible, as well as lots of banding.


Then I turned off the Curves plugin on the masked sky event. I created a 32-bit gray-scale gradient as a Photoshop file, full black to white, and added it on a video track above my original, oriented so black was over the ground and white over the sky. Composite mode was set to Overlay and opacity to 40%. I settled on 40% because my luminance peaks were close to what I had with the previous attempt. The result below is still ugly because of the extreme CC settings I'm using, but even though the macroblocks are visible in the darker areas, the transitions from blue to white in the upper sky look way better. If I tweaked the gradient so it was mid-gray to white instead of black to white, I'm sure it could look even better. These kinds of results make me think I'm adding something to the 8-bit image that the color correction can grab onto in a usable fashion.
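If it helps anyone picture what the Overlay composite is doing per pixel, here's my rough understanding in code: the standard Photoshop-style overlay formula with a straight opacity mix. I don't know Vegas's exact implementation, so consider it an approximation.

```python
import numpy as np

def overlay_blend(base, blend):
    """Standard overlay: darken where the base is dark, lighten where it's bright."""
    return np.where(base < 0.5,
                    2.0 * base * blend,
                    1.0 - 2.0 * (1.0 - base) * (1.0 - blend))

def composite(base, blend, opacity=0.40):
    """Mix the blended result back with the original at the track opacity."""
    return base * (1.0 - opacity) + overlay_blend(base, blend) * opacity

# e.g. a mid-sky pixel against the white end of the gradient, at 40% opacity
print(composite(np.array(0.7), np.array(0.9)))
```

Because the gradient varies smoothly across the frame, neighboring pixels that share the same 8-bit value come out of this with slightly different float values, which is what I think the color correction is grabbing onto.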



Is all of this worth it? Am I just kidding myself? I don't know. I don't have the gear to really test it; it's playtime experiments for me right now. I have no better tutorial to offer than this, because I'm just trying various things based on the assumption that if I can add values to an 8-bit image that fall in between 8-bit integers, I may be able to do things I couldn't before. Pursue this at your own risk.
videoITguy wrote on 8/6/2013, 7:45 PM
And, quoting from the "interesting video from Sony" thread:
Subject: RE: interesting video from Sony
Reply by: Serena
Date: 8/6/2013 5:16:31 PM

>>>How to get 10 bit video out of Vegas to our monitors??<<<<

The path is through a BMD Decklink (Options/secondary viewer): click 10-bit output after clicking 32-bit video. My HP Dreamcolor grading monitor, for 10-bit input, demands RGB into its DisplayPort (or it reverts to native gamut), so I have to feed the Decklink output through a BMD HDLink Pro 3D DisplayPort. Don't know about the needs of other 10-bit monitors.

Subject: RE: interesting video from Sony
Reply by: videoITguy
Date: 8/6/2013 5:41:32 PM

Serena, your point is about monitoring 10-bit video digital intermediates in a 32-bit Vegas project, correct? Well understood. BUT from Vegas, what are you using as a render codec for final output that is any more than an 8-bit rendered file?

Subject: RE: interesting video from Sony
Reply by: Serena
Date: 8/6/2013 7:02:33 PM

>>>>what are you using as a render codec for final output that is any more than an 8-bit rendered file?<<<<<

I use the 10-bit Cineform codec, which renders 10-bit AVI files out of Vegas. To date I've been using 8-bit source files (PMW-EX1 internally recorded) but transfer these to 10-bit 4:2:2 for post. So getting 10-bit into the monitor hasn't been necessary, and I've been happily working through a DVI RGB path. Now I'm moving into the BMD cinema camera/Resolve world, and while it isn't essential to put 10-bit on the monitor (Resolve will output 8-bit for monitoring), it is preferable for grading. Resolve will not output to a secondary monitor via a graphics card (excuses based on monitor quality), so a Decklink card is needed. My initial error was buying a BMD Intensity Pro card, which works but is 8-bit (although the BMD website implies 10-bit).

And my added comment: the included BlackMagic MJPEG codec for direct capture through the BMD Intensity card's inputs would suggest that the card could pass a 10-bit source into the MJPEG codec. But I believe, as Serena suggests, that this is a false assumption about the card: it was really designed to capture from 8-bit sources and pass those 8 bits to a potentially 10-bit codec.



musicvid10 wrote on 8/6/2013, 8:15 PM
My name isn't Serena, but the point that we're stuck with 8-bit delivery for now is the same one I made above.

But fortunately, not all 10-to-8-bit downsampling is equal. Rendering a pristine gradient from Vegas to Mainconcept AVC shows more banding, for example, than rendering 10-bit 4:2:2 DNxHD to 8-bit x264 in Handbrake. The beauty of a test like this is that the results are independent of both source and bitrate (it's a still).

Here's the Vegas 8 project I used; feel free to use it for testing various outputs to compare banding. It's already been leveled to REC 709 to simulate real-world conditions.
http://dl.dropboxusercontent.com/u/20519276/yuv.veg