WOT: 10bit vs 8bit camera codec

Comments

Bill Ravens wrote on 8/18/2010, 3:49 AM
Serena wrote that the sensor is a linear device. In my years of experience in the remote sensing profession, sensors are rarely linear. They usually have very complicated intrinsic/native gamma curves. The electronics provide a compensating gamma curve to result in a linear output. By changing the bias voltage on the sensor, the "local" gamma curve is optimized.
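A toy sketch of that compensation idea, assuming a simplistic power-law "native gamma" purely for illustration:

```python
import numpy as np

# Toy illustration of the compensation Bill describes: if a sensor's native
# response were x**g (an assumed, simplistic "gamma"), the read-out electronics
# could apply the inverse curve x**(1/g) so the combined output is linear.
native_gamma = 0.45                                   # assumed native curve exponent
light = np.linspace(0.0, 1.0, 11)                     # relative scene intensity

sensor_out = light ** native_gamma                    # non-linear sensor response
compensated = sensor_out ** (1.0 / native_gamma)      # electronics' correction

print(np.allclose(compensated, light))                # True: net response is linear
```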

So, inevitably, someone in this kind of discussion is going to say: "Yeah, but what I see is stairstepping with 8 bit. Can I fix that with a 10 bit display?"

So, that's my interest in a discussion like this. When I look at gradients displayed on my Samsung monitor, especially shots with a lot of sky, I see a very objectionable and distracting series of concentric color steps, instead of a continuous gradient. I see this on footage captured with my EX1; and, I see this on footage on broadcast television.

Is that stairstepping produced by the camera sensor, the camera processing, the NLE, the compression algorithms, the display electronics, ad infinitum? All I know is that I want the stairstepping in the images of the sky to go away.
megabit wrote on 8/18/2010, 4:22 AM
This is a very interesting discussion, and since I'm not educated in electronics (I'm an MSc in mechanics) I didn't take any active part in it... not until it returned to the basics, i.e. the practical aspects of the final product (the picture we're watching), and Bill saying:

"All I know is that I want the stairstepping in the images of the sky to go away."

Now this is what I really care about, and - with all the benefits of the 10 bit Y'CbCr over HD-SDI - I know all too well from my experience that those benefits can be lost somewhere down the road, on the way to even the greatest viewing device.

An example: my plasma HDTV is a really good model, and usually I don't see any serious banding at all with a good source (like my own 8 bit nanoFlash recordings). However, sometimes when I have VLC open on my primary monitor, ready to play something, and only then switch my ATI card to extend the desktop onto the plasma, something weird happens: after dragging the VLC window onto the plasma, the color depth looks like it's not even 8 bits, the banding becomes so horrid!

This only happens sporadically, when I'm not careful enough and use the above sequence of actions. I have no idea what's causing it, but I'm sure that even if I had my video in 10 bit all the way from the camera, the same weird thing would happen and spoil my viewing experience...


Serena wrote on 8/18/2010, 6:08 AM
>>>>wrote that the sensor is a linear device<<<<
The voltage stored by a CCD pixel is linearly proportional to photon count. I should have been more specific about the type of sensor; I was assuming that we are talking about CCD or CMOS (and only presume that a CMOS has a similar characteristic).

Yes, it's that banding in subtle colour gradients that we want to banish. It needs 10 bit to achieve that in post, so we naturally think we need 10 bit out of the camera. The EX will give 10 bit out of the HD-SDI, but this is rounded/truncated to 8 bit if recorded on a nanoFlash. That was the essence of the debate that started on CML-prosumer, where Alister Chapman argued that 10 bit is excessive and counter-productive for any camera with a noise level no better than -56dB. But is that true? Coda.
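A back-of-the-envelope check of that noise argument, assuming "-56dB" means the ratio of full-scale signal to RMS noise:

```python
import math

# Back-of-the-envelope check of the "-56 dB noise" argument.
# Assumption: -56 dB is the full-scale signal to RMS noise ratio (20*log10).
snr_db = 56
distinct_levels = 10 ** (snr_db / 20)      # ~631 levels sit above the noise floor
bits_needed = math.log2(distinct_levels)   # ~9.3 bits to encode them

print(f"distinguishable levels: {distinct_levels:.0f}")
print(f"bits to encode them:    {bits_needed:.1f}")
# 8 bits (256 codes) falls short of ~631 levels, while 10 bits (1024 codes)
# covers them with headroom - which is why the claim is debatable.
```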
farss wrote on 8/18/2010, 6:18 AM
"Is that stairstepping produced from the camera sensor, the camera processing, the NLE, the compression algorithms, the display electronics, ad infinitum. All I know is that I want the stairstepping in the images of the sky to go away. "

Most likely the display itself. Some panels are only 6 bit. My old Dell 24" is just horrid. I was doing something in AE a couple of days ago and the banding I got myself into was so bad the glow gradient looked like it was made out of Lego.

Bob.
BrianAK wrote on 8/18/2010, 7:57 AM
"Most of us have 8 bit displays, so at the end of the day we often need to deliver an 8 bit stream. While that would be intuitive, that's not the case because 8-bit Y'CbCr is not the same as 8-bit RGB.

The legal range for Y' is 16-235 (for 8-bit)... "


Good point, 8 bit isn't really 256 codes. I was thinking about this with respect to the 14-bit Phantom imagery I have been working with (given that it's probably not 14 usable bits). The imagery needs to be converted for 8-bit (legal range) delivery.
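A minimal sketch of that kind of conversion, assuming a plain linear scale from 14-bit codes into the 16-235 legal range (the real Phantom workflow will apply its own transfer curve):

```python
import numpy as np

# Hypothetical sketch: linearly scaling 14-bit codes into 8-bit legal-range Y'
# (16-235). This only illustrates how few output codes the legal range leaves.
def to_legal_8bit(img14: np.ndarray) -> np.ndarray:
    x = img14.astype(np.float64) / (2 ** 14 - 1)   # normalise to 0..1
    y = 16 + x * (235 - 16)                        # map into the legal range
    return np.clip(np.round(y), 0, 255).astype(np.uint8)

ramp = np.arange(0, 2 ** 14, dtype=np.uint16)      # a full 14-bit test ramp
out = to_legal_8bit(ramp)
print(out.min(), out.max(), np.unique(out).size)   # 16 235 220 -> only 220 codes
```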



"Is that stairstepping produced from the camera sensor, the camera processing, the NLE, the compression algorithms, the display electronics, ad infinitum. All I know is that I want the stairstepping in the images of the sky to go away"

It seems to me that they are all equally guilty. In a perfect world, a better sensor would produce signal levels that, when digitally converted at a higher bit depth, would let the camera processing produce a file with more meaningful data. The NLE would then be able to process this higher bit depth, and the compressor would produce a higher-bit-depth output file that a high-bit-depth monitor could display.

As you can see, I'm addicted to bits. What gets me thinking is when we talk about the useful number of bits or gradations that our eyes can "see". I know that when I look outside I can see an enormous amount of dynamic range without banding; I have a hard time equating that to bit depth.

GlennChan wrote on 8/18/2010, 10:09 AM
"i.e. is there any reason not to record 10 bit; would doing so produce a worse outcome? I think we can all agree the answer is No."
Some compression schemes may benefit from recording at a lower bit depth?

DCT compression is awful at ultra-low bitrates... so you want to throw away information first so it doesn't have to compress as much. That's why good JPEG encoders do not use chroma subsampling when the compression ratio is low, and do use it for lower-quality modes. This might also apply to bit depth (?).
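A quick way to see that trade-off, assuming Pillow is installed and "gradient.png" is any test image you have on hand:

```python
from PIL import Image

# Keep full chroma (4:4:4) when quality is high; subsample (4:2:0) when
# squeezing the bitrate. Assumption: "gradient.png" is any test image on disk.
img = Image.open("gradient.png").convert("RGB")

img.save("high_q_444.jpg", quality=95, subsampling=0)  # 4:4:4, mild compression
img.save("low_q_420.jpg", quality=30, subsampling=2)   # 4:2:0, heavy compression
```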

"Is that stairstepping produced by the camera sensor, the camera processing, the NLE, the compression algorithms, the display electronics, ad infinitum?"
The transfer function of an LCD panel is some sort of S-shaped curve. You need to adjust the digital values going into it to compensate for that, so you lose some performance there. The display may try to implement some sort of dithering algorithm (either temporal or spatial), but those processes can still introduce artifacts.
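A toy model of that compensation step, assuming (purely for illustration) a smoothstep-shaped panel response and inverting it with a 1D LUT:

```python
import numpy as np

# Toy model only: pretend the panel's native response is an S-curve
# (smoothstep) and build the inverse 1D LUT the electronics would apply so
# that input code values come out linear on screen.
codes = np.linspace(0.0, 1.0, 256)
panel_response = codes ** 2 * (3 - 2 * codes)        # assumed S-shaped response

# Invert numerically: for each desired linear output, find the input producing it.
inverse_lut = np.interp(codes, panel_response, codes)

# Sending the LUT-corrected values through the panel gives back (almost) a line.
linearised = inverse_lut ** 2 * (3 - 2 * inverse_lut)
print(np.max(np.abs(linearised - codes)))            # small residual error
```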

You can take the camera sensor out of the equation by using test patterns, CG generated images, or footage that you know is high quality.
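For instance, a synthetic ramp takes the sensor out of the loop entirely; a rough sketch, assuming NumPy and Pillow and a 1920x1080 horizontal luminance gradient as the test case:

```python
import numpy as np
from PIL import Image

# A synthetic horizontal luminance ramp: any banding you see in this frame was
# introduced downstream (NLE, codec or display), not by a camera.
width, height = 1920, 1080
ramp = np.linspace(0, 65535, width).astype(np.uint16)   # high-precision ramp
frame = np.tile(ramp, (height, 1))

Image.fromarray(frame).save("gradient_16bit.png")                         # clean source
Image.fromarray((frame >> 8).astype(np.uint8)).save("gradient_8bit.png")  # 8-bit copy
```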

---

Plasmas have a transfer function that is mostly linear, so they tend to have problems with shadows. You might find that shadows are grainier/noisier than they should be.
farss wrote on 8/18/2010, 3:24 PM
"Some compression schemes may benefit from recording at a lower bit depth? "

More to the point, many compression schemes don't support 10 bit, e.g. MPEG-2.
I think this is what has fuelled the debate that got Serena interested in this question. There are two HD-SDI recorders available:

1) Convergent Design's nanoFlash, which records MPEG-2 at various bitrates and with a choice of long GOP or I-frame only. A small box at a quite cheap price.

2) The Cinedeck, which uses the CineForm codec and supports HD-SDI and 3G-SDI. That box includes a 7" monitor and is about as expensive as an XDCAM EX1/3.

This is hardly an apples to apples comparison.

"You can take the camera sensor out of the equation by using test patterns, CG generated images, or footage that you know is high quality."

I've tried to do this myself in AE but Marcie has gone MIA :(
Based on Adam Wilt's tutorial on handling highlights from Cineon files in AE, clearly 10 bit blows 8 bit out of the water. Again, though, this is not an apples-to-apples comparison. The Cineon log curve puts reference white at 685, so there's a large part of the dynamic range available for highlights.
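Quick arithmetic on that headroom point, using the 10-bit Cineon figures:

```python
# Quick arithmetic on the Cineon headroom point: with reference white at
# code 685 on a 10-bit scale, a large slice of the code range sits above white.
ref_white, max_code = 685, 1023
headroom_codes = max_code - ref_white
print(headroom_codes, f"{headroom_codes / (max_code + 1):.0%}")   # 338 codes, ~33%
```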

The only apples-to-apples comparison I can think of that is accessible to Vegas users is comparing the 10-bit and 8-bit variants of the Sony YUV codec. As we've discussed previously, one issue is that Vegas simply truncates 10-bit values to 8-bit; there's no dithering, so when working with CGI you can get color banding issues.
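A rough illustration of why that matters (not Vegas' actual code path), comparing straight truncation with adding a little dither noise before rounding:

```python
import numpy as np

# Illustration only (not Vegas' actual code path): straight truncation of
# 10-bit values to 8-bit versus adding a little dither noise before rounding.
rng = np.random.default_rng(0)
ramp10 = np.linspace(200, 210, 1920 * 8)            # a slow 10-bit gradient

truncated = (ramp10 / 4).astype(np.uint8)           # plain truncation: hard steps
dithered = np.clip(np.round(ramp10 / 4 + rng.uniform(-0.5, 0.5, ramp10.size)),
                   0, 255).astype(np.uint8)         # noise breaks up the steps

print("truncated codes:", np.unique(truncated))     # e.g. [50 51 52]
print("dithered codes: ", np.unique(dithered))      # the hard steps get smeared out
```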

Bob.
Serena wrote on 8/18/2010, 8:05 PM
It's a bit easier to say things about the workings of cameras for astrophotography, because only the imaging and ADC are in the camera and all subsequent processing happens in separate software packages. Many prosumer astro cameras use Sony and Kodak video chips that are normally used in 8-bit cameras, but here the sensors are cooled (Peltier) to 30-40 °C below ambient (to largely eliminate thermal noise) and the output is read out at 16 bits. One that I've just bought employs a Kodak KAF-8300M "full frame" chip (2/3 inch) which has a well depth of 25,000 e-. Other chips (looking at Sony) have greater well depth; the Sony ICX405AK (1/3 inch) has a well depth of 60,000 e-. Cooled, these have noise levels < 15 e- and many < 10 e-. This tells us that the CCDs we are typically using are not technically limited to 8-bit latitude, and that their practical capabilities are determined by how they are employed in our cameras. While this doesn't clarify anything relating to this thread, it may be interesting.
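A rough sanity check on those numbers, pairing the quoted full-well depths with assumed read-noise figures in the 10-15 e- range mentioned above:

```python
import math

# Rough dynamic-range arithmetic, pairing the quoted full-well depths with
# read-noise figures in the 10-15 e- range mentioned above (assumed pairing).
for name, full_well, read_noise in [("KAF-8300M", 25_000, 10),
                                    ("ICX405AK", 60_000, 15)]:
    dyn_range = full_well / read_noise
    print(f"{name}: {dyn_range:,.0f}:1  (~{math.log2(dyn_range):.1f} bits)")
# ~11.3 and ~12 bits respectively: well beyond 8 bits, which is why these chips
# are read out at 16 bits even though the same silicon ends up in 8-bit cameras.
```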