Comments

farss wrote on 5/18/2005, 4:34 PM
Nothing truly startling in there. 10bit is better than 8bit (duh), BetaSP is pretty much as good as DV25 but if you factor in generational loss and things like tape dropouts...
Avids make pretty-looking video by way of chroma smoothing, but you can suffer generational loss in the process. SDI is also more lossy than 1394 for capture, again well known; 1394 is superior to SDI for DV25 capture because it's a bit-for-bit copy. The only way to fly if you need SDI is SDTI, but I hear that causes big problems with routers and switches.

The only thing that bugs me about so many of these articles, and much of this game in general, is that it's all focussed on NTSC.
Bob.
Coursedesign wrote on 5/18/2005, 5:20 PM
I totally sympathize with you re PAL articles. It is clear that the world needs more PAL article writers! (hint-hint...)

As for the non-startling contents, that's a given. I suggested this article here only because there's been a lot of religious talk about this vs. that, with just dry theory or "say-so" to back it up in the forum.

Here it was all in black-and-white, so to speak :O).
GlennChan wrote on 5/18/2005, 6:03 PM
In practical use, 10bit may not be all that much better than 8bit. The reason I say that is that there's noise in the CCDs which seems to affect all luminance values. On the footage I have, this noise is a lot greater than the rounding error. I don't have footage from high-end cameras, though.

Also... the rounding error from 8-bit numbers is hard to perceive on a monitor (although you can see it if you're looking for it and have a little practice).

2- In real-world situations you'd see the color information in color, and your eye doesn't have as good color resolution as it has luminance resolution. You can test this yourself by looking at the images on the website... there's a good reason Graeme mapped the saturation values to black and white!

Yes, 10bit 4:2:2 is better than 8bit 4:1:1 (smoothed or whatever), but except for specific situations I would say the two are about equal in practical quality, as perceived by the average television viewer: someone who doesn't look for banding, has a consumer TV, sits at a normal viewing distance, and watches a signal that's come off a video server compressed with MPEG2, rather than footage magnified 300%, etc.

The special situations would probably be secondary color correction and chroma key, where 4:2:2 would give a real edge. In both these cases chroma information can affect luminance information, and the lower color resolution can be made visible.
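For what it's worth, a rough back-of-the-envelope sketch of that noise-versus-rounding comparison (the 0.5%-of-full-scale noise figure is just an assumption, not measured from any real camera):

    # Compare quantization error at 8 and 10 bits against an assumed noise level.
    import numpy as np

    ramp = np.linspace(0.0, 1.0, 100_000)              # ideal luminance, 0..1
    noise = np.random.normal(0.0, 0.005, ramp.size)    # assumed sensor noise, ~0.5% of full scale
    noisy = np.clip(ramp + noise, 0.0, 1.0)

    def quantize(x, bits):
        levels = 2 ** bits - 1
        return np.round(x * levels) / levels

    for bits in (8, 10):
        rounding_rms = np.sqrt(np.mean((quantize(noisy, bits) - noisy) ** 2))
        print(f"{bits}-bit rounding RMS: {rounding_rms:.5f}   noise RMS: {noise.std():.5f}")

Even at 8 bits the rounding error sits well below that much noise, which is Glenn's point: the noise floor masks the extra precision.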
Coursedesign wrote on 5/18/2005, 6:30 PM
The noise from 2/3" CCDs is a heck of a lot lower than what you find in typical 1/3" CCDs. As much as 15 dB lower, which is quite a bit.

There are a few more specific situations where 10-bit 4:2:2 makes a difference: effects and transitions, uprezzing to HD, subjects with high color saturation, subjects with a high luminance range (that can then be compressed in post rather than seeing the highlights just blown out).

It's true that a lot of consumers couldn't tell the difference because of cheap "motel TVs", "who cares", etc. There will always be a lowest common denominator...

It is also true that DV 4:1:1, when not exposed to its limitations, looks very good in broadcast.
GlennChan wrote on 5/19/2005, 1:19 AM
Wouldn't 10-bit be of benefit to low contrast images, because you are expanding out the numbers?

Even in that case, I don't think 10-bit vs. 8-bit would matter that much when you are dealing with real-world images. Perhaps someone could do an experiment with a 10-bit image captured on an overcast day, and apply a gamma/color curves adjustment to the footage (which would 'expand' it out).
farss wrote on 5/19/2005, 2:59 AM
I'd agree that the average viewer isn't going to see much difference between an 8-bit image and a 10-bit one, certainly not on the average TV in a less-than-dark room. Where you do get an advantage is when you acquire the image at 10bit 4:2:2: you have more data to work with. If you want to bring up the detail in the shadows you can; the data is there to bring up. Yes, you're still going to end up having to fit it into 8 bits, but you've got a bigger range to work from when you decide what goes into those 8 bits.
10-bit video is probably of more value in high-contrast scenes: because you've got a wider dynamic range, you've got a better chance of keeping detail in the highlights and the lowlights, and you can decide how you want it to look when you have to fit it into the 8-bit space.
You can see the same thing at work with audio: most people are quite happy with the 16-bit audio on CDs, but a lot of work goes into getting that good sound, and it more often than not starts with recording at 24-bit or even higher resolution.
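A rough sketch of that audio analogy (the tone level and the 36 dB boost are arbitrary assumptions): quantize a quiet recording at 16-bit and at 24-bit, pull it up to full scale, and see how much quantization noise comes up with it.

    import numpy as np

    t = np.linspace(0.0, 1.0, 48000, endpoint=False)
    quiet = 10 ** (-36 / 20) * np.sin(2 * np.pi * 440 * t)   # a tone recorded at -36 dBFS

    def quantize(x, bits):
        scale = 2 ** (bits - 1) - 1                          # e.g. 32767 for 16-bit
        return np.round(x * scale) / scale

    gain = 10 ** (36 / 20)                                   # bring it back up to full scale
    for bits in (16, 24):
        error = (quantize(quiet, bits) - quiet) * gain       # quantization noise, boosted too
        snr_db = 20 * np.log10(1.0 / (error.std() + 1e-20))  # peak signal is ~1.0 after the boost
        print(f"{bits}-bit original, boosted 36 dB: roughly {snr_db:.0f} dB signal-to-noise")

The 24-bit original keeps roughly 48 dB more headroom under the quantization noise, which is the same kind of margin extra video bits give you before banding shows.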
Bob.
JohnSchell wrote on 5/19/2005, 6:57 AM
Bob,
Just a question...
You say that DV25 over firewire is better than editing the uncompressed DV25 captured over SDI. Wouldn't it be better to edit the uncompressed footage, to avoid the compression losses from renders (with DV you decompress the footage, apply the render, and recompress, versus uncompressed, where you just apply the render)?
I believe SDTI was designed for two reasons: to allow DV to travel 300 meters, and to transfer multiple DV streams at once or a single DV stream at faster than real time.
Regards,
John
rmack350 wrote on 5/19/2005, 9:51 AM
Just to get some concepts down...

--DV25 over firewire is a direct bit-for-bit transfer
--DV25 over SDI involves a transcode. It is no longer DV25 after the transcode
--Outputting your program over firewire to a DV25 to SDI converter involves rendering down to DV25 and then converting to SDI. You could do better if you could avoid that downconversion.

DV to SDI can look very good, almost exactly like the original, but it's not DV25 any more and in Vegas you're opening yourself up to more transcodes.

--Encoding to DV25 involves rendering as 4:1:1 or 4:1:0. Fairly obviously, this is not the best way to handle stills, text, graphic shapes, or even transitions and FX, because these things could all benefit from being rendered with better color sampling.

--For Vegas, SDI comes into play when you want to capture non-DV25 source footage and when you want to output a render to a tape medium where SDI would matter. What you get by outputting SDI is more color sampling and, I assume, the possibility of 10-bit color output.

--To input/output SDI a Vegas user will need a card that can do it and a codec that both Vegas and the card can access. Currently Blackmagic cards are the way to go. You can render to the codec (which is free and downloadable) without even owning a BMD card.

--10-bit color? For Vegas users the issue is moot because Vegas does everything in 8-bit. However, an analog TV can display the difference between 8 and 10 bit because it's just dealing in a continuous range of voltages. If your D/A converter is taking a 10-bit file and outputting 1024 voltage values instead of 256, the TV can handle it because it deals in a continuous range from 0V to 1V (see the sketch after this list). Of course, that analog TV may introduce other problems. I'm not saying it's ideal, just pointing out that you ought to be able to see a difference.
--Can you see the difference? I've certainly noticed banding in gradated colors on 8-bit renders. 10-bit should look cleaner. There are instances where a 10-bit render will be desirable, and if you can find a way to do it then it would pay off. (For instance, the film my employers took to Sundance was originated in DV but the final was 10-bit, uprezzed to HD, and projected in HD. My understanding was that it was a 10-bit projection; it certainly looked great when I saw it at the festival.)
--You really need to manage how you work if the goal is to print your project to a better medium than the one you started with. If your destination is 4:2:2 then you don't want to render elements as 4:1:1 as you work. Never render back down to DV25 unless that's the destination, and even then don't do it until you're ready to print to tape.
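As a footnote to the 10-bit D/A point above, a trivial sketch (assuming an idealized converter spanning 0 to 1 V) of the voltage step each bit depth hands the analog set:

    def dac_step_millivolts(bits, full_scale_volts=1.0):
        # smallest voltage change an ideal D/A can make at this bit depth
        return full_scale_volts / (2 ** bits - 1) * 1000.0

    print("8-bit step :", round(dac_step_millivolts(8), 2), "mV")    # about 3.92 mV
    print("10-bit step:", round(dac_step_millivolts(10), 2), "mV")   # about 0.98 mV

The analog display doesn't care which grid the voltages came from, so the finer 10-bit steps pass straight through.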

Rob Mack
farss wrote on 5/19/2005, 2:24 PM
Rob,
your first section pretty much answers John's question. Feeding DV25 down SDI (or component) involves transcoding, and that means loss. As was pointed out in an article referenced elsewhere, if the result is captured to 4:2:2 it may look better due to smoothing, but the process is lossy; keep feeding the video through the same process and it'll suffer generational loss.
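A toy model of that generational loss, just to make the mechanism concrete: assume each pass smooths the chroma with a small low-pass filter and drops it back onto the 8-bit grid (real codecs do more than this; the point is only that the damage accumulates). Watch a hard color edge get progressively softer:

    import numpy as np

    chroma = np.where(np.arange(720) < 360, 0.25, 0.75)   # a hard color edge across one scan line
    original = chroma.copy()
    kernel = np.array([0.25, 0.5, 0.25])                  # stand-in chroma smoothing filter

    for generation in range(1, 6):
        chroma = np.convolve(chroma, kernel, mode="same") # the "smoothing" pass
        chroma = np.round(chroma * 255) / 255             # back onto the 8-bit grid
        drift = np.abs(chroma - original).mean()
        print(f"generation {generation}: mean drift from the original = {drift:.4f}")

Each trip through the chain moves the picture a little further from the original, which is all generational loss is.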

As to your second point, about 10-bit being moot to Vegas users, I'm not so certain you're right. As I understand it, yes, Vegas will only output 8-bit video; however, it does read all 10 bits of captured 10-bit 4:2:2, does its calcs in floating point, and then converts the result to 8-bit. Using the Sony 4:2:2 codec or uncompressed as the output, you should be able to get a better-quality result when starting from 10-bit 4:2:2 than from 8-bit 4:2:2 when doing things like color correction.
Bob.
rmack350 wrote on 5/19/2005, 3:45 PM
Bob,

That's interesting to know. I have the BMD codecs on the machine in front of me, but I'm mid-capture. I'm curious whether I can render out to 10-bit...

Since everything I've got is 8-bit, I'd have to think a bit about where a 10-bit render out of Vegas would show a difference. Possibly in dissolves and fades, if you could have 1024 steps in a fade to black. In dissolves and transparencies, yes, because those would involve new color calculations...

I think the point where 10bit would make a difference is when you are rendering and outputting to a 10bit medium. If you are capturing from a 10bit source and then downconverting to 8-bit, it seems like you'd be losing the supposed advantage.

I'm really not sure where a 10-bit output comes in useful. I think that the HD projection at Sundance was from 10-bit media, and I'd assume that if you were going to do an HD-to-film transfer then you'd want to be using 10-bit media. So, if you want 10-bit output, Vegas would need to render 10-bit.

Or, you port the timeline to a program that can do the 10bit render. Hmmm...sounds like one of those high-end add ons that Sony doesn't sell. Just needs to be a Vegas 10bit render engine product.

(Oops! Batch Capture is done and now vidcap has locked up at the point of "logging" the clips. Better go read the forum and see if anyone found a solution to this...)

Rob Mack
farss wrote on 5/20/2005, 7:15 AM
Rob,
as far as I know, Vegas will only render out 8-bit, period. Even going out to a codec that's more than 8 bits, it only writes 8-bit values, unless no FX are applied, in which case it just copies whatever values are in the input to the output. That last bit I don't quite follow, how or why it happens, but that's what I'd heard.

However, only being able to render out 8 bits from 10-bit video isn't entirely a waste. I'm making a few assumptions here and oversimplifying, partly because I don't fully understand precisely what the 8 / 10 bits refer to, and maybe I'm way off track, but here goes.

With 8 bits there are only 256 possible values; with 10 bits it's 1024. So in the camera, the resolution of the digital values that represent the analogue values the CCDs generate from the light falling on them is four times greater with 10 bits than with 8. We've got more dynamic range to work with.
Now OK, those 10 bits have to get downscaled into 8-bit values, but let's say I want to bring up more detail in the shadows by putting a bump in the bottom end of the gamma curve in Vegas. Say I'm bringing the bottom 25% of the luminance up one stop, roughly multiplying all those values by 2. With 8 bits of data I've only got 64 possible values to start with; with 10 bits I've got 256. Also, you can be pretty certain the least significant bit is mostly digital noise, so two things happen: my noise level comes up and I'll start to get banding.
Of course I COULD have lit the scene to produce the same result. Also, unless the camera is using 2/3" CCDs to keep the noise down on the analogue side of things, there's not much point in using high-cost A->D converters and writing 10-bit data to tape.
The way I see it, there's more to be had from being able to at least start with 10-bit data and output 8-bit than the other way around. Much audio is recorded at 24 bits and output at 16; I guess if you were mixing a lot of tracks there's possibly some advantage in starting at 16-bit and rendering out to 24-bit, but not much.
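Working Bob's shadow example through in code, under the same simplifications he makes (linear code values, a straight multiply by 2 on the bottom quarter of the range):

    import numpy as np

    def lift_bottom_quarter(source_bits):
        codes = np.arange(2 ** source_bits)
        x = codes / (2 ** source_bits - 1.0)           # normalize to 0..1
        shadows = x[x < 0.25] * 2.0                    # bottom 25%, up one stop
        return np.unique(np.round(shadows * 255).astype(int))

    for bits in (8, 10):
        out = lift_bottom_quarter(bits)
        print(f"{bits}-bit source: {out.size} distinct 8-bit output codes, "
              f"largest gap {np.diff(out).max()} code(s)")

The gap of two codes from the 8-bit source is the banding he's describing: every other output level is simply missing.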
Bob.

rmack350 wrote on 5/20/2005, 10:05 AM
I hear what you're saying. I'd say that once it's been converted to 8 bit, it's 8 bit.

I know that some cameras are 10bit internally and that allows you to set better curves if you're adjusting knee and pedestal etc. If the camera has to make multiple adjustments in a 10bit space then that should pay off.

If it were 8-bit in the camera, and the logic had to choose between two different values in that coarser 8-bit range, you'd start compounding rounding errors faster. So I can see how it'd be better to do all the in-camera stuff in a finer 10-bit range before finally converting to 8-bit output.
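A toy version of that compounding, with a made-up chain of small gain tweaks standing in for knee/pedestal/gamma adjustments, rounded after every step at the working bit depth and then rounded once more for the 8-bit output:

    import numpy as np

    gains = [0.90, 1.05, 0.95, 1.08, 0.99]            # arbitrary adjustment chain
    x = np.linspace(0.0, 1.0, 1001)                   # ideal pixel values
    ideal_out = np.round(x * np.prod(gains) * 255) / 255

    def process(values, working_bits):
        levels = 2 ** working_bits - 1
        y = values.copy()
        for g in gains:
            y = np.round(y * g * levels) / levels     # round after every adjustment
        return np.round(y * 255) / 255                # final 8-bit output

    for bits in (8, 10):
        worst = np.abs(process(x, bits) - ideal_out).max() * 255
        print(f"working at {bits} bits: worst-case error about {worst:.1f} output code(s)")

Working at 8 bits the roundings pile up to a couple of output codes; at 10 bits they mostly stay within one.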

Once you get into Vegas, 10-bit just isn't part of the equation. But, just for hypothetical chuckles, if Vegas could do a fade to black in 10-bit space, then that fade from white to black could have 1024 steps instead of 256. As long as you can stay 10-bit from there on out, then it's a good thing.
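For the hypothetical fade, a quick count of how many distinct levels a slow fade from a mid-grey actually passes through at each precision (the frame count and grey value are arbitrary):

    import numpy as np

    fade = np.linspace(1.0, 0.0, 300)       # a slow 300-frame fade multiplier
    grey = 0.6                              # starting pixel value, 0..1

    for bits in (8, 10):
        levels = 2 ** bits - 1
        codes = np.round(grey * fade * levels).astype(int)
        print(f"{bits}-bit: {np.unique(codes).size} distinct levels over the fade")

The 8-bit fade has to repeat code values from frame to frame; the 10-bit fade gets a fresh level every frame.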

But it's all hypothetical. Vegas is 8 bit, as are most NLEs. It's just not a real handicap for almost all users. Just kind of fun to ponder, as far as I'm concerned.

Rob Mack
GlennChan wrote on 5/20/2005, 1:10 PM
It would be nice if Vegas could render in 32-bit or 16-bit float, to avoid rounding error and clipping.
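A tiny sketch of why float would help, with arbitrary gains: a value pushed over the 8-bit ceiling by one operation survives to be pulled back by the next if the math stays in float until the end.

    import numpy as np

    def to_8bit(x):
        return np.clip(np.round(x * 255), 0, 255) / 255

    pixel = np.array([0.9])                           # a bright pixel on a 0..1 scale

    eight_bit = to_8bit(to_8bit(pixel * 2.0) * 0.5)   # round and clip between the two steps
    floating = to_8bit(pixel * 2.0 * 0.5)             # keep float until the very end

    print("8-bit intermediate:", eight_bit)   # highlight clipped, comes back as ~0.50
    print("float intermediate:", floating)    # the original ~0.90 survives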
farss wrote on 5/20/2005, 3:35 PM
I'd be quite happy if it'd read and write 14-bit Cineon files, as no doubt would my local disk supplier!
This isn't just pie-in-the-sky stuff for me, though. I have a potential project that involves scanning 35mm prints, adding in additional trailers etc. sourced from video, and then outputting the whole show in some format for digital projection in cinemas.
I'd hoped I could do this in Vegas, but so far it's not looking too promising.
Bob.
rmack350 wrote on 5/20/2005, 3:56 PM
Well, we get more than we pay for but not that much more.

Rob Mack