32 bit stretches HDV levels

fausseplanete wrote on 11/30/2007, 12:55 AM
With the waveform monitor (WFM) on view (under 'scopes), and a project containing both HDV (m2t) and SD (DV) media, no FX but some pan & crop, I tried playing with the 8-bit/32-bit project setting. For the SD media there was very little change on the WFM between 8-bit and 32-bit (as you'd expect), with levels ranging roughly from 16 to 255, as is normal for many camcorders. For the HDV, however, selecting 32-bit stretched the levels, e.g. to start from 0 instead of 16, and consequently the preview looked more contrasty compared (of course) with the 8-bit-processed HDV. Presumably a bug.

Comments

Bill Ravens wrote on 11/30/2007, 6:09 AM
Not a bug. If you want studio RGB (16-235), you need to apply a Levels FX filter. Use the "computer RGB to studio RGB" preset.
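For what it's worth, here is a rough sketch of what that preset amounts to numerically, assuming a simple linear remap of 8-bit values (the function name and exact rounding are mine, not Vegas's):

# A minimal sketch of a "computer RGB to studio RGB" remap, assuming a
# plain linear scaling; Vegas's preset may round slightly differently.

def computer_to_studio(v):
    """Map an 8-bit computer-RGB value (0-255) into the studio range (16-235)."""
    return 16 + v * (235 - 16) / 255.0

for v in (0, 128, 255):
    print(v, "->", round(computer_to_studio(v)))   # 0 -> 16, 128 -> 126, 255 -> 235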
fausseplanete wrote on 11/30/2007, 10:08 AM
...but why only for the HDV footage, not the SD (which remains 16+ under 32-bit processing)? Surely the point of 32-bit is to do the same job as 8-bit but with greater accuracy. Implicitly altering the range is not what I call greater accuracy, and doing so for only one format and not the other is an inconsistency.
GlennChan wrote on 11/30/2007, 11:56 AM
1- I do agree there is an inconsistency between the HDV and DV codecs. IMO this is a little confusing.

2- In many cases, it is desirable to have black level at 0 RGB (or 0.0f in floating point); you need this to get proper image processing. Although, strictly, how Vegas decodes levels doesn't really matter as long as the filter knows the incoming levels and can convert them to whatever is appropriate.

But the way Vegas is designed, the filters don't know what the incoming levels are (and in most cases don't let the user specify them via the filter/FX controls). I am guessing that most filters assume they are being fed black level at 0... and Sony is moving codecs over to decoding black level at 0 instead of 16 RGB (or the float equivalent, which might be 16/255).

3- Most VfW codecs want to see 8-bit RGB levels with black at 0 and white at 255 (i.e. computer RGB levels). So I suppose that is an advantage of having codecs decode to computer RGB.

4- The reason Vegas used to decode to studio RGB levels is to retain superblack and superwhite information. Video formats do allow for information below black and above white.

In an 8-bit pipeline, decoding to computer RGB levels will clip off these superblack and superwhite values.

In a 32-bit float pipeline, floating-point numbers can represent negative values and a very large range, so there is no clipping of superblack and superwhite values.
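As a quick illustrative sketch of the difference (my own numbers, assuming the usual (v - 16) * 255 / 219 studio-to-computer expansion, not anything lifted from Vegas internals):

# Illustration only: 8-bit integers have to clamp, 32-bit floats keep headroom.

def expand_8bit(v):
    out = round((v - 16) * 255 / 219)      # studio RGB -> computer RGB
    return max(0, min(255, out))           # uint8 has nowhere to put out-of-range data

def expand_float(v):
    return (v - 16.0) / 219.0              # black -> 0.0, white -> 1.0, no clamping

for v in (4, 16, 235, 250):                # superblack, black, white, superwhite
    print(v, expand_8bit(v), round(expand_float(v), 3))
# 4 -> 0 (clipped) vs -0.055 (kept); 250 -> 255 (clipped) vs 1.068 (kept)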
GlennChan wrote on 11/30/2007, 11:57 AM
A table of what codecs do can be found on my website:
http://glennchan.info/articles/vegas/v8color/v8color.htm
farss wrote on 11/30/2007, 11:59 AM
HDV should decode to the range you're seeing, I believe; what's technically wrong is how Vegas decodes it in 8-bit mode.

SD DV is specced to decode to 16-235.

Bob.
GlennChan wrote on 11/30/2007, 12:20 PM
Bob... I don't think it's wrong.

For production purposes, if you can get the right Y'CbCr levels in the end then that is correct. (Assuming you're delivering a master in a video format that stores values as Y'CbCr, e.g. any format that ingests SDI, or DVD.)

For viewing by the end user, Y'CbCr levels should map from 16-235 Y' to 0-255 R'G'B'. For previewing during production this is the case too. Vegas is capable of doing this, but the default behaviour doesn't always do this. It depends on what your preview method is, what your settings are, AND/or what codecs you are using. All three of those things. And unfortunately Vegas doesn't handle this automagically for you, it's up to the user to get it right.
http://glennchan.info/articles/vegas/colorspaces/colorspaces.html
http://glennchan.info/articles/vegas/v8color/v8color.htm

fausseplanete wrote on 12/1/2007, 1:16 AM
Automagic I can wait for, but consistency I expect.

With the current "feature", if there's HDV present then one can't just flip from 8-bit (e.g. draft) to 32-bit (e.g. final) and get better quality; it's necessary to readjust the levels of all the HDV media (only) to get back to what was obtained under 8-bit.

...Then you never know, you might have to go back to "drafting" again for a final final, etc. Certainly a practical nuisance, regardless of the theory. I wouldn't want to use 32-bit in the first instance as it's four times slower to render (as one would guess, but I also tested it).

In the Vegas context, the reference DV codec behaviour is surely that of the one built into Vegas, which has worked that way ("right" or "wrong") for some time. Changing it now would upset a large number of users by losing backwards compatibility. I can't speak for the wider industry, but in the Vegas world we should be looking for consistency with this one.

In any case, it is convenient that the Sony DV codec preserves the full range and resolution of levels recorded by most digital camcorders, allowing the user to shrink these to legal range by their own preferred method (clamp, S-curve, whatever suits the particular scene), or even to expand to full range for web streaming to an audience predominantly on standard PC screens. The principle is one of allowing user choice rather than imposing both an unnatural restriction ("nature" here consisting of both analog and digital equipment) and a single method of mapping to it, which in the case of HDV under 32-bit appears to be linear + clipping (= lossy!).
farss wrote on 12/1/2007, 4:40 AM
I appreciate your confusion, and certainly one would expect that more precision should give greater accuracy, not a very different set of results. At the same time, though, Glenn is right: there is no right or wrong here, although I think SCS should have done a bit more explaining.

However, HDV with 32-bit processing doesn't clip; it's just using the full range of values in the pipeline, which should be better. They're the same range of values that images from a DSC use, and I've yet to have those clip so long as I deal with them correctly to fit the final output. I don't think the intention was ever that you'd switch between 8-bit and 32-bit; you choose one and stick to it. Technically, if you wanted to switch like that, I think you'd need monitors that store preset calibration setups, or you could get very messed up.

Bob.
Bill Ravens wrote on 12/1/2007, 5:00 AM
It's a complicated business. And if creative choice is to be maintained, the complications get passed on to the artist. That's what separates a dabbler from an artist (I don't want to get into the amateur vs. professional argument). We can all say our daily thank-yous to the wise folks who establish the specifications (namely the ITU) for their straightforward simplicity.
GlennChan wrote on 12/1/2007, 2:14 PM
Automagic I can wait for, but consistency I expect.
I would agree.

And if creative choice is to be maintained, the complications get passed on to the artist.
In other applications, these kinds of levels conversions are handled for you. You also have the creative choice to change the levels if you want. Though only AE supports linear light compositing.

Changing it now would upset a large number of users by losing backwards compatibility.
In 8-bit projects, the behaviour is exactly the same as in Vegas 7 and before (back to whenever the DV codec changed).

32-bit mode is confusing, though, because normally you would assume that the only thing that changes is the precision used to render; you wouldn't expect the behaviour of some codecs to change while the behaviour of other codecs stays the same. That is confusing, in my opinion.

which in the case of HDV under 32 bit appears to be linear+clipping (=lossy!).
In 32-bit, the negative values do not get clipped. You can use the Levels filter (output start and end) to bring values back into range.
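To make that concrete, here is a rough sketch of the output start/end remap, assuming the usual studio-RGB targets of 16/255 and 235/255 for the two controls (the function and sample values are mine, for illustration only):

# Sketch of a Levels "output start / output end" remap pulling 32-bit float
# values back into studio range; the exact preset numbers in Vegas may differ.

OUT_START, OUT_END = 16 / 255, 235 / 255

def levels_output(v):
    return OUT_START + v * (OUT_END - OUT_START)

for v in (-0.055, 0.0, 1.0, 1.068):        # includes decoded superblack/superwhite
    print(round(v, 3), round(levels_output(v) * 255))
# -0.055 -> ~4 and 1.068 -> ~250: values that sat outside 0..1 in the float
# pipeline land back inside 0-255 instead of being clipped.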
fausseplanete wrote on 12/3/2007, 1:42 PM
Below I explain a way to demonstrate the clipping for yourself, and also a further inconsistency involving HD levels.

Bob/Farss, you say "HDV with 32 bit processing doesn't clip", but I just ran a full 0..255 (approx) ramp benchmark in an m2t file and viewed the result on a waveform monitor (with neither of the check-boxes checked): in 8-bit I get the full diagonal, corner to corner, while in 32-bit it looks like " _/ " (I wish there were an overscore character to put on the end of that!). It sure seems like clipping to me. I have successfully used this benchmark to unravel broadly similar problems in other software in the past. Its production is described below, and it has to be performed using V7, not V8 (for reasons described further below).

Just to explain the "approx": my benchmark was generated in Vegas as full dark to full white (as "illegal" as it gets!), though I understand from Glenn that some 8-bit storage formats only represent 1..254, the 0 and 255 values being reserved for synchronization.

Try it at home yourself, folks! The full-range ramp benchmark m2t file was generated from Vegas 7 (don't use V8 for this - explained further below) by first inserting the Sony Test Pattern "Ramp" as an event, then using pan/crop to zoom in on it and crop out the black frame surrounding the ramp. Ensure the waveform monitor (WFM) is visible and has neither of its check boxes checked (in its Properties). Then apply the Sony Levels FX to that event and set the input minimum to about 0.093 and the input maximum to about 0.9, watching the WFM at the same time to ensure that a diagonal line results, running exactly from corner (0%) to corner (100%). The precise Levels FX values required depend on how well you positioned the pan/crop. Then render it out to an m2t file and also to a DV file. Finally, in Vegas 8, import the m2t file as a new event, put the timeline cursor over it and view it in the WFM. Compare what the WFM shows under 8-bit and under 32-bit. Repeat for the DV file.
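For anyone who prefers the numbers to the knobs, here is a rough sketch of what that Levels step is doing (the 0.093/0.9 figures are just the approximate values quoted above; yours will differ with the pan/crop, which is why you trim them while watching the WFM):

# Sketch of the Levels "input start / input end" stretch used in the recipe.

IN_MIN, IN_MAX = 0.093, 0.9                  # approximate; adjust to taste on the WFM

def levels_input(v):
    out = (v - IN_MIN) / (IN_MAX - IN_MIN)
    return min(1.0, max(0.0, out))

for v in (0.093, 0.5, 0.9):                  # ramp values after the pan/crop
    print(v, round(levels_input(v), 3))
# 0.093 -> 0.0 and 0.9 -> 1.0: the cropped ramp now runs exactly corner to
# corner (0% to 100%) on the WFM before being rendered to m2t and DV.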

Guess what: I just now tried generating the benchmark the same way in V8, but as soon as I set the Project Properties to HD (1080-50i), the clipping occurred. I hadn't even rendered the file yet! I repeated it in V7 just to be sure I wasn't imagining things. No, it's true: in V7, flipping the Project Properties between HD and DV makes no difference to the diagonal on the WFM, but in V8 the diagonal is only fully preserved under DV project settings and is clipped under HD project settings. Another inconsistency, it seems. Please, Sony, can it go back to how it was in V7?

I do like a dabble!
farss wrote on 12/3/2007, 2:37 PM
Well, I sure like doing tests, but I think your methodology is flawed. Your generated test pattern is not what HDV cameras record; their luma values go from 16 to around 245.
I've certainly checked what's happening to my HDV video in 32-bit and it's not being clipped. To do these tests correctly, though, you need to shoot test charts first, then process the footage and check the results.
I'd also suggest that using the Levels FX to adjust your levels might be a source of other unknowns: does that FX even work correctly in 32-bit? Regardless, using generated media and the internal scopes when you don't know what they're doing or really measuring can lead you up the garden path; that's why you need to shoot charts.

Bob.
GlennChan wrote on 12/4/2007, 9:20 AM
1- Bob: Units!!! :D Please use units so that everyone is clear what you are talking about. It's like saying the temperature is 20 without saying Celsius or Fahrenheit.

There is a difference between 8-bit Y'CbCr codes and (8-bit) R'G'B' codes. Many problems occur when converting between the two, so we need to be clear which we're talking about.

2- Fausse: If you have a 0-255 ramp and render that to a codec expecting studio RGB levels, the codec may encode 0 0 0 RGB as below black and 255 255 255 RGB as above white. You can then create a situation in a 32-bit project where these values get decoded as below black and above white.

There is actually no clipping going on in the 32-bit project. You can use levels to bring these values back into range. See my article.

You *can* get clipping in the sense that if levels conversions are done incorrectly, you get clipping; you need to do these conversions manually. This could be fixed... and you might want that feature, because other NLEs tend to get it right.

3- Levels: it should work correctly in 32-bit. Actually, I think it fixes some of the bugs that were in the old 8-bit implementation: rounding/truncation errors, and output start having a funky behaviour.

4- The internal scopes are confusing, since you have to set them up correctly and interpret them correctly. You can set them up and interpret them incorrectly.

They do not behave like hardware scopes (i.e. real scopes). Few of the scopes built into NLEs do.
farss wrote on 12/4/2007, 1:18 PM
Units Glenn?

What units?

Volts, nits, pascals? These I understand; I worked in a standards lab, so I know a fair bit about units, standards and traceability. If I quoted a value with units, I was required to state the make, model, serial number and last calibration date of the instrument used to make the measurement.
As I understand it, Vegas uses a system where three 8-bit binary numbers store the RGB values of each pixel. Those values are numbers, and numbers don't have dimensions. Realistically, the only instruments Vegas provides that can be relied upon to tell us anything are the ones that display the values of those three binary numbers. The luma values displayed in the waveform monitor are derived, dimensionless values, and we don't know for certain how they're derived; SCS don't state the functions used to derive what that display is telling us. We can hazard a guess, or do some tests, or even state what they should be, but we don't know for certain.
As I see it, this is the real problem: we're groping around in the dark. We thought we knew how things worked, and now this new 32-bit pipeline has been thrown into the mix and we're even less certain how things work.
And yes, your statement about scopes being possibly inaccurate is indeed correct. The only time they can be accurate is when measuring an analogue signal; if they're measuring values on SDI, really all they can tell us is the value of the binary numbers on that feed. They can derive values based on agreed standards for what those numbers represent, but if we shift that back to Vegas and how it works, again we seem left to grope around in the dark.

I've noticed similar problems with metering on the audio side of Vegas. Since we were given those iZotope plugs, I can pretty easily get the meters to tell me my output is clipping, and yet I render the output, look at the sample values in SF, and no, no clipping. Again we've got the problem of a digital simulation of an analogue device trying to display digital values; at least SF does state which standard the meters are hopefully working to. Of course, life is much easier on the audio side: the 1s and 0s pretty much stay as-is, whether I output to or input from DAT or CD, or simply email someone a file.

Bob.
Bill Ravens wrote on 12/4/2007, 3:50 PM
Numbers that denote color information can be a LOT of different things, to wit:
1- IRE, 0-100
2- RGB, 0-255
3- R'G'B'
4- Computer or NTSC (studio) RGB
5- Y'CbCr
and so on.
It's not exactly clear, and not as simple as you'd like.

Puts me in mind of an old Monty Python routine... English sparrow or Russian sparrow?
fausseplanete wrote on 12/6/2007, 12:49 AM
Farss,

You say "Your generated test pattern is not what the HDV cameras record, their luma values go from 16 to around 245."

This is a very sweeping statement - "the HDV cameras". Are you saying that all models by all manufacturers have exactly the same range? It would also be good to clarify whether this statement assumes the camera is operated up to 100% zebra stripes etc., or whether it allows for overexposure and highlights.

Prior to relying on my gradient benchmark, I did some real-world tests with a Sony Z1 HDV camera, from fully black (lens covered, fastest shutter speed, etc.) to fully white (max everything and a bright light). In fact, I checked the same footage again just now, and also repeated this for a Sony TRV33 standard-def Handycam. In Vegas 7, the raw footage (no FX), as seen on Vegas's waveform monitor (both checkmarks cleared), gave the same minimum (0%) and maximum (100%) limits as my generated gradient (which I also checked again just now). That is why I have confidence in it.

For the generated gradient benchmark, the sequence of values between these established limits appears on the WFM as linear, which is unlikely in the extreme to occur by accident as a result of opposing nonlinearities in both the Levels FX and the WFM in Vegas. More likely, the Levels FX and the scopes both preserve linear behaviour, as one would expect. Hence I believe that my test correctly and linearly covers the range between the max/min limits exhibited by my camera recordings.

Others raised concerns over whether the scope settings had been correct. I stated in my original post that both check marks had to be clear, i.e. no 16..235 and no 7.5 IRE, to give the full 0% to 100% range. Either this is correct or it is not; expressing general concerns about the possibility of confusion over settings just muddies the waters of the discussion.

You say "I've certainly checked what's happening to my HDV video in 32 bit and it's not being clipped". It's commendable to run one's own tests - take nobody's word for it - but can you describe your exact method (as I did) so that others can repeat it or find flaws in it? For example did you shoot test charts, and did these cover the full recordable range of your camera(s) i.e. not just the legal range? I imagine it would be difficult if not impossible to ensure that, say, a shoot of a stepped gradient covered the whole recordable range unless a reliable vectorscope was used while recording.

Assuming your HDV footage is m2t (not Cineform etc., which I have not tested) and has some kind of variation in levels (I doubt otherwise!), did you try flipping Vegas 8 between 8-bit and 32-bit, and if so, did it look more contrasty under 32-bit, as I found? This should also be visible as a relative change (stretching) on the Vegas WFM scope when switching Vegas from 8-bit to 32-bit mode.

It would be good to find some common ground as a reference point. Step one is to establish whether or not there is an effect (and to characterize it). Step two is to establish whether or not it is a problem (and what to do about it). Let's keep to step one for the present.

Beyond that, I appreciate the point made by others that any legal-levels footage could subsequently be fixed by a Levels FX (subject to your question over whether it works correctly in 32-bit) whenever moving between Vegas 8's 8-bit and 32-bit modes, but it's the illegal-levels footage (highlights etc.) that I am concerned about. Does it get clipped, or is it (or should it be) mapped to a range (PC or TV) that is consistent with Vegas in other respects?

You say "I'd also suggest that using the Levels FX to adjust your levels might be a source of other unknowns, does that FX even work correctly in 32 bit? ":

I stated that the benchmark file was (and had to be) produced in Vegas 7, which of course does not have a 32-bit mode.

If Vegas, a well-established professional NLE, couldn't get a simple Levels conversion at least broadly right in 8-bit, then I would be shocked. Also, many Vegas users applying Levels (and scopes) would in that case have been making professional broadcast-grade projects that risked being out of broadcast range; surely such a (hypothetical) situation would be unlikely to persist, as professional users would complain or bail out. So I think that hypothesis is unlikely.

In any case, for the purposes of my relative test - to show that one thing exhibits clipping and another does not - the "test equipment" doesn't have to be industrial-grade, just good enough to show the difference between gradients over a full 0..255 (approx) range, or even between 0..245 and 16..235 (approx). If any rounding (quantization) errors were present, they would be so much smaller than the gross effect I describe that they would be negligible; their existence would not affect the interpretation of this particular test. Can anyone confirm whether or not the Vegas WFM meets these meagre requirements? I would be really shocked (not just shocked) if it could not!
fausseplanete wrote on 12/6/2007, 1:03 AM
To bring it back to basics, can anyone repeat my simple test and find the same or a different result? It only takes five minutes. Please just comment on your experience of this single test and its results, without diversifying into broader arguments and philosophies. Do you see the clipping effect when you do this? Do you have an alternative explanation for it (in this precise test only, not generally)? Be sure to use the exact Vegas versions and scope settings etc. that I stated in the original post.

If there are any specific flaws in the method, can repeatable tests in Vegas (7 or 8) be devised (and described) to demonstrate them?

A scientific approach such as this should allow even a small boy of limited experience to see what is and what is not there, and allow others to see the same. Otherwise we run the risk of the "King's New Clothes" situation: we need to look at the actual case raised. This new branch of the thread is intended for that sole purpose. Please post your results. For any further test designs, please start a new branch, so that this one can keep to its intended tight topic.
farss wrote on 12/6/2007, 2:42 AM
Without even running a test, yes I have seen exactly the same thing.

I shot a stage performance on a Sony V1 in HDV and processed it in 32-bit in V8; yes, the levels shifted as you've seen, yes, the image had more contrast, and yes, the colors sure popped on my monitor.

But nothing broke. I downconverted to SD, applied the BC Legal FX, adjusted the smoothing to taste, and produced a DVD that I'm very happy with the look of.

As I understand it, in 32-bit Vegas is processing HDV in Rec. 709, and feeding that into my Rec. 601-calibrated monitor is sure going to look wrong. I've had much more dramatic examples of this in other applications. Your monitors and scopes have to be calibrated to match what you're working with.
Bob.

farss wrote on 12/6/2007, 4:49 AM
I'll try to keep this brief, as you've covered a lot of ground there.

HDV uses Rec. 709; SD DV uses Rec. 601. The luma coefficients are different and so is the gamut. SD DV decodes 0 IRE to 16:16:16 in computer R'G'B' and 100 IRE to 235:235:235, although most DV cameras go over; I checked this today on the composite output of a PD170 with a Leader waveform monitor, and if I read it correctly it showed peaks at 110 IRE.

As I understand it, HDV should decode 0 IRE to 0:0:0 in computer R'G'B' and 100 IRE to 255:255:255, but I can't find a copy of the spec to know for certain. There was certainly a lot of discussion about how V7 did it compared to PPro. Hopefully Glenn can chime in and set me straight if I've got that wrong. Either way, HDV and DV are two different beasts. So are stills from your DSC, for that matter.

Vegas lets you throw all of these onto the one timeline and doesn't really use project settings the way some other NLEs do. As I understand it, that has some good points and some bad points; it might be nice if everything got decoded to one spec, defined by us on the input side of Vegas.

The same problem applies to our monitors. If Vegas feeds Rec. 709 to monitors that are calibrated for SD DV, things will look wrong. Today, with so many formats becoming commonplace, it's very easy to feed monitors with images that they cannot possibly display even if you took the time to calibrate them, and when you change to a different format you need to recalibrate them. Vegas does allow you to add an FX to the monitor output; this could be one simple solution. Again, I'd have to defer to Glenn to answer just how effective a solution that could be.

Bob.
GlennChan wrote on 12/6/2007, 3:04 PM
HDV uses Rec 709, SD DV uses rec 601.
This is generally true, though I believe that HDV with Rec. 601 luma coefficients might be allowed.

The color gamut differences usually get glossed over. The standards for SD video nowadays are the EBU phosphors and the SMPTE C phosphors (depending on which system your country uses). In practice, people gloss over the differences, even in the high end. I wouldn't worry about it. (Ditto for the transfer function.)

But anyways... the difference between Rec. 601 and Rec. 709 is not the issue here.

SD DV decodes 0 IRE to 16:16:16 in computer R'G'B' and 100 IRE to 235:235:235, although most DV cameras go over; I checked this today on the composite output of a PD170 with a Leader waveform monitor, and if I read it correctly it showed peaks at 110 IRE.
1- There are two conversions taking place:
From R'G'B' to Y'CbCr, and from Y'CbCr to analog IRE.
In Vegas, the R'G'B' <--> Y'CbCr conversion depends on which codec is used and what bit depth the project is in; e.g. if you encode with the Vegas codec and decode with the Microsoft codec, your levels will get screwed up. As for 32-bit... flip between 8-bit and 32-bit and you see the levels change for HDV and MPEG-2 clips.

The conversion from Y'CbCr to analog IRE depends on what television system your country uses and whether or not your equipment follows standards. For NTSC land (except Japan), black level should go to 7.5 IRE and white level to 100 IRE (so 16 Y' [8-bit] maps to 7.5 IRE, etc.). The majority of NTSC DV equipment does not follow the standard... which leads to confusion.
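As a rough sketch of that digital-to-analog leg (my own linear approximation for NTSC with 7.5 IRE setup; Japan and PAL countries put black at 0 IRE instead):

# Sketch of mapping 8-bit Y' codes to composite IRE for NTSC-with-setup.

def y_to_ire_ntsc_setup(y):
    return 7.5 + (y - 16) * (100.0 - 7.5) / 219.0

for y in (16, 126, 235, 254):
    print(y, round(y_to_ire_ntsc_setup(y), 1))
# 16 -> 7.5 IRE, 235 -> 100 IRE, and super-white Y' such as 254 lands around
# 108 IRE, which is roughly the kind of over-100 peak Bob saw from the PD170.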

To recap, what is happening is that you need to be careful about three things:
a- What levels you need to feed your codec to ensure proper Y'CbCr levels.
b- Whether your project is in 8-bit mode or 32-bit mode. (Although you can consider this part of a.)
c- If going to analog composite IRE, what is the behaviour of the analog-digital convertor? Is it appropriate for the country you are sending the tape to?
(Analog component is a different story.)

In other NLEs (FCP and Premiere), you don't worry about a and b so much because the NLE handles that stuff for you. That, and they can decode material to Y'CbCr and process it that way. The only arguable downside is that there are sometimes color-related bugs in those programs... though that has nothing to do with the design.
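To put point (a) in numbers, here is a sketch (mine, grey values only, the full color matrix omitted) of the two R'G'B'-to-Y' conventions being discussed, i.e. a codec that treats its RGB input as already being at studio levels versus one that squeezes computer-RGB input into 16-235:

# Hypothetical sketch of the two encode conventions, grey values only.

def encode_assuming_studio_rgb(v):
    return v                                # RGB code is taken as the Y' code

def encode_assuming_computer_rgb(v):
    return 16 + v * 219 / 255.0             # 0-255 is squeezed into 16-235 Y'

for grey in (0, 16, 235, 255):
    print(grey,
          round(encode_assuming_studio_rgb(grey)),
          round(encode_assuming_computer_rgb(grey)))
# Feed a 16-235 ramp to a "studio" codec and you get legal Y' (16-235);
# feed the same ramp to a "computer" codec and it lands at roughly 30-218,
# i.e. lifted blacks and dull whites. This is why point (a) matters.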

The same problem applies to our monitors. If Vegas feeds Rec 709 to your monitors that are calibrated for SD DV things will look wrong.
Not really. The differences between SD and HD are:
a- Luma co-efficients.
b- Primary chromaticities. The exact shade (chromaticity) of red, green, and blue (primaries).
c- Transfer function. Rec. 601 is a power function of 0.45, Rec. 709 is composed of a power function and a linear segment.

In practice:
a is generally handled for you (convert the numbers with a matrix / algebra), though double-check that it is. The difference is pretty obvious if you compare the HD image to the SD image, or if you look at color bars on a vectorscope: the color bars will be way off. Or try to calibrate your monitor... if you get the luma coefficients wrong, you just won't be able to calibrate it with the blue-gun method.

Gloss over b and c and pretend they don't exist. That works in practice, and the difference is way too subtle to notice. If you did take b and c into account, you would need to re-lay color bars onto any downconverted masters, which would be a pain in the butt.

Things will also be screwy, since the de facto standard for high-end QC is the Sony BVM CRT with SMPTE C phosphors. This is wrong for Rec. 709 HD... you ideally need a monitor with Rec. 709 phosphors. But people gloss over this difference, and it works. (We don't notice small color inaccuracies, and it's not as if the real world is very color-accurate/consistent to begin with.)
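For anyone who wants point (a) in numbers, these are the published Rec. 601 and Rec. 709 luma coefficients; the little sketch below just shows how differently the same color comes out under each (the matrix conversion mentioned above exists to compensate for exactly this):

# Rec. 601 vs Rec. 709 luma from nonlinear R'G'B' (values on a 0-1 scale).

def luma_601(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

def luma_709(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(round(luma_601(0, 1, 0), 3), round(luma_709(0, 1, 0), 3))
# Pure green: 0.587 under Rec. 601 vs 0.715 under Rec. 709, which is part of
# why color bars land in the wrong place if the wrong matrix is assumed.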
GlennChan wrote on 12/6/2007, 3:54 PM
Ok back on topic...

Fausse,

I believe I have done a test similar to yours and I'd agree with your results. Where I disagree is in the interpretation.

The following articles explain what is going on:
http://glennchan.info/articles/vegas/colorspaces/colorspaces.html
http://glennchan.info/articles/vegas/v8color/v8color.htm

In a nutshell... you tend to have conversions between R'G'B' and Y'CbCr. In Y'CbCr space, only one set of levels is ever correct (16 Y' = black, 235 Y' = white). There are two different ways of converting between R'G'B' and Y'CbCr: Y'CbCr levels may decode to either studio or computer R'G'B' levels.

I'd read both articles; they explain things in more detail.

2- So in your test, you have a ramp from 0-255 R'G'B', you are in 8-bit mode, and you render that to an m2t. You bring that back in, change the project to 32-bit, and see that there is clipping.

I would interpret it as this:
In the first step, you generated an m2t file with illegal values in it. The 0-15 R'G'B' part of the ramp mapped to values that sit below black level; the 236-255 R'G'B' part of the ramp mapped to values that sit above white level. In normal signal chains, these values are liable to get clipped.

In a 32-bit project, those illegal values map outside the 0-255 R'G'B' range (since 32-bit mode changes the codec behaviour). If you export, say, a JPEG from this point, then you'd have correct levels in the JPEG, and those illegal values will get clipped. That is fine, because illegal values will always get clipped in a JPEG.
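A rough numeric walk-through of that interpretation (my sketch, not Vegas code, assuming the usual 16/235 Y' anchor points):

# Step 1: the 0-255 ramp is stored by a studio-levels codec, so the RGB code
#         effectively becomes the Y' code (illegal below 16 and above 235).
# Step 2: a computer-levels (32-bit) decode maps 16 Y' -> 0.0 and 235 Y' -> 1.0.

def decode_computer_rgb(y):
    return (y - 16.0) / 219.0                # float decode, no clamping

for ramp_value in (0, 16, 128, 235, 255):
    y_in_file = ramp_value
    print(ramp_value, round(decode_computer_rgb(y_in_file), 3))
# 0 -> -0.073 and 255 -> 1.091: both ends now sit outside 0..1, so they vanish
# from a 0-100% scope trace (the "_/" shape) and from any 8-bit export unless
# a Levels filter pulls them back into range first.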

If you had stayed in an 8-bit project and exported a JPEG of the ramp, then what you normally need to do is apply a studio RGB to computer RGB conversion... this (more or less, ignoring rounding error) gives you the same end result. In the Y'CbCr file you rendered, the ramp runs from an illegal black (blacker than black) to an illegal white (whiter than white).

3- The change in codec behaviour is confusing, but that is what Vegas does. Would I prefer that Vegas did things differently (and in a less confusing manner)? Yes, yes I would.
farss wrote on 12/7/2007, 12:36 AM
I just reread your two articles; I find it pays to read anything technical several times over a period of time, as more things start to fall into place.

As you say, in 8-bit HDV decodes to studio RGB; in 32-bit it decodes to computer RGB. What I, and I suspect others, are really trying to understand is why. Is this a bug (unlikely), or is there some advantage to it?

Second question, just so we're all clear:
If we capture HDV, process in 32-bit, render back to HDV (all in 32-bit) and print to tape, then everything should be correct; assuming no FX are applied, we get back what we had to start with?

Bob.

Bill Ravens wrote on 12/7/2007, 7:45 AM
farss wrote:
"As you say in 8 bit HDV decodes to Studio RGB, in 32bit it decodes to Computer RGB. What I and I suspect others are really trying to understand is why?"

Bob...
I think the answer to your question is both currently appropriate and appropriate for the future evolution of Vegas. Adam Wilt wrote about this issue a few years ago, and I believe his write-up is still on his website.

Adam's point is that in studio RGB there are 235-16 = 219 discrete steps in luminance values, whereas in computer RGB there are 256. So working in studio RGB gives up (256-219)/256, roughly 14.5 percent, of the range. To make matters worse, the compression occurs in the tails of the luminance curve, where detail is critical, i.e. shadows and highlights (at least when a computer RGB to studio RGB remapping is employed). Adam Wilt makes the point that reducing the dynamic range by around 14 percent hurts the quality of the images produced. Granted, the final output is still 8-bit RGB 16-235, but for color correcting, the added dynamic range is significant. Besides, while NTSC display is limited to RGB 16-235, web display is not.
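The arithmetic behind that figure, spelled out with the step counts quoted above:

studio_steps = 235 - 16                       # 219
computer_steps = 256                          # code values 0-255
compression = (computer_steps - studio_steps) / computer_steps
print(round(compression * 100, 1))            # -> 14.5 (percent of range given up)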

In terms of future growth for Vegas, getting users to accept 32-bit processing eases the eventual transition to high-bit I/O. This is, of course, just a guess on my part. It seems logical that, as processing technology moves to 64-bit, color definition will potentially grow from 256 levels to 512 levels per channel, which doubles the number of values available for color definition. I see absolutely no reason we should adhere to old standards established in the early days of broadcast, other than legacy issues, which are valid for a time, but ultimately we all need to move on. If only the USA governing bodies would step up to the plate on these issues, anyway.

GlennChan wrote on 12/7/2007, 10:56 AM
As you say in 8 bit HDV decodes to Studio RGB, in 32bit it decodes to Computer RGB. What I and I suspect others are really trying to understand is why? Is this a bug (unlikely) or is there some advantge to this.
In Vegas 7 and before (but after 4, I think), Vegas would decode certain things to computer RGB levels and other things to studio RGB levels. So there were sort of two competing sets of levels going on, and this was a bit of a pain in the butt since you had to manually wrangle all your levels conversions.

The reason for decoding things to studio RGB levels is to preserve superblacks and superwhites.

2- Studio RGB to computer RGB in an 8-bit pipeline:
You do lose code values, and how offensive that is depends on:
A- Characteristics of the source: how much noise there is, how many smooth gradients there are.
B- What dithering implementation is used. With dithering done really well, the only cost is a slight hit to your S/N ratio. Vegas didn't do dithering and mostly did truncation instead of rounding (truncation is the lowest quality, and the fastest).

(To do dithering really well you want error diffusion with frequency weighting for the contrast sensitivity function.)

In practice, you usually don't spot any banding problems from Vegas.
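A toy illustration of point B (purely my sketch of the idea, using the usual (v - 16) * 255 / 219 expansion): truncation always lands low, so its error is biased in one direction, whereas rounding halves the average error, and good dithering trades the residual banding for a little noise:

# Truncation vs rounding when expanding studio RGB to computer RGB in 8 bits.

def studio_to_computer(v):
    return (v - 16) * 255 / 219               # exact (floating-point) result

for v in (17, 18, 19, 20):
    exact = studio_to_computer(v)
    print(v, round(exact, 2), int(exact), round(exact))   # exact, truncated, rounded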

3- In 32-bit there are two different compositing modes: 1.000 and 2.222.
In 1.000, all your values are converted to linear light before FX and pan/crop are applied, and the values stay that way for compositing.
For linear-light operations to be correct, you want black to be at zero. And it's convenient if you have white level at 1.0 (floating point)... you save a multiply operation when you do gamma and power functions.

In 32-bit/1.000, all your HDV clips get decoded to levels with a range of 0.0-1.0 floating point (I think it's 0-1.0, anyway... it looks like it from the manual). To the user it appears as if the levels were 0-255 RGB (computer RGB). [EDIT: used to say 0-256... that was a typo.]

3b- Anyway, in 32-bit mode, Vegas decodes HDV to 0-255 "computer RGB" levels, since that makes it convenient to do linear-light processing.
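A sketch of why that matters (my own illustration, using a plain 2.222 power function as the transfer curve, which is a simplification of what any given codec actually does): mixing two values in linear light gives a different, and physically more sensible, result than mixing the encoded values, and the math only works if black sits at 0.0:

# Linear-light mixing vs mixing encoded values, with black at 0.0 and white at 1.0.

GAMMA = 2.222

def to_linear(v):
    return v ** GAMMA                          # relies on black being exactly 0.0

def to_gamma(v):
    return v ** (1.0 / GAMMA)

a, b = 0.2, 0.8                                # two encoded grey values
naive = (a + b) / 2                            # 8-bit-style mix of encoded values
linear = to_gamma((to_linear(a) + to_linear(b)) / 2)
print(round(naive, 3), round(linear, 3))       # 0.5 vs roughly 0.6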

Most filters also expect (the equivalent of) "computer RGB" levels... so it does make sense to move everything towards decoding to computer RGB.

4- In 8-bit mode, I think the Vegas team wanted to maintain backwards compatibility. So the HDV and MPEG2 codecs maintain their behaviour.

5- And this is where you start running into stuff that is not intuitive. It would make sense if all codecs decoded to computer RGB in 32-bit mode (this is probably an issue of time... it would take more development effort). But they don't... so consult the table on my website.
http://glennchan.info/articles/vegas/v8color/v8color.htm

6- The longstanding issue of having to manually wrangle color space conversions is worse if you want to work in 32-bit.

Second question, just so we're all clear.
Yes.