Fun with Illegal Black in Pro 10

Comments

musicvid10 wrote on 12/16/2010, 1:07 PM
No. When I receive HD that was shot out of gamut and exists in a 0-255 space, as much of it does (clearly seen in my second graphic), applying a single conversion to S-RGB at the output is all that's necessary (or useful).

There is no double or layered conversion taking place that I know of. The bad footage started out that way, bad, and it is being converted one time to, well, not good, but better.

I know that working in a controlled shooting situation may make it hard to believe that some of us receive footage that looks like the second graphic, but in my world it is a "more often than not" situation. OTOH, in the relatively few situations when I receive correctly exposed footage that looks like my first graphic, I remain in complete agreement with your method, as I stated.
Laurence wrote on 12/16/2010, 1:24 PM
We must be talking about slightly different things then. I shoot my own video mostly. It is shot with an HDV camera, and the colors are all within the sRGB range. When I preview it in Vegas, however, it looks washed out because my laptop display shows the cRGB range. If I color correct so that it looks good on my laptop, the colors will be oversaturated on TV renders. I get around this by putting an sRGB to cRGB filter on the master video bus. This expands my sRGB video out to the cRGB range. Now I put color correctors on tracks or events and color correct so that it looks good on the laptop preview. When I want to render, I put the sRGB to cRGB filter on the master bus into bypass. The colors once again look washed out on the laptop display, but they will render perfectly into the sRGB range. This render will look correct on a television, and if I upload it to YouTube, their processing will expand the range from sRGB to cRGB (exactly like the preview filter I bypassed) and match what I was seeing when I tweaked the color with that filter enabled.

What I am doing is working in sRGB colorspace except that I am previewing it through an sRGB to cRGB correction filter.
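
To put numbers on it: the stock sRGB-to-cRGB preset is just a linear remap of the levels. Here is a minimal sketch in Python/numpy of that math (my own illustration with made-up function names, not Vegas' actual implementation):

    import numpy as np

    def studio_to_computer_rgb(x):
        # Expand studio RGB (16-235) to computer RGB (0-255).
        # 16 maps to 0 and 235 maps to 255; anything outside 16-235
        # gets clipped, which is why out-of-range source is a
        # separate problem.
        y = (x.astype(np.float64) - 16.0) * 255.0 / 219.0
        return np.clip(np.round(y), 0, 255).astype(np.uint8)

Bypassing the filter at render time simply skips this remap, so the 16-235 levels pass through to the encoder untouched.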

This would be different if I were shooting with a camera that worked in cRGB space to begin with, like a Canon DSLR or one of the Kodak point-and-shoots. Kimberly specifies that she is shooting with an HDV cam, though.

The way I understand the workflow that you are recommending, you would color correct so that it looks good on your computer display, then put a cRGB to sRGB color correction filter on the master bus for your TV and YouTube renders. What I expect would happen with this approach is that you would end up tweaking the color in a way very similar to an sRGB to cRGB correction, since the source footage would look too washed out as it was. You would then apply the cRGB to sRGB filter to the master bus, and this would be a double correction.

I am very careful when I shoot. I also use the Picture Profiles on my Z7 camera (which are like those on the EX1). Quite often my color is right on target. Every so often it needs a little tweaking, but I try to get it right when I shoot it. To me this seems especially important with a format like HDV where there isn't a lot of extra latitude for this type of adjustment in the footage. You could get away with a lot more correction on an EX1 for instance. When I am successful in getting the color right, I can smart-render this footage into the same HDV format it was shot on. This means that the renders are very fast and of original source quality. I couldn't do this if I was putting a stock correction filter on the master video bus.

If you look at HDV as a smart-renderable format and really get your picture right when you shoot, then convert your smart-rendered masters into H.264 with HandBrake, it is absolutely amazing how good your YouTube and Vimeo productions can be. If you make this your standard workflow, you can shoot with just a little extra effort, edit and render extremely quickly, use inexpensive USB2 or internal laptop drives, back up your projects to DVD-R because they aren't that big, and be quite happy with a relatively inexpensive laptop as your main computer. This approach isn't for everyone, but it suits me really well.
musicvid10 wrote on 12/16/2010, 1:45 PM
Laurence, I understand completely. If you'll please reread the first paragraph in my post above with the histograms, you'll see that I know that your footage is all shot in 16-235 space.

However, shooting HD, or even HDV, does not guarantee that it will be exposed in a Studio RGB space. In fact, most of the footage I get from various sources is not exposed in a 16-235 space at all, but in a 0-255 space, with considerable clipping at both ends, as in my second histogram example.

In fact, the third example ("Ducks") came from Kimberly's HDV camcorder, and it can clearly be seen that the chroma levels occupy the entire RGB gamut from 0-255 and are "illegal" in a Rec. 601/709 space.
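
If you want to run the same check on your own clips, here is a rough sketch (Python/numpy, with a hypothetical helper name, assuming the frame has been decoded to 8-bit RGB with its levels preserved as-is) of the test my histograms are illustrating:

    import numpy as np

    def out_of_gamut_fraction(frame, lo=16, hi=235):
        # Fraction of 8-bit samples falling outside the
        # Rec. 601/709 studio range of 16-235.
        illegal = (frame < lo) | (frame > hi)
        return float(illegal.mean())

On footage confined to studio levels this comes back at or near zero; on clips like "Ducks," it will not.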

So, my assumption stands that, with out-of-gamut source, a single cRGB-to-sRGB conversion at the output is preferable, from a time standpoint, to adjusting the levels on every separate track, and that nothing useful is to be gained by doing the latter.
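
For the record, that single output conversion is nothing exotic; numerically it is just the inverse remap, a scale by 219/255 and an offset of 16. A sketch of the stock preset's math (again Python/numpy as illustration, not Vegas' code):

    import numpy as np

    def computer_to_studio_rgb(x):
        # Compress computer RGB (0-255) into studio RGB (16-235).
        # 0 maps to 16 and 255 maps to 235, so everything the camera
        # captured lands inside the legal range in one pass.
        y = x.astype(np.float64) * 219.0 / 255.0 + 16.0
        return np.round(y).astype(np.uint8)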

I hope this clears up the differences between the HDV footage that you shoot and the footage that I receive on a more-or-less continual basis, and explains why two different approaches are desirable from a workflow/efficiency perspective.

Best of the holiday season to you and your family!
RalphM wrote on 12/16/2010, 2:30 PM
I've followed (sort of) this discussion, and I now know what I do not know about this subject. Thanks to all who posted here.
Laurence wrote on 12/16/2010, 2:50 PM
>However, shooting HD, or even HDV, does not guarantee that it will be exposed in a Studio RGB space. In fact, most of the footage I get from various sources is not exposed in a 16-235 space at all, but in a 0-255 space, with considerable clipping at both ends, as in my second histogram example.

OK, now I understand. Yeah, that makes sense. Zebras, ND filters, aperture, and AE +/- settings: that's all it takes to get it right, but you're right, not everyone does.
Andy_L wrote on 12/16/2010, 5:44 PM
>No. When I receive HD that was shot out of gamut and exists in a 0-255 space, as much of it does (clearly seen in my second graphic), applying a single conversion to S-RGB at the output is all that's necessary (or useful).

This seems like kind of a slippery argument to me. Properly exposed HD footage will almost certainly contain values in the illegal range--that's how video cameras work, for better or for worse.

The illegal values are clipped because they're supposed to be clipped, no?

Converting the footage to remap clipped values into legal range is only desirable if you want those values brought into legal range.

If you want to play around with headroom, so be it. But why fool around with a blanket conversion of illegal values unless you want/need to?
musicvid10 wrote on 12/16/2010, 5:56 PM
Yeah, I'm a slippery kind of guy.

What this discussion has unfortunately done is commingle two completely different shooting philosophies into one discussion of methodology:
-- People who purposefully expose their footage, and in so doing choose to either confine their exposure range to 16-235, or to use the headroom afforded by the spec, which is perfectly legitimate; and,
-- Those who do not.

Assuming we are talking about the former, the answers to your three questions, in order, are:

1) No.
2) Yes.
3) Does not apply.

If you are talking about the second type of shoot, again clearly illustrated in my second graphic, the answer to the last question is:

3) In order to accomplish a quick, certain improvement in the outcome, without spending a lot of time attempting to make a silk purse out of a sow's ear.

Bringing as much of the captured detail into range in the final render as possible, without going to a lot of extra work, is generally considered a good thing. And when the footage is already clipped at one end or both, that goes double.
robwood wrote on 12/16/2010, 8:58 PM
"What I don't like about that approach is that while it may be simpler user setup user wise, but it is a more complex setup computer processing wise."

Not for me. I work with RGB, not YUV, footage. There is no second filter; there's only one, and it's necessary for the broadcast monitor I use to check for DVD output.
Kimberly wrote on 12/17/2010, 6:06 AM
@Laurence:

This is beginning to make sense. So if I change the AE and/or the +/- settings on my Sony HDR-HC3, then more of my footage will fall into the legal range right from the get-go?

Any suggestions on which adjustments I should make at the surface for shooting underwater? My housing is very simple and allows no exposure changes underwater . . . not that I know what those changes should be anyway at my current knowledge level. Maybe some of the more experienced underwater shooters can share what they do with their settings?

P.S. This has been a great thread for those of us who are not up to speed on legal colors.
Andy_L wrote on 12/17/2010, 8:30 AM
>Bringing as much of the captured detail into range in the final render as possible, without going to a lot of extra work, is generally considered a good thing.

I'd agree with that 100% if we were talking about a still camera. Why, the logic goes, would you ever want to automatically discard shadow and highlight detail?

But...video cameras apparently capture so-called illegal values by design. Why anyone thought this was good, I have no idea.

But if you automatically correct those values to bring them into legal range, you are destroying the integrity of the legal values. The whole image becomes washed out because of what I'm assuming is an incorrect gamma shift.

Yes, you can then correct the look with curves/gamma manipulation, but if the footage was correctly exposed in the first place, the best you can hope to do is make it look like it already did prior to the levels conversion.

So my mantra is: if you don't need the detail in the illegal highlight or shadow values, leave them alone. And only go after one or the other--don't try to bring in both at the same time. It does too much damage to the look of the footage.

Again, to me the strange thing is that video cameras work this way in the first place. But that seems to be an immutable fact that we just have to live with (like many undesirable oddities, anachronisms, and lock-ins of the vid world).

musicvid10 wrote on 12/17/2010, 9:00 AM
You see, for video we have colorspace standards, such as ITU-R BT.709, to normalize the levels and colors for different modes of playback on conformed equipment.

There is nothing inherent in the levels of captured video that makes them "illegal." They are either in the file or they are not. A careful videographer will carefully set his/her zebras and exposure levels to include as much detail as he/she can or wants. Having some of this information between 0-15 and 236-255 does not preclude its usefulness in producing a given finished product, if the creator so chooses. The choice to do so or not is entirely within the artistic domain of the content producer, not the result of some constraint imposed after the fact (although that constraint does present the challenge).

In fact, with HD footage, getting as much useful content range as wanted into the RGB gamut (or even wider, in the case of AVCHD) ensures full utilization of the available encoding bit depth, which is exactly the opposite of the flattening you theorized. It is important to note that the numbers 0-255 do not correlate to light itself, but to which portions of the scene's tonal range are placed into the luminance/chroma range as a result of manipulating the exposure, whether by auto or manual means.
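
To put a number on the bit-depth point: an 8-bit channel confined to 16-235 only ever uses 220 of its 256 code values. A quick sanity check (a Python/numpy sketch using the same scale-and-offset math as the stock presets):

    import numpy as np

    x = np.arange(256, dtype=np.float64)          # all 256 computer RGB code values
    studio = np.round(x * 219.0 / 255.0 + 16.0)   # compress into 16-235
    print(len(np.unique(studio)))                 # prints 220; 36 code values go unused

So footage exposed across the full range has more distinct levels to work with before the final conform, which is the utilization I am talking about.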

So, you can still retain and use your still-photography philosophy, which, if you read Adams, might be paraphrased as, "get as much detail as might possibly be needed on the negative, and endeavor to put as much of that detail as you can or want to on the print." This metaphor would have the footage as the negative, and the conformed output as the print.

OTOH, if one is shooting with the idea of going straight to print in a 709 space with smart rendering, by all means carefully expose your footage inside the 16-235 boundaries and proceed undaunted. It's another legitimate approach, and one that many videographers favor, including the guy whose work is represented in my first histogram above, and Laurence also, I believe.

But then, this whole thread was started about footage that was unfortunately shot out of RGB gamut and clipped, which makes this a side discussion. In the original context, bringing as much detail as possible into S-RGB gamut usually makes the most sense, since so much has ostensibly been lost already. But applying the stock Computer RGB to Studio RGB correction is about as good as it's going to get in these cases, without doing a lot of fiddling of arguably dubious value.

So, rather than take this thread any further off topic, I defer to Kimberly's original inquiry, which is a good one.
Laurence wrote on 12/17/2010, 12:33 PM
>OK, now I understand. Yeah, that makes sense. Zebras, ND filters, aperture, and AE +/- settings: that's all it takes to get it right, but you're right, not everyone does.

Everyone is a little different, but the way I do it is to set my zebra for something like 90%. What this does is let you know when you are blowing out your whites. Lots of people set it for 70% or so, but I find it frustrating to be looking at that much zebra all the time. I just want to make sure that the talent doesn't have any hotspots on them. I don't really worry about blowing out the sky too much. I just use a screw-on polarizing filter outside (which darkens the sky and cuts glare without darkening the good stuff too much) and expose for the talent. Looks pretty good to my eyes.

The AE +/- setting is auto exposure plus or minus whatever value you dial in. I use this quite a bit, especially when it is partly cloudy. What I do is use as heavy an ND filter as I can get away with, using negative gain if I have to, so that the iris is as open as possible for shallower depth of field. Then I use the AE +/- value like I would a manual exposure setting. The difference is that as the sun goes in and out behind clouds, the exposure will change along with the ambient light. This lets you hold a certain look through an interview where you are using natural light and the light is always changing because of the clouds. It also can give you a little brightness compensation as you move the camera around. I set my exposure change rate to the slowest setting so that you don't see abrupt changes. I use this kind of setting a lot of the time, just watching the faces to make sure I don't see any zebras on them (at 90%, the zebras are just warning against hot spots). If you go full auto, you'll end up with blown-out or underexposed talent too often. The AE +/- settings can really hold a look in changing light pretty well.
Andy_L wrote on 12/17/2010, 5:06 PM
MV,

I'm curious: do you have a specific strategy for coping with the flattening or de-contrasting that results when you take properly exposed footage (i.e., the midtones are where you want them) and apply a computer-to-studio levels correction?
Kimberly wrote on 12/17/2010, 6:29 PM
Special thanks to Musicvid and Laurence for explaining everything so carefully and being so generous with their time.

I'll try some of these suggestions when I start diving again in January.
musicvid10 wrote on 12/17/2010, 6:34 PM
Laurence is your go-to guy on this one. I usually rent my shooters and then try to deal with it in post.

Also, you should contact Nick Hope through his forum profile for advice on setting exposure and white balance for underwater stuff. He's got as much or more underwater videography experience than anyone here afaik, and has developed some very interesting techniques.