Been reading the extensive threads from Nick, Glenn et al. (just who IS "Al" anyway..?), can some kind soul gently remind me "why" YT does the clamp?
"Try doing something as simple as additive compositing in Vegas and you get the wrong answer. Video black plus video black should give video black. It doesn't if you do it in Vegas, you get grey."
This isn't correct. The math of the "Add" compositing mode for black on black is 0.0 + 0.0 = 0.0, and Vegas does exactly this. All the Vegas compositing modes work precisely and exactly the same way they work in other high-end compositing tools.
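To make the disputed math concrete, here's a minimal Python sketch (illustrative only, not Vegas's actual internals) of "Add" compositing in normalized 0-1 float. With true black at 0.0 the sum stays black; if studio black is left unscaled at 16/255, the sum drifts toward grey, which is the behavior the quote above complains about:

```python
def composite_add(a: float, b: float) -> float:
    """Add two pixel values and clamp to the legal 0-1 range."""
    return min(a + b, 1.0)

true_black = 0.0
studio_black = 16 / 255  # studio-RGB black left unscaled in 0-1 float

print(composite_add(true_black, true_black))      # 0.0    -> still black
print(composite_add(studio_black, studio_black))  # ~0.125 -> visibly grey
```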
8-bit camera signals are actually difficult to handle in heavy compositing, and all the more so if they need a remapping from 16-235 to 0-255, because this introduces banding from the very start.
So I want my editing system to leave it up to me to decide whether I need a remapping and accept the banding, whether I can skip the remapping to avoid banding during my editing, or even whether to use the mapping only for that part of the signal where it helps a certain kind of compositing.
Fortunately, CGI is usually made 0-1 (whatever bit depth is used) from scratch, without the need of remapping RGB values.
Because of the banding issue, and to avoid any clipping, I would not want my system to remap any camera input from 16-235 to 0-255 even if no compositing is involved. The control should be up to me. Unfortunately, many mid-range systems take this control away from the user just for ease of use, which is o.k. if all you need is a simple display adaption from the start of your production.
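As a rough illustration of the banding argument, this Python sketch (assumptions: a straight 8-bit remap with simple rounding) counts how many output codes a 16-235 to 0-255 expansion actually produces. Stretching 220 input codes across 256 output codes leaves gaps, and those gaps are what show up as banding:

```python
import numpy as np

v = np.arange(16, 236, dtype=np.float64)  # the 220 legal studio-range codes
remapped = np.round((v - 16) * 255.0 / 219.0).astype(np.uint8)
print(f"distinct output codes: {np.unique(remapped).size} of 256")  # 220 -> 36 gaps
```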
I think it is critical to clearly distinguish between the video processing and the possible output options. In all the discussions about how Vegas Pro handles video levels this seems to get pretty much mixed up. I find the video processing Vegas Pro uses is perfectly correct (which does not mean it's always easy to use). The output options … depend on …
To my understanding, it's something like this:
First, when importing clips, it analyses the codec used within each clip. From that it knows what levels to expect for that clip, and it applies them to the event on the timeline, with any necessary expansion. This means previews are correct.
Second, when it comes to rendering, everything is rendered as 16-235 (0% IRE to 100% IRE), ready for any player or web playback.
So straight out of the box, your video levels (IRE) are correct. At least as good as they went in.
Vegas, on the other hand, does none of this by default. And that creates the complications stated in the previous posts above when you force it to conform.
The actual nuts and bolts of how this happens in Premiere is not for me to speculate on. It just does it. Half the time you don't even need a waveform display except for grading. In Vegas, however, it's never off, in case you miss an illegal level.
Well, another thing to remember is that Premiere is not a compositing application. It's purely for video. So Vegas has the added complication of trying to be two apps in one.
It's possible to do this with the right implementation, but presently it fails to achieve both correctly. In my mind, Vegas if anything 'should' be a video editing application, and that should come first in terms of operation. Just my opinion of course, but I see no reason why it can't do both well.
"For many years Vegas would let me use Generated Media with the RGB values of 16,16,16. That gave me correct video black (0% IRE). Vegas would display that correctly in the waveform monitor as 0. Now in V11 and V12 the waveform monitor shows it as 1% IRE."
Yes, using the 0-1 math for input in the new OFX filters didn't make things easier. However: I can enter "16/16/16" and it will result in precisely 0% signal on the waveform monitor.
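For what it's worth, the numbers line up like this (a sketch assuming the OFX generator takes 0-1 floats and the scope is set to studio-RGB levels; I'm not claiming this is the exact internal code path):

```python
v8 = 16                          # the value entered as 16/16/16
v_float = v8 / 255.0             # ~0.0627 in the filter's 0-1 input
ire = (v8 - 16) / 219.0 * 100.0  # placement on a studio-RGB scope
print(round(v_float, 4), ire)    # 0.0627 0.0 -> exactly 0% signal
```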
Having nothing better to do on a rainy Monday down here I tried to run some tests.
Yes, looking at footage clearly meant for compositing (an Artbeats fuel-air explosion), Vegas "sees" the black as 0. I also utterly agree that raw camera footage needs some to a lot of work before one begins compositing; been there, done that more than once with candles and oil lamps. Those flames are a challenge; dialing out the pedestal in my EX1 helped, as does having a true black background.
I digress.
So I tried comparing Ppro CS5.5 with what Vegas does using some recent footage from a PMW-350 and the same Artbeats clip.
Ppro's preview monitor correctly displays the camera footage; switching between Ppro and Vegas on my ProArt monitor, Vegas shows elevated blacks.
Now, adding the Artbeats clip to both, the blacks in it match quite closely the blacks in the video footage in Ppro, but not in Vegas. Now of course it could well be that Ppro is simply clipping the blacks from the Artbeats footage, but as far as my eyeballs can detect, no, it isn't. I cannot really determine what is happening with any superwhites, as I had DCC off in the PMW-350 and by default it does seem to record very "legal" video.
All this is kind of interesting, but it is time consuming and I need to devise somewhat more scientific tests. On top of that, Ppro doesn't make it easy to peer inside its internals, and its waveform monitor displays values as Volts!
As for what video cameras actually do, well, my EX1 provides different "cine" gamma curves to accommodate editing systems that cannot handle superwhites. That ties in very much with what Glenn Chan has been saying for years: the correct Y'CbCr > RGB transform will result in superblacks and superwhites being clipped.
My gut feeling at this stage is that Ppro correctly decodes footage based on the source. That it's way more of a QuickTime app than Vegas may explain this; just checking the Artbeats footage properties, the Compressor is Photo - JPEG, i.e. that provides enough information for an NLE to know what the levels are.
Sorry, I had to edit a big part of my previous post.
To see what happens if a certain system remaps a signal to 0-255, it is interesting to shoot a gray gradient from black to white and analyze the recorded signal using the histogram.
For such a case I'd expect PPro to introduce banding while clipping the white, and Vegas Pro to reproduce the gradient without further banding (8 bit will never do a perfect job there) and without clipping the white.
Now there will be people who prefer the first case for ease of use, and there will be people who prefer the second case for a matter of quality. And whatever many Vegas Pro users would like SCS to modify, I'd be very happy if that meant adding options without losing the facility Vegas Pro offers now.
Well, I haven't been around in a while, but I'm with Marco. Vegas is working as it should. What's confusing to most people is the shift from sRGB to cRGB, and which decoders/encoders do what.
See Glenn Chan's old article on color spaces and his guideline on which codecs translate levels and which don't. Almost all do what you'd expect. Especially noteworthy is what happens if you're compositing in 32-bit mode, and how that translates levels.
Basically, use the scopes, that's what they're there for.
Ok, now I'm confused, because I've never had an issue with Vegas doing color (as far as I was aware).
Took a clip from VHS I happened to have (so it's 16-235). Appears correct in preview & in scope.
So I took a 0,0,0 black. It's ~-9%
So I took a 16,16,16 black. It's 0%, even.
So I duplicated the track and changed the top track to "Add" composite mode.
The VHS went up in brightness in preview and scopes (black level from ~5% to ~20%, but it's not a solid black, so that's understandable).
0,0,0 black stayed the same.
16,16,16 went to ~+9% (makes sense, 16+16).
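Here's the arithmetic behind those readings as a small sketch, assuming the scope places studio levels so that 16 reads 0% and 235 reads 100% (the exact percentages a given scope shows may scale a little differently):

```python
def ire(v: int) -> float:
    return (v - 16) / 219.0 * 100.0

print(round(ire(0), 1))        # -7.3: 0,0,0 sits below video black
print(round(ire(16), 1))       #  0.0: 16,16,16 is video black
print(round(ire(16 + 16), 1))  #  7.3: 16,16,16 "Add"-composited on itself
```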
I agree with Marco on this one: Vegas does it right, everyone else is doing it wrong. In analog there wasn't 16-235, it was 0-100 (I never saw 16-235 on my scopes, or in the manuals of the VCRs, etc.). Vegas is a digital editor, not an analog editor; it should deal with 0-255, not 16-235. Apple should even be leading the charge with their "CD/DVD/MP3/FireWire/USB/BD/etc. is dead, live on the bleeding edge!" mantra. Vegas was designed to edit digitally for RGB 0-255 displays, not analog 16-235 (0-100) displays. The problem isn't that Vegas is wrong; the problem is that everything else is stuck in an obsolete color space where the solution, because everybody assumed 16+16 should be 16, is to keep fixing it so it looks "normal".
Like how NASA adds colors to the images it shows, Premiere, FCP, etc. are doing the same thing to give you the image you expect, not the correct image that's being calculated.
The only thing the "Studio RGB" does in the scope is adjust the scope so you can look at the level in an analog scope, it's still doing all the work digitally.
The only correct solution is to record in 0-255 and deliver in 0-255. Anything else is, at best, taking extra CPU cycles, at worst, degrading the image slightly by being forced to convert for one reason or another (an example would be GPU math, it's all in RGB).
If quality is what matters, even editing in 16-235 shouldn't be an option; at BEST it's just what you render out to as a final format, not what you edit in. We expect great audio (5.1, 44.1 kHz per channel) and HD or greater resolution, but you want only ~86% of the code values provided by the camera. Why not do mono tracks in 640x480 while you're at it, if you want to stick with the old color space?
EDIT: A good set of test footage would be CG that uses 0-255, not stuff off a camera. Then see if Vegas/PPro/FCP/etc. display it correctly. Use a still sequence so you can eliminate any "smart" feature of an NLE that tries to figure out what you want, and see what it's actually doing. If PPro displays it as legal, it's modifying the incoming footage into something it's not, for example.
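If anyone wants to try this, here's one way to make such a test still in Python (a sketch using NumPy and Pillow; the filename and size are arbitrary). Feed a sequence of these to each NLE and watch the scopes to see whether 0 and 255 survive untouched:

```python
import numpy as np
from PIL import Image

# Full-range 0-255 grayscale ramp, 1280x720, saved as an 8-bit PNG.
ramp = np.tile(np.linspace(0, 255, 1280, dtype=np.uint8), (720, 1))
Image.fromarray(ramp, mode="L").save("full_range_ramp.png")
```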
"Now there will be people who prefer first case for the ease of use and there will be people who prefer second case for a matter of quality. "
Ah yes, now you are getting back to a position I held many years ago.
That position of not wanting anything to be clipped got drowned out in a tsunami of chaos and confusion.
Personally I have a decent monitor that is used as the Secondary Display device with the right boxes checked in Vegas, so I have no issue that I'm aware of in handling all this myself. I have a background in process control, where what a voltage or bits and bytes should be taken to mean was always something to be very aware of.
I also do all my complex compositing in AE; horses for courses, I say.
The problem is display (and broadcast) adaption. Too many devices are still adjusted to match black with RGB 16 and white with RGB 235 (for 8 bit), even if they are capable of a wider display range (which most TV sets are).
So do you need to take care of this adaption when editing? Probably yes, if your product is a final one which does not go on to further compositing or corrections.
To me the very point is when this adaption should take place, and whether it should happen automatically or as an option I'm able to select. In most cases I would not want it to happen in the decoding of my inputs (with mapping 16-235 to 0-255). I'd prefer choices for preview and export (which is what I called "output" in both cases) while leaving the decoding exactly the way it works now.
Marco,
How are you entering 16/16/16 in Version12? In Version11, you cannot enter those numbers.
I don't see any USER under C:, I see USERS and my name, but not DOCUMENTS. That is why I didn't install it back then; I couldn't figure out where to put it.
The editing intake application can do one of three things:
1) It can do nothing to RGB preview levels.
2) It can read the encoder flags and apply a stock preview adjustment based on what they "ought" to contain.
3) It "could" scan the source (time-consuming!) and attempt to make an educated guess based on endpoint levels and format dimensions. Crude, blind are the words that come to mind.
I don't know of any other ways, other than informed, purposeful intervention, which is what I think we're "really" discussing here (side issues notwithstanding).
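A rough sketch of what option 2) amounts to (the flag values here are hypothetical; real containers signal range in various, often unreliable, ways):

```python
def preview_transform(range_flag: str):
    """Pick a preview adjustment from the stream's claimed range."""
    if range_flag == "limited":                # flag claims 16-235 content
        return lambda v: (v - 16) * 255.0 / 219.0
    return lambda v: float(v)                  # "full" or unknown: pass through

expand = preview_transform("limited")
print(expand(16), expand(235))                 # 0.0 255.0 -- if the flag is honest
```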
2) and 3) are fraught with pitfalls. We know that most video-capable acquisition devices do not obey the rules, and their operators even less often. So the stock preview adjustment more often than not removes visual information from our sight. So then we make adjustments to include it, and what have we gained? Oh, we're almost back to number 1). Now, number 3) is especially troublesome, because something as simple as an overcast day can fool it into misjudging the camera levels. Big mistake. Maybe I don't want true blacks in my shot of a white kitten in a snowstorm! Professional auto film printers scan the logarithmic ratios of shadow to bright, not the composite densities. This is an important thing to understand if there is to be any sense at all from approach number 3). Oh, did I say time-consuming? And the end failure rate would be so not worth it.
So now we're back to number 1). It shows the full available range in the preview and output. Good. It does not show the mapped-out playback values. Oh, we haven't determined either of them yet! This is "wrong", this is "bad," this is "broken"? OK, so what's better? The Adobe preview serves no magic. It emulates player levels, something that would seem useful only during the last 2% of my normal workflow. Yes, it's the last thing I do.
Well, I've tried my dangdest over the past years not to judge, but instead to say, "This is the way the preview works, now how can I work with it?" I've got some methods I'm really comfortable with, enough so to have shared a couple. Improve on what we have now? OK, maybe the preview needs a really obvious, big "Preview YUV" button, one that would not confuse or trick people into creating rendering disappointments, but would help them see what their video will look like on their favorite player. Sure. But take away my full-range preview option? Never!
There are three color spaces that you need to know about:
-The Y'CbCr family of color spaces. In this color space, the Y' channel contains the "black and white" (luma) component of the image and the Cb and Cr channels contain the "colour" (chroma) component of the image.
For 8-bit formats, the legal range for the Y' values is usually 16-235. The values outside of that range can hold illegal values that you cannot see. The reason why Y'CbCr (for VIDEO formats) is designed this way has to do with analog video. If you convert analog video to digital and your equipment is not perfectly calibrated, then the analog video may be converted into digital values outside the legal range. Because we have allocated illegal values to catch over/undershoot, then we can subsequently fix any errors in the digital domain. We can also do things like have proper digital color bars (color bars have a PLUGE bar which is blacker than black / below black level).
-Studio RGB. Legal values are in the 16-235 range for all three channels.
-Computer RGB. Legal values are in the 0-255 range for all three channels.
Y'CbCr color spaces can represent the widest range of colors... studio RGB can represent a smaller range, and computer RGB even less. Whenever you convert between the color spaces, you may lose colors/values that cannot be represented in the other color space.
A second thing that happens when you convert between color spaces is that rounding error can occur.
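A quick sketch of that rounding error, assuming plain 8-bit math with simple rounding: one round trip from computer RGB to studio RGB and back squeezes 256 codes into 220 and re-expands them, so neighbouring values merge and some no longer survive the trip exactly:

```python
import numpy as np

v = np.arange(256, dtype=np.float64)
studio = np.round(v * 219.0 / 255.0 + 16.0)       # 0-255 -> 16-235
back = np.round((studio - 16.0) * 255.0 / 219.0)  # 16-235 -> 0-255
print(int((back != v).sum()), "of 256 values changed by rounding")
```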
What color space should an editing system use internally?
First you have to decide what bit depth to use.
A lot of equipment in the professional video world uses 10-bit Y'CbCr to carry video. 10-bit Y'CbCr is more or less sufficient for real world purposes.
If you are working with custom hardware with ASICs and/or FPGAs, you can use whatever bit depth you want internally. Often times you want higher than 10-bit Y'CbCr internally to avoid rounding error.
With a PC things are different. PCs operate fastest when they perform math operations on 8-bit integer numbers. Next fastest is 16-bit integer. 32-bit floating point is the slowest. 10-bit would be extremely slow... what you'd actually do is to convert 10-bit data into 16 bits and that would be faster. So your choices are 8/16/32.
Vegas was originally designed to be only 8-bit. 3rd-party filters would receive 8-bit data and output 8-bit data (internally they could operate at whatever bit depth the filter designer wanted).
Originally, Vegas would format everything as 8-bit values with computer RGB values. This is not the best idea in the world, as potentially important information from the original video may be clipped. It's almost impossible to pass through proper color bars: the blacker-than-black PLUGE bar would be clipped.
So the Vegas team made their own DV codec which would convert from Y'CbCr to studio RGB values (and vice versa).
You have significantly less clipping when Y'CbCr is converted to (studio) RGB.
I believe you also have (slightly) less rounding error.
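To see why, take a superwhite luma code and decode it both ways (a simplified sketch for a neutral gray, ignoring chroma; not the codec's literal code):

```python
y = 240                                           # superwhite: above legal 235
studio = y                                        # studio-RGB decode keeps the code
computer = min(round((y - 16) * 255 / 219), 255)  # computer-RGB decode
print(studio, computer)                           # 240 255 -- computer RGB clips it
```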
But by introducing the second color space, Vegas opened up a whole new can of worms. Vegas could have been designed so that it takes care of levels conversions for you. IT DOES NOT. Every other NLE on the market does this, but Vegas does not.
A second subtler problem is that you can push either studio or computer RGB levels through Vegas. There are many parts of Vegas that are designed to only handle computer RGB levels (the Video Preview window... track compositing... etc.).
----------
Eventually Vegas introduced the 32-bit floating point mode. Instead of using 8-bit values to send data to plug-ins, Vegas added the capability to send 32-bit values. The nice thing about this is that you can (eventually) avoid rounding error problems caused by Vegas. 8-bit Vegas cannot do that.
With 32-bit values, the Vegas devs also realized that they could do something called linear light processing. They also realized that linear light processing would screw up if you converted from studio RGB to floating point values such that black level was somewhere above 0 and white level somewhere below 1.0. So the so-called "solution" was to change some (BUT NOT ALL) of the video codecs so that they would behave differently in a 32-bit Vegas project. I'm just going to skip over this mess.
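The root of the problem can be shown in a couple of lines (a sketch using a rough 2.2 gamma decode as a stand-in for the linear-light conversion). If studio black is left at 16/255 in float, "black" still emits light in linear space, so blurs and composites lift and shift the blacks:

```python
black_full = 0.0          # computer-RGB black in 0-1 float
black_studio = 16 / 255   # studio-RGB black left unscaled in 0-1 float

to_linear = lambda v: v ** 2.2   # rough gamma decode, for illustration only
print(to_linear(black_full))     # 0.0     -> no light, adds cleanly
print(to_linear(black_studio))   # ~0.0023 -> "black" that still emits light
```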
----------------------
What other NLEs do:
Some just convert everything into 8-bit Y'CbCr. This is what Final Cut Pro does AFAIK.
In normal SD projects, everything is converted into Rec. 601 8-bit Y'CbCr.
In normal HD projects, everything is converted into Rec. 709 8-bit Y'CbCr. (Some filter results may be slightly different between SD and HD timelines/projects?)
Some just convert everything into 8-bit computer RGB. This is what After Effects does. If your original footage contains superwhites, After Effects will clip that information.
A workaround is to map superwhites into legal range in another application, render it out, and use those video files to feed After Effects.
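One possible mapping for that workaround, sketched in Python (this is just a linear squeeze; the actual levels filter you use may shape the curve differently):

```python
def compress_superwhites(y: float, peak: float = 255.0) -> float:
    """Linearly rescale 16..peak onto the legal 16-235 range."""
    return 16.0 + (y - 16.0) * (235.0 - 16.0) / (peak - 16.0)

print(round(compress_superwhites(255)))  # 235 -- former peak is now legal white
print(round(compress_superwhites(235)))  # 217 -- legal white drops slightly
```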
"To see what happens if a certain system remaps a signal to 0-255 it is interesting to shoot a gray gradient from black to white and analyze this recorded signal using the histogram.
I just sort of tried this.
First off a correction:
The correct decode from Y'CbCr > RGB is explained here. Y' = 0% to 100% should decode to 16-235. Sorry, I got that wrong previously.
For my tests I created a 0 to 255 gradient in PS, nothing special, and in hindsight I should have put more into this, just so I could be even more certain of my results.
In Vegas I think we all know what happens: nothing. It is left unchanged as 0-255. Encode that to, say, HDV, and the 0-255 is preserved. The Preview monitor appears to display this correctly; however, as we know, if we'd round-tripped this through YT it would almost certainly get clipped. Alternatively, if we'd adjusted a normal HDV video clip to look right, it would very likely come back to us looking wrong / different from YT, thanks to how our display drivers map 16-235 to 0-255.
So over to Ppro.
Drop the same JPEG into an HDV sequence. Hm, it looks correct, and the YC scope shows it as legal 0.3 to 1.0 Volts. It isn't clipped either, just decoded correctly to match the sequence.
So render that out to HDV, bring it back into Ppro or Vegas and I have legal video, nothing clipped, all good.
I took the "illegal" HDV rendered out of Vegas into the same sequence in Ppro. The YC scope shows levels below and above legal and yes it looks clipped in the monitor. Judging by the scopes Ppro hasn't clipped it as such, just the monitor is doing the clipping which is the same as what happens to the Vegas render if we round trip it through YT.
So just to be sure, I rendered this sequence out to HDV and bought that file back into Ppro and Vegas. the superwhites and superblacks are still there, Ppro did not clip them off.
So in summary:
Another NLE appears to get it correct without the user having to do anything.
No clipping occurs and I did not notice any banding.
Doing the same task in Vegas I would need to do the following:
1) Apply a Levels FX to any JPEGs from a DSC to do a cRGB to sRGB conversion.
2) Add a Levels FX to the preview monitor to do a sRGB to cRGB conversion.
3) Remember to disable that FX when rendering because it is actually on the Video Bus.
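For reference, the two Levels conversions in steps 1) and 2) come down to this plain 0-255 math (a sketch of what the FX presets do, not their literal implementation):

```python
def crgb_to_srgb(v: float) -> float:  # step 1: squeeze 0-255 into 16-235
    return v * 219.0 / 255.0 + 16.0

def srgb_to_crgb(v: float) -> float:  # step 2: expand 16-235 for the display
    return (v - 16.0) * 255.0 / 219.0

print(crgb_to_srgb(0), crgb_to_srgb(255))   # 16.0 235.0
print(srgb_to_crgb(16), srgb_to_crgb(235))  # 0.0 255.0
```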
So I retract my "Vegas is fundamentally broken." It still is remarkably hard on the user to get things right though.
To me this only says some better, easier-to-use output options might be fine. I can't see what's wrong with using a 0-255 signal just as it is, for processing until it's time to export, without dropping 14 percent of the small 8-bit bandwidth just for the display adaption.
Now import this (zipped) AVC grayscale into Vegas Pro and into PPro and take a look at the histogram.
"To me this only says some better, easier to use output options might be fine. I can't see what's wrong with - for processing until it's time to export - using a 0-255 signal just as it is and without dropping 14 percent of the small 8 bit bandwidth just for the display adaption."
I'm slightly lost with what you're saying here.
Something has to be done to get the levels from JPEGs to work correctly with video, especially if you're mixing the two in the one project.
If the output intention is cRGB levels I could probably find a sequence setting in Ppro that'll preserve them. What Ppro is doing automatically is no different to what new Vegas users (hopefully) eventually learn to do.
"Now import this (zipped) AVC grayscale into Vegas Pro and into PPro and take a look at the histogram."
OK, 1280x720x12, where does that 12 come from?
Ignoring that, both Vegas and Ppro decode it as a 16 to 255 gradient.
Comparing the two monitors after applying an sRGB to cRGB Levels FX to the Preview Monitor, both look the same within the limits of my ancient eyeballs.
Vegas's monitor, however, now displays noticeably more banding than Ppro's.
I can't find a Histogram in Ppro; the one in Vegas does show the gaps which are no doubt causing the banding. The Vegas waveform monitor shows more staircasing than the Ppro one, which correlates with what I'm seeing on the monitors.
I haven't tried to render both of them out; I can tomorrow if you like, but I doubt that'll reveal anything startling.
I'm surprised PPro does not map this signal to 0-255 (which means in this case even PPro lacks the display adaption). And I don't know why there's not the same kind of banding after you manually made an sRGB to cRGB conversion on both systems.
I doubt there is a way to avoid this kind of banding when mapping 16-235 to 0-255 simply in the decode process (the way you usually reduce banding in compositing processes is by adding noise).
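Here's a little sketch of that noise trick (assumptions: an 8-bit studio-range ramp, a plain linear expansion, and uniform ±0.5 dither before re-quantizing). The dither fills in the missing codes, trading visible steps for fine grain:

```python
import numpy as np

rng = np.random.default_rng(0)
codes = np.repeat(np.arange(16, 236), 64).astype(np.float64)  # 8-bit studio ramp
expanded = (codes - 16.0) * 255.0 / 219.0                     # remap to 0-255
plain = np.round(expanded).astype(np.uint8)
dithered = np.round(np.clip(expanded + rng.uniform(-0.5, 0.5, codes.size),
                            0, 255)).astype(np.uint8)
print(np.unique(plain).size, "distinct codes without dither")  # 220: gaps remain
print(np.unique(dithered).size, "distinct codes with dither")  # ~256: gaps filled
```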
"I can't find a Histogram in Ppro, the one is Vegas does show the gaps which are no doubt causing the banding."
Yes, these gaps I find in any systems or players which decode input signals using the 0-255 mapping. And this is one reason I do not want Vegas Pro to automatically use such a process from the scratch just by the way it decodes my inputs.