RGB numbers are a logarithmic representation, not a linear one.
If you had a pixel at RGB 3,3,3 and reduced it by your example, the result would be -7, -8, +6, which is undefined by most colorspace standards. Reducing the shadows by the same linear values as the highlights would have a very odd look, indeed!
I find that the GIMP image editing application does a good job in color correcting (auto white balance).
I am able to determine the changes that it does, and I was hoping that I could just apply these to video using Vegas.
That's the reason for the original posting.
I put a very small grey box on the image (RGB 128, 128, 128), run the auto white balance, then check what colour (RGB) the grey box is afterwards using the eyedropper.
I can see the change in both the RGB and also the HSL values for the box.
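In case it helps, here is a minimal sketch of the kind of arithmetic I mean. It is not GIMP's actual auto white balance algorithm; it just turns the before/after readings of the grey box into per-channel multipliers that could then be dialled into another tool. The readings below are made-up placeholders.

```python
# Minimal sketch: turn a grey-patch reading before and after auto white balance
# into per-channel gains. The readings are placeholders, not real measurements,
# and this is not GIMP's internal algorithm.

def gains_from_patch(before, after):
    """Per-channel multipliers that map the 'before' patch reading to the 'after' reading."""
    return tuple(a / b for a, b in zip(after, before))

def apply_gains(pixel, gains, max_val=255):
    """Apply the gains to one RGB pixel, clipping to the legal range."""
    return tuple(min(max_val, max(0, round(p * g))) for p, g in zip(pixel, gains))

before = (128, 131, 122)   # eyedropper reading of the grey box before auto WB (hypothetical)
after  = (128, 128, 128)   # reading after auto WB (hypothetical)

gains = gains_from_patch(before, after)
print(gains)                                # per-channel correction factors
print(apply_gains((200, 205, 190), gains))  # the same correction applied to another pixel
```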
"RGB numbers are a logarithmic representation, not a linear one.
It's... not logarithmic."
It's easy to say what they are not, but not so easy to say what they are.
I suspect every camera will have its own slight variation of the law. I explored the law of my Nikon DSLR recently and found that for RGB values from near zero to about 100 the brightness was approximately log-log (gamma law) and for RGB values from about 60 to about 200 the brightness was approximately logarithmic (or linear against exposure stop values).
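For what it's worth, here is a rough sketch of the kind of test I mean, assuming you have pairs of (exposure in stops, measured code value) from bracketing a grey patch. The sample numbers are placeholders, not my Nikon's actual data.

```python
# Rough sketch of the test described above: a gamma law shows up as log(code)
# being linear in exposure stops, while a logarithmic law shows up as the code
# value itself being linear in stops. The sample values are placeholders.
import numpy as np

stops = np.array([-4.0, -3.0, -2.0, -1.0, 0.0, 1.0])       # exposure relative to a reference
code  = np.array([18.0, 35.0, 62.0, 105.0, 160.0, 215.0])  # measured 8-bit values (hypothetical)

def line_fit_rms(x, y):
    """RMS residual of a straight-line fit of y against x."""
    slope, intercept = np.polyfit(x, y, 1)
    return float(np.sqrt(np.mean((y - (slope * x + intercept)) ** 2)))

print("gamma-law check (log code vs stops):", line_fit_rms(stops, np.log(code)))
print("log-law check   (code vs stops):    ", line_fit_rms(stops, code))
# Whichever fit has the smaller residual over a given range of code values is
# the better description of the camera's curve over that range.
```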
I deliberately oversimplified. My point is that the relationship is not linear, so a shift of -10, -11, +3 will have quite a different perceived effect at one part of the scale than at another. Of course the colors are weighted differently, and equipment is a huge factor, as Peter points out. We use many log-scale representations of light (and other) energy in the real world, parallel examples being lumens, densitometry units, EV and f-stops.
Conventional color correction theory would have us anchor one or both endpoints rather than performing a linear y-shift, which is what the OP is proposing.
In its simplest sense, white balance correction anchors at the black threshold (think windshield wiper). It is not an across-the-board shift.
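To put toy numbers on that (my own illustration, not any plugin's actual math): a gain anchored at black barely moves the shadows and does most of its work toward white, whereas a flat shift moves everything by the same amount and drives the near-blacks out of range.

```python
# Toy comparison of a flat RGB shift versus a gain anchored at black
# (the "windshield wiper"). My own illustration, not any plugin's actual math.

def flat_shift(value, offset):
    """Add the same offset everywhere on the scale (the across-the-board shift)."""
    return max(0, min(255, value + offset))

def anchored_gain(value, gain):
    """Scale about black: 0 stays 0 and the correction grows toward white."""
    return max(0, min(255, round(value * gain)))

offset = -11                  # say, a blue correction measured at mid-grey (hypothetical)
gain = (128 + offset) / 128   # the gain that produces the same change at 128

for v in (3, 64, 128, 192, 250):
    print(v, flat_shift(v, offset), anchored_gain(v, gain))
# Near black the flat shift clobbers the value (3 -> 0) while the gain barely
# touches it; near white the flat shift applies less correction than the gain.
```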
I also wouldn't WB on a gray patch, for reasons of nonlinearity (both reflected and source) touched on in the article.
There are a few WB plugs for Vegas that will make your life simpler. Check F. Bauman's page, also NewBlue. A free one mentioned somewhere on the forums isn't "too" bad. Newer versions of Vegas have the WB tool built in.
The primary color corrector in Vegas has eyedroppers and full RGB control over highs, mids, and shadows if you're willing to dig into it. The Curves tool is even more capable. Both are easy to screw up, even for experienced types.
As a point of comparison, here is Bob's method applied to my "Shirley" showing the unnatural color bias in the shadows, which you probably don't want.
The logarithmic properties of the RGB scale are explained in detail and illustrated in an easy-to-read article here:
I really don't understand that article or what you're saying.
If you need to get work done, I would just stick to the simple answer that I give above. (It's possible that 3rd party plug-ins do a better job... unfortunately I haven't looked into them.)
"I really don't understand that article or what you're saying."
My head nearly exploded trying to ponder the relevance of that to the topic at hand.
After I expunged that and all things "gamma" I think I understand what Musicvid is trying to get at. What further threw me off the scent was that I used to use a grey card in a reference shot, so the lab could get a precise color match, back in the days when I was taking stills with film.
The problemo is that WB affects the gain of the RGB channels, and in theory it has no effect on the blacks and maximum effect on the whites, hence the "windscreen wiper" effect. That means the offset values measured at 50% (from the grey card) will correct the mid tones correctly, the white end of the scale not enough, and the black end way too much. You could get the correct values using a linear interpolation; the whole logarithmic gamma thing is irrelevant, because the RGB values are linear in effect.
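To make that concrete (with a made-up offset value), interpolating the mid-grey offset linearly from zero at black to double at white is exactly the same thing as applying a channel gain:

```python
# Toy illustration of the linear interpolation described above: scaling the
# mid-grey offset in proportion to the pixel value is identical to a channel
# gain. The offset value is made up for the example.

offset_at_mid = -11          # hypothetical error measured on the 50% grey card

def interpolated_correction(x):
    """Offset grows linearly from 0 at black to twice the mid offset at white."""
    return x + offset_at_mid * (x / 128.0)

gain = (128 + offset_at_mid) / 128.0

for x in (0, 64, 128, 192, 255):
    print(x, round(interpolated_correction(x)), round(x * gain))
# The two columns agree: the "linear interpolation" is just a gain on the channel.
```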
The whole problem is that it's fliggin unlikely anything is really black in a shot. I know, I've tried hard to shoot against an absolute black background, even after dialing out the setup in the camera. Good luck getting anything in a normal scene to read 0 IRE.
Further now that I have a monitor with some semblance of calibration I'm starting to realise just what a black art color correction is. I find the "low" wheel in the three way CC tool rarely gets it right so I copy the angle from the mid or high wheel and adjust the gain of the low wheel until my "almost" blacks look correct.
Your reference to film labs using 18% gray as a reference is further complicated by the fact that color film and print curves are necessarily anchored at the positive white end (think clock pendulum), thus the horrendous color errors in photographic near-blacks caused by everything from lighting to old film to leuco cyan in the chemistry. The only way we could tilt curves or bump gamma in those days was with complex film masks, often taking days of tests just for one image, and which usually ended up doing more harm than good.
Any utility that adjusts brightness or does colour correction should first convert the RGB values (0-255) to linear brightness, then scale the channels and then convert back again.
I suspect many tools in many apps take a short cut and don't do it properly, by doing simple operations directly on the RGB values or assuming a simple log or gamma law. As I have said, for my DSLR camera, and I suspect most other cameras as well, the law is not so simple.
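A sketch of that idealized pipeline, assuming a plain 2.2 gamma (which, as I said, is exactly what real cameras don't quite follow):

```python
# Textbook version of "convert to linear, scale, convert back", assuming a plain
# 2.2 gamma. A real camera's curve is usually not this simple, so treat this as
# the idealized pipeline rather than what any particular app actually does.

GAMMA = 2.2

def to_linear(code):
    """8-bit code value to linear brightness (0..1), assuming a pure power law."""
    return (code / 255.0) ** GAMMA

def to_code(linear):
    """Linear brightness (0..1) back to an 8-bit code value, clipped."""
    return round(255.0 * min(1.0, max(0.0, linear)) ** (1.0 / GAMMA))

def scale_channel(code, linear_gain):
    """Apply a gain in linear light, then re-encode."""
    return to_code(to_linear(code) * linear_gain)

print([scale_channel(v, 1.5) for v in (16, 64, 128, 192, 240)])
```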
I sometimes underexpose when taking photos with my DSLR to ensure that I don't burn out small but important highlight details, and then correct the underexposure (less the highlights) when I get home. My image editor allows me to save a custom adjustment of the tone curve, which I have done for a range of EV values from 0.33 to 2.0.
I have tested and confirmed that a corrected underexposed image is virtually the same as a correctly exposed one of the same scene. Of course they won't be exactly the same down in the lowlights, due to quantizing, noise and deviations from the true law in my model.
A typical tone curve (in the 0-255 domain) to correct exposure is the shape of a dog's hind leg (not a simple curve to describe)! If you are doing colour correction you would have three such curves to contend with.
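For comparison, here is what an exposure push would look like as a 0-255 curve if the encoding really were a plain power law; in that idealized case it is only a straight line that clips at the top, and the extra bends in a real correction curve come from the camera's departure from a simple gamma law. The gamma and stop values here are assumptions for illustration.

```python
# What a +1 EV push looks like as a 0-255 tone curve if (and only if) the
# encoding were a plain 2.2 power law. In that idealized case the curve is a
# straight line that clips at the top; the extra bends in a real camera's
# correction curve come from its departure from a pure gamma law.

GAMMA = 2.2
STOPS = 1.0     # amount of underexposure to compensate (assumed for illustration)

def push(code, stops=STOPS, gamma=GAMMA):
    linear = (code / 255.0) ** gamma               # decode to linear brightness
    linear = min(1.0, linear * (2.0 ** stops))     # multiply the exposure, clip highlights
    return round(255.0 * linear ** (1.0 / gamma))  # re-encode to a code value

curve = [push(v) for v in range(256)]
print(curve[::32])   # a few samples of the lookup table
```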
"Any utility that adjusts brightness or does colour correction should first convert the RGB values (0-255) to linear brightness, then scale the channels and then convert back again."
You don't really need to do that.
Suppose that the transfer function is a pure power law:
f(x) = x ^ gamma, for any exponent like 2.2.
Then you can just multiply away in the encoded domain. The following two expressions are equivalent whenever differentConstant = constant ^ (1/2.2):
constant * x^2.2
( differentConstant * (x^2.2)^(1/2.2) ) ^ 2.2  =  ( differentConstant * x ) ^ 2.2
In other words, applying a gain to the linear value is the same as applying that gain raised to 1/2.2 directly to the encoded value.
Suppose the camera uses the Rec. 709 transfer function instead. That curve is close to a pure power law apart from a short linear segment near black, so the result will be so similar that you don't need to worry about the difference.
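A quick numerical check of that point (my own example, assuming a pure 2.2 power law):

```python
# Quick numerical check of the point above, assuming a pure 2.2 power law:
# multiplying the encoded value by c**(1/2.2) gives the same result as
# decoding, multiplying linear light by c, and re-encoding.

GAMMA = 2.2
c = 1.5   # arbitrary gain in linear light

for x in (0.05, 0.2, 0.5, 0.9):                      # encoded values, normalized 0..1
    via_linear  = (c * x ** GAMMA) ** (1 / GAMMA)    # decode, scale, re-encode
    via_encoded = (c ** (1 / GAMMA)) * x             # scale the encoded value directly
    print(round(via_linear, 6), round(via_encoded, 6))
# The two columns match exactly for a pure power law; Rec. 709's linear toe
# near black introduces only a small difference.
```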
2- The problem in real life is that almost all cameras apply some type of knee algorithm to the highlights. That is why I point out the simple, practical solution.
And yes, some cameras do funky things at the bottom end of the tone scale to deal with noise, and other cameras may apply some type of s-curve so that the image looks better. In practice you just let this slide... unless you can shoot RAW.
Quite right, you don't, because as I said in my post I have set up my image editor to adjust brightness using the 0-255 values, and the necessary curve is shaped like a dog's hind leg (not an obvious shape). That is how you would implement it for speed, but in order to derive the curve you need to know the relationship to linear brightness, and that is not usually a straightforward log or gamma curve. I derived mine empirically from measurements.