I apologize in advance; I know this is quite a well-worn subject, but I've read everything about it I could find, only to end up just as confused as before. The fact that my new Eizo monitor has sRGB, Adobe RGB etc. presets doesn't make it easier. I can actually get a normal (good-looking) picture when setting the monitor to sRGB while having the preview set to computer RGB; I'm just not sure if that makes any sense. I just want my footage to look good...
So, this is what I think I've understood so far:
In Vegas, I work in computer RGB, and my files from the camera (these days, mostly) come in the 0-255 range... right?
Vegas displays that exact range and lets me use superblacks and superwhites, making things look nice and full.
Now, I understand there are devices that can't reproduce that range but stop at 16 and 235, making my nice and full footage look rather washed out and grey.
So: if I work and grade in sRGB, narrowing my visible range, my footage looks flat, and I have to somehow compensate by still trying to get a nice, full, contrasty image even though my range is narrower? Or do I just grade my computer RGB footage "visually" in sRGB, to make sure my 0 and 255 don't look too bad in case they're viewed in sRGB, while actually keeping the levels at 0-255?
And say I render in sRGB: when the rendered files are viewed on computers, aren't they displayed 0-255 anyway? If so, why should I squeeze things down to 16-235 only to have them re-expanded to full range afterwards? Why should I cut off my superwhites and superblacks if the camera delivers them in the first place? Isn't sRGB an outdated standard by now? Until now I never even bothered, because I've never seen a significant difference in my renders when viewed on different screens or on the internet.
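To make sure I'm at least doing the arithmetic right, here's a little sketch of the two level conversions as I understand them (the 16/235 limits and the 219/255 factor are just the standard studio-swing numbers; the function names are mine):

```python
def computer_to_studio(v: int) -> int:
    """Squeeze full-range computer RGB (0-255) into studio range (16-235)."""
    return round(16 + v * 219 / 255)

def studio_to_computer(v: int) -> int:
    """Expand studio range (16-235) back out to 0-255; anything outside
    16-235 (superblacks/superwhites) gets clipped to 0 or 255."""
    return max(0, min(255, round((v - 16) * 255 / 219)))

if __name__ == "__main__":
    # The "washed out" case: studio-range footage shown unexpanded on a
    # 0-255 display puts black at 16 (dark grey) and white at 235 (light grey).
    print("studio black/white shown unexpanded:", 16, 235)

    # The round trip I keep wondering about: squeeze to 16-235, then re-expand.
    # Plain 0-255 values survive, give or take a rounding step...
    for v in (0, 64, 128, 255):
        s = computer_to_studio(v)
        print(v, "->", s, "->", studio_to_computer(s))

    # ...but if full-range footage is NOT squeezed and a player expands it anyway,
    # everything below 16 and above 235 simply gets clipped away.
    for v in (0, 10, 16, 235, 245, 255):
        print("expanded without squeezing first:", v, "->", studio_to_computer(v))
```

At least on paper, the squeeze-and-re-expand round trip looks harmless, so I still don't see what I'd actually gain by doing it.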
Questions, questions... boy, I really don't get it.
My neurons feel rather entangled right now, but I've ploughed through forums and articles about this and I'm still stuck, so any help is greatly appreciated.
Thanks!