I have an EOS R so I'm trying C-Log out. It looked decent after editing, but the result seemed kind of muddy. Are there specific project settings I should change for log files? Also, I'm rendering for YouTube. What would be ideal render settings?
You will want to work in a 32-bit float project space with an appropriate C-Log LUT. Note that an 8-bit render will introduce some shadow noise, which fortunately can safely be clipped out.
@jd-g I was thinking about that camera myself a little while back and noticed they provide Gamma Transfer 1D and Gamut / Gamma Transfer 3D lookup tables (LUTs) here:
You probably want to use the one for CanonLog if that's the version your camera uses (which, if I recall, is 4:2:0 color, and which I personally prefer over 4:2:2)... it's in this directory of the zip:
Thanks. Upon more research people are saying it's not wise to shoot log in 8 bit anyhow, so I may invest in an external recorder to see if that improves things.
They contain extra information to unpack the flat logarithmic curve to ten bits.
Unless I'm missing something, I'm referring to things I've read/watched like this:
"An 8 bit camera captures 2^8 (256) shades per channel. That's about 16 million colors in total. A 10 bit camera captures 2^10 (1024) shades per channel, which is about one billion colors. That's 64 times more color information. If you play the footage straight on your monitor without grading, you won't notice the difference, because your monitor is 8 bit only. But when you grade your footage, you start to lose color information. The goal is to end up with 8 bits worth of color. If you start with 8 bits before grading, there is no way to end up with 8 bits at the end. So you will always see the destruction caused by grading on 8 bit footage, while you won't see it at all on 10 bit until you start grading it extremely heavily."
"Having 4:2:2 will help you pull better keys. If you record 8 bit don't record in Log or else you will see a lot of banding artifacts."
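The arithmetic in the first quote is easy to verify. A minimal sketch (assuming 3 color channels and 2^bit_depth shades per channel, as the quote does):

```python
# Bit-depth arithmetic from the quote: shades per channel = 2**bit_depth,
# total colors = shades**3 for an RGB image.
shades_8 = 2 ** 8             # 256 shades per channel
shades_10 = 2 ** 10           # 1024 shades per channel

colors_8 = shades_8 ** 3      # total 8-bit colors
colors_10 = shades_10 ** 3    # total 10-bit colors

print(colors_8)               # 16777216  (~16.7 million)
print(colors_10)              # 1073741824 (~1.07 billion)
print(colors_10 // colors_8)  # 64, i.e. "64 times more color information"
```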
I don't recommend delivering 1.0 gamma log files either. That's not what they are for. I also suggest you stick with 8 bit 4:2:0 from door to door until you are delivering 10 bit HDR. You are also reading some things into what you saw that just aren't there.
This is an 8 bit box (at 2.2 gamma). It can store its maximum density of bits at 1.0 gamma (y=x), which by definition is a straight diagonal line, and as high as 10 bits per pixel in an undesirable, flat viewing space.
The green curve is expressed as gamma 2.2. That is your viewing gamma, not storage gamma. Even if you're not comfortable with logarithms, the formulas y = x^2.2 and its inverse y = x^(1/2.2) should make some sense to you.
In order to unpack 1.0 gamma LOG files into viewable delivery files, they must now be rendered into a 10 bit box, or else limited in some way (clipped or compressed) to deliver normal 8 bit 2.2 gamma.
That's what it is, and a more or less reliable list of Vegas and video tutorials is here.
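The "8 bit box vs. 10 bit box" point above can be sketched numerically: quantize the bottom 10% of a linear ramp (the shadows) at different bit depths and encoding gammas, and count how many distinct codes survive. This is illustrative only; real C-Log uses Canon's own transfer curve, not a pure power function.

```python
# Count distinct quantized codes covering the darkest 10% of linear light,
# stored either "flat" (gamma 1.0, as in a log-style file) or encoded with
# y = x^(1/2.2) before quantizing. Pure-power gamma is an assumption here.
def shadow_codes(bits, gamma=1.0, steps=2000):
    levels = (1 << bits) - 1
    codes = {round((i / steps * 0.1) ** (1.0 / gamma) * levels)
             for i in range(steps + 1)}
    return len(codes)

print(shadow_codes(8, gamma=1.0))   # flat storage in an 8 bit box: few shadow codes
print(shadow_codes(8, gamma=2.2))   # 2.2 encoding stretches the shadows: many more
print(shadow_codes(10, gamma=1.0))  # a 10 bit box recovers headroom even when flat
```

This is why a gamma 1.0 file wants the 10-bit box: the 2.2 encode spends its codes where your eye looks, while flat storage in 8 bits starves the shadows.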
I still have a lot to learn, thanks for the info. So are you saying if I purchased an external recorder that records 10-bit 4:2:2, I wouldn't notice much of a difference grading the LOG footage?
If you mean will you notice the difference looking at the rendered output, then the answer is no. The difference between the color sub-samplings is based on human perception, which is most sensitive to gradations of green and about 50% less sensitive to variations in red and blue. That corresponds exactly to 4:2:0, which reduces bandwidth by 50% by leaving out gradations you cannot perceive. 4:2:2 oversamples in excess of human perception, but less than complete 4:4:4 (one r, g, and b sensor element per pixel), to reduce bandwidth by 33%. Where you will notice the difference is when you play and edit, because the extra load created by 4:2:2 will slow down rendering and viewing quite noticeably. And 4:2:2 material generally looks worse in display previews, typically more grainy, though it looks just as good as 4:2:0 in the final render. The complete stupidity of 4:2:2 and above formats is that every single camera sensor in current production has its physical elements in a 4:2:0 layout (alternating g-r and g-b pairs), and the only way to generate the data that isn't there is to guess, which is exactly what the fancy smoke-and-mirrors debayering algorithms do. Now if your audience is the Bluebottle Butterfly, all bets are off... but the bad news is they have 3x more green cones in their eyes than we do.
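The bandwidth figures above fall straight out of the J:a:b notation, where over a 2-row, J-pixel-wide block you keep J luma samples per row, `a` chroma samples in the first row, and `b` in the second. A quick sketch:

```python
# Relative bandwidth of a J:a:b chroma subsampling vs. full 4:4:4,
# counted over a 2-row by J-column block with two chroma planes (Cb, Cr).
def relative_bandwidth(j, a, b):
    luma = 2 * j              # one luma sample per pixel, both rows
    chroma = 2 * (a + b)      # Cb + Cr samples for the block
    full = 2 * j * 3          # 4:4:4 reference: 3 samples per pixel
    return (luma + chroma) / full

print(relative_bandwidth(4, 4, 4))  # 1.0   -> 4:4:4, no reduction
print(relative_bandwidth(4, 2, 2))  # ~0.67 -> 4:2:2, the 33% reduction
print(relative_bandwidth(4, 2, 0))  # 0.5   -> 4:2:0, the 50% reduction
```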
OK, thanks. I kind of see what you mean.
So if you take a look at this clip : , besides the fact that the white balance/exposure was inconsistent, something just doesn't look right. If it's not the fact that it was shot 8-bit LOG, then I'm thinking it's either 1. YouTube's compression, 2. my render settings, or 3. my own improper exposure of the log footage.