Do I really need to de-log video?

Rich Parry wrote on 4/19/2016, 1:47 AM
I believe the following is true:

Camera manufacturers have developed various methods to generate flat (low contrast) video to give the most CC (Color Correction) flexibility in post. Sony has S-Log, DJI drones have D-Log, Canon has C-Log, GoPro has Protune, and so on.

The camera manufacturers and third parties also offer LUTs and tools to “de-log” the footage. For example, GoPro offers GoPro Studio, DJI offers the DJI Transcoding Tool, etc. These tools may convert the original 4:2:0 footage to 4:2:2, expand 8-bit video to 10-bit, and increase the file size 5x or 10x. They may also output ProRes or some other high-quality intermediate codec.
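For illustration, here is a rough Python sketch of what the “de-log” step does mathematically. The curve constant is made up for the example (it is not Sony's, DJI's, Canon's, or GoPro's actual formula); the point is only the shape of the operation: invert the camera's log curve back to linear light, then re-encode for an ordinary 2.2-gamma display, which is also what you approximate by hand with Color Curves.

[code]
# Rough, illustrative sketch only: a made-up logarithmic transfer curve, not
# any manufacturer's actual log formula. A "de-log" LUT or tool essentially
# inverts the camera's log curve back to linear light and then re-encodes it
# for an ordinary 2.2-gamma display.
import numpy as np

B = 45.0  # hypothetical curve constant, chosen only so the example looks "flat"

def log_encode(linear):
    """Camera side: squeeze a wide linear range into 0..1 code values."""
    return np.log1p(B * linear) / np.log1p(B)

def log_decode(code):
    """The 'de-log' step: undo the curve to recover scene-linear values."""
    return np.expm1(code * np.log1p(B)) / B

def to_display(linear, gamma=2.2):
    """Re-encode linear light for a standard 2.2-gamma display."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

scene = np.array([0.0, 0.02, 0.18, 0.5, 1.0])   # linear scene values (0.18 = mid grey)
flat = log_encode(scene)                        # what the flat footage records
print(flat)                                     # mid grey lands way up around 0.58
print(to_display(log_decode(flat)))             # roughly what a de-log LUT hands back
[/code]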

I’ve played with de-logging tools and LUTs, but find bringing the original flat footage into Vegas and CC (Color Correcting) using Color Curves gives me satisfactory results without adding steps to my workflow.

I think I am missing something. Folks I respect here on the forum talk about the importance of LUTs and transcoding to other codecs (MXF Long or Intra or MOV ProRes, etc.). What am I missing when I import “logged” (flat) video straight into Vegas and use Color Curves to de-log and CC my video?

I realize a LUT or tool that was specifically designed to de-log a video may provide optimum results, but even in those cases, the video needs color correction tweaking.

So the question is, why use LUTs or conversion tools? Why not just use Vegas to color correct, especially since you tweak the video after de-logging anyway? Why add extra steps and extra generations of video?

What am I missing … comments welcome,
Rich

CPU Intel i9-13900K Raptor Lake

Heat Sink Noctua NH-D15 chromax.Black

MB ASUS ProArt Z790 Creator WiFi

OS Drive Samsung 990 PRO  NVME M.2 SSD 1TB

Data Drive Samsung 870 EVO SATA 4TB

Backup Drive Samsung 870 EVO SATA 4TB

RAM Corsair Vengeance DDR5 64GB

GPU ASUS NVIDIA GeForce GTX 1080 Ti

Case Fractal Torrent Black E-ATX

PSU Corsair HX1000i 80 Plus Platinum

OS Microsoft Windows 11 Pro

Rich in San Diego, CA

Comments

musicvid10 wrote on 4/19/2016, 9:44 AM
The whole idea is to pack more dynamic range (bits) into the container. Extracting that data is not "converting"; it is more like unpacking a parachute.
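To put rough numbers on the "packing" idea, here is a back-of-the-envelope sketch. Both transfer curves in it are simplified stand-ins (a pure 2.2 gamma versus a made-up log curve), not real camera math, but they show the pattern: a gamma encode spends most of its 256 codes on the brightest stops, while a log encode hands out roughly the same number of codes to every stop.

[code]
# Back-of-the-envelope sketch of "packing" stops into 8 bits. Both curves are
# simplified stand-ins (pure 2.2 gamma vs. a made-up log curve), not real
# camera math; the pattern is the point.
import numpy as np

def gamma_codes(lin):
    return np.round(255 * np.clip(lin, 0, 1) ** (1 / 2.2))

def log_codes(lin, B=45.0):
    return np.round(255 * np.log1p(B * np.clip(lin, 0, 1)) / np.log1p(B))

bounds = 1.0 / 2 ** np.arange(7)   # 1.0, 0.5, 0.25, ... each one stop apart
for lo, hi in zip(bounds[1:], bounds[:-1]):
    print(f"stop down to {lo:7.4f}:  gamma codes {gamma_codes(hi) - gamma_codes(lo):4.0f}"
          f"   log codes {log_codes(hi) - log_codes(lo):4.0f}")
[/code]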

You can work with native (flat) S-log or Protune footage on the timeline and add some pop back to it, maybe even get something resembling compliant 2.2 gamma in the output. But if your delivery is 8 bit 4:2:0, why even shoot flat log formats in the first place? It's a waste of time and space because the extra data that was gained in shooting is all lost.

98% of us don't need this stuff yet anyway, and it's only "better" if we have the means to deliver and play 10-bit 4:2:2 natively. Although those numbers are growing, it's many years away from becoming a mainstream format, and many more before we all have fiber streaming at home.

Unpacking flat-log video and recompressing it to 2.2 gamma for normal 8 bit delivery is about as gratifying as p**ing into the wind. That is an opinion.
wwjd wrote on 4/19/2016, 10:27 AM
what he said.

Shoot LOG ONLY if you are making a movie, OR if you plan to do lots of color grading where you want to keep the dark details that log captures.
Rich Parry wrote on 4/21/2016, 1:07 AM
I NEVER expected responses saying that "log video" is unnecessary in most/many cases. I thought everyone was shooting some form of log video, then de-logging with LUTs and transcoding to intermediate codecs. My work is strictly amateur grade for Vimeo, so it sounds like I was making extra work for myself.

thanks,
Rich

farss wrote on 4/21/2016, 2:53 AM
[I]"But if your delivery is 8 bit 4:2:0, why even shoot flat log formats in the first place? It's a waste of time and space because the extra data that was gained in shooting is all lost."[/I]

It gives you more exposure latitude and more to work with when grading for 8 bit 4:2:0 delivery.
Of course it only makes sense with a camera whose sensor has lots of dynamic range. On cheaper cameras it's not just a waste of effort - unless you're very careful, the outcome can be worse than shooting a normal gamma.

Bob.
Serena Steuart wrote on 4/21/2016, 3:06 AM
You don't need to shoot log if you're happy with the results you get with a more or less linear gamma. Some people like images that don't have blown highlights and choked shadows, so they capture using an extended gamma curve able to record a dynamic range of 11+ stops - still much less than real-world dynamic range, but enough to hold a lot of detail in both shadows and highlights. It's all about capturing enough data to render good images, and the greater the bit depth, the more flexibility you have in processing. Log is a particular form of extended gamma curve.

For display, that curve has to be reconfigured to Rec. 709. You can do this with curves or a LUT, and this is the point at which you choose how your final 8-bit images represent the original scene. You don't have many options if you haven't recorded the data.
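To make the "curves or a LUT" point concrete: a 1D LUT is essentially the same correction curve, pre-sampled into a table and applied to every pixel by lookup and interpolation. A minimal sketch, assuming a made-up log curve and a plain 2.2-gamma target rather than any vendor's published LUT:

[code]
# Minimal sketch: a 1D LUT is the correction curve pre-sampled into a table,
# then applied per pixel by lookup/interpolation. The transform below is a
# placeholder (undo a made-up log curve, re-apply 2.2 gamma), not a real
# camera LUT.
import numpy as np

B = 45.0
def delog_to_rec709ish(code):
    linear = np.expm1(code * np.log1p(B)) / B    # invert the assumed log curve
    return np.clip(linear, 0, 1) ** (1 / 2.2)    # simple 2.2-gamma re-encode

samples = np.linspace(0.0, 1.0, 1024)            # sample the curve once...
lut = delog_to_rec709ish(samples)                # ...this table *is* the 1D LUT

flat_frame = np.random.rand(1080, 1920).astype(np.float32)  # stand-in luma plane
graded = np.interp(flat_frame, samples, lut)     # per-pixel table lookup
[/code]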

Wolfgang S. wrote on 4/21/2016, 4:42 AM
"I’ve played with de-logging tools and LUTs, but find bringing the original flat footage into Vegas and CC (Color Correcting) using Color Curves gives me satisfactory results without adding steps to my workflow.

I think I am missing something. Folks I respect here on the forum talk about the importance of LUTs and transcoding to other codecs (MXF Long or Intra or MOV ProRes, etc.). What am I missing when I import “logged” (flat) video straight into Vegas and use Color Curves to de-log and CC my video?"

First of all: yes, you can develop x-log footage with tools like Color Curves or Levels in Vegas too. There is nothing wrong with that. Manual development of x-log is also mentioned as a valid approach in the Color Correction Workbook, which I think is a standard reference for colorists. But you can also use camera LUTs.

Second: it is easier to work with camera LUTs (we are not talking about LUTs that develop a look). Within Vegas there are some plugins available if you want to apply a LUT in 8-bit mode. One is the free VisionColor plugin; its only disadvantage is low playback performance. The second is integrated in Magic Bullet Looks 3 - much better than the VisionColor plugin. The third, with the highest playback performance I know of, is the LUT plugin delivered with HitFilm.

I typically apply these LUTs to the whole track, and on the individual events I add the Levels and Color Corrector filters - both to adjust the luma to the 0-100% range (16-235) using the waveform monitor.
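The levels step is simple arithmetic underneath. A rough sketch of mapping full-range 8-bit luma into the 16-235 studio range (this is only the underlying math, not exactly what the Vegas Levels plugin computes internally):

[code]
# Rough sketch of the levels step: scale full-range 8-bit luma (0..255) into
# the 16..235 studio range that 0-100% on the waveform corresponds to. Only
# the underlying arithmetic, not Vegas' internal implementation.
import numpy as np

def full_to_studio(y):
    y = np.asarray(y, dtype=np.float32)
    return np.round(16 + (y / 255.0) * (235 - 16))

print(full_to_studio([0, 128, 255]))   # -> [ 16. 126. 235.]
[/code]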

Another way would be to use the Vegas-internal LUTs that are available in 32-bit floating point mode, or Balzar's approach. The major issue is that previewing in 32-bit floating point mode demands a lot of PC performance; even with my 8-core system I tend to use the LUT plugins and edit in 8-bit (switching to 32-bit to render the footage out).

Third: whether you work with x-log or not is up to you. And you should not follow mystical rules claiming that it is good for cinema only!

Today a lot of cheaper prosumer cameras are equipped with x-log (the GH4 or DVX200 with V-Log L, the Sony a6300 with S-Log3, and many more to come). There are valid reasons to shoot 10-bit if you use x-log - which is possible today with cameras like the FS7, or with combinations like the GH4/DVX200 and recorders like the Shogun - but it can be done with 8-bit footage too; you just won't have the comfort of 10 bits or more.

The reason to shoot x-log and use LUTs is quite simple: you make better use of the sensor's dynamic range compared to shooting Rec. 709 directly. But nothing is free - x-log requires work in post-production, which takes time and effort.

If you use external recorders, as I do with the Shogun, you also have the issue that Vegas will not decode ProRes as 10-bit - only as 8-bit. So what I do is either convert the footage to 10-bit Cineform using TMPGEnc 6 and edit that in Vegas, or transcode the ProRes footage with TMPGEnc 6 to 10-bit 4:2:2 H.264 (which TMPGEnc offers), grade it in Catalyst Edit, and export to XAVC, which can be edited in either Vegas or Catalyst Edit.

Whether you do that or not is up to you, I think. A lot of users have moved to Resolve to grade their footage, some will move to Edius, and some do it in Catalyst Prepare from SCS (especially FS7 users, I think). And there are certainly plenty of other tools that support this (Adobe, for sure, but also the other large NLEs).

In the future it may become even tougher: with the upcoming 4K raw option for the FS5 and recorders like the Shogun Inferno/Odyssey 7Q+ with raw support, a lot of users will tend to follow the FS7 raw workflow (record the 4K 50p/60p raw to ProRes). The URSA Mini changes the game too - but those users will tend to choose the Resolve path, I think. Canon users may follow more and more as well; a new version of C-Log was shown at NAB. And Panasonic has announced new cameras like the UX180 with UHD 50p/60p, but we don't yet know whether they will use V-Log L on these units.

These are the trends we are seeing today, and they are not for cinema only. They will reach smaller corporate image videos, end up in smaller advertising spots, and may also be used by prosumers like me who do this just for fun. :)

So your decision.

Desktop: PC AMD 3960X, 24 x 3.8 GHz * GTX 3080 Ti * Blackmagic Extreme 4K 12G * QNAP Max8 10 Gb LAN * Blackmagic Pocket 6K/6K Pro, EVA1, FS7

Laptop: ProArt Studiobook 16 OLED (i9 12900H with iGPU Iris Xe, 32 GB RAM, GeForce RTX 3070 Ti 8GB) with internal HDR preview on the laptop monitor. Blackmagic UltraStudio 4K Mini

HDR monitor: ProArt Monitor PA32 UCG, Atomos Sumo

Rich Parry wrote on 4/24/2016, 2:46 PM
Wolfgang,

I just want to acknowledge your response; I'm a little overwhelmed. I'm fairly technically savvy (electrical/software engineer), but I have a hard time wrapping my head around video.

I'm happy with the amateur video I shoot, but that may be more due to ignorance than expertise. As I learn more and know what to look for, I see areas where my videos could have been better.

Thanks again, you and others have given me a lot to think about.

Rich

musicvid10 wrote on 4/25/2016, 11:50 AM
To all who would unflatten the video with Vegas curves and your eyes:
It would seem a better idea to start with a LUT that approximates standard viewing gamma and work 0.10 in either direction than to start at flatline. You can create some nice stuff on your monitor, but it won't translate well at all to the billions of average screens out there.