ACES and ACEScc Intro and Tutorial for Vegas Pro

Comments

IAM4UK wrote on 7/22/2015, 6:29 PM
balazar, Thank you for this EXCELLENT resource and the tutorials!

I have been experimenting with these color space resources, and have a couple of questions and an observation:
Q1. To switch between working with ACES and working with ACEScc, is it sufficient to overwrite the .ocio file? It appears the contents of the ACES and ACEScc \lut subdirectories are identical.
Q2. Render times are high when using these, as you noted. Can one edit with unaltered color space settings, render intermediates (such as Sony .mxf), then sequence those intermediates in a final-render timeline and only apply all the color space settings/adjustments/corrections on those intermediates in that final-render timeline -- and get the same or similar output results?

Observation: For the Canon Vixia camcorder I have, the only Media Color space that provides usable output is "Rec.709 video file full_range." I trust that result is common for "generic" cameras, but did not see the "full_range" qualifier specified in the recommendations for general camera use.
balazer wrote on 7/22/2015, 6:58 PM
Thank you, IAM4UK and ushere, for the kind words.

After deploying one of my custom configs, you can switch between ACES and ACEScc just by changing the .ocio file. All of the luts are the same.

To improve response time while editing, my suggestion is to just turn off ACES by setting the view transform to 'Off'. Color space settings will have no effect when ACES is turned off. You can set the project to 8-bit to speed things up even more. If you've applied color correction filters in ACES, you might want to bypass them while editing with ACES turned off by clicking the split screen view button at the top of the preview window, with "FX bypassed" selected in the pull-down. Once your editing is done, you can turn ACES back on and re-enable video effects to do your color correction and final rendering. Don't forget that you can also reduce the preview resolution to Preview Half or Draft Full, to improve response time during editing.

You can also do as you've suggested, editing in a non-ACES project to intermediate files, and then assemble a final ACES project with your color correction. But I find that approach to be tedious, since you lose the boundaries between video events when you render out to an intermediate file. I usually like to apply color correction to each individual video event.

Please check Nick Hope's topic, Survey: What min/max levels does your cam shoot?, for survey results and instructions on how to check the video levels. With ACES disabled and the project pixel format set to 8-bit or 32-bit floating point (video levels), use a video scope to see the range of RGB values for your camera. If the range is 16-235 or 16-255, use a regular Rec.709 (not full_range) color space for the input media. If the range is 0-255, use a full_range color space for the input media. If the camera is recording to AVCHD, it's almost certainly not full_range. You can also use ffmpeg -i to check the color space: yuv420p is not full_range, and yuvj420p is full_range, at least if the camera set the full_range flag. These days most cameras are not full_range; GoPro, Apple, and older Canon EOS cameras are the few I know of that are. What exactly did you see when you used a non-full_range instead of a full_range color space? A non-full_range color space as the input media color space should show more contrast.
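The level check above reduces to a simple rule on the observed black level. A minimal sketch (the function name and return strings are illustrative only, not any Vegas or OCIO API):

```python
def suggest_input_space(y_min, y_max):
    """Suggest an input color-space family from the observed scope range,
    per the rules above: 16-235 or 16-255 -> regular Rec.709,
    0-255 -> full_range. (Illustrative helper, not a Vegas/OCIO API.)"""
    if y_min < 16:
        # Blacks below code 16 mean the camera records full range
        return "full_range"
    # Blacks at 16: studio range, even if superwhites reach 255
    return "Rec.709 (not full_range)"

print(suggest_input_space(16, 235))  # regular Rec.709
print(suggest_input_space(16, 255))  # still regular Rec.709 (superwhite)
print(suggest_input_space(0, 255))   # full_range
```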
IAM4UK wrote on 7/22/2015, 8:59 PM
Here are screencaps of the non-full_range and full_range tests:
https://dl.dropboxusercontent.com/u/96782274/nonfullrange.png
https://dl.dropboxusercontent.com/u/96782274/fullrange.png

The ACEScc, exposure compensation, color balancing, and contrast adjustment settings were identical in each of those two renders.
IAM4UK wrote on 7/22/2015, 9:20 PM
Another question: The Panasonic Lumix GH4 has a Luminance Level selection, allowing
0-255 or
16-235 or
16-255

Which of these should be the selected setting on the camera for recording, to get correct use of the GH4 profile in ACES or ACEScc? I assume it should matter, but perhaps not?
IAM4UK wrote on 7/23/2015, 12:46 PM
These tools are potentially very valuable, and I'm experimenting with them to learn more. It appears a new workflow may be required, something like this:

Create a .veg project for clips that require compositing; configure it for ACES colorspace.
Create a .veg project for clips that require or may use filters; configure it for ACEScc colorspace.
Edit all those clips in accordance with the tutorials.
Render those clips to intermediate files using something like Sony XAVC S with Sony Cine1 (HyperGamma 4) colorspace selected on the project tab. [EDIT, per feedback from balazer: use the ACESproxy color space in an XAVC Intra or Long file.]
Create a .veg project for assembly of all clips; configure it for either ACES or ACEScc and the matching colorspace of the intermediates.
Assemble the edit.
Render the final project output, using the colorspace on the project tab that meets your needs.


This might be cumbersome, and one must remember to use the correct .ocio file for each case; however, it allows an editor to overcome the trade-offs about compositing and filters (which are very, very real).
balazer wrote on 7/23/2015, 3:30 PM
IAM4UK, without knowing how that Vixia camera renders its images and how you did the color correction, it is hard for me to say if full_range or non-full_range is the correct color space for your camera. I'm still inclined to believe non-full_range is correct for a Vixia. What I can say is that when you choose Rec.709 as the input space and the output space, color values are passed through nearly unchanged, except for the level remapping that happens when you choose a full_range input space. In a non-ACES project, if you don't apply any levels filtering to remap levels (such as the "Computer RGB to Studio RGB" preset), it corresponds to working in ACES with a non-full_range input color space. The output Rec.709 video file color spaces in my configs all comply with Rec.709, using a Y' range of 16 to 235, and Pb and Pr ranges of 16 to 240. The view transforms use an output RGB range of 0 to 1, which gets converted to 0 to 255 in the preview window. On a full-screen monitor preview it's 0 to 255 also, and you should not check the "Adjust levels from studio RGB to computer RGB" box in the preview device preferences. All of this info about each of the color spaces is in the descriptions inside the .ocio file.
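The level remapping that a full_range input space triggers (and that the "Computer RGB to Studio RGB" preset performs in a non-ACES project) is just a linear rescale of 0-255 into 16-235. A minimal sketch of that math, assuming the standard 219-code studio range:

```python
def computer_to_studio(v):
    """'Computer RGB to Studio RGB': compress full-range 0-255 into 16-235."""
    return 16.0 + v * 219.0 / 255.0

def studio_to_computer(v):
    """Inverse remap: expand studio-range 16-235 back to full-range 0-255."""
    return (v - 16.0) * 255.0 / 219.0

print(computer_to_studio(0))     # 16.0
print(computer_to_studio(255))   # 235.0
print(studio_to_computer(235))   # 255.0
```

This is why choosing a full_range input space for studio-range media flattens the image: legal 16-235 values get squeezed a second time.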

Your Vixia may be utilizing the superwhite range above Y'=235. The ACES RRT Rec.709 input color spaces will not access those extended color values, since they are not defined in Rec.709. You may instead wish to try another input color space, such as the Panasonic GH4 color space, or the non-RRT Rec.709 color space, which are defined up to Y'=255. When you choose the standard non-RRT Rec.709 color space as input and output, it preserves color values exactly and uses the superwhite region up to Y'=255. What it doesn't do is map accurately into the ACES color space, since your camera likely has some highlight roll-off starting a bit below Y'=235 and extending up to Y'=255 that is unique to the camera.

Every camera's contrast curve and highlight roll-off are different, and since I can't build input transforms for every camera under the sun, it's a good reason to choose a camera with a well-defined color space such as Sony HyperGamma 4 or S-Log2.

The Panasonic GH4 color space was created from a profile of the camera set to 16-255 luma, Standard photo style, and contrast set to 0, which are the camera's defaults. The color space won't be correct for any other settings.

You can't use ACES and ACEScc in the same project. If you need both, for example to do compositing in ACES and color correction in ACEScc, you would need two projects and an intermediate file in between. For intermediate files I recommend the ACESproxy color space used in an XAVC Intra or Long file. HyperGamma 4 is a camera color space, and not really appropriate for ACES intermediates.
IAM4UK wrote on 7/23/2015, 3:36 PM
Excellent insights, balazer. I will test some clips using your specific advice, so I can further learn how the various choices affect the results.

Many thanks to you for providing these resources and for responding to such questions.

[applause]
Wolfgang S. wrote on 7/25/2015, 2:49 AM
"The Panasonic GH4 color space was created from a profile of the camera set to 16-255 luma, Standard photo style, and contrast set to 0, which are the camera's defaults. The color space won't be correct for any other settings."

What would you recommend to use as color space for the GH4 with other settings? Cinelike D curves for example?

Desktop: PC AMD 3960X, 24x3,8 Mhz * GTX 3080 Ti * Blackmagic Extreme 4K 12G * QNAP Max8 10 Gb Lan * Blackmagic Pocket 6K/6K Pro, EVA1, FS7

Laptop: ProArt Studiobook 16 OLED (i9 12900H with i-GPU Iris XE, 32 GB RAM, GeForce RTX 3070 Ti 8GB) with internal HDR preview on the laptop monitor. Blackmagic Ultrastudio 4K mini

HDR monitor: ProArt Monitor PA32 UCG, Atomos Sumo

balazer wrote on 7/25/2015, 3:47 AM
I wouldn't recommend using any of the GH4's other photo styles for ACES. They're made-up color spaces without published specs, and I haven't profiled them.
Wolfgang S. wrote on 7/25/2015, 8:21 AM
Hmm, a lot of people use the GH4 with the Cinelike D profile (so do I). And since I shoot with the Shogun to ProRes and transform that to the Cineform codec, I use 16-235, because with 16-255 the luma would clip: the luma range is stretched during the transformation to Cineform, done with TMPGEnc. I do the transformation at all because Vegas crashes with 20-25 Shogun UHD files, and because ProRes is decoded only as 8-bit in Vegas, even within a 32-bit floating point project.

I am aware that this is a very specific workflow. Hopefully we will see the implementation of V-Log in the GH4, which will change again how we use the GH4. It would be great if you could develop a profile for V-Log when we get it, hopefully in October.

balazer wrote on 7/25/2015, 4:41 PM
Wolfgang, I would suggest converting your ProRes files to uncompressed v210 AVI using ffmpeg. Vegas will read those files with full precision, and you could even use Vegas to recompress to XAVC Intra or Long to save space.
ffmpeg -i input_file.mov -vcodec v210 output_file.avi


Another option is to have the Shogun record to DNxHD MXF files, and use Catalyst Browse or Catalyst Prepare to convert them to XAVC Intra or XAVC Long, which Vegas can read.

Both of those options will preserve Y' values up to 255, and maintain 10-bit precision. But you should check also that the GH4's HDMI port and the Shogun's recording are mapping levels the same way that the GH4's internal recording does.

In a pinch, ffmpeg can convert from ProRes to 8-bit h.264 with excellent quality. You won't miss the extra two bits. It's much higher quality than what you get when you try to read ProRes files directly in Vegas.
ffmpeg -i input_file.mov -vcodec libx264 -x264opts qp=14:ipratio=1:pbratio=1:keyint=30:colormatrix=bt709:colorprim=bt709 -pix_fmt yuv420p -acodec aac -strict -2 -b:a 320k output_file.mp4


If Panasonic ever gets around to adding V-Log L to the GH4 and they publish a spec for it, I'll create an IDT for it. I'm not holding my breath. There's already a published spec for V-Log, but that's different from V-Log L.
Wolfgang S. wrote on 7/25/2015, 6:52 PM
"If Panasonic ever gets around to adding V-Log L to the GH4 and they publish a spec for it, I'll create an IDT for it. I'm not holding my breath. There's already a published spec for V-Log, but that's different from V-Log L."

That will be great! Thank you for that. Let us hope that Panasonic delivers the v-log l in an upcoming firmware update.

For the other points - sure, I could also convert the ProRes to v210; I know that Vegas will read that in 10-bit. But it would result in huge files, and it would again be a two-step conversion to XAVC - even though XAVC would be great since it can also hold 10-bit - and I have not found any converter yet that can do that from ProRes.

The Shogun records UHD to DNxHR files (not DNxHD) - but as far as I've seen, that is not decoded as 10-bit in Vegas but as 8-bit only. And maybe I am wrong, but I think that Prepare is not able to import DNxHR either. I have to check that again.

Well, I would not like to go back to an 8-bit codec. Cineform is not a bad choice since it is implemented natively in Vegas - the only drawback is that you have to limit yourself to 16-235 during shooting.

Thank you for the ideas.

Maybe another idea could help - I must check that too: convert to v210 and - if that can be imported into Prepare - convert that to XAVC. That is also a two-step process. I think it is a pity that SCS has restricted the import formats of Prepare the way it has... ;(

balazer wrote on 7/25/2015, 7:07 PM
Catalyst Prepare can read DNxHD MXF files and convert to XAVC.
Wolfgang S. wrote on 7/27/2015, 4:44 PM
But Prepare cannot import the UHD DNxHR files from the Shogun - unfortunately not. I would really like that to be possible, but I have tested it and it does not work. They are MOV files, and renaming them to .mxf does not help either.

balazer wrote on 7/27/2015, 5:34 PM
I did not realize that the Shogun can't put DNxHD or DNxHR in an MXF file. Too bad. You can try some of my other suggestions. Uncompressed V210 AVI files are of course huge, but if you can fit them on an SSD, processing time won't be too long and you can convert to XAVC with Vegas or with Catalyst. ffmpeg's conversion from ProRes to V210 AVI is lossless.

And really, if you just want to get into the ACES workflow and see how it is supposed to work, don't sweat over 2 bits. Use the camera's 8-bit recordings, or convert from ProRes to 8 bit .MP4, and be happy. I seriously think you will not notice the difference between 10-bit and ffmpeg's conversion from Prores to 8-bit .MP4, especially for video from the GH4, which is not HDR and does not have any log color space. The GH4's shadow noise is more of a limiting factor than 8-bit shadow precision is.

If you have some fear of 8-bit video looking bad because of the way MOV files look in Vegas, that is not 8 bits. It's worse than 8 bits, because Quicktime for Windows does a very poor job of decoding and transforming the video to RGB. It must be truncating the Y'CbCr video to 8 bits, and then doing a conversion to RGB using 8-bit integer math, or worse. Vegas itself does a much better job handling the 8-bit formats that it reads natively, such as MP4 and MTS. Vegas does its transformations using 32-bit floating point math, so there is no loss of precision. ffmpeg converts from 10-bit to 8-bit by keeping the video in Y'CbCr and dithering, so it's a very high quality conversion.
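The dithered bit-depth reduction mentioned above can be sketched roughly as follows. This only illustrates the idea (add sub-LSB noise before rounding so smooth gradients don't band); ffmpeg's actual dithering filters differ in detail:

```python
import random

def dither_10_to_8(samples10, seed=0):
    """Reduce 10-bit Y' samples to 8 bits with a rough triangular-pdf
    dither: noise of about half an 8-bit LSB is added before rounding.
    (Sketch only; ffmpeg's real filters are more sophisticated.)"""
    rng = random.Random(seed)
    out = []
    for s in samples10:
        # +/-2 in 10-bit units is +/-0.5 LSB in 8-bit units
        noise = (rng.random() + rng.random() - 1.0) * 2.0
        out.append(max(0, min(255, round((s + noise) / 4.0))))
    return out

# A smooth 10-bit ramp keeps its average level after dithering,
# instead of collapsing into 4-code-wide bands:
ramp = list(range(400, 432))
print(dither_10_to_8(ramp))
```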

If you want to use the GH4's internal recordings in Vegas, don't use MOV files. Shoot in MP4, or use ffmpeg to remux from MOV to MP4. Quicktime for Windows, besides having terrible precision, adds a gamma shift to the video. That will totally screw up the transformation to ACES.
Wolfgang S. wrote on 7/28/2015, 5:50 AM
" I seriously think you will not notice the difference between 10-bit and ffmpeg's conversion from Prores to 8-bit ."

That is why I run a workflow where I record 10bit 422 with the Shogun, convert the ProRes files with TMPGenc to Cineform 10bit 422 and edit that in Vegas within a 32bit floating point project. So I want to stay within a 10bit workflow, since that is one of the most important advantages of the combination of the GH4 and the Shogun.

IAM4UK wrote on 7/28/2015, 7:59 AM
I've been experimenting more with ACES, and have refined a workflow that should suit my current purposes.

I use ACESproxy RRT for rendering the intermediates, then of course set those intermediate clips' "media" value accordingly. They look exactly right after setting them properly in the "assemble" timeline, but I'm surprised how very different ACESproxy looks upon initial render.
IAM4UK wrote on 8/3/2015, 12:08 PM
A question for balazer or others:

Would there be any reason to avoid a workflow like this one?

- Set up for ACES colorspace editing
- Insert all necessary original video files into timeline, make regions for clips
- Set all "media" properties appropriately
- Make color adjustments in accordance with balazer's tutorials for ACES
- Render intermediates with a script that renders regions by name -- use final output colorspace settings, likely Rec.709 for HDTV
- Set up Vegas for normal 8-bit colorspace editing
- Make a new .veg for the assembling and editing of those color-corrected intermediates
- Edit, and use whatever compositing or filters you may need, since you're now editing in "normal" 8-bit colorspace within Vegas
- Render to your chosen final CODEC

If there are pitfalls to such a workflow, please explain. Thanks!
balazer wrote on 8/3/2015, 12:45 PM
I don't see any problem with that workflow, though I would use a 32-bit project instead of an 8-bit project. Working in 8 bits causes some loss of precision due to the conversions between Y'CbCr and RGB.
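The precision loss balazer describes comes from rounding Y'CbCr to integers on each conversion. A small sketch with the BT.709 matrix (full-range, no chroma subsampling, purely illustrative): the floating-point round trip returns the original RGB exactly, while the quantized one drifts by a fraction of a code value per pass.

```python
def rgb_to_ycbcr(r, g, b):
    """BT.709 full-range RGB -> Y'CbCr (floating point)."""
    y  = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556 + 128.0
    cr = (r - y) / 1.5748 + 128.0
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Inverse of the matrix above."""
    r = y + 1.5748 * (cr - 128.0)
    b = y + 1.8556 * (cb - 128.0)
    g = (y - 0.2126 * r - 0.0722 * b) / 0.7152
    return r, g, b

def roundtrip(r, g, b, quantize):
    y, cb, cr = rgb_to_ycbcr(r, g, b)
    if quantize:  # 8-bit pipeline: round the intermediate values
        y, cb, cr = round(y), round(cb), round(cr)
    return ycbcr_to_rgb(y, cb, cr)

print(roundtrip(107, 68, 211, quantize=False))  # exact
print(roundtrip(107, 68, 211, quantize=True))   # slightly off
```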
IAM4UK wrote on 8/3/2015, 3:23 PM
Well, darn. I was hoping that this workflow, with the color corrections done in 32-bit colorspace and the remainder of the editing/compositing/filtering done in 8-bit, would help me get past some of the slowness* of 32-bit rendering, as well as eliminate the need to distinguish between ACES and ACEScc.

*for the intermediate render batch, I'd not need to be present at the computer
IAM4UK wrote on 8/3/2015, 5:19 PM
I tested it anyway. I used ACES RRT Rec 709 for the intermediate file (Sony XAVC), and for the Assemble file (.veg) I used 8-bit with the Default colorspace.

Even applying some compositing on some clips and some Boris FX filters on another clip, the colors I got in the final render were the colors I expected and wanted. So, I will put this workflow through some more paces, but I think I have a workable solution for a very time-critical project coming soon... the 48 Hour Film Project. (And I had chosen Rec 709 because of the way I know these projects will be presented at the local venue that will be playing them.)

When time is not a critical factor, I'll stick to 32-bit workflow throughout until final render; however, I won't be able to allocate as many hours to rendering during this project.

Thanks again for the excellent color management tutorials and feedback, balazer.
set wrote on 8/3/2015, 7:13 PM
Recently just noticed this article from DaVinci Resolve FB user group:
http://www.dcinema.me/2015/08/what-is-a-color-space/

Still learning the term 'Color Space'. For additional knowledge.

Set

Jas4 wrote on 8/31/2015, 10:17 PM
Jacob, thanks for your generosity and clear instructions. Coming from a photography background, I'm somewhat familiar with color spaces and their issues.

Video is a different ballgame. After spending good money on video cameras, I was distressed to see the footage I shot in Vegas look like crap without having the tools to understand why, especially when the camera during the shoot was connected via HDMI to a decent monitor.

I will say that ACEScc and 32-bit mode have solved a multitude of evils, providing more accurate color and remarkably less posterization in the highlights. Now I'm beginning to enjoy looking at the footage, and it more accurately represents what I saw on my monitor during capture.

I decided to try Adobe Premiere since I have that option at work, and while I enjoy the workflow and the ease of control, as well as the speed in rendering, I could not duplicate the look I got with ACES CC. I tried very hard and a friend picked the ACES corrected footage every time over the Premiere footage.

I don't know if I can be of help since my knowledge is a bit limited, but I could perhaps offer FS100 and AX100 files.

Anyway, I want to thank you again.
balazer wrote on 8/31/2015, 11:17 PM
Thank you, Jeffrey.

It should be possible to profile your cameras and build IDTs, but I think you can probably get good results with some of the IDTs already available.

On the FS100, set the gamma to ITU709, and set the color mode to ITU709, Standard, or Pro. (Experiment to see which color mode gives the most accurate color.) You will also need to disable the knee. I don't believe the knee can simply be turned off, so you would need to set the knee to manual mode and then set the manual point and slope to the maximum possible settings. Then in Vegas, set the media color space to the standard Rec.709 video file color space - the plain one without the ACES RRT. When shooting you'll probably need to underexpose a bit to protect highlights from clipping, and then compensate in ACEScc, since plain Rec.709 doesn't have much highlight range.
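Because ACEScc is logarithmic, "compensating in ACEScc" for underexposure is a constant offset: each stop of exposure is the same step in ACEScc code values, at any level. A sketch of the ACEScc encoding per the published spec (Academy S-2014-003):

```python
import math

def lin_to_acescc(x):
    """ACES linear -> ACEScc, per the ACEScc spec (S-2014-003)."""
    if x <= 0.0:
        return (math.log2(2.0 ** -16) + 9.72) / 17.52
    if x < 2.0 ** -15:
        return (math.log2(2.0 ** -16 + x * 0.5) + 9.72) / 17.52
    return (math.log2(x) + 9.72) / 17.52

mid_gray = lin_to_acescc(0.18)
one_stop = lin_to_acescc(0.36) - mid_gray  # doubling exposure
print(round(mid_gray, 4))   # ~0.4136 (18% gray in ACEScc)
print(round(one_stop, 4))   # ~0.0571 = 1/17.52, the same at any level
```

This is why an exposure fix in ACEScc behaves like a simple lift of the whole curve, which makes underexposing to protect highlights easy to undo in the grade.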

I'm not sure about the AX100, but I would speculate that the NEX-VG20 color space would be a good match for its default color mode.


Experiment with different output color spaces. These days I'm not liking the look of the ACES RRT, so instead I use a standard Rec.709 output color space and apply my own knee using a curves filter in ACEScc.