I have found that editing in 8-bit (full range) mode has advantages for my workflow over 32-bit floating point (video levels) mode. I use Vegas Pro 19.
There’s just one problem with 8-bit full range mode. I shoot and edit 10-bit 4:2:2 footage, and I definitely see more noise and banding in MP4s rendered out in 8-bit mode than in the exact same footage rendered out in 32-bit floating point mode.
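To show what I mean by the banding, here’s a toy Python sketch of the underlying quantization issue (just an illustration of bit depth in general, with made-up numbers, not anything about how Vegas processes footage internally):

# Toy example: a gentle 10-bit gradient covering a narrow brightness range,
# like a sky or a vignette, and what happens when it is squeezed into 8 bits.
import numpy as np

gradient_10bit = np.linspace(200, 230, 1920)        # 10-bit code values across one row
as_float = gradient_10bit / 1023.0                  # kept as float: ~1920 distinct steps

as_8bit = np.round(as_float * 255) / 255            # forced through an 8-bit pipeline

print("distinct levels in float:", len(np.unique(as_float)))   # ~1920, smooth ramp
print("distinct levels in 8-bit:", len(np.unique(as_8bit)))    # ~8, visible steps

A smooth ramp collapsing to a handful of levels is exactly what shows up on screen as banding.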
So that suggests I might switch from 8-bit (full range) to 32-bit floating point (full range) just for rendering final projects. I have done this successfully (without unexpected changes in the final rendered video) by making sure I set the compositing gamma to 2.222 and the view transform to “off” when making the switch.
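My understanding of why the compositing gamma setting matters (and it’s only my understanding, not something from the documentation): at 2.222, blends, fades, and opacity are computed in gamma-encoded space, the same way 8-bit mode works, while at 1.000 they’d be computed in linear light and the look of crossfades and composites would change. A quick sketch of that difference, using a 50/50 mix of black and white:

# Toy example of gamma-space vs. linear-light compositing. The value 2.222 is
# the project setting I use; the rest is illustrative math, not Vegas code.
gamma = 2.222
white, black = 1.0, 0.0

# Compositing in gamma-encoded space (what I expect with compositing gamma 2.222):
mix_gamma_space = 0.5 * white + 0.5 * black               # = 0.50, the familiar mid gray

# Compositing in linear light (what I'd expect with compositing gamma 1.000):
to_linear = lambda v: v ** gamma                          # decode to linear light
to_encoded = lambda v: v ** (1.0 / gamma)                 # re-encode for display
mix_linear = to_encoded(0.5 * to_linear(white) + 0.5 * to_linear(black))   # ~0.73, brighter

print(round(mix_gamma_space, 3), round(mix_linear, 3))

That’s why I match the compositing gamma to 2.222 when switching: the blend math stays the same and only the internal precision changes.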
However, some strange behavior I’ve noticed in 32-bit floating point (full range) mode makes me wonder whether I’m setting myself up for unpredictable trouble. Here’s the odd behavior I’ve seen so far:
I change a project from 8-bit full range to 32-bit full range and everything appears fine. The jagged edges on the histogram smooth out, confirming I’m now working at the higher bit depth. However, if I then open the color grading panel, the image in the preview window shows an unexpected color shift at that moment. This shift doesn’t happen to all images, but it does happen to any image on which I’ve used the “color curves” adjustment in the color grading panel.
When I then close the color grading panel, the unexpected color shift remains. If I go up to the Edit menu to explore, I see that “Undo ColorSpace Updated” is now an option. If I click “Undo,” the colors go back to normal. So apparently the color space is being updated when I open the color grading panel in 32-bit full range mode?
This is all mysterious to me. Does anyone understand what’s happening and why? More importantly, is my method of switching from 8-bit full range to 32-bit full range, touching nothing else, and then rendering a reliable way to go? Or might other unexpected program changes be lurking?