Vegas Video 32bit video levels vs 8bit video levels

Comments

ALO wrote on 7/19/2021, 4:01 PM

"I believe every significant video editing suite *other* than Vegas uses 32-bit float for internal processing regardless of source content"

I doubt this is the case. I know some offer floating-point options (HitFilm Pro does, too, but its default is 8-bit integer), but some of them even claim to use floating-point processing from input to output when it actually isn't true. My simple test is to have such software import this EXR graphic file (unzip to get the EXR file) and then lower the luminance level. Only if this recovers the text in the graphic is it true floating-point processing.
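To make that concrete, here's a rough NumPy sketch of my own (not the actual EXR from the link; the "text" is simulated) showing why the test works: super-white values above 1.0 survive a gain-down in a float pipeline, but are clipped away on import in an 8-bit integer one.

```python
import numpy as np

# Simulated "super-white" text: background at 1.0, text at 2.0 (above legal white).
frame = np.full((4, 8), 1.0, dtype=np.float32)
frame[1:3, 2:6] = 2.0  # the hidden text

gain = 0.5  # lower the luminance level

# Float pipeline: super-white values are preserved, the text reappears after the gain.
float_out = frame * gain
print(np.unique(float_out))   # [0.5 1. ] -> text is recoverable

# 8-bit integer pipeline: everything is clipped to 255 on import, the text is gone.
int_frame = np.clip(frame * 255, 0, 255).astype(np.uint8)
int_out = (int_frame * gain).astype(np.uint8)
print(np.unique(int_out))     # [127] -> text was lost before the gain was even applied
```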

Interesting. Yes, Resolve, Premiere, FCX, etc. at least claim to process internally as 32-bit float. I haven't tested them. But I suspect none of them use Vegas' default 8-bit pipeline. :)

Marco. wrote on 8/1/2021, 11:33 AM

I just noticed that even Lightworks works the same way Vegas Pro does. It defaults to 8-bit integer and offers options to set it to 16-bit integer, 16-bit floating point, or 32-bit floating point.

Howard-Vigorita wrote on 8/1/2021, 4:39 PM

@Marco. One of the main differences is integer math versus floating-point math. The 32-bit property of the floating-point mode in Vegas Pro is probably its most misinterpreted one. It's not like 8-bit, 12-bit, or 16-bit integer color depth; 32 bits is the processing precision of the floating-point math. A good place to start investigating.
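To illustrate what that processing precision buys you, here's a rough NumPy sketch of my own (not anything Vegas actually does internally): apply a gain and then its inverse in an 8-bit integer pipeline versus a 32-bit float pipeline and compare the damage.

```python
import numpy as np

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=100_000, dtype=np.uint8)

gain = 0.3  # darken, then brighten back by the inverse

# 8-bit integer pipeline: rounding to whole code values after each step loses data.
int_pass = np.clip(np.round(original * gain), 0, 255).astype(np.uint8)
int_back = np.clip(np.round(int_pass / gain), 0, 255).astype(np.uint8)

# 32-bit float pipeline: intermediate results keep their fractional precision.
flt_pass = original.astype(np.float32) * gain
flt_back = np.clip(np.round(flt_pass / gain), 0, 255).astype(np.uint8)

print("max error, 8-bit int pipeline  :",
      np.abs(int_back.astype(int) - original.astype(int)).max())
print("max error, 32-bit float pipeline:",
      np.abs(flt_back.astype(int) - original.astype(int)).max())
```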

Out of curiosity, I supplemented a handful of test encodes I ran through ffmetrics with project pixel-format info, and the results were a mixed bag when I limited them to Vegas 18 renders. It depends a lot on which quality metric you believe in. Here's the Vegas 18 section sorted on VMAF, where renders from 8-bit integer pixel-format projects are at times right up there with the 32-bit float and limited-range ones. All use a lossless reference pulled from lossless raw footage that preserves the sensor's chroma sub-sampling (sometimes called partial debayering). The full chart is online in my signature.
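If anyone wants to reproduce that kind of comparison without ffmetrics, here's a rough Python sketch that shells out to ffmpeg's libvmaf filter. File names are placeholders, it assumes an ffmpeg build with libvmaf enabled, and the input order (distorted first, reference second) follows recent ffmpeg builds, so check your version's docs.

```python
import subprocess

def vmaf_score(distorted: str, reference: str) -> str:
    """Run ffmpeg's libvmaf filter; the score appears in ffmpeg's log output."""
    result = subprocess.run(
        [
            "ffmpeg", "-hide_banner",
            "-i", distorted,      # first input: the render under test
            "-i", reference,      # second input: the lossless reference
            "-lavfi", "libvmaf",  # default model, defaults for everything else
            "-f", "null", "-",    # decode and score only, no output file
        ],
        capture_output=True, text=True,
    )
    # ffmpeg reports "VMAF score: ..." on stderr when the filter finishes.
    lines = [ln for ln in result.stderr.splitlines() if "VMAF score" in ln]
    return lines[-1] if lines else "VMAF score not found in ffmpeg output"

# Example with placeholder file names:
# print(vmaf_score("render_8bit_project.mp4", "lossless_reference.mov"))
```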