Comments

Shergar wrote on 9/18/2007, 9:16 AM
..in a nutshell from Glennchan...

When you change between 8-bit and 32-bit, Vegas changes more than the bit depth. Certain codecs, like HDV and Sony YUV, will decode differently depending on this setting. Other codecs, like DV and Cineform, do not.

In 8-bit, those codecs will decode to studio RGB levels.
In 32-bit, those codecs will decode to computer RGB levels.
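To put rough numbers on it (just a sketch of the levels math, not Vegas's actual code): studio RGB puts black at code 16 and white at 235, while computer RGB uses 0 and 255, so the same frame decoded the two ways sits in different places on the scopes.

```python
# Rough sketch of the two decode targets, NOT Vegas's internal code.
# Assumes the usual 8-bit conventions: studio RGB puts black at 16 and
# white at 235; computer RGB puts them at 0 and 255.

def studio_to_computer_rgb(value):
    """Expand a studio-range (16-235) code value to the computer range (0-255)."""
    scaled = (value - 16) * 255.0 / 219.0
    return max(0.0, min(255.0, scaled))

def computer_to_studio_rgb(value):
    """Compress a computer-range (0-255) code value into the studio range (16-235)."""
    return value * 219.0 / 255.0 + 16.0

# The same black/white points land on different code values depending on
# which range the codec decodes to:
print(studio_to_computer_rgb(16), studio_to_computer_rgb(235))   # 0.0 255.0
print(computer_to_studio_rgb(0), computer_to_studio_rgb(255))    # 16.0 235.0
```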

full thread here
http://www.sonycreativesoftware.com/forums/ShowMessage.asp?ForumID=4&MessageID=546644
CoolBlue wrote on 9/18/2007, 9:36 AM
So basically if you are just working with DV, you should use 8-bit? Because 32-bit doesn't look right to me with DV.
Shergar wrote on 9/18/2007, 10:02 AM
In practice it's much simpler to stick with 8-bit at the moment, unless there are specific 32-bit tricks you want to achieve.

What tricks? Well, solving banding problems with applied effects and compositing, and getting bright regions to glow in a more filmic way (surely there must be others...). Check www.glennchan.info for more info, including an in-depth explanation of why footage looks "wrong" in 32-bit.

I'm working on HDV and my target pipeline is 32-bit, but the performance hit is so bad it's unusable for me - maybe 8.0a will be better.


rmack350 wrote on 9/18/2007, 10:32 AM
It'd help to know what type of media people are starting with and what they're rendering to.

I just did a set of renders from NTSC DV to 8bit Sony YUV, Uncompressed with alpha, and uncompressed without alpha. These were done in 8bit, 32bit/1.000, and 32bit/2.222 modes. I'd consider these to be pretty much baseline renders to workhorse formats.

All of the renders match the originals as long as I'm in the same mode that they were rendered in, suggesting that you can't switch modes mid-project.

The reason I just did these renders is that when I tried it last night at home I got very different results. The uncompressed renders with and without alpha showed color shifts, sometimes up, sometimes down, depending on the presence of an alpha channel. I don't know what the difference is between the two render sessions. Operator error? (Probably.) Athlon64 vs. Pentium? Could the Athlon have a floating-point bug? I doubt it.

Rob Mack
GlennChan wrote on 9/18/2007, 12:12 PM
See
http://glennchan.info/articles/vegas/v8color/v8color.htm

You can get different results between the 3 modes in Vegas:
8-bit (only 2.222 compositing gamma allowed)
32-bit / 2.222 compositing gamma
32-bit / 1.000 compositing gamma

1.000 versus 2.222 compositing gamma will give different results if you have FX or compositing going on.

32-bit versus 8-bit will affect the behaviour of certain codecs.
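To see why the compositing gamma matters, here's a toy sketch (assuming a pure 2.222 power curve, which is a simplification): a 50/50 crossfade between black and white lands on different code values depending on whether the mixing happens on gamma-encoded values or in linear light.

```python
# Toy illustration (not Vegas's code) of why 1.000 vs 2.222 compositing
# gamma changes results whenever FX or compositing mix pixel values.
# Assumes a simple pure power-law 2.222 gamma for the sake of the example.

GAMMA = 2.222

def to_linear(code):
    """Gamma-encoded code value (0..1) -> linear light."""
    return code ** GAMMA

def to_gamma(linear):
    """Linear light -> gamma-encoded code value (0..1)."""
    return linear ** (1.0 / GAMMA)

black, white = 0.0, 1.0

# 50/50 crossfade done directly on gamma-encoded values (2.222 compositing gamma):
mix_gamma_space = 0.5 * black + 0.5 * white            # 0.5

# The same crossfade done in linear light (1.000 compositing gamma):
mix_linear_space = to_gamma(0.5 * to_linear(black) + 0.5 * to_linear(white))  # ~0.732

print(mix_gamma_space, round(mix_linear_space, 3))
```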

2- I think uncompressed 32-bit float is just plain buggy. (To render uncompressed 32-bit float, you have to render a Video for Windows AVI from a 32-bit project. These uncompressed files are 4X bigger than 8-bit uncompressed.)
Grazie wrote on 9/18/2007, 12:47 PM
This sounds familiar, Glenn?

Best regards

Grazie
GlennChan wrote on 9/18/2007, 12:53 PM
Hmm I think we discovered something new... that the uncompressed codec is screwed up.
rmack350 wrote on 9/18/2007, 1:39 PM
Well, the funny thing is that in the set of tests I ran this morning uncompressed was fine in all modes, and the file sizes were NOT any different coming from a project set for 8-bit and then set for 32-bit. But last night on an Athlon64 at home the renders were wonky. Maybe I just don't have enough time to look at what I'm doing very carefully.

Turning tracks on and off with the scopes up, everything looked about the same. Sony 8-bit YUV looked like it was dropping the very topmost value on the histogram; uncompressed at 32-bit/1.0 showed a very tiny change on the scopes; uncompressed at 32-bit/2.2 looked identical on the scopes.

I didn't see any indication of what the bit depth for uncompressed would be. No choices for it but I suspect it stays 8-bit and can't do 10-bit.

I didn't try the Sony 10-bit YUV codec. In fact, I didn't try a lot of codecs. Sounds like the most important ones are Cineform, MPEG-2, HDV, things like that.

This is a good time for batch rendering!

Rob Mack
Bill Ravens wrote on 9/18/2007, 1:53 PM
Dunno if what I'm about to post is observed by others or not. Based on the first post, I experimented with renders, starting from the Sony YUV codec. It seems as tho' there's a significant increase in depth (saturation?) of colors in 32-bit mode over 8-bit mode. A similar shift in "saturation" does NOT appear with the Cineform Intermediate codec. Note: I'm referring to color in a single track, no compositing or special FX.

Edit: Never mind... I realized I did a resize.
GlennChan wrote on 9/18/2007, 9:31 PM
Bill:
This article covers the behaviour of the Sony YUV codec (and why Cineform doesn't have the color shift).
http://glennchan.info/articles/vegas/v8color/v8color.htm

2- The "uncompressed" codec/option is NOT the same as the SonyYUV codec. It's a separate option.
TimTyler wrote on 9/18/2007, 9:40 PM
Well, it sounds like a render-related problem to me.

I mean, regardless of whether you originate with a 32 bit project or "convert" an 8-bit project to 32-bit, what you see in the preview window should resemble what you'll get in a render, and that is obviously not the case here.

I think Sony has some 'splaining to do.
farss wrote on 9/18/2007, 10:00 PM
"what you see in the preview window should resemble what you'll get in a render,"

Not so certain about that. I'd assume the preview will show you according to the project settings, and these may or may not be the same as what you render to.

I think there's another speedbump on the road to linear light nirvana. What Vegas is trying to do is convert from what the camera recorded back to what the imager in the camera saw, i.e. undo the 2.22 gamma curve back to a linear function. The problem, from the little that I know about this, is that the camera may well not have used a simple transfer function. We now have all manner of other curves (e.g. cinegamma) that the camera can use to make nice-looking images. Trying to reverse those curves back into linear light is probably impossible to get 100% accurate. I don't think this part of the problem is a Vegas problem; it affects any application trying to work in linear light.

Bob.
GlennChan wrote on 9/18/2007, 10:07 PM
Vegas will assume a 1/0.45 transfer curve, because it can't know every transfer curve out there.

In cameras, they play around with the transfer curve, especially with the video knee. In the knee section, the function is really a 3-D function, since what's in the red channel affects the other two channels.

2- For SD the standard transfer function is Rec. 601.
For HD the standard transfer function is Rec. 709 (this is different).
For computer imagery there's the sRGB transfer function.

Vegas will assume 601 even for HD footage AFAIK. So it's not textbook correct. But in many high-end systems the difference is glossed over.
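If it helps, this is the textbook Rec. 709-style curve (the 1/0.45 power with a short linear toe) and its inverse, which is what "undoing the gamma back to linear light" means mathematically. Just a sketch of the standard formula, not Vegas's code.

```python
# Sketch of the Rec. 709-style transfer curve (the "1/0.45" curve with a
# linear toe) and its inverse. Textbook formulas only; not Vegas's code.

def rec709_oetf(linear):
    """Linear scene light (0..1) -> gamma-encoded code value (0..1)."""
    if linear < 0.018:
        return 4.5 * linear
    return 1.099 * linear ** 0.45 - 0.099

def rec709_inverse_oetf(code):
    """Gamma-encoded code value (0..1) -> linear scene light (0..1)."""
    if code < 4.5 * 0.018:
        return code / 4.5
    return ((code + 0.099) / 1.099) ** (1.0 / 0.45)

# Round-tripping works for footage that really was encoded with this curve;
# footage shot with a cinegamma or knee curve won't invert exactly, which is
# the accuracy problem Bob describes above.
for light in (0.0, 0.01, 0.18, 0.5, 1.0):
    code = rec709_oetf(light)
    print(light, round(code, 4), round(rec709_inverse_oetf(code), 4))
```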
TimTyler wrote on 9/19/2007, 7:37 AM
> these may or may not be the same as what you render to.

Of course, but when rendering from the timeline to an uncompressed AVI they should look the same, shouldn't they?



farss wrote on 9/19/2007, 7:53 AM
You'd think!
But without a flow diagram or some explanation from SCS, I think we're all groping around in the dark. I think Glenn's done an excellent job of trying to fathom this by sort of reverse engineering it, but his task would be way easier if someone who spec'd the code was to chime in.
Bill Ravens wrote on 9/19/2007, 8:36 AM
Amen, Bob! What we need is a little formal documentation.
GlennChan wrote on 9/19/2007, 12:18 PM
The flow is:

File -->
Decoder (decoder behaviour depends on project settings) -->
possible conversion to linear light (if compositing gamma is 1.000 and project is 32-bit) -->
MediaFX --> eventFX --> Pan/Crop --> eventFX --> trackFX, track motion, and compositing --> video output FX -->
possible conversion undoing the linear light conversion (if compositing gamma is 1.000 and project is 32-bit) -->
Encoder (behaviour depends on project settings) -->
Preview device (preview device color space sometimes depends on settings and what codec is being used if it's DV or SDI out)
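Or, as a rough pseudocode sketch of that same flow (the function names here are made up for illustration and are not Vegas APIs; the processing is replaced by no-op stubs just to make the ordering and the two conditional linear-light steps explicit):

```python
# Hypothetical sketch of the flow listed above. Not Vegas code; stub
# functions stand in for the real processing stages.

from dataclasses import dataclass

@dataclass
class Project:
    bit_depth: int            # 8 or 32
    compositing_gamma: float  # 2.222 or 1.000

def decode(frame, project): return frame            # decoder behaviour depends on project settings
def to_linear_light(frame): return frame
def from_linear_light(frame): return frame
def media_fx(frame): return frame
def event_fx(frame): return frame
def pan_crop(frame): return frame
def track_fx_motion_compositing(frame): return frame
def video_output_fx(frame): return frame
def encode(frame, project): return frame            # encoder behaviour depends on project settings

def render_frame(frame, project):
    frame = decode(frame, project)

    linearize = project.bit_depth == 32 and project.compositing_gamma == 1.0
    if linearize:
        frame = to_linear_light(frame)   # only in 32-bit / 1.000 projects

    frame = media_fx(frame)
    frame = event_fx(frame)
    frame = pan_crop(frame)
    frame = event_fx(frame)
    frame = track_fx_motion_compositing(frame)
    frame = video_output_fx(frame)

    if linearize:
        frame = from_linear_light(frame)  # undo the linear-light conversion

    return encode(frame, project)

print(render_frame([0.5, 0.5, 0.5], Project(bit_depth=32, compositing_gamma=1.0)))
```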