"Purest" SDI 10-bit uncompressed solution?

Comments

Coursedesign wrote on 5/3/2010, 2:35 PM
To the best of my knowledge all NLEs decode that to RGB for internal processing. Vegas does an exceptionally good job of this and of encoding it back to Y'CbCr. Tests done almost a decade ago showed zero loss after 100 generations. Avid's systems showed significant loss after 1 generation due to their unavoidable use of chroma smoothing.

Vegas is the only NLE I can think of that works only in RGB, like After Effects, etc.

This is generally a good thing, and the future is with all-RGB workflows.

ARRI Alexa (snif, I want one!) can output ProRes YUV or RGB, and we'll see this increase significantly.

Still, it will be a while before broadcast is in RGB...

The generational loss you're referring to is all about the DV codecs used, and Vegas had the best one.

"Avid's systems showed significant loss after 1 generation due to their unavoidable use of chroma smoothing"

Where did you see that? Chroma smoothing in Avid MC was done to get a decent 4:2:2 video from 4:1:1 or 4:2:0 DV input, and this generally looked better than what came out of the camera.

Are you referring to Avid Express trying to polish a post-digestive product and staying in 4:1:1 on the timeline?
rmack350 wrote on 5/3/2010, 2:45 PM
ProRes would be an "Apple" standard since it's proprietary. Not exactly an industry standard unless your industry is an Apple monoculture.

As far as Vegas' 32-bit pipeline giving you "untouched" SDI throughput goes, you'd want to test it, but I think if you don't touch your clips then they'll stay untouched. It's a processing pipeline, after all.

Rob
farss wrote on 5/3/2010, 2:53 PM
"Vegas is the only NLE I can think of that works only in RGB, like After Effects, etc."

Are you 100% certain of this??
No one actually says anything about how the code works internally.
This subject was done to death some time ago here, and the consensus was that all NLEs' internal pipelines are RGB.

"ARRI Alexa (snif, I want one!) can output ProRes YUV "

Indeed, which brought many complaints for not using an industry-standard codec.

"Are you referring to Avid Express trying to polish a post-digestive product and staying in 4:1:1 on the timeline?"

Yes, after I posted I remembered I should have qualified that as relating only to DV25. As far as I know they were doing the same thing with PAL DV, not just NTSC DV. Avid used to achieve this by insisting that DV should be captured via component. This applied to Avid MC as well, I believe.

Bob.
Coursedesign wrote on 5/3/2010, 3:07 PM
I did cuts-only 10-bit in Vegas, and it stayed 10-bit in the BMD codec.

As soon as you color corrected, did fade transitions, or just about anything else to it, the "10-bitness" was lost though, due to the output of Vegas' filters being only 8-bit at the time.

Today Vegas has 32-bit for those who want the ultimate in quality (and rendering time :O).
Laurence wrote on 5/3/2010, 3:19 PM
Correct me if I'm wrong but that is 32 bit with some caveats:

1/ Video truncated at 8 bits (two least significant bits dropped).
2/ 8-bit truncated video processed at 32 bits.
3/ Results of the 32-bit calculations truncated again to 8 bits for 8-bit output (which can exist as 10-bit video, but with the two least significant bits set to 0).

Thus it really is still just 8-bit. 8-bit with a little more precision after color correction, but still just 8-bit nonetheless. I feel quite certain about this, but I would absolutely love to be wrong this time around.
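Those three caveats can be sketched as a toy model (this is an assumption about the pipeline under discussion, not confirmed Vegas internals):

```python
# Hypothetical model of the suspected pipeline: a 10-bit sample is truncated
# to 8 bits on input, processed, then padded back to 10 bits on output,
# leaving the two low bits always zero.

def suspected_pipeline(sample_10bit: int) -> int:
    truncated_8bit = sample_10bit >> 2   # step 1: drop the two LSBs
    processed = truncated_8bit           # step 2: 32-bit math (identity here)
    return processed << 2                # step 3: pad back into a 10-bit container

# Four adjacent 10-bit values collapse to one output value:
print([suspected_pipeline(v) for v in (700, 701, 702, 703)])  # [700, 700, 700, 700]
```

If this model is right, a 10-bit scope would show exactly this combing: only every fourth code value ever occupied.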

Edit: One extra little observation: I believe that RGB is a little more efficient than YUV. Isn't 8 bit RGB color about the same precision as 9 bit YUV? Maybe that's why some are seeing more than 8 bit resolution when converting to the YUV codec. I just kind of half understand this stuff. I'd love to really clarify it in my head.
Laurence wrote on 5/3/2010, 3:44 PM

Also, I really feel like I turn out higher-quality color correction when I use FirstLight than when I color correct within Vegas. My understanding is that FirstLight color corrects at 10 bits RGB, which Vegas truncates down to 8 bits, which ends up looking like 9 bits YUV. Color correcting in FirstLight keeps everything at 10-bit precision right up until the end, where Vegas may throw away the trailing two binary zeros, but at least this only happens after the gradients have been calculated, so it still looks pretty good. Doing CC in Vegas means truncating, color correcting the truncated colors at up to 32 bits, then truncating again. That's assuming you are using the 32-bit mode (which is incredibly slow on my Core2Duo laptop). If you are using the 8-bit mode, Vegas truncates to 8 bits and calculates at 8 bits, for even less precision and more banding.

By using FirstLight, I am color correcting a complete 10-bit color number at 10 bits, and only truncating after the new gradients have been calculated. Even more importantly, the crucial cRGB-to-sRGB conversion that all Canon DSLRs (and my SX-1 IS) need is done on conversion to the intermediate. This means that the 8-bit color of the Canon is converted to a 10-bit number which does not have zeros in those final two bits. This 10-bit color is then color corrected outside Vegas at a full 10 bits, and only truncated to 8 bits at points where Vegas does not smart-render the Cineform footage. So in spite of the fact that I'm not working with uncompressed, have simple USB drives instead of a RAID system, and am working with a lowly Core2Duo CPU, I'm actually getting less banding in my gradients than people working with uncompressed or other high-quality codecs but doing the color correction within Vegas.

If I'm wrong about any of this, please let me know.
rmack350 wrote on 5/3/2010, 3:52 PM
I'm not challenging this. It'd be nice if SCS produced a white paper to explain things. Then we could all just point to it. Anyway, maybe this is testable...

1/ Video truncated at 8 bits (two least significant bits dropped).
Okay, the assumption here is that a 10-bit file (1024 values per channel) gets truncated to 256 values/channel. Are you saying that Vegas would write a 10-bit file with only 8 bits of data? As in, values of 1020–1023 all getting collapsed to a single value?

If Vegas' scopes are 8-bit then you wouldn't see combing in the histogram. So you need a good 10bit scope.

BTW, is this properly called "truncating"? Or is "Rounding" a better term?

2/ 8 bit truncated video processed at 32 bits.
Regardless of whether truncation is happening, we know that the processing is 32-bit. And we also know that uncompressed AVI gets recorded as 128-bit files when created in 32-bit mode. That's 32-bits/channel. So we know that the 32-bit chain really outputs 32-bit data, and if you output to a 10-bit file you have to assume that the data gets rounded from 32-bit to 10-bit.

3/ Results of the 32-bit calculations truncated again to 8 bits for 8-bit output (which can exist as 10-bit video, but with the two least significant bits set to 0).
Okay, I think you're describing a 32-bit calculation with an end result being rounded to 8-bit values and then written into a 10-bit file.

Let's go back to that 32-bit/channel uncompressed AVI. When you put Vegas into 32-bit mode and then render an uncompressed AVI it writes it as 32-bit/channel. You don't get choices, this is just the raw output. It seems to me that if Vegas were rounding its 32-bit output to 8 bits then it'd write an 8-bit/channel uncompressed AVI. It's not doing that, so logically your 10-bit file should be created from 32-bit/channel data, not 8-bit.

Now, just because it's logical doesn't make it so, but I think we've got good reason to think that Vegas isn't rounding its output to 8 bits and then stuffing that into a 10-bit media file.

Bob's example of scopes and combing seems a good test, but you need a scope that can see 10 bits or more, don't you?

Rob
farss wrote on 5/3/2010, 3:56 PM
I don't quite understand why you're jumping through so many hoops.

1) Very few cameras record 10 bit.
2) As I pointed out and have tested, Vegas does read all 10 bits from a 10-bit source file; put through a 32-bit pipeline it is as good as it gets.
3) There was a bug in QuickTime, since fixed, that caused footage from the 5D/7D to be clamped to Y' = 16–235. That's why so many were setting the camera to low/medium contrast. All this is fixed. Vegas will preserve the full range of values from the camera. What you do with them, as they're way outside 'legal' values, is up to you.

4) Banding occurs because Vegas doesn't dither.

5) Totally agree. If you've got a slow machine, processing 32-bit float is a serious render hog / source of crashes and red frames. A 10-bit pipeline in Vegas could be a good thing. Then again, as I said, so few cameras record 10-bit, and for those that do you might generally want more than a 10-bit pipeline anyway. Well-shot 8-bit is pretty darn good, and for most, optics and camera setup have more impact than anything else.

6) I think a lot of what you're reading from CineForm comes from PPro, which does seem to clip at 16 to 235.
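Point 4 above can be illustrated with a toy quantizer (illustrative only, not Vegas code): plain quantization of a smooth gradient yields long runs of identical codes (visible bands), while adding up to half a code of noise before rounding (dithering) breaks the bands into fine grain the eye averages out.

```python
import random

def quantize(value: float, levels: int = 256) -> int:
    """Plain quantization: a smooth ramp becomes visible steps."""
    return min(levels - 1, int(value * (levels - 1)))

def quantize_dithered(value: float, levels: int = 256) -> int:
    """Add up to half a code of noise before rounding, then clamp."""
    noisy = value * (levels - 1) + random.uniform(-0.5, 0.5)
    return max(0, min(levels - 1, round(noisy)))

# A shallow ramp: plain quantization produces runs of repeated codes (bands);
# the dithered version alternates between neighboring codes, so on average
# it still tracks the true gradient.
ramp = [i / 4096 for i in range(4097)]
plain = [quantize(v) for v in ramp]
dithered = [quantize_dithered(v) for v in ramp]
print(plain[:8])  # the first 8 samples all land on code 0 — one flat band
```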

Bob.
Coursedesign wrote on 5/3/2010, 4:09 PM
Isn't 8 bit RGB color about the same precision as 9 bit YUV?

Effectively, yes.

Working in Vegas with high quality footage, CineForm is a no-brainer (where there is a format fit).

Internal operations in Vegas are mostly at max. precision, then the output is 8-bit.

That is still very helpful, even when the output is 8-bit.

My guess would be that the Vegas team is cranking on a VfW-free version that will totally rock. That in turn will allow all kinds of good things to be done, and allow more format freedom.

In the beginning there was only DV in, today....

farss wrote on 5/3/2010, 4:18 PM
"Bob's example of scopes and combing seems a good test but you need a scope that can see 10-bits or more, don't you?."

Not really, and that's why my test was a bit convoluted. Divide by 4, render, multiply by 4 again. If the values are being truncated you'll see it on an 8-bit scope.

Pretty certain I did this test using both 8 and 32bit processing. With 8 bit you can see the truncation, no truncation in 32bit.
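That divide/render/multiply test can be simulated numerically (a sketch, modeling the two pipelines as plain integer rounding versus float passthrough, not the actual render code):

```python
# Divide levels by 4, "render" at the pipeline's working precision, then
# multiply by 4. An 8-bit pipeline rounds the intermediate to integer codes
# and collapses levels; a float pipeline brings every level back intact.

def render_8bit(levels):
    return [round(v) for v in levels]    # 8-bit pipeline: integer codes only

def render_float(levels):
    return list(levels)                  # 32-bit float pipeline: fractions kept

def divide_render_multiply(source, render):
    divided = [v / 4 for v in source]
    return [round(v * 4) for v in render(divided)]

source = [100, 101, 102, 103]            # four distinct 8-bit levels
print(divide_render_multiply(source, render_8bit))   # [100, 100, 104, 104]
print(divide_render_multiply(source, render_float))  # [100, 101, 102, 103]
```

On an 8-bit scope, the collapsed levels show up exactly as the combing Bob describes.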

I'd be very pleased for others to repeat my tests; I'm not infallible :(

Bob.
Coursedesign wrote on 5/3/2010, 4:20 PM
So we know that the 32-bit chain really outputs 32-bit data

Sure, but any operations other than straight cuts are output at 8 bits/channel.

Very few cameras record 10 bit.

Such as the OP's DigiBeta camera.

I started posting here about the specific benefits of a 10-bit workflow back in 2004. As usual, it was pooh-poohed for a few years before it was accepted.

10-bit isn't a panacea. It makes no difference unless you set up your camera to really use the bits at its disposal, or shoot 8-bit but want to avoid truncation between multiple operations (each of which may be processed at 32 bpc within Vegas).

That means that it is a waste of time for the majority of users.

But for the rest, it is great.
farss wrote on 5/3/2010, 4:33 PM
"In the beginning there was only DV in, today...." YouTube :)

Seriously though. Yesterday I went to the CS5 Roadshow. Kudos to Adobe for putting together that 'mashup' movie, shot on a RED and other cameras. The thing that immediately grabbed my eye was the extras standing on the ridge jumping and waving their arms in celebration. They should have hired extras that could friggin ACT!

After a decade of being a self-flagellating measurabator I might finally be starting to understand what it takes to make a good movie.

Bob.

Laurence wrote on 5/3/2010, 5:11 PM
I just wanted to reiterate that if you use a Canon DSLR and convert the footage to Cineform, you may have started out with an 8 bit number, but the colors on the intermediate are using all 10 bits. It's a good thing because the conversion from cRGB to sRGB is subtle and it would be easy for things like the gradients in a typical sky to look banded otherwise. This is one of the reasons I think that people like Vic Milt speak so highly of using Cineform for their DSLR footage.
rmack350 wrote on 5/3/2010, 5:25 PM
I'd be happy to repeat this test this evening if I find time. I don't seem to have much lately, and with my current "mouse elbow", spending more time at a computer has been less appealing.

I think the fact that you do see the truncation or rounding in 8-bit mode is a good sign and I'd trust you're right on this.

My assumption is that if a filter is 8-bit only then the image does drop down for processing and then back up to 32-bit. Obviously not good when that happens. I take it Vegas still uses a mix of 8-bit and 32-bit filters?

Rob



rmack350 wrote on 5/3/2010, 5:30 PM
After a decade of being a self-flagellating measurabator I might finally be starting to understand what it takes to make a good movie.

Content is still king, but beautiful content is very nice.
rmack350 wrote on 5/3/2010, 5:35 PM
Laurence, you're bringing up something that I'm curious about, but it veers a little farther OT. Is it just the Cineform HD product that does a cRGB to sRGB color conversion? I've been wondering about this in relation to my GH1.

Actually now that I look at this, I think the GH1 is set to work in sRGB by default, so maybe this isn't an issue for me.

Rob
Laurence wrote on 5/3/2010, 6:52 PM
cRGB to sRGB conversion is only important on still cameras with video modes.

There is a little more about this subject in this thread: http://www.hdmom.com/forum/cineform-software-showcase/99286-more-cineform-questions-regarding-8-bit-vs-10-bit.html

The post that caught my attention was the one from David Newman about half way down:

We disabled the 8-bit encoding option some time ago for good reasons. Even though Vegas is an 8-bit application, its use of video systems RGB benefits from increased encoding precision. 8-bit RGB is approximately equivalent to 9-bit YUV, and most compressors like CineForm use YUV for internal data storage (it is more efficient). So converting 8-bit RGB to 10-bit YUV and compressing that guarantees a very accurate reconstruction back to RGB when needed.

Am I correct in the following understanding then:

Vegas works in RGB at 8 bits. 8-bit RGB is the equivalent of 9-bit YUV. The conversion from RGB to YUV happens within the programming of the codec itself, so codecs like CineForm and Sony YUV will take the 8-bit RGB data that Vegas gives them and write it to 10 bits within the YUV codec. As the YUV codec does this conversion it actually makes use of 9 of the ten YUV bits. Thus Vegas's 8-bit limitation is only about half as bad as I thought it was.
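The 8-bit RGB into 10-bit YUV idea can be sketched numerically (simplified gray-only math using assumed Rec. 601 studio-range code values of 64–940, not the actual codec code): every one of the 256 full-range RGB gray levels lands on its own 10-bit Y' code, about 3.4 codes apart, and the roundtrip back to 8-bit RGB is exact.

```python
# Toy gray-only conversion: full-range 8-bit RGB gray -> studio-range
# 10-bit Y' (64..940 is an assumed Rec. 601-style studio range).

def rgb8_to_y10(gray8: int) -> int:
    # scale 0..255 (full range) into 64..940 (10-bit studio range)
    return 64 + round(gray8 * 876 / 255)

def y10_to_rgb8(y10: int) -> int:
    return round((y10 - 64) * 255 / 876)

codes = {rgb8_to_y10(g) for g in range(256)}
print(len(codes))                                                 # 256 distinct 10-bit codes
print(all(y10_to_rgb8(rgb8_to_y10(g)) == g for g in range(256)))  # True: lossless roundtrip
```

This is roughly what "8-bit RGB is approximately equivalent to 9-bit YUV" means in practice: the 8-bit RGB data occupies more than 8 bits' worth of distinct codes in the 10-bit YUV container.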

This would also explain why Lars was having better luck with multiple generations of mxf than he was with Cineform in terms of color shift. What he was actually observing wasn't so much the quality of the codec itself but rather the quality of the RGB to YUV conversion within the software that was a part of the two codec packages.
Coursedesign wrote on 5/3/2010, 7:13 PM
Vegas likes Computer RGB, but can also work with Studio RGB of course.

sRGB on the other hand is the color space of ye olde 1990 CRTs.

apit34356 wrote on 5/3/2010, 8:08 PM
"10-bit isn't a panacea" — just a heads-up about 10-bit: not all cameras that offer 10-bit output do so with a linear conversion; some output a log conversion to offer a wider dynamic range.
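The linear-versus-log point can be illustrated with back-of-envelope code allocation (a toy model, not any real camera's transfer curve): linear encoding spends half its codes on the single brightest stop and starves the shadows, while a log curve spreads codes evenly across however many stops it is designed to cover.

```python
# Toy code-allocation comparison for a 10-bit container.

def linear_codes_in_top_stop(bits: int = 10) -> int:
    # codes between half of maximum brightness and maximum brightness
    max_code = 2**bits - 1
    return max_code - max_code // 2

def log_codes_per_stop(bits: int = 10, stops: float = 14.0) -> float:
    # a log curve assigns codes evenly across its designed stop range
    # (14 stops here is an assumed figure for illustration)
    return (2**bits) / stops

print(linear_codes_in_top_stop())   # 512 of 1023 codes spent on one stop
print(round(log_codes_per_stop()))  # ~73 codes for every stop, shadows included
```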
Jeffery Haas wrote on 5/3/2010, 8:30 PM
This is not a valid solution for this forum.
REASON: This isn't an Apple forum, it's a Sony Vegas forum.
Coursedesign wrote on 5/3/2010, 9:17 PM
This is not a valid solution for this forum.

You're late to the party. Read post #8 above.


This isn't an Apple forum, it's a Sony [...] forum.

I'll keep that in mind before posting any info on editing video shot with Panasonic or Canon cameras.

And of course, no one is allowed to interoperate with users of other NLEs, even if your mortgage depends on it, and any mention of After Effects will be severely punished because Vegas can also do compositing.

Look, I'm not trying to mock anyone, it's just that it is helpful for anyone to have the information needed to know what compromises are needed with any tool. For someone who only uses Vegas, it can be worth it to spend 5-6 hours on something that can be done in 5-6 minutes using AE, FCP or Avid. For someone who has all three, it is IMPORTANT to know when Vegas can do in 5-6 minutes what takes 1-2 hours with the others (this has been demonstrated at FCPUG meetings, attendees were floored!).


"ARRI Alexa (snif, I want one!) can output ProRes YUV [&RGB]"

Not industry standard? A healthy percentage of the 1.5 million users of [the numbers-leading industry standard NLE] use ProRes.

And looking at the measurebating numbers, it has a higher PSNR than DNxHD (see the White Paper linked in a previous post above), but you gotta give Avid extra points for providing both coders and decoders for free.

But as an important aside, there are still uses where CineForm is superior. They can justify the super high prices for their top codecs very easily.


"Are you referring to Avid Express trying to polish a post-digestive product and staying in 4:1:1 on the timeline?"

Aaaahhh, I had forgotten about that. This was subject to much debate during the DV era, but it was easy to show that capturing via component made the video look more pleasant, thanks to the chroma smoothing.

Today it is of course easier to do that within the NLE.
rmack350 wrote on 5/3/2010, 9:53 PM
Thanks Laurence.

The GH1 definitely has a video mode. I checked the settings and the default color space is sRGB so I suspect that conversion isn't required, which is good since I don't see anything about it in NeoScene anyway. Yeah, I know the HD product is better because of the color processing...my wallet says wait. Oh, and this is a still image setting in the camera but seems to also bear on the video.

'nuff about that. Since this is an OT fork in the thread I'll leave it there.

rmack350 wrote on 5/3/2010, 10:43 PM
I think the original question is boiling down to one thing - what NLE is going to keep 10-bit media 10-bit all the way through CC and filtering and then out to the render?

FCP appears to do this and if I needed to make a quick spending decision without doing much research then I'd probably opt for FCP. Apple is very good at convincing people that everything works.

SCS has made little effort to clear up the 32-bit questions for users. It's poorly documented and poorly promoted.

If you look in the online help system and search for 32-bit you'll find this statement:

"Video plug-ins and media generators that do not support floating-point processing are indicated by a blue icon in the Plug-In Manager and Plug-In Chooser with this icon (a blue icon) in the Video FX and Media Generators windows."

The comment in parentheses is mine. Almost all of the video FX in Vegas are 32-bit compatible. None of the media generators are 32-bit compatible.

Vegas appears to be working in 32-bit/channel mode when you set a project's prefs this way, and this is more than enough to work with and preserve the 10-bit-ness of 10-bit media. It really appears to me that there's no problem with doing this in Vegas, except for the lack of hand holding and reassurance. There is a bit of a question about whether Vegas will keep you in the same color space (rec. 601 or 709, I guess) but I think this is manageable and maybe the issue has been dealt with.

As far as ProRes being "standard", maybe the better way to describe it is common. It's commonly used on Apple platforms and not used at all on other platforms. It doesn't conform to a standard and has not been adopted by a standards body. Because it is proprietary there's no particular escape path if you decide to bail out of the Apple platform. Nevertheless, it's a fine codec and is very commonly used (on Macs).

On the SDI front...SDI feeds data from your source to the PC, where it's then encoded using a codec of your choice. Technically this is not exactly a pristine process because you are not necessarily re-encoding things exactly as they came off the source. As an extreme example, for years now we've been taking DV25 from a deck via SDI and then recording it in a 10-bit codec. Obviously, if we wanted an exact copy of what's on the tape then we'd take the video in over firewire, not SDI. The point here is that SDI can actually entail quite a bit of translation. It can be of excellent quality, but it's not "pure".

Rob Mack
Coursedesign wrote on 5/4/2010, 6:29 AM
Rob,

Very well put!

The OP's DigiBeta is only compressed 2:1 (very close to lossless) and is full 10-bit 4:2:2.

When you ingest that to a high bit rate 10-bit codec such as Uncompressed 10-bit 4:2:2, CineForm, DNxHD, etc., you're not going to see any loss from the original footage, which is what I think the OP meant.