32 bit Capture

bigrock wrote on 11/8/2007, 11:04 AM
Q:Where do you turn 32 bits on or off within Vegas?

Was answered File>Properties, but this is on a per-project basis. What about for capture? It doesn't seem to make any difference what the project is set for; it's always the same. Are there any 32 bit settings to be set for capture, or will it always reflect whatever the original is (like HDV, which is by definition 8 bit)?

I ask this because I capture all my media once and then reuse it from project to project. If I can capture in format for 32 bit I would prefer to do that.

Comments

Chienworks wrote on 11/8/2007, 11:20 AM
Capture from a digital source will always be an exact copy of the digital source. At present, digital sources are 8 bit or sometimes 10 bit. There aren't any 32 bit sources. What's on the tape is what you get.

There is absolutely no advantage to capturing more bits than exist in the source. In fact, there would be a (literally) huge disadvantage in that 32 bit files would be 4 times as big as 8 bit files, and would take correspondingly longer to move around and process.
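As a quick back-of-envelope check on that size claim (an illustrative sketch only, assuming uncompressed 720x480 frames at 30 fps with three channels and no chroma subsampling):

```python
# Back-of-envelope data rates for uncompressed 720x480 video at 30 fps,
# three color channels, no chroma subsampling (illustrative numbers only).
WIDTH, HEIGHT, FPS, CHANNELS = 720, 480, 30, 3

def bytes_per_second(bits_per_channel):
    """Uncompressed data rate in bytes/second for a given per-channel depth."""
    return WIDTH * HEIGHT * CHANNELS * bits_per_channel * FPS // 8

MB = 1024 * 1024
print(f"8-bit:  {bytes_per_second(8) / MB:.1f} MB/s")    # ~29.7 MB/s
print(f"32-bit: {bytes_per_second(32) / MB:.1f} MB/s")   # ~118.7 MB/s, exactly 4x
```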
rmack350 wrote on 11/8/2007, 4:33 PM
"What's on the tape is what you get."

Well, kind of. When you capture via SDI it's usually a bit of a transcode and you can transcode just about anything into a 10-bit codec this way.

Similarly, if you're ingesting HDV and converting to another codec to work more efficiently, you could convert that to a 10-bit codec. Similar process to using Cineform (Does CF have a 10-bit option that Vegas can use?)

There's another way to work, and Vegas is very good at this. Just use the camera-original footage and always render upwards. Since Vegas doesn't really do prerenders (in the rigid way other systems do it) you're pretty much processing everything as if it were uncompressed. So, edit, and render to what you want. Render to Sony's 10-bit YUV format, or a 10-bit BMD format, heck, render to 32-bit uncompressed. Good luck using that last one, but Vegas will do it.

Rob Mack
Chienworks wrote on 11/8/2007, 7:23 PM
"32 bit uncompressed" is actually 8 bit. It's 8 bits per color channel and 8 bits for alpha. So 10 bit video is bigger than 32 bit because it uses 30 bits. Go figger.
rmack350 wrote on 11/8/2007, 8:51 PM
Kelly, put Vegas in 32-bit mode and render an uncompressed AVI. It's a 128-bit file, according to Vegas. That's 32 bits per channel, with alpha, of course.

I should have said 128-bit but I wanted to stay in the same frame of reference: 8-bit, 10-bit, 32-bit.

Rob
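The labels being juggled above can be summarized with some simple arithmetic (a sketch; the channel counts are the usual conventions, not anything Vegas-specific):

```python
# Per-pixel bit counts behind the labels used in this thread.
def bits_per_pixel(bits_per_channel, channels):
    return bits_per_channel * channels

assert bits_per_pixel(8, 4) == 32     # "32-bit" RGBA: 8 bits x 4 channels
assert bits_per_pixel(10, 3) == 30    # 10-bit RGB: more precision per channel, fewer total bits
assert bits_per_pixel(32, 4) == 128   # 32-bit float RGBA: Vegas' "128-bit" uncompressed
```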
farss wrote on 11/9/2007, 5:31 AM
Now if only someone could make a camera with the dynamic range that can be recorded into that.

Bob.
rmack350 wrote on 11/9/2007, 7:31 AM
Ooof! Let's see. If a camera recording to an 8-bit/channel format was 14-bit internal, then I guess a camera that records 32 bits per channel would be 56-bit per channel internally?

But of course the internal depth isn't all of it. It's whether the camera can capture detail in the brightest and darkest ranges, and then compress it down into the recording format.

Rob
farss wrote on 11/9/2007, 7:47 AM
To look at it another way, you can arguably fit the dynamic range of film into 14 bits, very roughly 14 stops. Every extra bit adds a stop so 32 bits is close to 32 stops, guess we wouldn't need an iris anymore. Nice dream but not even a 65mm SD sensor would come remotely close to that dynamic range.

Bob.
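The bits-to-stops rule of thumb can be sketched like this (assuming a linear integer encoding; real camera transfer curves complicate the picture):

```python
import math

# One photographic stop is a doubling of light, so for a linear integer
# encoding the dynamic range in stops is roughly log2 of the ratio between
# the brightest code value and the dimmest nonzero code value.
def stops(bits):
    return math.log2(2 ** bits - 1)   # max code value vs. code value 1

for bits in (8, 10, 14, 32):
    print(f"{bits:2d} bits -> ~{stops(bits):.1f} stops")
```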
bigrock wrote on 11/9/2007, 8:46 AM
Ok I don't think we got the point.

What I was getting at was my belief that 32 bit has nothing to do with capture and is only relevant during render. I can see no way to capture to 32 bit files; does anyone disagree?
Bill Ravens wrote on 11/9/2007, 9:00 AM
I think you guys are confusing the bit depth stored in a particular camera's stream capture with the mathematical precision used by Vegas 8. 32 bit float is shorthand for 32 bit floating point math, which is the precision Vegas 8 can use to do its mathematical number crunching. As a general rule, one always wants a higher mathematical precision than the data precision to minimize round-off errors. In the case of video, 8 bit data streams processed with 8 bit mathematical precision yield the banding we all know and love. It's rather easy to process a lower bit precision on a higher bit processor. The trick is to take the higher bit results back down to lower bit output. Does anyone care to talk dithering?
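A toy illustration of that round-off point (hypothetical numbers, not Vegas' actual pipeline): halve the gain of an 8-bit gray ramp and then double it again. With 8-bit intermediates, half the levels are gone for good; carry the intermediates in floating point and quantize once at the end, and nothing is lost.

```python
# An 8-bit gray ramp, gain halved and then doubled again.
ramp = range(256)

# 8-bit intermediates: quantize after every operation -> banding.
int8_result = [min(255, round(round(v * 0.5) * 2.0)) for v in ramp]

# Float intermediates: quantize once at the end -> lossless here.
float_result = [min(255, round((v * 0.5) * 2.0)) for v in ramp]

print(len(set(int8_result)))    # 129 distinct levels survive
print(len(set(float_result)))   # all 256 levels survive
```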
rmack350 wrote on 11/9/2007, 9:25 AM
To my knowledge there's no way to do it, but the method you'd have to use would be to convert the incoming stream to 32-bit data. For video ingested via 1394, this defeats the purpose, which is to have a direct data copy off the tape. For SDI, this is a slightly more indirect system, and your capture application could pick a codec to write to. This could conceivably be a 128-bit uncompressed AVI file, such as what Vegas seems to write when in 32-bit float mode.

What I was trying to say is that there's no value to doing this, especially with Vegas. Yes, you want to capture what was recorded, but there's no real advantage to upconverting your 8-bit/channel media at capture time to something higher. If you are using Vegas in 32-bit float mode it's quite capable of doing (almost) all its processing at a higher bit-resolution and making a final usable file in 10-bit, and unusable files at higher bit-resolutions.

Vegas is not like other systems where you'd have to choose a 10-bit mode when you first create the project file, and then render transitions and FX as you go along. Vegas generally does its rendering on the fly, with uncompressed frames living in RAM for a short time (longer if you set Vegas' RAM cache higher). Upconverting the original media won't really improve it. Upconverting your final renders would, but I think for most usage you get better bang out of converting from DV to a 4:2:2 or better codec. Theoretically, you should get better rendering in 32-bit float mode, but you don't need to try to upconvert your original footage.

Rob
rmack350 wrote on 11/9/2007, 9:44 AM
Ye Gawds No!

I understand what you're saying, though.

Personally, I think Vegas' 32-bit float implementation is still immature, mainly in that people seem to be walking into some uncomfortable situations. I'd be a little more impressed if they actually just warned you that this mode is still experimental.

Rob Mack
bigrock wrote on 11/9/2007, 8:30 PM
OK we're still not getting through here.

I am not asking for opinions about 32 bit processing. I am not asking for your opinion on whether 32 bit capture is better or not.

I am asking simply if there are any capture settings related to 32 bit processing or not, or if it is as it has always been. I can't find any at all, and some people claimed there were when 8 came out.
Chienworks wrote on 11/9/2007, 8:37 PM
Hmm. I thought i answered that up above.

When capturing from digital there is no advantage and no reason to use any higher bit depth than the original source. You probably won't even find a capture program that will allow this, since the capture is really just a file transfer process.

When capturing from analog you can use the maximum bit depth allowed by the conversion hardware. At present, unless there's something really esoteric i haven't heard of yet, 10 bit is as high as it gets. And, as with above, there's no reason nor any advantage to storing more bits in your file.
farss wrote on 11/9/2007, 9:59 PM
Perhaps at this juncture it's worth pointing out that, at least for DV, Vegas's capturing is completely external to Vegas. Vidcap doesn't care at all about your project: set your project to PAL, and if you have an NTSC tape, you get NTSC captured.

Bob.
bigrock wrote on 11/10/2007, 8:55 AM
You said "Hmm. I thought i answered that up above."

Umm, sorry, you did not answer the question. I am not asking for opinions or information on other subjects.

The question is simply are there any settings in Capture for HDV that are related to 32 bit. There is no requirement to debate the relative merits of this or that. Are there any settings, yes or no, I can't find any and people here claimed there was.
Chienworks wrote on 11/10/2007, 9:35 AM
Well, i can tell you right off that there aren't going to be any for DV or HDV. They're not 32 bit codecs, so it's not possible to capture 32 bits per channel from them. Don't even bother looking.
GlennChan wrote on 11/10/2007, 12:57 PM
Bigrock, I think the question was already answered: no.

You can search through the manual if you want to double check.

2- (Somewhat esoteric point here...)

There might be a subtle difference between capturing DV via Vidcap and capturing DV via SDI.

If you capture DV via Vidcap/firewire, the material will go into Vegas' DV codec. This codec decodes to studio RGB color space, with no negative values (or values above 255). This color space is a subset/smaller than what some DV cameras record... so you have some clipping happening.

While I don't have SDI equipment (and haven't tried this), what might happen is:
If you capture via SDI into Vegas' SonyYUV codec, that codec might decode with negative values (and values above 255) in a 32-bit project. So you aren't clipping any information that your camera recorded (although those values are pretty illegal).

2b- Capturing via SDI doesn't capture in 32-bit.... the difference is in the signal processing / implementation.

2c- There is another difference between capturing via DV/firewire and via SDI in the way chroma upsampling is handled, and in the frame sizes. Those are different issues.
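Glenn's clipping point in 2 can be sketched with a toy decode (hypothetical functions, not Vegas' actual codec behavior; the scaling here maps studio range onto full range purely for illustration):

```python
# DV cameras can record Y' code values below 16 ("sub-black") and above 235
# ("super-white"). A decoder clamped to 0-255 discards those excursions;
# an unclamped float decode can keep them as negative or >255 values.
def decode_clamped(y_code):
    v = (y_code - 16) * 255 / 219          # scale 16-235 onto 0-255
    return min(255, max(0, round(v)))      # clamp: excursions are lost

def decode_float(y_code):
    return (y_code - 16) * 255 / 219       # no clamp: excursions survive

print(decode_clamped(10), round(decode_float(10), 1))    # 0 vs -7.0 (sub-black kept)
print(decode_clamped(250), round(decode_float(250), 1))  # 255 vs 272.5 (super-white kept)
```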
rmack350 wrote on 11/10/2007, 1:11 PM
The answer is No. And there's no need.

Rob
rmack350 wrote on 11/10/2007, 1:55 PM
We've been shooting in DV for at least the last 7 years that I know of, certainly longer, but I don't really know when my workplace adopted DV. It was originally conceived as an interim step to better formats, but we just never found a driving demand for more than that in industrials. We've always captured it via SDI, largely because we had the SDI infrastructure set up for Betacam SP, including an edit system (Media100) that had its roots in analog acquisition.

We've since moved to Premiere Pro+Axio, running screaming from the Mac platform and an edit system that seemed to have run into a dead end. We'll probably run screaming away from PPro+Axio very soon, back to the Mac and FCP.

Just background.

I spend a lot of my time at work pulling stills from footage using Vegas. I generally just recapture my own footage via firewire. I'm slowly getting to a comparison here...

DV footage brought into Media100 via SDI (back then) would be 8-bit, 4:2:2, and 720x486. Media100 would leave the picture area 720x480 and add 6 lines of black. Stills pulled from Media100 would look about the same as from Vegas (Studio RGB).

What we capture now (same DV deck with SDI output) through the Axio card into PPro looks a bit different. I'm pretty sure that we're capturing via SDI to a 10-bit 4:2:2 Matrox codec. Color values look to me like Computer RGB (when I look at the footage in Vegas the blacks look crushed). For some reason, the Axio hardware blows the DV footage up to 720x486, there are no black bands at top and bottom. To my mind, the Media100 behavior was correct, the Axio behavior of blowing up the picture is in error.

In both cases it's been simpler for me to recapture the DV media using Vegas. The Media100 media was unreadable in Vegas, and the Matrox Axio footage just doesn't inspire confidence (plus it takes more space, and I like to be able to take the footage home when I need to keep working over the weekend).

The good thing about SDI is that it gives you a digital signal that you can then encode as you like. SDI allows you to use Panasonic DVCPro 50 and Pro HD cameras with Vegas without worrying about using the native panasonic footage. In fact, it allows you to capture from anything with an SDI output, be it a deck or converter. So in a way it's a great equalizer, it just doesn't give you camera-original footage. And the media usually requires more disc space and throughput.

Rob
farss wrote on 11/10/2007, 2:10 PM
"If you capture DV via Vidcap/firewire, the material will go into Vegas' DV codec."

Is this statement entirely correct as I'm reading it?

I and I think many others are under the impression that the 'encoding' part of the process takes place in the camera, capturing DV over firewire doesn't change anything. Capture in Vidcap, WMM or SCLive and you get the same thing. The wrapper might say it's a different codec but the video is identical as no encoding is going on.
Where the Vegas codec kicks in is in the decoding of the captured material. I think even on the Mac capturing DV you get the same thing, admittedly in a different wrapper but I've never noticed any difference opening an AVI file in FCP captured by any of the PC apps and opening a mov of DV captured in FCP.

I know this sounds like splitting hairs but the subtle difference would have far reaching implications.

Bob.
rmack350 wrote on 11/10/2007, 2:15 PM
I'd expect anything capturing via 1394 to work this way. It should just be a data capture from the source without any sort of conversion, whether it's DV, HDV, DVCPro, whatever. But Vegas takes the project/media disconnect a bit further. Although you do start projects by selecting a template that conforms to a media type, Vegas doesn't then use that to define how to do renders. This is quite different from FCP or PPro, which both seem to assume that your project template = output format.

There's good and bad about both approaches, and perhaps Vegas suffers by leaving things too open-ended, but it's a very flexible program.

Rob Mack
GlennChan wrote on 11/10/2007, 7:35 PM
Bob- yes, you're right. What I was trying to say is that footage captured in that manner usually ends up being decoded by Vegas' codec.