32bit float... hmm... something to consider maybe..

DJPadre wrote on 9/3/2007, 5:11 AM
Ok, well we've all heard about how spanky 32bit can be for cleaning up m2t. Fair enough, the original is 8bit to begin with and we're rendering back to 8bit. This might baffle a few, but for those of us doing a lot of colour work, it will come in handy. I know more than likely I'd end up using it to try to push dynamic range as far as I can. Sadly, I'm restricted to my source material, but the idea of "fix it in post" bears a very heavy burden of truth here. In turn, the idea of fixing it in post will actually become a reality. A heavily resource-hungry reality, but a reality nonetheless.

The idea of having the freedom to select 32bit float or 8bit is a welcome addition; this way we can at least allocate a variety of processing methods based on whatever specific scene we might be using it for.
As an example, if we're doing heavy colour work, or trying to fix blown-out backgrounds, 32bit would come in handy; if however we're just doing some basic CC, then we'd stick to 8bit.
As we're outputting to 8bit and then converting down again to a smaller bitrate for DVD delivery, the benefits themselves, I'm sure, would be questionable.

I don't doubt that there will be pixel pushers who will argue this issue, but you get that in this profession.
People focus too much on the what-ifs as opposed to using the tool for the actuals,
which brings me to the next bit.

In the real world, I'm sure the benefits would be relative to the beholder. I'm yet to see the app in action, so my comments are pretty much theory, based on previous experience with members here and the general public of editors who would go out of their way to diss an app as opposed to actually learning about it.

One thing I'm still trying to fathom through all this is frame rate conversion, such as slow motion.

What benefits in frame rate conversion will we see if using 32bit float?
Considering 1080p is fast becoming an acquisition option, Vegas is in sore need of fixing its frame interpolation with slowed progressive scan footage.

To this day, I prefer to use Dynapel's SlowMotion, as it's far more accurate and authentic, emulating not only frames but also shutter response. Slow motion is by far the most used feature of any NLE, bar none.

With interlaced, Vegas is perfect at any resolution; with progressive scan, it's pretty much garbage at any res for anything less than 85% speed.

So aside from colour, and the obvious interpolation benefits we will see with filtering at 32bit, would the same be said for frame rate?

THE FIRST
Considering slow motion still relies on colour reproduction for its newly generated media, one would assume that 32bit float SHOULD not only increase accuracy, but also offer added benefits such as luminance accuracy for the generated frame compared with the source frames before and after it.

THE SECOND... considering some of us will be working exclusively in 32bit float, will we be given the option of allocating 32bit float during capture? If not, does Sony have a script which will allow us to allocate this feature on each clip in the media pool?
If not, I'm sure there'll be a script to do this (much like the "change aspect on all media" script) and save us having to go through EVERY clip to allocate said 32bit/8bit choice...

Audio...
Is the AC3 encoder still only using 1 thread?
Also, I'm REALLY surprised there is no option for TrueHD or DD Plus... considering HD is now a tool we use to upsell our products with added value, one would have liked to think that the full benefits would be provided.
That's just a gripe and has nothing to do with 32bit float.

SCALING
Considering HDV is 1440x1080 and BluePrint is 1920, footage needs to be scaled to 1920 to adhere to the requirements of the BD Consortium (in turn, EVERY BD authoring tool must check the res before even allowing the program to author the disc).
So aside from the in-camera scaling which many HDV cams are doing, the need to scale again is imposed by the standards set by the BDC.
Now with this, some say it's just a change of the aspect... I beg to differ. It might just be a change of aspect, but it still requires rendering... and that's my point.

There's also the issue of frame rate, as 1080 25p is not yet a standard for BD delivery. In turn, if I were to shoot 1080p with a PAL A1, my natively captured footage would be scaled down to 1440 to suit the HDV standard.
Now, once in the edit, I must then scale that BACK UP to 1920. It's not an issue of aspect, it's an issue of scale, because the aspect doesn't actually change to the eye. Aside from that, I will also need to convert the frame rate from 25p to 24p.
Now I can do this myself, and know that what I'm getting is what I want... but if I were to simply import my footage into DVDit Pro HD, it would check the file to ensure it meets the BDC standards and force a conversion (a la DVDA style) to make it so.
Again this all takes time... and the HD workflow is again puking itself, even though we're dealing with M2Ts (to save THAT step) and not intermediates.

Now it seems to me that the need to scale is one of the main issues we face as producers, because aside from 25p not being a standard, we also (if we're in PAL land) need to convert to 24p if we intend to deliver in 1080p at all.

Fair enough, there are ways around this, but to produce PROPERLY, we must stick to the rules. You would be surprised at how many production houses have their work rejected... this is usually because people do not know WTF they're doing and for what purpose they're doing it.

THE QUESTION... Considering HDV 1440x1080 requires upscaling to 1920x1080 (be it actual or by aspect) to adhere to the standard, is there any benefit to this from using 32bit float?

Obviously the scale itself will take a long time, and more than likely, from the numbers I've seen, 32bit float will take about 4x that.
Removing the time factor, will the newly generated upscaled frame benefit from 32bit float?

The reason I ask is that 32bit float seems to be a nuance of its audio-based cousin (which I really don't doubt).
In turn, interpolation of colour (from the examples we've seen so far) is pretty much all we've seen. OK, we know it can benefit colour grading. What then?
Am I asking for too much? Am I getting this all wrong?
Are people understanding what I'm talking about here?

The question, however, raises a few concerns about the requirements of the output file itself when processed in 32bit float.

That fluctuation of algorithm will play a part in this interpolation of frame rate and scaling. (Think about it... how good does slow motion look with progressive DV? Consider the same with long GOP structure source in HD progressive scan... see where I'm getting with this?)

There's also the issue of scaling and slow motion happening at the same time... and we all know that if you're off by a pixel, you've thrown Vegas down the gurgler.

This processing is maths crazy, but it's something we need to think about... unless we're looking at CF alternatives with no scaling.

BUT considering slow motion and scaling are forms of newly generated media, in addition to the interpolated nature of the way float processing works, one can only assume that it WOULD benefit these uses during processing on output... phew.

Thing is, these issues haven't been discussed as yet.

I think I have the fundamental questions asked here, but I'm hoping to hear something from Sony themselves about the other benefits of float processing, as there's more to video than colour grading or fixing up a questionable acquisition format in post, IMO.

With the scaling requirements most of us need to consider, it would be nice to know if the added benefit of 32bit float will inherently cover the entire process (and how): not just colour, but frame rate and scale.


Comments

Chienworks wrote on 9/3/2007, 5:26 AM
Any modification of the image requires resampling the data. 32 bits will give a smoother resample than 8 bits will. However, since the image has to be output at 8 bit, I'm not sure how much of that smoother resample will make it to the final output.
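
To put a rough number on what "smoother resample" means, here's a toy numpy sketch (my own illustration, nothing to do with Vegas' internals): halve the brightness of every possible 8-bit level and double it again, once with integer intermediates and once with float intermediates.

# Toy sketch: why intermediate precision matters when resampling 8-bit levels.
import numpy as np

src = np.arange(256, dtype=np.uint8)          # every possible 8-bit level

# 8-bit pipeline: each step rounds back to integers
dark8 = (src // 2).astype(np.uint8)           # halve, fractions discarded
out8 = np.clip(dark8.astype(np.int32) * 2, 0, 255).astype(np.uint8)

# float pipeline: fractions survive until a single final rounding
darkf = src.astype(np.float32) / 2.0
outf = np.clip(np.rint(darkf * 2.0), 0, 255).astype(np.uint8)

print("unique levels left, 8-bit path:", len(np.unique(out8)))    # 128 - every odd level lost
print("unique levels left, float path:", len(np.unique(outf)))    # 256 - all levels recovered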

Capture at 32 bit float? I don't think this makes any sense. The incoming video will be 8 bit. Converting it to 32 bit while capturing won't gain anything other than waste a whole lotta disk space really fast.

I think what a lot of people would have preferred over 32 bit float would be the ability to import and export 10 bit.
DJPadre wrote on 9/3/2007, 5:54 AM
To some, space is not an issue...
32bit conversion on capture might actually benefit many and save that process during the smart render. If it's already done, one won't have to think about it at all... What's the point of smart render if your source is 8bit but you want to process in 32bit? Every clip which bears the 32bit tag will need to render; however, if the clip is processed during capture, you'd be saving yourself A LOT of time.

As for Vegas, it seems that it's now being pushed purely for long GOP structure editing.
AVCHD is another format which we are yet to be advised on, but for now, it seems Sony is adhering to the needs of the HDV masses and preparing for XDCAM adoption, which I can tell you NOW will change the way event videography is acquired.

Don't doubt that most DSR300+ users will jump ship... this is the cam they've been waiting for. In the end, it's still a long GOP format, irrespective of bitrate, and the long GOP nuances are seen here, as well as in its little cousin, HDV.

As for 32bit float, even though many assume an element would use the full 32, the actual float pretty much means that it fluctuates. Much like variable bit rate MPEG encoding, with lows, highs and averages. Taking a leaf from float processing with audio, more than likely, depending on the intensity of the work itself, render times/processing will fluctuate with it.

I can only assume (and I might be wrong) that the float will cycle through from 8 to 32bit, with 32 being reached on frames which require additional processing resources, such as fast motion or water, which inherently push the HDV codec itself.
For basic stuff I would then assume that it would remain at 8, or push up to 10 or 12 even... but on average motion stuff I don't see it going over 24. Like I said, I could be wrong.

As for output, the question here becomes another valid point for intermediates. Let's say we started with M2T, output using 32bit float to, say, Huffyuv or Sony YUV 4:2:2 (Huffyuv's faster), THEN compress/encode to BD or DVD...
Wouldn't that then offer a "closer to 10bit" output? Considering we're not outputting to 8bit...

Aside from being able to bring 4:2:0 closer to 4:2:2, the output itself needs to benefit from this, as in the end, despite the middle-man interpolation at 32bit, you're still left with the nuances of the acquisition format, that being 8bit... slightly richer in colour, but 8bit nonetheless. Consider that a commercial DVD's source is either film, uncompressed HD or 4:2:2. Then consider the quality of the resample from HD 4:4:4 down to DVD MPEG-2 4:2:0. It's still MPEG-2, but it makes native 4:2:0 look like poo by comparison...

By changing the output format, wouldn't that then theoretically allow us to "bump" the colour space up?
Now the argument would then stem across to the "you can't have what's not there" mentality, but clearly we've seen "what's not there", and now with V8, it obviously is...

And the idea of fix it in post again raises its head, as it's in the post-production stages that we're extracting MORE from the acquisition material than we even knew was there in the first place.
Once people start to really push the colour, they will begin to film with post correction/tweaking in mind...
Not everyone is a shoot-and-cut editor.

Now, don't doubt that people will begin shooting with this post CC tweakability in mind. I do it now and have been doing it for about 6 years... yes, my footage is good to begin with, but now I have more flexibility within it.

The question though is that with M2T long GOP, what we're considering for BD delivery is long GOP rewrites of entire packets of frames.
Not individual frames like DV/DVCPRO HD (which, aside from colour, is why those scale so well IMO). But considering the nature of the GOP and the total reliance on the adaptiveness within the 32bit float process... it boggles the mind.

Glad I'm not a programmer... but still, I'd love to beta test.

farss wrote on 9/3/2007, 5:56 AM
PJ, you raise many interesting questions here, and your analogy to audio isn't entirely unreasonable. That leads me to add one more question: when is Vegas going to have 32bit float audio processing? It seems strange that what started out life as an audio-only app has, since I joined the show, had no improvements in the audio department. OK, one: we can finally import BWF, and a big thank you to SCS for that. However, Vegas has consistently been shown to lag behind in basic audio quality, and the lack of attention to that side of the process is kind of odd.

But getting back to the questions that you raise: I don't believe that the shift to a 32 bit pipeline is going to help in the areas you're concerned about. Improving spatial resolution requires either more pixels, better things in the pixels, or better interpolation. As for the last one, I don't know if there are any better ways to do it than Vegas already uses, although better (read: expensive, both in time and money) de-interlacing can help dramatically. The rest, I fear, really comes down to the quality of the image in the first place, and that's where I do see an issue. We're starting to see cameras that record RAW RGB becoming more commonplace. What is still unclear is whether Vegas can read the data from them into its 32bit float pipeline and output into codecs that do justice to what went into that pipeline.
But things get messier than that. The complexity of these newer cameras brings with it the need for NLEs that are quite different from the traditional model. The codec itself becomes the engine, the NLE just the shell. Vegas, with its new 32 bit RGB pipeline, would seem to be in an ideal position to handle this new world; the question is, will it be allowed to. To date, 3rd party support has been scant, and not for lack of desire either. From all that I've been able to glean, they've been deliberately locked out by design, and this isn't a good thing. Just today I was talking to a client whose almost entire workload is greenscreen using FCP. He's now seriously considering jumping ship just to get a better keyer. In the end an NLE is a kind of pedestrian tool: snip, snip, glue. The real cutting-edge stuff moved long ago from the problem of moving frames around to what is happening in those frames, and that's why plugins are the big thing, in the audio world as well; name a high-end plug that doesn't run in Pro Tools!
As to frame rate conversion for progressive material: well, first I think we have to compare apples to apples. Trying to slomo 25p is very different from trying to slomo 50i; there's double the temporal data in 50i. The only way to improve that is, again, complex pixel tracking: slow and expensive. The good news might be that high speed cameras are becoming more commonplace. Sure, the top shelf stuff is still very expensive, but a lot cheaper than it used to be, and nothing beats having more fps in the first place.

I really wonder where the NLE business is headed. How much more can be crammed into any of them to keep us forking out more money? A basic tool like VMS is more than enough for much of what I do; I've seen people win contracts with big clients using only iMovie. The next few years could be very interesting.

Bob.
MarkWWWW wrote on 9/3/2007, 6:14 AM
To the best of my knowledge Vegas has always used 32-bit floating point for its internal audio machinations.

There was some talk a while ago of moving to a 64-bit FP internal scheme, but that has not happened yet, as far as I know.

Mark
farss wrote on 9/3/2007, 6:23 AM
You're right, my bad. SF uses 64bit FP, Vegas 32bit FP.

Bob.
DJPadre wrote on 9/3/2007, 6:30 AM
I hear you, Bob!

As for audio, I believe Sony has deliberately left out a few things which are found in ACID and SF, simply to coerce sales down that track.
Obviously they have a business to run, but I mean, when you look at it closely, there is no reason why you can't do what you can do in SF within a Vegas trimmer environment...
Looking at Adobe's take on Cool Edit and what they've done to evolve it into Audition, Audition is now a pretty decent hybrid of SF and ACID combined. Fair enough, it doesn't come close to either, but it's a useful tool for many. Considering Cool Edit was less than a hundred bucks and is now sold for over 700... it makes you wonder.

Getting back to SF within Vegas: the plugin engine, GUI and workflow are virtually identical... so I don't see why we can't have SF WITHIN Vegas itself, in turn allowing us to create second takes and run non-realtime audio plugins (i.e. non-enveloped plugs from 3rd parties like Beatmodel, TC Native, Waves etc.) on each event, as opposed to the track filtering we see now.
I mean, they brought in Cinescore... I wonder if Cinescore is also workable as a plugin within DVDA? I don't see why it can't be.

If not, why not? How many hours do people waste remastering a soundtrack down to 1 or 2 minutes to make it fit a simple menu? LOL

See where I'm getting at? The real-world uses of these apps and their integration HASN'T been considered... I don't think so anyway, not in the chain of events I'm looking at.
And I think this is where Adobe trumps everyone, because they've tied EVERYTHING in with each other...

In turn, for me to master a basic DVD audio soundtrack, I'd need to import my footage into Vegas, mark where I want each button to trigger based on my timescale, compose my track (and video edit) based on that timescale, then reimport that soundtrack, then realign my buttons to the track to tighten the composition.
Why not allow me to tweak my audio as I work on my menu? Add a crash here, a backspin there... give me a 909 kick drum there...

But what if I want to trigger button motions based on musical hits, grabs and beats? What if I wanted sound effects as each button is highlighted and clicked on?
Half this stuff is basic, the other half requires a lot of post work.

It's like trying to hit start and record at the same time on 2 different tape decks... doing one thing to sync in another. In my old MIDI days, we used to use ReWire for MIDI control and VAC (virtual audio cable) to route the output of our softsynths back into the sequencer.
We don't even have that option here.

Am I making sense?

So with all this, we can see how integration with the SMS brethren hasn't occurred, be it 64bit float audio options (we're at 32 now) or plugin interaction between apps, which is non-existent.

You're right about one thing though... there are guys out there using some really basic shit and making a killing, so why are we busting our balls trying to compete?

I mean, aren't we in this business to make money?
But are we working smarter?
I love the artform of what we do, but I'm still here to make a living with these tools. But if little Johnny down the road is making as much money as I am, with less hassle and less fuss, then why shouldn't I lower myself to join him, to at least have the ability to compete? It's less fuss and less I have to think about... but I'm making the same money... aren't I?

Are we here to work smarter or harder?
HD, it seems, has increased our workload to almost double our post-processing time.
Clients cannot see this until it's thrown in their faces; in fact, I even went so far as to post on several wedding fora simply to educate the few who have misconceptions about the format and its delivery. I guess we do what we have to do to survive in this profession.

And for what? An extra grand a pop? If that?
Chienworks wrote on 9/3/2007, 3:56 PM
"As for 32float, even though many assume an element would use the full 32, the actual float pretty much means that it fluctuates. Much like variable bit rate mpg encoding, with low highs and averages. Taking a lef from float processing with audio, more than likely depending on the intensity of the work itself, render times/processing will fluctuate with it..

I'm not one of the developers so I can't speak definitively, but my own personal opinion is that there isn't an ounce of sense in those statements. I think you might have to read up on what 32 bit floating point processing is. It has nothing to do with data flow, motion, compression, bit rates, or image complexity. It only involves the accuracy of the color calculations at each pixel. As far as what the pixel next to that one contains, or the ones in the frames before or after, 32 bit floating point processing has no bearing or effect.

There would be no reason for Vegas to switch between various numbers of bits while processing the video. If it is going to work in 32 bit floating point, then it will do every pixel of every frame in 32 bit floating point. There wouldn't be anything gained by deciding that some sections need fewer bits. In fact, having this sort of decision take place would probably slow the rendering process down considerably.
winrockpost wrote on 9/3/2007, 4:20 PM
The more I'm reading and talking with some techno buddies, trying to wrap my brain around this 32 bit float stuff, the more the ad "Surpass traditional 10-bit standards with 32-bit floating point video processing." is bugging me.
It ain't the same 10 bit stuff some have been after... yeah, brilliant observation I know, a little slow here,
but... looking forward to seeing what it may do for ME though.
RBartlett wrote on 9/3/2007, 5:14 PM
Float32 has been mentioned as an optional project setting. It isn't about image supersampling, pixel oversampling or frame rate interpolation. While it may have a knock-on impact if you dig deep into those processes, it would only be a side effect of scaling's adjacent-pixel calculations. But this is guesswork at this stage.

I appreciate Float32 when I approach the thinking like this:

8 bits per channel RGBA has values for each channel between 0 and 255.
If converting Y'CbCr to RGB or vice versa, some truncation or rounding will occur between these formats. Float32 sets out to allow fractions of levels to be remembered between each number in the 0 to 255 range. Where you work with Float32 in this way, you're restricting how many digits occur before the decimal point (the whole-number part stays within 0 to 255) while keeping the fractions after it.

Color correction may skew pixel levels against your favorite or specific color-biasing curve. In these cases, the extra latitude of all these in-between values is valuable, saving unnecessary steps between adjacent pixels. These blemishes can be seen as aliasing, quantization noise or contouring.
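
Purely to illustrate that truncation, here's a small numpy sketch of a full-range BT.601 round trip (my own toy example, not Vegas' internal math): convert random RGB pixels to Y'CbCr and back, once storing the intermediate as rounded 8-bit integers and once as float32.

# Toy sketch: RGB -> Y'CbCr -> RGB round trip, 8-bit intermediate vs float32.
import numpy as np

rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, size=(100_000, 3)).astype(np.float32)
r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]

y  = 0.299 * r + 0.587 * g + 0.114 * b
cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b

def back_to_rgb(y, cb, cr):
    r2 = y + 1.402 * (cr - 128.0)
    g2 = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b2 = y + 1.772 * (cb - 128.0)
    return np.clip(np.rint(np.stack([r2, g2, b2], axis=1)), 0, 255)

rgb_from_8bit  = back_to_rgb(np.rint(y), np.rint(cb), np.rint(cr))   # intermediate rounded to integers
rgb_from_float = back_to_rgb(y, cb, cr)                              # intermediate kept as float32

print("pixels changed via 8-bit intermediate:", int((rgb_from_8bit  != rgb).any(axis=1).sum()))
print("pixels changed via float intermediate:", int((rgb_from_float != rgb).any(axis=1).sum()))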

Float32 doesn't have a way out of the pipeline. Or at least not one that has been discussed. Also, how Vegas will handle 10bit Y'CbCr or 10bit RGB formats from files, supported capture cards, or out to external preview interfaces is yet to be discovered. One might expect that SonyYUV would be expanded to Float32 support, but then really, if you don't use sub-projects or pre-rendered footage, the benefit of re-acquiring footage in Float32 (when it may well have come in, or be going out, as 4:2:0 8bit compressed) is somewhat moot.

The whole approach of 10, 12 and 16bit per channel formats tackles the requirement for increased latitude from a different perspective. Instead of having fractions of levels at your disposal, the 10bit way is to multiply up the levels and have integer values in between. 10bit isn't readily converted to 8bit by truncating the register, as the range (including illegal top-end values) still goes from 0-1023. Float32 is rounded by using the calculator chip (the floating point or math unit) to deliver the target range, optionally by pulling the bits in a way that truncates without heaps of CPU/FPU cycles. Float32 also has more levels when you compare the merits of the 10bit and Float32 formats. So if you are selecting what representation you want to offer, and you can afford Float32, then you should use it. Sony have.

Float32 also allows more decisions to be made about what numbers are available, whereas fixed integer representations can be difficult to normalize for white/black levels and balancing. That is, if you are writing an auto white-balance filter or something like that.

Support for 10bit codecs, using various wrapper types (AVI, MPEG-2, MOV or otherwise), is still unexplained in either 8bit or Float32 Vegas project modes.

Float32 is a very different representation from 32 bits per channel integer. Float32, for an 8bit-centric tape/disk environment, is the best of both worlds: more latitude, but without too much effort to manipulate the figures.

Here is a quick 10bit example.
Say you have a 10bit 'R'ed sample with a value of 999. Remember that 10bit goes from 0-1023. If you divided this value by 4 for its equivalent 8bit value, you'd get 249 in integer maths. In Float32 you'd get 249.75, which would round to 250 if you were outputting to an 8bit format downstream of the pipeline. Now in the real world you'd be doing some more maths on the 'R' with the other channels to get the Y'CbCr vector representation, but the point still stands: Float32 is more accurate, but needs more than 10 bits to achieve it. Hence why acquiring or rendering to Float32 isn't the current imperative. Processing footage in Float32 still needs more resources, but you might only notice the impact in RAM. AFAIK.
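
To push that example beyond a single value, here's a throwaway numpy snippet (my own, just arithmetic) comparing integer truncation with float rounding for every 10-bit level:

# Toy sketch: 10-bit -> 8-bit by integer truncation vs float32 rounding.
import numpy as np

levels10 = np.arange(1024)

as_int   = levels10 // 4                                         # integer maths: 999 // 4 -> 249
as_float = np.rint(levels10.astype(np.float32) / 4.0).astype(np.int64)   # float: 999 / 4 -> 249.75 -> 250

print("levels where the two methods disagree:", int((as_int != as_float).sum()))   # 384 of 1024
print("10-bit 999 ->", int(as_int[999]), "(truncated) vs", int(as_float[999]), "(float rounded)")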

winrockpost wrote on 9/3/2007, 5:19 PM
Man am I confused... I'm gonna shut up 'cause it's over my head, and wait till 8 ships and maybe figure it out :)
DJPadre wrote on 9/3/2007, 8:53 PM
This truncation... or in other words, interpolation between decimal actuals, is the key to this IMO.

We start with 8bit, we mess with curves etc., process in 32bit, then output back to 8bit.

Now this in-between step is smoother, cleaner, more accurate and less prone to aliasing, as the curve effect (as an example) is no longer combing the output. Instead, those "combs" you see in the scopes would be smoothed out.
For those who have no idea what I'm babbling about, run colour curves and mess around with them, open your histogram and watch the response of the filter, and you will see how the colour begins to stretch in a series of steps. These steps look like vertical combs.
Now basically it seems that, at the outset, this combing or stepping is truncated or interpolated away when using 32bit.
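
For anyone who wants to see those combs without even opening Vegas, here's a rough numpy illustration (my own toy, not the actual curves filter): stretch an 8-bit ramp that only uses levels 64-191 out to the full range and count the occupied histogram bins; the empty bins are the comb teeth.

# Toy sketch of histogram "combing" after a contrast stretch of 8-bit levels.
import numpy as np

src = np.repeat(np.arange(64, 192, dtype=np.uint8), 1000)   # footage only using levels 64..191

stretched = np.clip(np.rint((src.astype(np.float64) - 64.0) * 255.0 / 127.0), 0, 255).astype(np.uint8)

hist = np.bincount(stretched, minlength=256)
print("output levels occupied:", int((hist > 0).sum()), "of 256")   # ~128: roughly every other bin is empty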

Ok, that is understood, and in turn we are left with a better looking image.
Now...

What if we want to output to uncompressed? Are we still stuck at 8bit?
What about CF 4:2:2?
What if we're starting with HDV, then scaling down to SD 16:9 to achieve a near-4:2:2 colour rendition?

In addition, when scaling or slowing down, the entire process at 32bit has now changed the inherent behaviour of 8bit colour.
So that in-between truncation, considering it's A FORM of interpolation, should in theory also improve slow motion and scaling, considering these two fundamentally use colour as the basis for media generation.

Am I wrong in thinking this?


Chienworks wrote on 9/4/2007, 4:20 AM
You're getting closer. Being able to input and output at more than 8 bits is probably far more crucial than working with 32 bits internally. No matter how well the color is calculated internally, outputting 8 bits will still subject the image to banding. It may not be as bad when calculations are done in 32 bits, but it will still be unavoidable.

I'm not sure why you are hung up on the slow motion and scaling. The change to 32 bit processing affects any and all color calculations. It affects slow motion and scaling in exactly the same way it will affect color curves, color correction, compositing, titling, opacity, transitions, crossfades, etc., which is to say, some, but not much.

"Ok that is understood and in turn we are left with a beter looking image..

That's exactly it there. Input and output at 8 bit are still going to be the major hurdles. That's why all the folks asking about 10 bit care more about that than 32 bit internal processing. Not to say that 32 bit internal processing won't help, but the input and output limitation is far more serious.

It's the same thing as all the audio folks talking about recording in 24 bit, editing and mixing in 24 bit, then complaining that the audio still has to be reduced to 16 bit when burned to a CD. Yes, the 24 bits help make the editing more accurate, but all those extra bits are chopped off on the final output.
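
A quick numeric aside on that analogy (my own illustration, made-up signal): take a very quiet tone edited with plenty of headroom and compare keeping it at 24-bit precision with chopping it to 16 bits at the end.

# Toy sketch: a -80 dBFS sine kept at 24-bit precision vs reduced to 16-bit.
import numpy as np

t = np.linspace(0, 1, 48_000, endpoint=False)
quiet = 1e-4 * np.sin(2 * np.pi * 440 * t)             # roughly -80 dBFS sine

as_24bit = np.round(quiet * (2**23)) / (2**23)          # kept at 24-bit precision
as_16bit = np.round(quiet * (2**15)) / (2**15)          # chopped to 16-bit on output

def snr_db(clean, coded):
    err = coded - clean
    return 10 * np.log10(np.sum(clean**2) / np.sum(err**2))

print("SNR kept at 24 bit  :", round(snr_db(quiet, as_24bit), 1), "dB")
print("SNR reduced to 16 bit:", round(snr_db(quiet, as_16bit), 1), "dB")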
RBartlett wrote on 9/4/2007, 7:22 AM
MPEG-2 output from Vegas is a special case, as the codec is shipped with Vegas (MainConcept), and Spot has already shown that the Float32 pipeline benefits the Vegas operator immediately, without CC, blur or other related treatments being involved.

For other codecs, the important snippet is to know that the 8bit Y'CbCr coming out of our typical source compression formats (be they HDV 4:2:0 or AVCHD) is going to enter the pipeline and be colorspace-converted with Float32 math, then go through your list of treatments and manipulations, and finally go back to Y'CbCr again, with the math involved being capable of Float32 accuracy. This is what we want: in and out in Float32, and ideally to present most compression codecs with either Float32 or 8, 10 or more bits of Y'CbCr, whether 4:4:4:4, 4:2:2:4, 4:2:0:4 or 4:1:1:4.

Then the final codec's compression and writing to MPEG-2 or H.264 will have a good chance of continuing this quality through the entire process.

We'll have to see whether Sony can work this out with 3rd party codecs generically, or if they have a list of 'honey-codecs'. History suggests that MPEG-2, CineForm and DV will be the honey-codecs for the time being (other than RGB24 and SonyYUV). Although, depending on the Windows video subsystem interfacing, or perhaps tunable render parameters (like Satish provides on ingress into his FrameServer), we may be able to determine this without having to wait for Sony, beta testers or official documentation. Although magic like this is usually hidden away, and we as operators sit and assume that the quality issues have already been thought out for us. Big mistake!

I still go along with the idea that we didn't have it so bad with Vegas and the RGB24/8bit pipeline. For many, I'd recommend they stick with that in Vegas 8, unless they can't find any other way of slowing the NLE down on their bleeding edge machine! If you render overnight or over a weekend, then Float32 can correspondingly be left on. Again, I'm suggesting this with no first-hand knowledge, only a techie's appreciation for what we should have available when Vegas Pro 8 is in the wild.
rmack350 wrote on 9/4/2007, 9:55 PM
While DJPadre's posts are making my head spin (and not in a good way), I think the point about histograms starting to look like combs is important.

When I'm doing color correction in Photoshop I like to work in 16bit color until the final output. I find that photos tend to look a little less noisy if I work this way.

Take adjusting Levels as an example. If you take a dark shot and brighten it up with Levels you'll see a histogram that looks a lot like a comb, with gaps in it where you stretched 127 levels over 255 values, for example. This *could* have the effect of increasing banding if you were doing this to a simple gradated image, or it could look noisy in a more complex image.

You really don't see this sort of noise or comb effect as much if you work in a higher bit depth like 16bit. (You don't see it on screen even in the 16bit image, because your display card is still outputting an 8-bit image, but the final 8-bit output looks just as good.)

Let's take this example and apply it to Vegas. Suppose you've made a chain of 3 or 4 effects like Levels, Color Correction, etc. It's probably a common situation. Now, suppose that each of these effects was going to have this comb-like effect on the image, and that it's cumulative. Now perhaps you can see how taking that 8-bit video, running a stack of filters on it in 32-bit mode, and then saving the render in 8-bit mode would still give you a big advantage.
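
To see how quickly that accumulation bites, here's a small numpy sketch (again my own illustration, not Vegas' filter code): run a darken/brighten pair three times over, once quantizing to 8 bits after every step and once only at the final render.

# Toy sketch: stacked filters with per-step 8-bit rounding vs one final rounding.
import numpy as np

src = np.arange(256, dtype=np.float64)           # every 8-bit level

def darken(x):   return x * 0.7
def brighten(x): return x / 0.7

chain = [darken, brighten, darken, brighten, darken, brighten]

# 8-bit pipeline: quantize back to integers after each filter
x8 = src.copy()
for f in chain:
    x8 = np.clip(np.rint(f(x8)), 0, 255)

# float pipeline: keep full precision, quantize only for the final render
xf = src.copy()
for f in chain:
    xf = f(xf)
xf = np.clip(np.rint(xf), 0, 255)

print("levels shifted, rounding each step:", int((x8 != src).sum()))
print("levels shifted, rounding once     :", int((xf != src).sum()))   # 0 - round trip survives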

Now, one of the reasons people liked the idea of 10-bit video formats is that they have a very similar effect on stacks of filters. The image holds up a little better because of the increased range of values the filters can make calculations on.

I'm making a guess here, but I'd think that running stacks of filters in 32-bit float mode accomplishes the same things people look to 10-bit codecs for, but it does it in the processing chain rather than at the codec level - you don't write 32-bit float files. What you get is less accumulated noise in your 8-bit render.

I'd be pretty sure that Vegas' having a 32-bit float capability also allows it to work with 10bit codecs like the free Black Magic codec. You certainly couldn't work with a 10bit codec when Vegas only supported 8-bit processing. My guess is now you could edit 10-bit media in 32-bit mode and make 10-bit renders if you wanted, but you might just find that 8-bit renders look just fine and you don't need to use 10-bit.

It's interesting that you can turn 32-bit processing on and off. I'd assume this allows you to edit in a more spritely 8-bit mode and then do renders in 32-bit mode.

PPro has evidently been 32-bit internally for a little while now (though not quite the same as what Vegas is going to do, evidently), and also gives you the option of a 10-bit color mode for projects. But it might be good to remember that PPro does things very differently from Vegas. PPro prerenders the timeline to guarantee playback. To do this, you have to tell PPro what the final output type will be so that it can make the prerenders. So you have to tell it whether things will be 8-bit or 10-bit. Vegas doesn't work this way. Instead, it renders things as you preview them and caches frames in RAM in order to guarantee playback. Essentially, Vegas seems to give you uncompressed frames out of memory rather than prerendered frames off the hard disc. Maybe the point here is that Vegas is actually a little more flexible for not having to conform your project to predetermined output settings, but preview reliability suffers in comparison.

Rob Mack
Spot|DSE wrote on 9/4/2007, 10:09 PM
Without making your head spin: look at the three recent VASST YouTube uploads for Vegas 8 vs other uploads. No color correction or other processing, and you can see a huge difference in the HD-originated content. Deep, rich, contrasted, and for web vid... darn good.
DJPadre wrote on 9/4/2007, 11:47 PM
I think the issue of colour has been touched upon, and some posts have verified my thoughts on the issue of scaling and slow motion.
Now, if people are wondering why I'm focusing on these elements, it's because basically these 2 functions are probably the most common anyone will be using when editing.
Scaling: you're either looking at upscaling, or, predominantly for most of us, downscaling to SD DVD. That's the first reason I brought this up.
The second is slow motion on MPEG-2 long GOP formats.
Aside from the XDCAM EX and JVC 202, we tightasses still need to rely on post processes to get the slow motion we want.
The reason I mentioned interlaced vs progressive was in the hope that this 32bit process would improve the existing colour and luminance issues found within newly drawn frames (when slowing from progressive source), in addition to frame accuracy within the interpolation itself.
To be frank, progressive slow motion is unacceptable in Vegas if you go beyond 80%, if that... Interlaced, as mentioned by another poster, allows for temporal fields; in turn, field interpolation is far more accurate.
What gets me though is that cheap apps like Dynapel's SlowMotion, as well as higher-end stuff like Premiere CS3, CAN in fact perform this kind of interpolation.

So the question was raised to see not only whether 32bit would assist in colour, but whether that 32bit would apply throughout the entire process (considering the colour itself forms a part of said process).

One person also assumed that it would be... and I have to say that, judging by the spec, it should be.

Why am I so hung up?
I guess it's because I didn't notice anyone else taking these factors into consideration.
rmack350 wrote on 9/5/2007, 10:43 AM
I think the best presentation of codecs I've ever seen is on this site:
http://codecs.onerivermedia.com/

Something similar could be done to show the difference between 32-bit and 8-bit rendering, I'd think.

Rob Mack