Digital Fusion 5, wow!

farss wrote on 5/20/2006, 5:35 PM
Finally found the time to work through their courseware and load the unlimited full demo.
Felt like I'd entered another world (yes, the price of admission is 10x that of Vegas). So many times over the years working with Vegas and other NLEs I'd thought "if only they did this", and there it was, something that did it. Probably something to do with the sun or the water; Eyeon did start life down here before moving to Canada.

I haven't a clue how it stacks up against AE or Combustion. However, even though my brain is tuned to working with Vegas and I find 99% of everything else I look at oddball or difficult to get my head around, I felt right at home with DF.

Even if you'll never be able to afford this product, it's still worthwhile working through or just watching some of the courseware. A lot of the methods they use can be done in Vegas, and I'm not talking just the basics but rather how they go about getting output that looks better. Yes, it's way easier in DF, that's what all the dollars pay for, but with a bit of thought you can get Vegas to come close. One thing that should make life WAY easier in DF, though, is motion tracking and image stabilisation. Need to rotoscope something that's moving around a lot? Simple. First stabilise it, do the rotoscoping, then apply the tracking path back again to restore the original motion. DF will preserve the whole frame during stabilisation by wrapping the pixels, so no pixels get hurt.
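For anyone who wants to try the idea elsewhere, here's a rough sketch in Python with NumPy of that stabilise/roto/restore trick. It's my own toy illustration, not anything from DF: integer-pixel offsets only, a made-up tracker path, and np.roll standing in for the edge wrapping (real trackers work sub-pixel).

import numpy as np

def stabilise(frame, track_xy):
    dx, dy = track_xy                       # tracked position relative to frame 0
    return np.roll(frame, shift=(-dy, -dx), axis=(0, 1))   # wrap pixels, lose nothing

def restore_motion(matte, track_xy):
    dx, dy = track_xy                       # undo the stabilisation on the matte
    return np.roll(matte, shift=(dy, dx), axis=(0, 1))

frames = [np.random.rand(480, 720) for _ in range(3)]      # stand-in footage
track = [(0, 0), (4, -2), (9, -5)]                          # per-frame tracker data (made up)

stabilised = [stabilise(f, t) for f, t in zip(frames, track)]

mattes = []
for f in stabilised:                        # rotoscope on the steady frames
    m = np.zeros_like(f)
    m[100:300, 200:400] = 1.0               # dummy rectangular matte
    mattes.append(m)

moving_mattes = [restore_motion(m, t) for m, t in zip(mattes, track)]

In Vegas you'd be doing the equivalent by hand with pan/crop keyframes, which is exactly why a proper tracker makes this so much less painful.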

Two things did stick out as relevant to recent discussions.

DF will do GPU renders via OpenGL; however, they do point out that the quality is almost always lower than their software renderer, fine for fast preview but not for final output. It also does background rendering: while you think, it works!

They use feed-forward rendering. In other words, FXs in the chain don't feed a rendered frame to the next FX unless they have to; they just pass their parameters along. This means that, where possible, the frame is only rendered once, minimising quality hits. The degree of sub-pixel rendering is also configurable; sub-pixel rendering helps avoid nasties when pixels are moved around.
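Here's my own toy illustration (Python/NumPy/SciPy, nothing to do with how DF's engine is actually written) of why passing parameters down the chain beats rendering at every node: the transform "nodes" just multiply their matrices together and the pixels get filtered exactly once.

import numpy as np
from scipy import ndimage

def rotate(deg):
    a = np.deg2rad(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def scale(s):
    return np.diag([s, s, 1.0])

def translate(dy, dx):
    m = np.eye(3)
    m[:2, 2] = [dy, dx]
    return m

image = np.random.rand(480, 720).astype(np.float32)

# Three "nodes" in the flow only multiply their 3x3 matrices together...
composite = translate(20.0, 50.0) @ rotate(5.0) @ scale(1.1)

# ...and the frame is resampled exactly once, from the inverse mapping.
inv = np.linalg.inv(composite)
once = ndimage.affine_transform(image, inv[:2, :2], offset=inv[:2, 2], order=3)

# The naive alternative resamples three times, softening the image each time.
naive = image
for m in (scale(1.1), rotate(5.0), translate(20.0, 50.0)):
    mi = np.linalg.inv(m)
    naive = ndimage.affine_transform(naive, mi[:2, :2], offset=mi[:2, 2], order=3)

Each extra resample in the naive chain softens the image a little more, which is the quality hit the single concatenated render avoids.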

Like I said, I have no clue how DF stacks up against the other heavyweights in the compositing world. One thing that is attractive about it is that you can go from the video-only version to the full-blown film version just by throwing money at it, no new tools to learn.


Bob.

Comments

DJPadre wrote on 5/20/2006, 5:48 PM
Hey Bob, do you have a link to their courseware (as you call it)?
I'm interested in Fusion 5 as an app, and maybe as a distributor if I can get it to play nice with everything else...

Combustion is an awesome app, always has been, but it's a little too system-intense for many NLE-based workstations. This is why AE works so well: it doesn't require too much HW and your system doesn't have to be on steroids to get good results. But with AE we're talking Adobe, and that's another problem altogether. (Yes people, I did call it a problem... being that many, MANY people don't like the Adobe GUI.)

But yeah, any more info on training and I'd be intently interested.
cheers
P
farss wrote on 5/20/2006, 6:08 PM
Think this will work:
http://www.eyeonline.com/Web/EyeonWeb/Techniques/courseware/courseware_ed2.aspx

I got most of it on a DVD at NAB but I see there's even more available online.

One thing with their products: they seem to get cheaper just prior to the next release, and at that point the price of buying the current version plus the upgrade is way less than the price of the next version when it comes out. Kind of kicking myself that I didn't buy it after the last NAB; I would have effectively got 5 for around half price.
FrigidNDEditing wrote on 5/20/2006, 11:11 PM
At $5K it had better be substantially better than Combustion or AE, considering they're around 1/5th the price. However, I do see that there are student versions for $1,500 or so; I wonder if they have a student-to-full-commercial upgrade plan at all?

Dave
Coursedesign wrote on 5/20/2006, 11:27 PM
Their 8-bit only version is less than $1K, while the high bit depth version is 5 times that.

Fortunately nobody can see the difference between 8-bit and high-bit video. At least that's what many have said in this forum.

Doesn't match my experience at all, but they may have secret methods to bypass the laws of nature.

:O]

farss wrote on 5/20/2006, 11:46 PM
8bit or 10bit video displayed on a 6 bit LCD, of course no one will see the difference!

Try dialing in 4 stops of gain in the shadows of 8bit video and a 14bit DI and I doubt you'll find anyone who can't see the difference even on a 6 bit display.

Then again if you can afford a 14bit transfer and enough disk space to store it, $5K for some software would be coming out of petty cash.
apit34356 wrote on 5/21/2006, 12:31 AM
"Fortunately nobody can see the difference between 8-bit and high-bit video. At least that's what many have said in this forum." I found that interesting too, but since this is the vegas forum, 8bit video defines the world to most users. 16bit for "stills" was a hot topuc for many photographers. But I have found that, in the beginning, few actually used 16bit vs talk about of how critical it was to have it. I hope sony lets the vegas play in the big kids league, but vegas is very small product compared to other sony products and their market interest.

Just for fun, and to piss off MS, Sony and IBM should buy part of Apple...
FrigidNDEditing wrote on 5/21/2006, 1:16 AM
"Doesn't match my experience at all, but they may have have secret methods to bypass the laws of nature."

LOL @ Course, that's like saying that the 1-10 scale Ansel Adams devised is totally useless because no one can see the difference between a 3-8 and a 1-10. HA HA HA HA!!!
Higher bit depths are going to improve your dynamic range, and the more dynamic range I can get, the happier I am. If I'm shooting a grey wall and a man in a grey suit, all evenly lit, I wouldn't care, but usually a well composed shot is going to contain some contrast, shadows, etc., and of course if I'm shooting at 10 or 12 bit (like I have the kind of money/need right now, let alone disk space) then Vegas is a shot in the foot. Now that doesn't mean that 8-bit is bad, or unusable like some almost *seem* to think (not referring to folks in this thread at all). 8-bit, if used properly, can be VERY good. But that's like comparing film to the naked eye: (at least last time I checked) a good human eye can outperform the best film has to offer by several stops. I think the human eye has around 14 stops of dynamic range, whereas the highest dynamic range film can offer is in the 10-stop range. Could be wrong, it's been a few years and I could be recalling the numbers wrong.

Dave
Coursedesign wrote on 5/21/2006, 9:50 AM
There are many ratings on the dynamic range of film, usually 14-18 stops (although I think the latter is way over the top and even the first needs some skill to truly use), depending on the film stock of course.

The dynamic range of HD is still quite a bit less, which means light has to be managed much more carefully than when shooting with film, or you end up with blown-out highlights.

If you know Ansel Adams' Zone System, then you'll really appreciate what high-bit video can do even for video shot in 8-bit (which is most of it, even though the practical NTSC spec says 10-bit*).

How so? When you do processing on video, even something as simple as a fade-in or a fade-out, the end result will look completely different if calculated in 16-bit or, better yet, 32-bit float, compared to 8-bit. This difference in the final output is highly visible even on a 6-bit LCD screen.
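Here's a quick toy demo of the rounding-error side of this in Python (my own sketch, not how any NLE is coded internally): push one dark pixel through a few fade/gain steps with 8-bit intermediates, and again with float intermediates.

import numpy as np

shadow = np.full((1, 1), 12, dtype=np.uint8)        # one dark 8-bit pixel

def fade_8bit(v, gains):
    # round back to 8-bit after every step, the way an 8-bit pipeline would
    for g in gains:
        v = np.clip(np.rint(v.astype(np.float32) * g), 0, 255).astype(np.uint8)
    return v

def fade_float(v, gains):
    # keep the intermediates in 32-bit float, only round once at the end
    f = v.astype(np.float32) / 255.0
    for g in gains:
        f = f * g
    return np.clip(np.rint(f * 255.0), 0, 255).astype(np.uint8)

gains = [0.25, 0.5, 2.0, 4.0]                        # fade down, then push back up

print(fade_8bit(shadow, gains))                      # [[16]] -- the value has drifted
print(fade_float(shadow, gains))                     # [[12]] -- lands back where it started

Stack more effects on top and the drift gets worse, which is when it starts to show on screen.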

This has nothing to do with just banding, as some people think (although that can present an additional problem in certain situations).

The easiest explanation may be that you can make the video visually behave like film in going between shadows, mids, and highlights, in a way that cannot be compensated for with "film treatment" or S-shaped curves in 8-bit video.

The best is working in 32-bit linear float, which doesn't crunch the luminance into logarithmic steps, providing for "better looking math" when calculating changes.

Final Cut Pro, Motion, Premiere Pro, After Effects now all have high-bit support at a relatively low price point, as do some lesser known players.

My guess is that Vegas will have it in V7, as there is just no workaround for it (other than round-tripping to After Effects, etc.), and you are soon going to see even low end vendors bragging about 32-bit linear space support.

Heck, even some video games and consumer gaming video cards already have it.

They use the term HDRI (High Dynamic Range Imaging) instead, coming from the world of 3D modeling apps where they discovered the same major difference in realism working with 32-bit lin space.

This also means that the GPUs on inexpensive ultra-high performance video cards can handle this very effectively.

Even the very inexpensive Motion program (now bundled with Final Cut Pro) handles all effects in HDRI, and the output result of that has been deemed good enough for many Hollywood films....

=========================
* From The Discovery Channel:

NTSC/ Standard Definition Specifications:
Vertical Blanking 20 lines
Horizontal Blanking 10.7 microseconds, +0.3;-0.2
White level 700 mV (940 DEC/ 10 bit) (100 IRE)
Black Level 0 mV (64 DEC/ 10 bit) (7.5 IRE)

It can also be noted that 100% of broadcast video A/D and D/A converters are 10-bit.

GlennChan wrote on 5/21/2006, 1:43 PM
I think when talking about higher bit depths, you should be careful about what exactly you're talking about.

1- Capture/ingest: What bit depth the image is captured at.
Also important is the scaling used, as the scaling can make the most of the bits available, e.g. log encoding is good for film.

2- Output: What bit depth the output is in.

3- Rendering (inside a filter): This is generally up to the filter. Vegas can do 32-bit rendering in this sense. Vegas could support linear light processing within a filter.

4- Rendering (from filter to filter)

5- Linear light processing: Rec. 601 video is captured with gamma compression, which scales the bits so that they are close to the scaling of human vision. Taking the values to an exponent of 2.2 removes the gamma compression.

When you take values to an exponent of 2.2 they get a lot bigger: 255^2.2 ≈ 196,964. You'd generally want to work with 32-bit floating point numbers to avoid rounding error in the calculations (and if you're working with floats, you might as well use 32-bit floats instead of 16-bit floats, since there's little performance penalty for doing so).
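A minimal sketch of point 5 in Python, assuming a plain 2.2 gamma rather than the exact Rec. 601/709 transfer curve: decode to linear light in 32-bit float, do the math there, then re-encode. Values are normalised to 0..1 first; the ~196,964 figure is what you get if you skip that step.

import numpy as np

code_values = np.array([16, 64, 128, 235], dtype=np.float32)   # some 8-bit Y' samples

normalised = code_values / 255.0
linear = normalised ** 2.2                  # gamma decode: now linear light, 32-bit float

# e.g. a 50% dissolve between two sources, done on linear-light values
mixed_linear = 0.5 * linear + 0.5 * linear[::-1]

back_to_video = (mixed_linear ** (1.0 / 2.2)) * 255.0          # re-encode for display

print(255.0 ** 2.2)                         # ~196964 -- why integer types run out of room here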

---

IMO, #5 makes a very, very noticeable difference. #1, #2, and #4 are a little asinine in my opinion... they make a very small numerical difference and no visual difference (noise masks the subtleties). But a point could be made for #1 when cameras start capturing 10-bit or higher, and #4 (regardless of what cameras capture).
#4 does result in a performance hit in Final Cut Pro I believe (compared to the default 8-bit render).

And keep in mind that just because a NLE supports higher bit depths on ingest + output, it doesn't necessarily mean that it supports linear light processing like After Effects does.
farss wrote on 5/21/2006, 3:21 PM
I'd have to disagree about #2. Yes, it's not visually noticeable, however when the output is going to be further processed it'll make a big difference.
I've not struck this problem myself as I've never really worked with analogue formats, however there have been a few here who have abandoned Vegas because of issues with how it handles BetaSP.

We also believe this is an issue with Sony decks such as the J30. It'll happily squirt SP down SDI, however it seems it's only an 8-bit A->D conversion. That's fine for preview, but process that and PTT back to SP and things don't look too flash. There are certainly other issues that can affect image quality going down that path.
Conversely, digitising SP as 10-bit using 14-bit A->D converters, keeping 10 bits through the pipeline and printing back to tape from 10-bit seems to lessen the pain of repeated digitisation.

Bob.
Coursedesign wrote on 5/21/2006, 4:22 PM
Glenn,

You're certainly right about bit scaling when recording, and non-linear scaling has already crept down to prosumer cameras (although to less benefit than in a broadcast camera with a wider A/D and/or a pre-knee circuit that reduces the analog video signal from the highlights before it hits the A/D).

For output, we have certainly done OK with 6-bit and 8-bit displays, but once you have seen a high bit display (very pricey still) it's a different world. I saw this at Siggraph and NAB, wow! This difference is not about smoother grayscales, but HDRI.

Yes, Vegas, like most if not all NLEs, uses 32-bit math inside each filter/effect, etc., but as soon as you stack them you can get truncation errors.

About light compression, there's an interesting article about video projection in the latest issue of Broadcast Engineering, including the good news that if you cut the cd/m2 light output by 50% (by choosing a cheaper projector for example), the human eye perceives only a 24% reduction in "brightness" (the meaning of which is a big subject...).

In SD, DigiBeta captures true 10-bit, as does my humble Sony DXC-D50 camera.

A lot of the higher end HD cameras can capture 10 bits and beyond of course, and I think even a Canon XLH1 can do it with analog capture to a 10-bit A/D converter (their SDI out is apparently net 8-bit only).

For linear light processing, I think you really have to do 32-bit. This gives you a 24-bit significand, compared to an 11-bit significand (10 stored bits) in the 16-bit half-float format.
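A quick NumPy illustration of that significand difference (my own numbers, just to make the point): float32 keeps roughly 7 significant decimal digits, a half float roughly 3, and a half float can't even hold the larger linear-light values.

import numpy as np

x = 0.123456789
print(np.float32(x))            # 0.12345679 -- about 7 digits survive
print(np.float16(x))            # ~0.1235    -- about 3-4 digits survive

big = 255.0 ** 2.2              # ~196964, the linear-light value quoted earlier
print(np.float32(big))          # fits comfortably in float32
print(np.float16(big))          # inf -- half floats top out around 65504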

I liked Adobe's demo of linear light processing on their tour a few months ago. Very effective in showing the vast difference in blurs especially, and compositing is also affected substantially.

These are good days for visual quality geeks!

GlennChan wrote on 5/22/2006, 3:43 PM
You could do a linear-light blur in Vegas, since a filter can operate in 32-bit internally.

Of course, some operations you wouldn't be able to do without 32-bit support from filter to filter. For example, if you have a moving video/image with motion blur, you need 32-bit support in the motion blur and compositing algorithms. Or if you want to add a filter to an HDR image/video, you probably don't want Vegas to kill precision by passing the image along in 8-bit form.
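For anyone curious what a linear-light blur actually buys you, here's a toy comparison in Python (my own sketch, assuming a plain 2.2 gamma; not how any Vegas or AE filter is written): blur the gamma-encoded values directly, versus decode to linear, blur, and re-encode.

import numpy as np
from scipy.ndimage import gaussian_filter

frame = np.zeros((64, 64), dtype=np.float32)
frame[30:34, 30:34] = 1.0                        # a small, very bright highlight

blur_gamma = gaussian_filter(frame, sigma=3)     # blur the encoded values directly

linear = frame ** 2.2                            # decode (assumed 2.2 gamma)
blur_linear = gaussian_filter(linear, sigma=3) ** (1.0 / 2.2)   # blur, then re-encode

print(blur_gamma.max(), blur_linear.max())       # the linear-light blur stays brighter

The linear version keeps the blurred highlight noticeably brighter, which is the film-like falloff people chase with defocus and motion blur.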

2- However... I think the best approach would be for Vegas to integrate tightly with After Effects, Combustion, or other effects packages like them (AE would probably be the best because it's so common). When you get into that level of detail, you might as well take it into one of those packages. Vegas is an editing program and probably doesn't need to go into really advanced stuff like that.

3- What I've seen is that image noise and perceptual effects hide the difference between 8-bit and 10-bit ingest (where 10-bit is not doing HDRI, but merely has less rounding error / quantization noise). Perhaps something else is going on in the hardware... i.e. it adds less noise to the signal to begin with, and that just happens to correlate with 10-bit versus 8-bit?
A test would definitely show what's true and what's not.

There are a lot of people who believe 10-bit is better than 8-bit... which does make a practical difference if you're trying to market your product.
farss wrote on 5/22/2006, 6:00 PM
Any analogue-to-digital converter always has an error in the LSB, and that adds digital noise in itself. Most cameras run the A->D at more than 8 bits for processing and then convert to 8-bit, and I'd suspect that dithering is added at that point, which is basically noise.

It's much easier to hear the difference in bit depth than to see it.

12 bit audio is almost passable for final output, particularly given the proliferation of mp3 players.

However, record 12-bit audio and then dial in some gain and/or add some compression, and you'll sure hear the difference between 12-bit and 16-bit. Of course this is a linear encoding scheme.
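A rough numerical sketch of that audio point in Python (linear PCM, no dither, my own toy numbers): quantise a quiet tone at 12-bit and at 16-bit, apply 24 dB of gain as if it were pushed in the mix, and compare how much quantisation error comes up with it.

import numpy as np

t = np.linspace(0, 1, 48000, endpoint=False)
quiet = 0.01 * np.sin(2 * np.pi * 440 * t)       # a quiet tone, about -40 dBFS peak

def quantise(x, bits):
    steps = 2 ** (bits - 1)
    return np.round(x * steps) / steps

gain = 10 ** (24 / 20)                           # +24 dB of make-up gain

for bits in (12, 16):
    boosted = quantise(quiet, bits) * gain       # the error is amplified along with the signal
    err = boosted - quiet * gain
    snr = 20 * np.log10(np.std(quiet * gain) / np.std(err))
    print(bits, "bit -> roughly", round(snr, 1), "dB above the quantisation error")

The 12-bit version ends up with roughly 24 dB less headroom above its quantisation noise, which is the difference you hear once the gain brings it up.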

I suspect though that using a log encoding scheme runs the risk of other errors creeping into things when the image is heavily processed.

One thing we were very interested to see at NAB was the Silicon Imaging camera. This thing records RAW data; even setting the white balance has no effect on what's recorded, only the metadata is changed to indicate where the white point should be in post, and that can be changed to any value you desire. The same goes for all the other in-camera tweaks. The creative possibilities in post are huge.

But there's another side to this: is so much creative control desirable? At least one other player thinks not; their camera lets you select from a range of standard film stocks, and once recorded the image is pretty well locked. I can see the logic in this: given unlimited possibilities, one could well spend forever deciding on the 'look' instead of just getting on with it.

I think the next few years are going to be very interesting. We thought HDV was going to be the DV thing all over again, yet I think recent advances in technology are going to open up a Pandora's box of possibilities. Just how the NLE codesmiths keep up will be quite a challenge, and just what'll be left standing when the dust settles I haven't a clue.

What worries me is the welding of post systems to specific camera manufacturers. I'm just hoping that Vegas continues to be as generic as it has been in the past.

Bob.