Greater Creative Use of Video and Stills!

Grazie wrote on 8/18/2008, 11:39 PM
This has some truly massive value for NLE work. It's not just the remedial work on offer, which is just plain witchcraft, but the creative opportunities it could proffer that are presently shaking up my brain cells.

And I thought iZotope RX was voodoo . . .

Amazing! http://www.engadget.com/2008/08/16/video-tech-uses-photos-to-enhance-alter-shots-its-the-photosh/

I'll be listening out for your jaws dropping onto the pine Studio flooring.

Grazie


Comments

Grazie wrote on 8/18/2008, 11:44 PM
Just had a further thought. What would happen IF a camera could take one or two HIGH-res stills WHILE shooting, to be immediately available on capture/loading into an NLE?

Grazie
Spot|DSE wrote on 8/18/2008, 11:55 PM
This has been around for a while; search the forum and you'll find quite a thread on what they've been able to do, from about a year or more ago. Search YouTube for "University of Washington photo" and you'll find some amazing video quality in the stream.
DJPadre wrote on 8/18/2008, 11:58 PM
I WANT IT!
Grazie wrote on 8/19/2008, 12:00 AM
Yes it is amazing work! Love it.

Grazie
FilmingPhotoGuy wrote on 8/19/2008, 1:04 AM
BTW, how does one capture a single video frame to JPG using Vegas? I know Pinnacle has this useful option.

Craig
farss wrote on 8/19/2008, 1:17 AM
Click the disk icon in the preview window.

One tip: if your video is a bit noisy and the shot is static, you can get better results by adding some motion blur to the video bus master.

Bob.
rs170a wrote on 8/19/2008, 4:04 AM
Set the Preview window to Best/Full before grabbing the frame.

Mike
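
For anyone who wants to do the same thing outside Vegas, here is a minimal sketch in Python with OpenCV (assuming opencv-python and numpy are installed; the file names and frame numbers are placeholders, not anything from this thread). The second function averages a few neighbouring frames of a static shot, which is roughly the noise-suppression effect of Bob's motion-blur tip:

    import cv2
    import numpy as np

    def grab_frame(video_path, frame_index, out_path):
        # Save a single frame of a video file as a JPEG.
        cap = cv2.VideoCapture(video_path)
        cap.set(cv2.CAP_PROP_POS_FRAMES, frame_index)  # seek to the frame
        ok, frame = cap.read()
        cap.release()
        if not ok:
            raise IOError("could not read frame %d from %s" % (frame_index, video_path))
        cv2.imwrite(out_path, frame, [cv2.IMWRITE_JPEG_QUALITY, 95])

    def grab_frame_averaged(video_path, frame_index, out_path, window=5):
        # Average `window` frames centred on frame_index. On a static shot
        # this suppresses sensor noise, much like motion blur on the bus master.
        cap = cv2.VideoCapture(video_path)
        cap.set(cv2.CAP_PROP_POS_FRAMES, max(0, frame_index - window // 2))
        acc, count = None, 0
        for _ in range(window):
            ok, frame = cap.read()
            if not ok:
                break
            acc = frame.astype(np.float64) if acc is None else acc + frame
            count += 1
        cap.release()
        if count == 0:
            raise IOError("could not read frames near %d" % frame_index)
        cv2.imwrite(out_path, (acc / count).astype(np.uint8),
                    [cv2.IMWRITE_JPEG_QUALITY, 95])

    # e.g. grab_frame("clip.avi", 120, "frame120.jpg")
    #      grab_frame_averaged("clip.avi", 120, "frame120_clean.jpg")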
TheHappyFriar wrote on 8/19/2008, 5:41 AM
Just had a further thought. What would happen IF a camera could take one or two HIGH-res stills WHILE shooting, to be immediately available on capture/loading into an NLE?

Some can! :) My HD100U does this (6.1 MP stills while in HD or DV mode). Pretty sweet! Only three per scene, though; then I can get another three after I stop/start recording video.
fldave wrote on 8/19/2008, 7:08 AM
I posted this 5 days ago; I guess everyone has me on "Ignore". Ha!

This has the link to Washington.edu:


http://www.sonycreativesoftware.com/forums/ShowMessage.asp?Forum=4&MessageID=608636
CClub wrote on 8/19/2008, 8:35 AM
Does anyone know the direction they will be taking with this? Is this for public consumption, possibly as an NLE plugin or integration? Or is this just initial research that makes us drool but that we can't functionally use for 5-10 years?
DGates wrote on 8/19/2008, 8:52 AM
Pretty interesting. But damn, they need a different narrator. WAY too gay.

johnmeyer wrote on 8/19/2008, 9:22 AM
I was going to post about this when Engadget featured it a few days ago. However, I looked at the entire demo, and it is pretty clear that this is a lab demo that is very unlikely to ever be usable commercially. So many things have to be done correctly to make any of these magical fixes work that it is unlikely you'd be able to have them all set correctly. Also, unless they have truly discovered a whole new way of doing motion estimation (one of several underlying technologies needed for all this magic), most of what they are doing will break down with real-world footage.

Having made those negative comments, I'll add that the math required to make this work is staggering, and even though it is likely to remain a laboratory demo, it is nonetheless stunning.
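
To make the motion-estimation caveat concrete: the classic primitive underneath this kind of magic is block matching, where a block from one frame is exhaustively searched for in the next. A minimal sketch (assuming numpy; the block and search sizes are arbitrary illustration values) also shows why it is fragile: the lowest-scoring candidate is simply assumed to be the same content, which real-world footage routinely violates:

    import numpy as np

    def match_block(prev, curr, y, x, block=16, search=8):
        # Exhaustive-search motion vector for one block of a grayscale frame:
        # returns the (dy, dx) shift whose sum of absolute differences (SAD)
        # against the reference block is smallest.
        ref = prev[y:y + block, x:x + block].astype(np.int32)
        best_sad, best = None, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                if yy < 0 or xx < 0 or yy + block > curr.shape[0] or xx + block > curr.shape[1]:
                    continue  # candidate block falls outside the frame
                cand = curr[yy:yy + block, xx:xx + block].astype(np.int32)
                sad = int(np.abs(ref - cand).sum())
                if best_sad is None or sad < best_sad:
                    best_sad, best = sad, (dy, dx)
        return best  # the "motion" of this block, whether or not it really moved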
Grazie wrote on 8/19/2008, 9:47 AM
Oh, bottom! - G
jmeredith wrote on 8/19/2008, 10:31 AM
I read about this a month or so ago on an Adobe Blog...

From an Adobe Blog

Adobe Labs

It'll also be really cool to see this succeed - VideoTrace
johnmeyer wrote on 8/19/2008, 10:36 AM
Grazie,

I spent almost a year at a well-known VC firm back in 1991 and got to see all sorts of technology demos. I then consulted with startups for the next dozen years, so I've been lucky enough to see all sorts of amazing things. Even with that experience, I often get it wrong, so I may be wrong here. I sure hope I am, because being able to do all this without manually doing motion tracking would be fantastic. However, my "nose" tells me that this will have a tough time transitioning out of the lab and into the real world.

One good piece of evidence is the motion estimation part of the equation. Look at one of your favorite tools -- Deshaker/Mercalli -- these stabilization programs use motion estimation to determine where the entire frame needs to be shifted in order to move everything "back" to where it would have been if you hadn't moved the camera. As you know, they don't always work. What these tools try to do is move the entire frame. By contrast, what you see in these demos requires moving each and every "object" in the frame, and in some cases every pixel, independently of every other pixel. This is similar to what a program like Twixtor does when creating slow motion. Watch this simple test, made with a program similar to Twixtor. As the car goes around the corner, notice how it "breaks apart."

Slow Mo Test
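
John's distinction can be sketched with two stock OpenCV calls (assuming opencv-python and numpy; this only illustrates the two problems with standard library functions, not how Deshaker, Mercalli, or Twixtor actually work internally). Phase correlation estimates one shift for the whole frame, the stabilizer's problem; dense optical flow estimates a vector for every pixel, the retiming problem, and its per-pixel errors are what make the car "break apart":

    import cv2
    import numpy as np

    def global_shift(prev_gray, curr_gray):
        # One motion vector for the entire frame: where should the whole
        # frame be moved "back" to? (The stabilization problem.)
        (dx, dy), _response = cv2.phaseCorrelate(np.float32(prev_gray),
                                                 np.float32(curr_gray))
        return dx, dy

    def dense_flow(prev_gray, curr_gray):
        # A motion vector for every pixel (an HxWx2 array): the much harder
        # problem that frame interpolation for slow motion depends on.
        return cv2.calcOpticalFlowFarneback(
            prev_gray, curr_gray, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)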