For motion created inside Vegas (generators, pan/crop, track motion, FX, transitions), supersampling makes Vegas sample at a rate higher than the output framerate and adds interframe motion blur to these kinds of motion. It can be expensive to compute, and it only shows up in preview when preview quality is set to Good or Best.
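If it helps to picture the mechanism, here's a minimal sketch of the general idea in Python (not Vegas's actual code; render_at is a hypothetical stand-in for whatever produces a frame of Vegas-generated motion at a given time):

import numpy as np

def supersampled_frame(render_at, t, fps=29.97, factor=4):
    # Average `factor` sub-frames spread across one output-frame
    # interval; the blend is the interframe motion blur.
    dt = 1.0 / (fps * factor)
    subs = [render_at(t + k * dt) for k in range(factor)]
    return np.mean(subs, axis=0)

# Toy usage: a white square sliding across a black frame.
def render_at(t):
    frame = np.zeros((48, 64))
    x = int(t * 640) % 56
    frame[20:28, x:x + 8] = 1.0
    return frame

blurred = supersampled_frame(render_at, t=0.5, factor=8)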
Supersampling is arguably the most amazing tool in VV4. I attended the NLE shootout in Cleveland yesterday, where Douglas Spotted Eagle conducted the VV demo; during it he showed us a poor-quality video clip and what VV4 was able to do with it. You can read about it and see before and after shots at http://www.sundancemediagroup.com/tutorials/supersample.htm
Interesting!
I suppose one approach could be to render single events or sections (like a still with some pan/crop or motion on it) with supersampling to an AVI, which in turn is imported to the timeline (and finally rendered normally with the rest of the project). I usually have to render a few times before I'm happy with the result, so it would be good to be able to use supersampling only where necessary without slowing down all the renders too much.
Tor
I've used supersampling with pan/crop, but so far I can't see ANY difference when applying it to "bad" source files. Am I doing something wrong?
The procedure I followed:
1. Drop a low-grade 352x240 MPEG-1 file on the timeline.
2. Add a supersample envelope to a small event and set the sample rate to 4 or 5.
3. Apply a little unsharpen (I also tried other filters).
4. Render out as DV NTSC, 720x480, default template.
I expected a big improvement (due to the change in frame size and bitrate) like the one shown in SPOT's tutorial.
Nothing, nada, zip. No change in quality; it looks as if I did nothing.
Are there extra steps, or is it a hit-or-miss thing that works on some files and not at all on others?
Do you have to change the file format too, like going from AVI to MPEG or the other way around? The changes in SPOT's tutorial are dramatic; I can't see any changes, only a big slowdown in rendering.
Dennis said that it only works on motion created by Vegas. That would definitely lower the value of the tool; is there any way to use it to improve poor video? Say I slowed a video down so it's now running at about 1 fps; if I add a supersample envelope, will it run smoother?
Thanks,
Elias
SPOT used it another way, which piqued my interest. The default use seems to smooth things out by forcing Vegas to resample much more than it otherwise would, giving smoother playback for pan/crop stuff, etc. What SPOT did almost looks like magic, because the image looks totally rebuilt: supersampling on steroids.
Basically what I'm asking is: how did SPOT do it? I know he says what he did, and I did the same thing and nothing happened. :-(
The two images are like night and day; there's that much improvement in the demo SPOT wrote. I tried it on several samples and haven't seen it change any of them for the better yet, so I must be missing something.
I'm waiting for Billy's (Clinton's) book... that ought to be interesting.
SPOT's book is stuck at the publisher's... still. I think they've pushed the release date back twice already. Not SPOT's fault; it seems to be a common fate for technical books, computer books in particular.
I also tried it (with a 10 fps, low-quality AVI), but it looks just like no supersampling was applied.
What SPOT did looks like a scene from an episode of CSI, where they enlarge an object a few pixels wide from a video camera, and it turns out to be a piece of paper with a paragraph of fully readable text. What he's done looks kind of impossible.
I tried a supersample envelope of 6, which in theory should generate 60 fps for the motion blur to work with (I used a motion blur amount of 3, I think). But the resulting video still showed no difference.
From the page referenced earlier (with the tut): "Supersampling is just that: a super resample of existing media. Used on existing video at standard resolutions being rendered to resolutions of the same size will accomplish nothing with Supersampling. So, if you were thinking of using this tool to make your existing DV footage look better, forget about it. Nothing will be accomplished except long render times and footage that looks the same coming out as it looked when it went in. "
From all the explanations I've read, it can be compared to using two rulers, one finely incremented and one not: measuring at 1/64 of an inch will get you a more accurate reading than using a ruler that only goes down to 1/4".
Or, editing from the timeline: you can zoom all the way out and make some crude guesses about where to cut (forget the preview window for a moment). Zoom in, and you can time your cut at the individual-frame level.
Supersampling creates extra frames and bases FX or whatever on this extra info where it applies, like when altering frame rates or moving something around the frame; in effect, it's measuring with a finer-calibrated ruler (see the toy example below).
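Here's the ruler analogy in a few lines of Python, purely illustrative: quantize a moment in time to the nearest sample, first at the output frame rate, then at a supersampled rate.

def quantize(t, rate):
    # Snap time t (seconds) to the nearest tick of a sampling grid.
    return round(t * rate) / rate

t = 0.123456                     # when something "really" happens
coarse = quantize(t, 29.97)      # nearest output frame: ~0.1335 s
fine = quantize(t, 29.97 * 5)    # supersample factor 5: ~0.1268 s
print(coarse, fine)              # the finer grid lands much closer to t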
Judging from the pictures on the Sundance page, I figure it's pretty likely that other tech stuff is involved, perhaps in the algorithms/routines Vegas uses as it takes the original frame apart to sample it and then generates the extra frames. Since you can never improve on data that isn't there to begin with, my *guess* is that for it to work as intended it calculates the differences between existing frames: as with film/video and/or image restoration, it takes info from one place and uses parts of it elsewhere.
An explanation of the dramatic improvement, then, could be that while not enough info is present to improve any one frame by itself, sampling data is pulled in from several frames, plus the generated ones, to build a composite. IOW, not unlike manually using the clone tool in an image editor, but with far greater precision. One block in frame A might contain a clear picture at its center; the same block in frame B might have some good image data at its center that happens to sit just slightly above that in frame A, and so on.
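Whether or not that's what Vegas actually does internally, the "borrow good data from neighboring frames" idea is easy to sketch. A per-pixel temporal median over a small window votes out compression blocks that shift from frame to frame while leaving stable detail alone (frames is assumed to be a list of same-sized grayscale arrays):

import numpy as np

def temporal_median(frames, radius=2):
    # Replace each frame with the per-pixel median of its neighbors.
    # Artifacts that move around get voted out; content that stays
    # put in every frame survives untouched.
    stack = np.stack(frames)            # shape (T, H, W)
    out = []
    for i in range(len(frames)):
        lo, hi = max(0, i - radius), min(len(frames), i + radius + 1)
        out.append(np.median(stack[lo:hi], axis=0))
    return out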
I am still confused by the concept. Could SPOT or someone post two files, one showing conventional video and the other showing supersampled video, to clear this up for everyone?
The two photos on SPOT's site are the before and after. Scroll up to see the link. What's missing are the SPECIFICS:
1. What were the frame rate and frame size of the source video?
2. What, if anything, was done besides applying the supersample envelope and the blur filter?
3. What were the frame rate and frame size of the project?
4. What template was used to render?
5. What were the project settings?
I did everything but stand on my head trying to get supersampling to work beyond what Dennis says it does, and I still can't get the results SPOT shows, even following his instructions exactly. I even swiped a copy of the source image and tried with that. Nada. No changes at all.
Maybe SPOT used some of that magic powder IBM is pushing in their commercial. You know, just sprinkle it on and the servers fix themselves. :-)
FWIW, this got me curious, so I ran a test or three (as always, mileage may vary).
As I thought and posted earlier, I believe that in the example posted at Sundance, the problem with the original image is blockiness that varies over time. As such, if you layered the individual frames one on top of another and then drilled down at almost any point, you'd strike good image data at that point somewhere in the stack. Vegas is taking data that probably exists in several frames and averaging it into those frames where the data is poor or missing. The same thing was available as a standalone app for deriving stills from poor video, and it works because you're not improving existing data, you're averaging the available data over time. It works in the example because the noise or blockiness is in motion, and odds are it's never in exactly the same place in every frame.
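That standalone "still from poor video" trick is basically a one-liner under this theory; just a sketch, assuming a roughly static shot and frames as a list of same-sized arrays:

import numpy as np

def still_from_video(frames):
    # Noise that moves from frame to frame averages away; the image,
    # identical in every frame, is reinforced. Defects that sit in the
    # same place in every frame don't improve.
    return np.mean(np.stack(frames), axis=0)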
It's different from temporal filters (which *might* work better), as they intentionally look for differences, whereas IMO this is a byproduct of a process designed to improve measuring accuracy.
To test this out, I fed Vegas a really nasty MPEG-2 captured this evening from the TV station with the worst reception; it not only had noise and smearing, but regular, moving diagonal stripes across the frame. The results, while barely noticeable, were there in between the stripes. The stripes themselves were always present, and not enough data existed to make any difference to them.
I then took a decent WMV file and created a smallish WMV at half the original size, 15 fps down from 24, 256 kbps, with quality set to 100 (to maximize blocking and crawling artifacts). This file was imported into Vegas, the blur and supersampling applied, and rendered to full-frame 30 fps. It did make a very real difference in those areas plagued by motion artifacts. Where the image survived more or less intact and didn't vary over time, nothing was changed.
Being a cynic, I also can't help but wonder how many files were gone through to get the example shown.
I was trying more or less the same thing this afternoon. Out of 8 or 10 files, the worst one, the one with visible blocks shifting over time, was improved the most, but still not as dramatically as the example SPOT used. I then played the original and "fixed" versions side by side using VCDCutter, which can be sized to any size, so both vids play in time with one another. Doing that, if you look closely you can sometimes see minor improvement, but it's probably not worth the effort. Darn... it sure would be nice if you could really boost bad images this way and get a steady improvement. Really hoping someone can prove me wrong; this could be sweet if it always worked that well.
I don't think there is any way the picture at the bottom came from the picture at the top. I'm not saying SPOT is being deceptive; maybe he just included the wrong screenshots or something.
Look at the detail in the creases in the pants, the texture on the ground, the object on the right-hand side. There's *no way* you could get that from the top video, no matter how many before and after frames were sampled.
So here are my results of testing for the last 6 hours.
I tried different formats at different sizes and upscaled to NTSC DV. Some of the files were QT, some were ASF, and some were WMV. Some were 15 fps, some were 30 fps.
The first try made me excited: I took a 15 fps QT file, set resample to on and supersample to 5, and exported to NTSC DV. The in-between frames were excellent and the motion looked great in realtime playback. However, there were blocks in 'non-moving' areas, in this case the sky; it did not fix those. But my hopes were high because the overall resize/resample quality was so nice.
Then it went downhill from there. I *may* have found a bug in 4.0c; I can duplicate it. The next three files I supersampled and rendered to 30p, because the input files were progressive. When I loaded a rendered file, VV played the first few frames, then the FireWire output froze and VV stopped responding. If I use preview-window playback only, it plays back fine, but with 'preview on external monitor' on, it crashes. I did the same thing to 3 different files and all 3 crashed with the exact same error message. Hmmm...
Anyway, results were varied. Overall the quality was acceptable, but compared to the train wreck that DSE shows, not even close. What I am working with is nowhere near that bad, yet the results I get, like most of you, are nowhere near the results given in the tutorial. I do seem to get better results from QT files than from WMV or ASF files, however. The smallest upscaling was from a 240x180 QT file, and it looked very decent... about like a 20th-generation VHS dub, but it did NOT look like a blow-up to 720x480, so I guess that is good.
The last file I have been testing is one I converted with VirtualDub a year ago, with very good results. I did 3 versions in VV and none of them got rid of the blocks like VirtualDub's filters did. The source file is 320x204, 30 fps progressive, WMV. I tried supersample with progressive on and resample off, I tried resample on, I tried Gaussian blur, and I tried going to 24p. All the results are more or less the same: very blocky and not really acceptable if you go by the tutorial, and the original is not that blocky. (And, again, compared to the file shown in the tutorial, this file is 'master quality'.) A few things that might be an issue: 1. square-pixel to non-square-pixel conversion doesn't work that well in VV (some back-of-envelope numbers below); 2. lots of motion isn't figured out well by supersampling, sort of like converting to MPEG in a scene with lots of motion; 3. DSE is using some other settings he's not telling us about.
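On that first point, the square-to-non-square step really is a non-uniform rescale. A rough sanity check in Python, using the usual ~0.9091 pixel aspect ratio for DV NTSC (the numbers are illustrative, not anything Vegas reports):

SRC_W, SRC_H = 320, 204              # square pixels (PAR 1.0)
DST_W, DST_H, DST_PAR = 720, 480, 0.9091

# "As displayed" width of the destination, in square-pixel terms.
dst_display_w = DST_W * DST_PAR      # ~654.6

# Filling the DV frame means ~2.35x vertical scaling but only ~2.05x
# horizontal (before any crop/letterbox), so the resampler works at
# two different ratios, each of which can soften or alias a blocky
# source differently.
print(DST_H / SRC_H, dst_display_w / SRC_W)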
I guess you could say that the creation of the in-between frames is very good, the resize quality is average, and the handling of square to non-square pixels is lacking. ???? Yes? No?
SPOT and Dave Haynie over at Creative Cow had a discussion where it was suggested that NTSC-to-PAL conversion might benefit from supersampling (I guess from both a rescale and a new-frame-rate point of view, aspect ratio permitting).
Going from PAL to NTSC, resampling seems to be an adequate process for a smooth result. Supersampling (triggered by the right choice of blur for the sequences) would appear to be good for NTSC to PAL, not unlike NTSC 60i attempting to go to 720p HD. Another set of tests!
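For the frame-rate half of that conversion, plain resampling just blends the two nearest source frames at each destination timestamp; the pitch, as I read it, is that supersampling gives the blend more in-between samples to draw on. A minimal sketch of the plain version (my own illustration, not Vegas's resampler):

import numpy as np

def retime(frames, src_fps, dst_fps):
    # Naive frame-rate conversion: linear blend of the two nearest
    # source frames at each destination timestamp.
    duration = len(frames) / src_fps
    out, t = [], 0.0
    while t < duration:
        pos = t * src_fps
        i = int(pos)
        j = min(i + 1, len(frames) - 1)
        frac = pos - i
        out.append((1 - frac) * frames[i] + frac * frames[j])
        t += 1.0 / dst_fps
    return out

# Toy check: 30 frames of a counter, e.g. NTSC -> PAL.
frames = [np.full((2, 2), float(k)) for k in range(30)]
pal = retime(frames, 29.97, 25)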
I'm sorry to break this to you, philfort, but I saw SPOT do it. The stills on the aforementioned website are from the before and after clips. The results look like magic, and I wish I knew the step-by-step procedure. The result of supersampling in the right hands is SUPER.
But... couldn't the "before" clip come *from* the "after" clip, then be *altered* and renamed the "before" clip?
OK for the "right hands", but I'm still with philfort here: how in the world can you pull shadows AND *that* kind of detail from a file that was previously composed of *plain-colored squares*??? Compute the light source, object positions, etc., then give objects the "3D" features required to end up with a final plausible 2D picture rich in detail?
How can the computer recognize that these "4 moving white squares" are actually a moving t-shirt instead of an old slow-moving albino bee passing by the lens?
Computers "know" everything? Yeah right...(!)
I too want a ***precise*** answer to that! (And I'm still waiting...)