It is rather curious that no one from SoFo nor SPOT has jumped into this discussion yet, considering the interest and 'confusion' it is generating.
So for the heck of it I took one of the files I supersampled in VV and brought it into the latest version of VirtualDub. I used the 2D Cleaner plug-in on it. The results were very 'clean'. So the next thing I will try is to apply the supersample and frame-serve with Satish's plug-in to VirtualDub for rendering, doing it all in one move. See, I think there *has* to be some other filter that DSE is using to clean up the image. The 2D Cleaner plug-in has more options than just clicking on "Supersampling" and moving a slider between 1 and 8, and that really seems to matter... not the options themselves so much as the ability to actually play with them.
This is all sort of like seeing something and being told "Yeah, I shot it with the DVX100," so we all go out and use that camera but don't get the same results: "Oh well... yeah, I *shot* with that camera, but I used this type of lighting, had this filter on the lens, didn't use a white card to white balance, and then I did some color correction and motion blur in VV..." I just feel the tutorial is way oversimplified.
Welllll... Flipping through a copy of DV I got last week or so, & Douglas Spotted Eagle is hawking Cool 3D studio. Not good, not bad; the guy's making a buck is all, the same way an NBA player endorses a brand of shoe. So is this restoration using supersampling a bit hyped? I'll just say, from experience in marketing and sales, and still more experience with marketing and sales types, that from time to time exceptional examples are shown.
That said, I wanted to get a better idea of what the filter was doing when upsampling, so I came up with a different test - something easily duplicated by anyone.
1) Got a jpeg still that showed the angled roof of a building, but just about anything will do - I just wanted the 45-degree edge in there.
2) In a paint program I applied a pixelate filter, with blocks set at 3 tall & 5 wide, to a copy of the image. Then I reduced the paper/canvas size to 320 x 240 with the orientation set to top left, so I wound up with the left-hand corner of the image. I saved this as a jpg with high compression (79), which was perhaps a bit of overkill.
I then took the original image, cropped about 5 or 6 pixels from the left side and top, made a copy, and repeated the process. Making a series in this way I was able to simulate a pan, with the pixel blocks not aligned from one jpg to the next.
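The pixelate-and-crop procedure above can be sketched in plain Python. This is only an illustration of the idea, not what the paint program actually does: a toy grayscale grid stands in for the jpeg, and all sizes and names here are made up for the example.

```python
# Sketch: pixelate an image into 3-tall x 5-wide blocks, then crop
# shifted windows from the original so the block grid lands differently
# on the scene content in each "frame" - simulating a pan.

def pixelate(img, bh=3, bw=5):
    """Replace each bh x bw block with its average value."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for by in range(0, h, bh):
        for bx in range(0, w, bw):
            block = [img[y][x]
                     for y in range(by, min(by + bh, h))
                     for x in range(bx, min(bx + bw, w))]
            avg = sum(block) // len(block)
            for y in range(by, min(by + bh, h)):
                for x in range(bx, min(bx + bw, w)):
                    out[y][x] = avg
    return out

def simulate_pan(img, frames=4, step=5, crop_w=20, crop_h=12):
    """Crop a window shifted by `step` pixels per frame, then pixelate
    each crop, so the blocks don't align from one frame to the next."""
    seq = []
    for i in range(frames):
        ox = i * step
        crop = [row[ox:ox + crop_w] for row in img[:crop_h]]
        seq.append(pixelate(crop))
    return seq

# A synthetic 45-degree edge: bright above the diagonal, dark below.
W, H = 40, 12
edge = [[230 if x > y else 30 for x in range(W)] for y in range(H)]
frames = simulate_pan(edge)
print(len(frames), len(frames[0]), len(frames[0][0]))  # 4 12 20
```

Because the crop shifts but the block grid stays fixed relative to the crop, each frame carries different blocking artifacts over the same edge, which is exactly the misalignment the test relies on.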
3) Brought these stills into VV4c, with a good picture (no blocks) as first and last in the sequence, then rendered to avi at 720 x 480 using the blur and supersampling.
Bringing this new avi into Vegas on the track below the stills, I added the same amount of blur and was able to compare the results frame by frame in a way that showed pretty much what the program was doing: averaging the image data from picture to picture. The picture I chose had snow on the roof, and the high contrast against a dark blue sky made it a good choice.
The original images were generally a slight bit clearer, but the aliasing on the roof edge was much more prominent. The same edge in the rendered footage was smoother, but on the high-contrast edge there was also very noticeable ghosting that appeared as a motion blur.
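The "averaging from picture to picture" behavior observed above can be demonstrated with a tiny sketch. To be clear, this is a guess at the effect, not SoFo's actual algorithm: averaging each pixel over a small temporal window smooths out misaligned compression blocks, but anything that moves picks up exactly the ghost/motion-blur look described.

```python
# Guess at the observed behavior: average each pixel across a small
# temporal window of neighbouring frames. Static noise averages out;
# moving content (like a panning edge) is ghosted.

def temporal_average(frames, radius=1):
    """Average each frame with its neighbours within `radius` frames."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        avg = [[sum(frames[k][y][x] for k in range(lo, hi)) // (hi - lo)
                for x in range(w)]
               for y in range(h)]
        out.append(avg)
    return out

# Two tiny 1x4 "frames" with a bright pixel that moves one step right:
a = [[200, 0, 0, 0]]
b = [[0, 200, 0, 0]]
smoothed = temporal_average([a, b])
print(smoothed[0])  # [[100, 100, 0, 0]] -> the moving pixel is smeared
```

The single bright pixel gets spread over both positions it occupied, which is the same trade-off seen in the roof-edge test: smoother blocks, but ghosting on motion.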
Conclusion: Using the process in the tutorial might come in handy in somewhat unique circumstances, but it reduced quality enough that 99% of the time I'd try one or more of the temporal-type filters available for V/Dub instead. The pseudo motion blur definitely would make me think twice about using it on a scene that already had smooth panning. On the other hand, where the panning was not smooth, it might provide a workable fix. It would be interesting to take a scene with a bad/jerky pan and try this, rendering to wmv for its 60 fps rate (assuming you can get uncompressed wmv out by selecting quality VBR with quality set to 100), then downsampling the result to 30 or 24 fps to see what happens.
I'm perfectly aware of what Dennis said originally, Jet. It is what has been said afterwards that I, and it would seem many others, are looking for a response to, because for sure this is currently left hanging.
SPOT is demonstrating on his web site, and apparently on behalf of SoFo in an official or at least semi-official capacity (?) while on tour demonstrating Vegas, a use for supersampling that goes far beyond what SoFo is saying. I simply would like clarification and further details, and apparently so would others.
Both the web site and at least a couple of people who SAW what was done at one of the tour shows are suggesting that under the right conditions supersampling can really be helpful in removing artifacts, blockiness, etc. For sure a real plus many would love to learn more about.
The issue is that what is demonstrated on SPOT's web site can NOT be duplicated that well, if at all, using different source files, while others say they saw SPOT do it live at one of the shows with a different source file than the example used in the tutorial.
We don't need useless comments like the one you made, Jet. We are looking for further details from the principals. That would be Dennis and Spot. Not you. Do you get what I'm talking about now? I can only hope.
I noticed that in the "before" example, the whole image is of a poorer quality - not just the video clip, but even the part of the image that shows the application interface. I'm interested in learning more about how supersampling can improve my application interface.
Tor
LOL! You noticed that too? How the heck does an FX filter change the application you're viewing it in, making IT as sharp as the video when before it was just as blurry?
If you look closely, note the difference in position of the scroll bar on Media player in the before and after shots. In the second picture it is much further right.
I think some of the confusion is coming from looking at the before and after shots and assuming (wrongly) it is the SAME frame before and after. Thinking about it, if supersampling is used at a rate of 4, then more frames are added; looking at the same exact frame before/after, there would be no change, but several frames later there would be. Anyway, that's why I asked... still waiting...
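The "rate of 4" reasoning above can be made concrete with a small sketch. This is purely an assumption about how intermediate frames might be synthesized (a simple linear blend), not SoFo's actual implementation: the point is only that the first output frame matches the source exactly, so any difference shows up on the in-between frames, not the frame you started from.

```python
# Assumed model: "rate 4" supersampling inserts 3 blended frames
# between each source pair. Frame 0 of the output equals frame 0 of
# the input, so a before/after comparison on that exact frame shows
# no change - the change appears on the interpolated frames.

def blend(a, b, t):
    """Per-pixel linear blend: (1 - t) * a + t * b."""
    return [[round((1 - t) * pa + t * pb) for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

def supersample(frames, rate=4):
    """Insert rate - 1 blended frames between each source pair."""
    out = []
    for a, b in zip(frames, frames[1:]):
        for k in range(rate):
            out.append(blend(a, b, k / rate))
    out.append(frames[-1])
    return out

f0 = [[0, 0], [0, 0]]
f1 = [[80, 80], [80, 80]]
result = supersample([f0, f1])
print(result[0])  # identical to f0 -> no change on the original frame
print(result[2])  # the halfway blend -> the change shows up here
```

So if the before/after screenshots were taken a few frames apart (as the moved scroll bar suggests), they would be comparing a source frame against an interpolated one.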
Has anyone progressed any further with this mystery? I have some lo-rez grainy footage that I would love to oversample into the sort of quality shown in the DSE tute. Can SoFo or DSE enlighten us on the missing ingredient here, or have we truly had the wool pulled over our eyes?