15fps to DVD with motion frame interpolation?

Randal wrote on 3/10/2015, 11:14 PM
I have NTSC interlaced 15fps footage from an 8mm film transfer. Each frame was stop-motion triggered during the capture. The software I use gives the option of duplicating each frame to bring it up to 30fps before exporting to DV AVI. This results in jerky motion when panning. What would be the recommended procedure to go from 15fps to 30fps and improve the motion appearance?
Thanks in advance.

Comments

musicvid10 wrote on 3/11/2015, 9:48 AM
Is the source footage interlaced or progressive?
If interlaced, you should be able to "Bob" it to 30p in VirtualDub with good results.
If progressive, a paid solution like Twixtor would be your best chance for smooth results.
Randal wrote on 3/11/2015, 12:35 PM
Hi musicvid10
Would it be wise to edit in Vegas before or after the "bob"? Would I leave it as progressive once bobbed?
Thanks
TheHappyFriar wrote on 3/11/2015, 1:15 PM
What if you just export an image sequence & import that into Vegas as 30fps, slow-motion it by 50%, and disable resampling?
johnmeyer wrote on 3/11/2015, 3:35 PM
I do film transfers all the time. The most common goal is to put 12, 15, 16, 18, or 24 fps film into a 29.97 interlaced container. The usual way to do that -- and still often the best -- is to repeat fields, something that Vegas does automatically if you turn off resample for every event. This gives you the cleanest possible result, with no artifacts or surprises, and everything is crisp and sharp. Of course the motion will be jerky, especially when the camera pans horizontally.

You are asking about how to get smoother motion by creating additional fields and/or frames in order to actually increase the number of events per second.

If this is what you do, I'm afraid that bobbing will achieve exactly nothing because you still have the same number of events per second. What's worse, that will throw away half your vertical resolution. So, don't take that course.

Likewise, exporting an image sequence and then doing slow motion on that with resample disabled will also accomplish exactly nothing because you will still have the same number of motion events per second.

Instead, to make this work, you must do exactly what you said in the subject title of this thread: create additional frames using some sort of interpolation. In order to do this entirely inside of Vegas, you must make sure that resample is enabled for every event. Fortunately for you (but to the consternation of many in this forum) it is enabled by default (i.e., "smart resample" will automatically create the additional fields and/or frames).

If your film capture file is set to play at 15 fps (i.e., if that is what Mediainfo reports when you drop the file into that utility), then simply set up a 29.97 fps project (or 30 fps, if that's what you want), and render. Vegas will interpolate the fields. In other words, you pretty much have to do nothing at all.

The only downside to this is that Vegas interpolates by blending adjacent fields, so you end up with a result that is a little "soft." For better interpolation, you need to render your project to some intermediate, and then put that into a program like Twixtor, MotionPerfect, or After Effects. They will create the additional frames using motion estimation. You can also use an AVISynth script and a free motion interpolation plugin called MVTools2. Here is a script that will do what you want:

# This script converts 15 fps progressive footage to either 29.97 or 30.00 fps progressive
# March 11, 2015
# John H. Meyer

loadplugin("C:\Program Files\AviSynth 2.5\plugins\MVTools\mvtools2.dll")

source=AVISource("e:\fs.avi").killaudio().AssumeFPS(15, false)

# Build the multilevel "super clip" that the other MVTools2 functions work from (pel=2 = half-pel motion precision)
super=MSuper(source,pel=2)

# Analyze motion between adjacent frames in both directions (isb=true gives backward vectors)
backward_vec = MAnalyse(super,blksize=16, overlap=2, isb = true, search=3, searchparam=3 )
forward_vec = MAnalyse(super,blksize=16, overlap=2, isb = false, search=3, searchparam=3 )

#-----------------------------
#Use one of the three following methods to create interpolated frames

#Use these two lines for 30.0 fps progressive output
MFlowFps(source,super,backward_vec, forward_vec, num=30, den=1, ml=100) #30.0 fps progressive
AssumeFPS(30.0, true)

#Use these two lines for 29.97 fps progressive output
#MFlowFps(source,super,backward_vec, forward_vec, num=30000, den=1001, ml=100) #29.97 fps progressive
#AssumeFPS(29.97, true)

#Use the following line instead of MFlowFPS if you want traditional adjacent frame blending to 30 fps output, like Vegas does
#ConvertFPS(source,30)
#-----------------------------

final=last

#Use stackhorizontal instead of "return final" if you want to see both before and after on same screen
#stackhorizontal(source.bob(),final)
return final



This image shows a frame from a 15 fps film capture on the left. The cars are traveling across the frame, from right to left. The image on the right is a motion-estimated frame created with MVTools2, halfway in time between the current frame and the next frame. You can see, on the right edge of the right-side picture, that a portion of the rear of the car has moved into the frame, detail that was not present in the previous frame.




This is the same exact frame on the left, but this time the generated frame on the right is created by blending adjacent frames (the current frame with the next frame), just like Vegas does, although it was done with the optional code shown in the above AVISynth script:



You can easily see the blend (the ghosting at the front and back of the cars). It looks like a double exposure which, in essence, it is.

If you look at just this one sample, you might conclude that motion estimation is fantastically better than frame blending and, when it works, you would be right. Unfortunately, motion estimation sometimes fails to create a frame that looks flawless, like the one shown above, and when it fails, the results can be quite bad. By contrast, repeating fields creates no artifacts and is as sharp as the original, while blending fields always produces predictable results, with no surprises.

For really critical work where I want to increase the actual number of frames per second in order to produce smoother motion, I create both a motion estimated and frame blended version. I put them in a Vegas project, with the motion-estimated version on the top track. Then, when I see a frame where the frame blending has failed badly, I switch to the track below, but just for that one frame. If I have infinite time, and someone is paying me lots of money, I'll use a mask on the upper track, letting the blended version show through, but only where the motion estimation has failed.

Randal wrote on 3/11/2015, 4:23 PM
Thank you all for the replies.
John, what an excellent explanation! Thank you so much for taking the time. Although I am weak at AviSynth scripting, I will give it a go.
1. Would I have to deinterlace the footage before running the script? If so, can you include it in the script?
2. Would it be wise to edit the footage before or after running it through AviSynth?
3. Once it's exported as progressive, should I leave it progressive when rendering to MPEG-2?
Thanks
johnmeyer wrote on 3/11/2015, 5:03 PM
If your film transfer was done with a "frame accurate" transfer device, there should be no temporal difference between each pair of fields that make up each frame. If this is the case, you can ignore any report from MediaInfo that says the file is interlaced. Put another way, if both fields for each frame in an interlaced video come from the same moment in time, then that video is in fact progressive. In other words, progressive video can be stored inside an interlaced video container.

So, if that is what you have, you most definitely do NOT want to do any deinterlacing.
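
If you want to double-check what you actually have, here is a rough AVISynth sketch for inspecting the fields (the file path is just the same placeholder as in my script above, and the field order is an assumption). Step through the output one frame at a time: if the two fields that came from the same frame always show the same moment in time (motion only advances every second field), the content is effectively progressive.

# Rough field-inspection sketch -- adjust the path and field order to your capture
AVISource("e:\fs.avi").KillAudio()
AssumeTFF()        # assumption; use AssumeBFF() for bottom-field-first material
SeparateFields()   # each original frame becomes two consecutive fields
return last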

You can edit the footage before or after running it through AVISynth.

If the original footage is progressive, then you should right-click on every piece of media and manually set its properties to progressive, and also manually change the Project Properties to progressive. I do this every day.

musicvid10 wrote on 3/11/2015, 6:03 PM
John is correct that bobbing uses half the vertical information for each new frame, but preserves 30fps temporal smoothness if your source is 15i. No free lunch there, just a personal choice.
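
If you do end up going the bob route in AviSynth rather than VirtualDub, a minimal sketch (same placeholder path as John's script; the field order is an assumption) would be:

# Bob(): each field becomes its own full-height (interpolated) progressive frame,
# so a genuinely interlaced 15fps clip comes out as 30p at reduced vertical detail
AVISource("e:\fs.avi").KillAudio().AssumeFPS(15, false)
AssumeTFF()   # assumption; use AssumeBFF() if the capture is bottom-field-first
Bob()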

Randal wrote on 3/11/2015, 9:02 PM
John, excellent script!! This was my first try at AviSynth and I am shocked. What a difference it made. Also, thanks a ton for the tip on using progressive where usable. Huge improvement!! I think you hooked me on scripting, but it looks like a long road ahead. Glad you're around!

Thanks again
Randal
johnmeyer wrote on 3/11/2015, 11:56 PM
You're welcome.

As for setting the media to progressive, just make sure the original really is progressive. If it is a frame-accurate film transfer, then it definitely is progressive. However, if the film was telecined, or if it was transferred with a non-frame-accurate rig, then it may be truly interlaced (i.e., temporal difference between fields in the same frame). In that case, setting the media flag to "progressive" will do really bad things.

However, I assume you really have progressive footage, inside an interlaced container. If so, you will see a huge improvement in sharpness by setting the media and project properties to progressive.

As for using motion estimation to get smoother flow, it works really well -- until it doesn't. It fails in fairly predictable circumstances, including the following:

1. Any object that enters the frame close to the camera. You will see all sorts of morphing around the object.

2. Objects which cross one another. This includes a person's legs as he or she is walking. In really bad situations, the legs will "break" for one field, and then mend themselves on the next field.

3. Repeated vertical objects. If the camera pans across a picket fence, the results can be unwatchable.

On the flip side, motion estimation does a near perfect job when everything in the frame is moving in the same direction. The best example is when the camera pans horizontally. As you know, with 15, 16, or 18 fps silent film, horizontal camera pans look absolutely awful, with all sorts of judder which results from the low frame rate. So, for this situation, the motion estimation will give you almost perfect results, and will make the pans look great.

I mentioned that I sometimes mix motion estimation and frame blending. Another technique I use, when I'm going to put the video on a DVD, is to create one version with traditional pulldown (i.e., repeated fields). This provides perfect sharpness and no artifacts. With low motion scenes, it is the best solution. So, I create one video at 29.97 that contains the pulldown fields. I then create another video at 29.97 created with motion estimation. I then simply cut to the motion estimation when the camera pans.
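
If it helps, here is a rough sketch of how those two 29.97 versions could be generated from the same AVISynth setup as the script above. The repeated-frame version uses ChangeFPS, which simply duplicates frames; for a progressive source that has the same visual effect as repeating fields. Treat the parameters as a starting point, not gospel.

# Two candidate 29.97 outputs from the same source (same placeholder paths as above)
loadplugin("C:\Program Files\AviSynth 2.5\plugins\MVTools\mvtools2.dll")
source = AVISource("e:\fs.avi").killaudio().AssumeFPS(15, false)

# Version 1: traditional "pulldown" look -- frames are simply repeated; sharp, but judders on pans
pulldown = source.ChangeFPS(30000, 1001)

# Version 2: motion-estimated; smooth on pans, but can produce artifacts
super = MSuper(source, pel=2)
bv = MAnalyse(super, blksize=16, overlap=2, isb=true, search=3, searchparam=3)
fv = MAnalyse(super, blksize=16, overlap=2, isb=false, search=3, searchparam=3)
smooth = MFlowFps(source, super, bv, fv, num=30000, den=1001, ml=100)

# Render each version on its own (return pulldown, then return smooth), or compare them side by side:
return StackHorizontal(pulldown, smooth)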

I am convinced that more bad video has been created by people who don't understand interlacing, or who think it is something awful that should be avoided, than all other video mistakes combined. A large part of my restoration business is devoted to trying to undo badly done deinterlacing, reversed fields, horrible re-sizing, and lots more.
Randal wrote on 3/12/2015, 12:51 PM
I viewed the original footage in VirtualDub and turned on "View fields". Upper and lower fields are identical, so I think you are correct.....it is effectively "progressive". Setting this in Vegas did improve my footage as you said it would. Since I am now going to take the time and run all my footage through AviSynth, can you recommend an addition to the script for stabilization?
Thanks
Randal
johnmeyer wrote on 3/12/2015, 2:03 PM
There are several extremely long threads over at doom9.org which provide some amazing scripts for improving frame-accurate film transfers. These scripts perform the following:

* stabilization
* grain reduction
* dirt removal
* color correction
* gamma correction
* sharpening

and more. Here are links to two of those threads:

The power of Avisynth: restoring old 8mm films

Capturing and restoring old 8mm films

The combination of techniques in these scripts can sometimes create remarkable results.
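
As just a starting point for the stabilization you asked about (and definitely not a substitute for the full scripts in those threads), the stabilization step by itself can be done with Fizick's DePan plugin, which those scripts build on. A minimal sketch, assuming the DePan DLLs (and anything they depend on) are sitting in the AviSynth plugins folder and reusing the placeholder path from before; the parameter values are guesses to tune, not recommendations.

# Minimal stabilization-only sketch using Fizick's DePan plugin (assumes depan.dll and
# depanestimate.dll are auto-loaded from the AviSynth plugins folder)
source = AVISource("e:\fs.avi").killaudio().AssumeFPS(15, false)
data = DePanEstimate(source)                   # estimate global pan/zoom motion per frame
DePanStabilize(source, data=data, mirror=15)   # compensate it; mirror=15 fills all four edges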

Here is one frame from one of my 8mm transfers. Notice the amount of detail the script was able to make visible (look especially at the vertical supports on the porch fence):



The person who originally started these threads, "VideoFred," posted a video showing the before/after from applying his script to some modern, pristine Super 8 footage. Here is the result:

Improved Avisynth 8mm film restoring script

I was blown away by this result, but when I applied it to my transfers of old film stock that was not in such good shape, the results looked horribly over-sharpened and artificial. Also, his script ran very, very slowly. So I took apart his script, line by line, and rebuilt it in a way that improved the performance by almost a factor of four, dramatically reduced the sharpening, and completely changed the modest dirt removal so that you can actually remove almost all dirt from your film. I posted my results in the initial thread, and a much-improved version of my script in the second thread.

Both Fred and I have continued to develop our versions of his script, and he now has a version which uses much different tools for color correction.

FWIW, I have always found the color and gamma corrections in any AVISynth script -- including his excellent script -- to be a poor substitute for proper color correction (a.k.a. "color timing") done in Vegas.

Here is a clip I've posted many times before, showing a sixty-year old 8mm clip that had been lying on the floor in a basement, outside of a can or box. The dirt was embedded and could not be removed with Edwall film cleaner. The end result of applying the script can be seen on the right. While it is not pristine, like VideoFred's example, you will note the tremendous amount of dirt removal; the stabilization; the detail now visible on the side of the building and on the slope in front of the building; the boats now visible in the harbor in the background; and the reduction (although not the removal) of film grain.




[edit] One thing I forgot to mention is that VideoFred also included motion estimation in his script, to do exactly what was asked for in the original post in this thread. However, for the reasons I already described, I don't often use this, and much prefer the traditional pulldown approach because it produces the sharpest result, still looks like film, and never produces any surprise artifacts. Also, VideoFred's version of motion estimation is, IMHO, less than optimal. If you really want to do good motion estimation, you have to use the newer version of MVTools (i.e., the MVTools2 DLL that I use), and you have to use the large block size I use.

Randal wrote on 3/13/2015, 2:06 PM
Thanks John
Got your script running!.....Now just have to play!

Thanks!