24p to 60i, Kinescope restoration, and more

johnmeyer wrote on 11/14/2008, 11:13 PM
I can now restore kinescopes (video captured on 16mm or 35mm film), and convert them back to NTSC video. Magic!!

The process is tedious, but fairly straightforward, and I thought that since some people want to go from 24p to 60i -- which is part of the process -- I would take a moment to document what I've done.

If all you care about is the 24p to 60i conversion, scroll down to that section. I added a heading in bold so you can easily find it and not have to read this other stuff.

Kinescopes of NTSC video were made through the 1950s and 1960s. They were made with a film camera fitted with a 72 degree shutter, and the camera synced to run at an exact 24/29.97 of the speed of the video. Since


(360 - 72)     1       1
----------  x  --  =  ----
   360         24      30

this means that each frame of film will capture exactly two fields of video. The film is pulled down during the time the shutter is closed, and so some of the scan lines of every third field of video are missed. However, since the old video system always used what is now called a "rolling shutter," no matter where in the scan the film capture starts (line 214, line 19, line 143 -- it doesn't matter), you always get two complete fields.
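The shutter arithmetic above can be checked numerically. This is a small sketch in plain Python (nothing from the tools in this post) confirming that one film frame's open-shutter time equals exactly two video fields:

```python
from fractions import Fraction

# A 72-degree shutter is open for (360 - 72)/360 of each frame period.
open_fraction = Fraction(360 - 72, 360)          # 288/360 = 4/5

film_frame_time = Fraction(1, 24)                # one film frame (nominal 24 fps)
exposure_time = open_fraction * film_frame_time  # time the shutter is open

field_time = Fraction(1, 60)                     # one NTSC field (nominal 60 fields/s)

print(exposure_time)               # 1/30 of a second
print(exposure_time / field_time)  # exactly 2 fields per film frame
```

Using exact fractions rather than floats makes the "exactly two fields" claim verifiable with no rounding.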

When broadcast, the 24p film is telecined in a 3:2 pattern so that one frame of film is repeated across 3 video fields and the next film frame across 2 fields, just like film shot of real life instead of from a video tube. Thus, the motion displays at the proper speed.
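The 3:2 cadence can be sketched as index bookkeeping (a hypothetical helper, not anything from the post): four film frames become ten video fields, i.e. five interlaced frames, restoring the nominal 30 fps rate.

```python
def pulldown_32(frames):
    """Map film frames to video fields in a 3:2 cadence:
    frames alternately contribute 3 fields and 2 fields."""
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * (3 if i % 2 == 0 else 2))
    return fields

fields = pulldown_32(["A", "B", "C", "D"])
print(fields)            # ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
# Paired off, 10 fields make 5 interlaced frames from 4 film frames,
# so 24 film frames/s becomes 30 video frames/s (nominally).
print(len(fields) // 2)  # 5
```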

The problem with watching a kinescope, however, is that the motion between fields is lost, and of course 1/5 of the original information is gone forever because the shutter was closed while the film advanced. In addition, the resulting picture now has both the artifacts of video coupled with the artifacts of film, which include gamma/contrast; grain; gate weave/judder; and of course the 24p cadence.

So how do you get back to the original video?

Here's how.

First, use inverse telecine (IVTC) software to recover the original 24p from the kinescope video. This is not necessary if you have access to the actual kinescope film, but most sources are station videotapes, where the kinescope was transferred back to videotape after tape got cheap (these transfers usually include an embedded timecode, which must be removed using Delogo). I use an IVTC script for AviSynth and it works perfectly: I get exactly the original kinescope film, just as if I had transferred it myself, and I set the video header to 23.976.
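Conceptually, the IVTC step just reverses the 3:2 cadence. Here is a toy sketch of that idea (illustrative only; real IVTC tools do field matching and handle cadence breaks, which this does not):

```python
def ivtc_32(fields):
    """Undo a clean 3:2 pulldown: collapse runs of identical fields
    back into unique film frames. Toy version; assumes a perfect,
    unbroken cadence with exactly-matching repeated fields."""
    frames = []
    for field in fields:
        if not frames or frames[-1] != field:
            frames.append(field)
    return frames

# Ten pulled-down fields collapse back to the four original film frames.
print(ivtc_32(["A", "A", "A", "B", "B", "C", "C", "C", "D", "D"]))
# ['A', 'B', 'C', 'D']
```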

Next step is to put this in Vegas. I use the standard 24p (23.976) DV template, since all kinescopes are 4:3. I edit in Vegas, restoring the soundtrack in Sound Forge, iZotope RX, and sometimes using the Band Extrapolator in Nero to try to add some life to what is usually a pretty dull (sonically speaking) soundtrack.

I use color curves to adjust the gamma, using Vegas' scopes and an external monitor to try to bring out as much of the midtones as possible. Many of these kinescopes were transferred back to video using poor techniques and they tend to have the highlights blown out. I have never had much luck with the Broadcast Colors plugin, so I use the Levels fX instead to make sure the blacks and whites are legal and look good on the monitor.

Most of these kinescopes are incredibly contrasty and unlike normal B&W film (these are almost always B&W), there just isn't any detail in the shadows. Therefore, most of the curves adjustments are designed to help the midtones, usually bumping them up ever so slightly.

Then, the fun starts.

I use Frameserver to serve the edited video into another AVISynth script. This one removes film dust. The script isn't perfect, but it usually gets 80-90% of all the dirt and makes a big difference.

If I was doing this for money (and I would charge a LOT of money for this), I would also use filters to reduce the film grain. And, if I had access to the film, I'd use Deshaker to remove the gate weave. Unfortunately, without the ability to see the edge of the film there is no reliable way to see the gate weave/judder (although you can see it easily on titles and certain static scenes and I guess I could cut those into events and have Deshaker work on those).

Then comes the main event: changing from 24p to 60i.

24p to 60i conversion

For those of you familiar with AVISynth, here's the magic script:

loadplugin("C:\Program Files\AviSynth 2.5\plugins\MVTools\mvtools.dll")

AVISource("d:\frameserver.avi")
source=assumefps(23.976).assumeframebased

# Analyze motion in both directions, then interpolate up to 59.94 progressive fps
backward_vec = source.MVAnalyse(blksize=16, overlap=4, isb = true, pel=4, search=3, idx=1, chroma=false)
forward_vec = source.MVAnalyse(blksize=16, overlap=4, isb = false, pel=4, search=3, idx=1, chroma=false)
last.MVFlowFps(backward_vec, forward_vec, num=60000, den=1001)

# Split the 59.94p frames into fields, keep the first and last field of each
# group of four, and weave the survivors back into 29.97 interlaced frames
AssumeFrameBased()
SeparateFields()
SelectEvery(4, 0, 3)
#ComplementParity()   # uncomment if the field order comes out wrong
Weave()
AssumeFieldBased()
AssumeBFF()
#KillAudio()          # uncomment to drop the audio before encoding
ConvertToRGB32(interlaced=true)

What this script does is convert the 24p video all the way up to 60p, using motion estimation to create the intermediate frames. It then separates this video into fields and, from each group of two frames, takes the top field of the first frame and the bottom field of the second. The other two fields from each pair are discarded. This "trick" is common practice among those familiar with AviSynth, and it yields 29.97 interlaced video that now has a temporal difference between fields, unlike the kinescope, where there is no movement in time between the two fields from the same frame of film, because film can't do that.
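The field-selection arithmetic in the script can be traced in plain Python (just the index bookkeeping, not any actual video processing): after SeparateFields() on the 59.94p clip, each pair of frames yields four fields, SelectEvery(4, 0, 3) keeps the first and last of each group of four, and Weave() pairs the survivors into interlaced frames whose two fields come from different moments in time.

```python
def to_interlaced(num_60p_frames):
    """Trace SeparateFields() + SelectEvery(4, 0, 3) + Weave()
    as pure index bookkeeping on a 59.94p clip."""
    # SeparateFields(): frame n -> fields (n, 'top') then (n, 'bot')
    fields = [(n, p) for n in range(num_60p_frames) for p in ("top", "bot")]
    # SelectEvery(4, 0, 3): from every group of four fields, keep the 1st and 4th
    kept = []
    for g in range(0, len(fields) - 3, 4):
        kept.append(fields[g])      # top field of the even 60p frame
        kept.append(fields[g + 3])  # bottom field of the following odd frame
    # Weave(): pair consecutive kept fields back into interlaced frames
    return [(kept[i], kept[i + 1]) for i in range(0, len(kept), 2)]

print(to_interlaced(4))
# [((0, 'top'), (1, 'bot')), ((2, 'top'), (3, 'bot'))]
```

Four 60p frames become two 29.97i frames, and each interlaced frame mixes fields from two different instants 1/59.94 s apart, which is exactly the synthetic temporal difference described above.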

The ComplementParity is needed if the fields end up in the wrong order (encode a quick test to a DVD+RW and watch it on a TV set). The KillAudio is required if you don't want to encode the audio with the video (the MainConcept external MPEG-2 encoder doesn't like to deal with audio from AVISynth scripts).

The result is pretty darned amazing. It really and truly looks like video, not a kinescope. The fluidity is definitely there. When I started this process, I didn't know if it would look all that good, or if the artifacts from the motion estimation would make the whole thing look fake. Nope, it looks like video.

The actual restoration of kinescopes is probably not something most people will ever need to do, but all these bits and pieces are definitely very useful from time to time. And doing 24p to 60i using motion estimation (which is the secret sauce that makes this all work) works really, really well.

If I do a lot of this (and I may be doing a huge amount of it), I'll need to tweak the settings for the motion estimation to make the scene detection work better. I'll also finally need to get a faster computer because this sucker is S-L-O-W (12 hour render for 50 minutes of material).

Hope this helps someone!


Comments

farss wrote on 11/15/2008, 12:46 AM
That's quite an effort.
I did some time ago transfer some 16mm kinescope film. You're right it sure looked pretty horrid even straight off the film. I loved the 'digital' countdown clock they used though, a bunch of 40W lamps.

I guess if anyone was to try to restore the film to video properly, as this was at 25fps, the task would be somewhat easier. The part that I'm still uncertain about is how one determines where on the film each scan line was. No doubt the gate weave in the original kinescope would not help finding them. On the prints you couldn't actually see any scan lines as such.

Bob.
johnmeyer wrote on 11/15/2008, 3:24 PM
"part that I'm still uncertain about is how one determines where on the film each scan line was. No doubt the gate weave in the original kinescope would not help finding them. On the prints you couldn't actually see any scan lines as such."

I had a lot of "aha" moments while doing this. One of them was realizing that recovering the original scan lines really doesn't matter. The kinescope machines were designed to hide the scan lines and in fact, in most cases you cannot detect the lines. For really fast moving objects, like a baseball or hockey puck, you end up with blur where the scan lines have been merged, and even if you could somehow know where to place the fields, you'd still end up with the same result. The key -- and this would even be true of PAL -- is to generate some synthetic temporal difference in the alternating scan lines, something that is lost even in the 50i to 25p conversion for PAL kinescope.

So, you just slice the picture into alternating fields and then generate the missing fields, and it all works really well because the whole thing was mushed together into the film frame.

Not sure if that makes sense, but another way to think of it is this: suppose the event had actually been filmed with a camera (i.e., not a kinescope) and you wanted to make it look like video? This technique would work just the same (which is why I posted this here). Since the kinescope was designed to kill the scan lines and merge the fields together, the whole goal was in fact to create something that had all the attributes of film, and they actually succeeded pretty well in most cases.
farss wrote on 11/15/2008, 4:15 PM
Now I've got it.
I wasn't really involved in this business back then but I recall someone telling me of tricks involving wobbling scan lines and/or using long persistence CRTs.
I'm impressed that in amongst the grain and grot on those old prints that the motion estimator can find objects.
Interesting that what you're doing is the opposite of the current fad of making video look like film.

Bob.