Comparison of Vegas Pro Deinterlacing Methods

NickHope wrote on 3/6/2011, 11:43 PM
I did a comparison of deinterlacers (1080-60i to 1080-30p) using Stringer's driving clip (thanks Gregory!).

Edit: On my computer, Firefox is zooming these pics, which makes it impossible to compare them. They should be 750px wide. If they are not, view this page in another browser, or save the pics and view them offline.

Full frame screengrabs including these and a few more tests are available as an 11MB zip file here.

Method:

Stringer's 1080-60i AVCHD clip was on a Vegas Pro 10.0c timeline with 1920x1080 generated text ("...tion Detail") on a higher track. Both were within studio RGB levels.

Each test was rendered out to a 1080-30p HuffYUV file, except for the Handbrake test, which was rendered to DNxHD and then encoded with x264 in Handbrake at 20 Mbps.

Mike Crash's Smart Deinterlace plugin, the Yadif plugin, and the BCC7 plugin were added as Media fx.

The QTGMC tests went via Debugmode Frameserver, AviSynth (for QTGMC), VFAPIConv, and back into a 2nd instance of Vegas Pro 10.0c for HuffYUV rendering.
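
For anyone wanting to reproduce the AviSynth leg of that chain, here is a minimal sketch of the kind of script involved. This is an illustration, not the exact script from the tutorial: the filename is hypothetical, and it assumes the Debugmode Frameserver signpost AVI, an AviSynth install with QTGMC and its dependencies, and top-field-first footage.

    # Open the signpost AVI written by Debugmode Frameserver
    AviSource("C:\fsclip.avi")
    # AVCHD 1080i is normally top field first
    AssumeTFF()
    # QTGMC works on planar YV12; interlaced=true keeps chroma per-field
    ConvertToYV12(interlaced=true)
    # Deinterlace; QTGMC returns double-rate (59.94p) by default
    QTGMC(Preset="Super Fast")
    # Drop alternate frames to get 29.97p for these tests
    SelectEven()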

I saved snapshots in Vegas Pro 10.0c, then cropped, annotated, and saved them as very-high-quality JPEGs in Photoshop CS3.

Notes:

1. This is only one test! Results will differ with different footage.

2. In Vegas Pro 8.0c the Smart Deinterlacer cannot be used as a track fx, but in 10.0c it now can be, as can Yadif and BCC7. The pre/post toggle is more intuitive now, and by default the fx comes in correctly as "pre".

3. It's useful to compare the sharpness of the generated text, top left. The Handbrake and QTGMC methods oblige it to go through the external deinterlacing process, whereas it bypasses deinterlacing (I assume) in the Vegas native or plugin methods.

4. Setting Smart Deinterlace motion thresholds to 0 defeats the purpose of the plugin somewhat but I wanted to force it to deinterlace the whole frame in order to make a meaningful comparison of its blending/interpolating.

5. The Yadif plugin is here. The download includes 32-bit and 64-bit versions.

6. Mike Crash's Smart Deinterlace plugin is here: 32-bit, 64-bit versions.

7. The QTGMC method is in my YouTube/Vimeo tutorial (discussed here). It can be greatly sped up by multi-threading; see the sketch after these notes.

8. I can see that the colours have shifted somewhere during the QTGMC workflow, and I will discuss that elsewhere.
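
Regarding the multi-threading in note 7, this is roughly the pattern the QTGMC documentation suggests for the MT builds of AviSynth. A sketch only; thread counts are machine-dependent and the filename is hypothetical.

    SetMTMode(3, 4)                  # start 4 threads; mode 3 is commonly used around the source filter
    AviSource("C:\fsclip.avi")
    SetMTMode(2)                     # mode 2 for the rest of the chain
    AssumeTFF()
    ConvertToYV12(interlaced=true)
    QTGMC(Preset="Super Fast", EdiThreads=2)
    SelectEven()
    Distributor()                    # must be the last line in MT builds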

Personal Observations (your mileage may vary):

1. The blend methods play back more smoothly than all the other methods, but are much softer (nothing new here but worth repeating).

2. I prefer Yadif over Smart Deinterlace.

3. Improvements in QTGMC using slower presets than "super fast" are slight.

4. In this test, Handbrake decomb doesn't seem to do much that the Yadif plugin isn't doing, and the yellow in the car's headlights is getting shifted over to the left. The yellow is also getting shifted in the QTGMC tests. (Edit: Now fixed for QTGMC. I discovered this was due to leaving "interlaced=true" out of my AviSynth script; a before/after sketch follows below. Not fixed for Handbrake though. See further down the thread for details.)
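
For the record, the QTGMC fix amounted to a single flag on the colourspace conversion line in the AviSynth script. A minimal before/after sketch:

    # Before - chroma converted as if progressive (shifts the yellow):
    ConvertToYV12()
    # After - chroma handled per-field:
    ConvertToYV12(interlaced=true)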

Discuss!

Comments

Rory Cooper wrote on 3/7/2011, 12:41 AM
Just wondering: with HD, why film interlaced, so that you have to deinterlace and end up with 25/30 bad progressive frames and all these problems?
Why not film progressive and get 25/30 good progressive frames?

Why deinterlace at all? If filming interlaced footage gives you that temporal peripheral magic wand BS motion, why not produce it interlaced? Why go through all the hassle of deinterlacing?
Interlaced footage also gives judder, so that's not the answer. Does it really give more light? In your mind's eye, maybe.

Interlacing was designed for broadcasting not filming or editing.

Simple HD rule: interlacing BAD, progressive GOOD.
NickHope wrote on 3/7/2011, 12:54 AM
@ Rory - Because my Sony VX2000 and Z1P couldn't shoot true progressive. I was shooting for DVD and Blu-ray primarily, so I chose interlaced over the camera's fake progressive mode. Now I'm sitting on an archive of thousands of clips that require upload to the net. All of them are interlaced. None are progressive. If I upload them without deinterlacing they are going to look awful. I would love to shoot progressive in the future.

The tests are only 10 seconds long so later on I'll render them out as AVC at 20 Mbps (unless someone screams for a different format or bitrate) and upload them. For laughs I'll also include a QTGMC one at 60p.
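
(For reference, the 60p version needs no extra work: QTGMC outputs double-rate by default, so the decimation step is simply left out. A sketch, using the same hypothetical script as in the first post:)

    QTGMC(Preset="Super Fast")   # double-rate: 1080-60i in, 59.94p out
    # SelectEven()               # omit the decimation to keep 60p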
farss wrote on 3/7/2011, 1:31 AM
Well, my EX1 does shoot true progressive and true interlaced!

I shoot interlaced because it does deliver more light, not in my mind's eye but as measured.
Shooting progressive solves one problem and creates another when you have to deliver HD progressive for the web and SD interlaced for DVD.

The video below, from around 1:00, is an interesting example of using the Smart De-interlacer. The way I set it up, only the guy's hands are seen as interlaced. It was also configured to ditch one field and interpolate using bicubic in the parts of the frame that are interlaced. That had the effect of reducing the motion blur of the hands, and as a result they look too juddery for my tastes.

[edit] Darn, it doesn't embed as HD despite my trying. Oh well, best to watch it directly on YT.


Bob.
NickHope wrote on 3/7/2011, 3:20 AM
Do you remember what you uploaded there, Bob? AVC at 720p?

By the way, YouTube won't stream 720p by default until the embedded resolution reaches 1180x694 (including 30px of player chrome at the bottom). It jumps to 1080p (if there's a stream) at 1770x1026. For SD it jumps from 360p to 480p at 684x543. I wish the thresholds were lower.
TeetimeNC wrote on 3/7/2011, 4:15 AM
>I shoot interlaced because it does deliver more light, not in my minds eye but as measured.

Bob, can you elaborate about interlaced delivering more light?

/jerry

farss wrote on 3/7/2011, 4:29 AM
"Do you remember what you uploaded there, Bob? AVC at 720p?"

Encoded using Sony AVC, 720p.

Going back to your original tests: I truly appreciate the amount of time you've put into them.
From my understanding of how the smart de-interlacer works, it'll pretty much be a waste on the footage you're using. You might as well do a "field drop", aka "Interpolate" in Vegas. I think you'll find that if you switch the de-interlacer to show motion areas, it'll pick the whole frame as being in motion and hence just interpolate it anyway. Any form of intelligent de-interlacing can have issues with footage where there's full-frame motion in different directions. From what I've read and understand, de-interlacing is one of those problems for which there's no perfectly foolproof solution. Different techniques are arguably quicker or better depending on the footage, but it's easy enough to construct test cases or shoot footage which will confound the smarts of even the most expensive hardware boxes.

Unfortunately, underwater footage would be quite difficult to de-interlace no matter how well it's shot; even with a locked-off camera there's nearly always some motion in every part of the frame.

Bob.
farss wrote on 3/7/2011, 4:42 AM
"Bob, can you elaborate about interlaced delivering more light?"

To be technically correct, it doesn't; it produces less noise, or at least that's my understanding. Of course, the sensitivity to light of any camera is defined by the noise floor.

When shooting interlaced, the camera needs to reduce vertical resolution to avoid line twitter, and the way to do this is by line-pair averaging. That has the added bonus of reducing noise: averaging two lines with uncorrelated noise cuts the noise amplitude by a factor of √2, roughly 3 dB.

This might not be the case in all cameras though!
If your camera cannot resolve over around 800 lines in progressive scan it's possible the designers avoided doing this. Some cameras seemingly have the same vertical resolution in both progressive and interlaced.

Bob.
NickHope wrote on 3/7/2011, 6:13 AM
>> From my understanding of how the smart de-interlacer works it'll pretty much be a waste on the footage you're using. <<

Actually, as the whole frame is moving, this wasn't really supposed to be a test of the deinterlacers' motion adaptivity, but rather a look at what they do when they do deinterlace. So my intention was to force the Smart Deinterlace plugin to deinterlace the whole frame by using a motion threshold of 0, in order to see what sort of a job it made of cubic interpolation, which I guessed would be sharper than Vegas' interpolation. But I stupidly didn't check it with the "Show motion areas only" switch, and I now see that even at motion threshold 0 it's leaving a scattering of around 10% of the frame untouched, which is probably a very bad thing since the whole frame is moving. To force it to deinterlace the whole frame I should have used frame-only or field-and-frame differencing.

Footage of action on a fairly static/uniform background (e.g. sports, guitarist) would be a better test to include the motion adaptivity as well as the interpolation and decombing quality.

Smart Deinterlace is a great option where you have a minor part of the frame you want to deinterlace and you really want control and instant visual feedback over which areas of the frame are going to get deinterlaced, e.g. the guy's hands in farss's video. But there are a lot of options to get right, and plenty of scope to get things wrong. Yadif is a nice, simple upgrade to Vegas' standard interpolate, with less scope for error, but no preview of the areas flagged for deinterlacing.
R0cky wrote on 3/7/2011, 7:14 AM
I tried Yadif vs. the MC Smart Deinterlacer and got much better results with Yadif on a troublesome clip, with the deal-killing exception that Yadif caused terrible line twitter on the top and bottom lines of the clip. It was completely unacceptable, so watch out for it.

Currently, Vegas interpolation gives the best result (better than MC) on this particular clip, though I haven't tried Boris yet.

rocky
johnmeyer wrote on 3/7/2011, 9:27 AM
A few observations and some feedback.

First, thanks for taking the time to do this, and for doing it so well. It is very useful.

Second, everyone who does these tests or looks at their results should recognize one very important thing, and this is important for any discussion about interlacing:

Looking at still images is misleading!!

The whole reason that interlacing works, why interlaced footage looks absolutely great on an interlaced monitor, why the HD committee decided to include 1080i in the HD standard, and why 1080i is used for broadcast is this fact: interlaced footage looks really good when the video is playing at normal speed.

However, once you freeze the frame, you end up with two different points in time blended onto a single frame. This would be exactly like taking two frames of film, stacking them on top of each other, and then viewing the combination on the movie screen: you would of course see ghosts, fuzzy edges, and all sorts of other artifacts because these frames are from two different points in time. That point seems obvious, and yet people insist on doing the same thing with interlaced footage. It is even worse with interlaced footage because the two fields are not only from two different moments in time, but also two different spatial locations.

However, until we get back to displays that can handle interlaced footage correctly, we are stuck with this problem.

In looking at your images, one of the most important places to look is the upper right corner, which is partially hidden by the text you overlaid onto the frames. I am speaking about the detail in the tree branches. This detail is "organic" and random and provides a very good test of the ability to extract detail from the motion estimation. Most people tend to focus on the combing artifacts or the staircasing on diagonal lines. As you can see, the most simple-minded deinterlacing does a very good job on that.

Key points to focus on are the grill in front of the car; the curb in front of the car; the car's side windshields; the center line on the road; the detail in the trees in the upper right corner; and, of course the color. The three best results, IMHO, came from (in ascending order of quality, worst to best) the YADIF Vegas plugin; the BCC7 plugin; and the QTGMC method in faster mode (I actually DID see quite a bit of difference between super-fast and faster, and the faster produced less noise in the subtle texture transitions in the traffic island). The QTGMC would win hands-down if the color issues weren't there. I'd be interested in whether those can be corrected.

For most people, it is clear to me that the YADIF plugin provides by far the best combination of speed, quality, and ease of use. If the color issues can be corrected, and if you don't mind the extended workflow, QTGMC with one of the slower options (ultra fast is NOT good) is the best of the lot, and by quite a bit.

farss wrote on 3/7/2011, 12:58 PM
@John
"Second, everyone who does these tests or looks at the results of these test should recognize one very important thing...... "

You make good points here; however, I don't follow how they apply to these tests. The purpose of de-interlacing is to take two fields, which will have spatial and temporal separation, and combine them in a way that removes or compensates for that separation, producing the same outcome as if a single full-raster image had been taken at that point in time. As such, looking at a single still image seems a very valid approach.
If the process leaves any interlace artifacts then you really have a problem, given that the images will then be viewed on a progressive display. Even worse, it's very likely there will also be a scaling step in the workflow, and any interlace combing artifacts will produce rather nasty aliasing problems during scaling.
It gets even worse. The next step in the workflow is to encode using H.264 and then send that to another encoder, e.g. YouTube's.
One of the reasons I decided to stay well away from the Handbrake-to-YT thread was that I felt it was too complex a challenge. Without being able to tap into each part of the signal processing chain, it has to be very, very difficult to work out where artifacts are coming from. I seriously take my hat off to those who've stuck with it for their persistence.

@Nick
YADIF and the smart de-interlacer are two quite different beasts.

From what I've read, YADIF leaves field 1 as-is. It then attempts to map all the pixels from field 2 to align with where they should be based on field 1. The whole of field 2 is processed this way; as such, there is no motion mapping.
The smart de-interlacer first derives a motion map. Based on that, it either blends or interpolates each part of the frame. If it were to determine there's no motion anywhere between the two fields, then it would simply blend them. It does no motion compensation, only motion detection.
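
(For what it's worth, the AviSynth port of the same YADIF filter makes the "no motion map" point easy to see in use. A sketch only, assuming the Yadif AviSynth plugin is installed and with a hypothetical filename:)

    AviSource("clip.avi")
    AssumeTFF()
    # One field is kept, the other is rebuilt on every frame -
    # no motion map, no blend-vs-interpolate decision
    Yadif(order=1, mode=0)   # order=1: TFF; mode=0: single-rate output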

Bob.
NickHope wrote on 3/7/2011, 1:46 PM
Here is the moving version to complete the picture :) ... A 190 MB zip of a 10 second 16Mbps x264 render of each method. This time I forced Smart Deinterlacer to cubic interpolate the whole frame. VLC does nicely with them if Vegas doesn't.

As I see it, it all depends what look you want to achieve...

a) At one end of the scale is to try to get razor sharp images from 60i as if you're shooting, for example 30p with a 1/10000 shutter.

b) At the other end of the scale is to attempt to simulate full motion blur as if you were shooting 30p with a 1/30 shutter.

Both approaches are valid. It just depends which look you're after, and the amount of motion in the footage. (a) is going to play like a hi-tech flip book but be sharp, and (b) is going to play totally smoothly but be soft. Have I got this right or does this ridiculously over-simplify things?

Bob, I think I understand the difference between Smart Deinterlace and Yadif in principle. I read the Handbrake guy's Yadif explanation and my brain imploded. I will try again some time when I'm feeling intelligent.

John, let's troubleshoot the colour shift in the other thread.
johnmeyer wrote on 3/7/2011, 2:00 PM
"The purpose of de-interlacing is to take two fields which will have spatial and temporal separation and combine them in a way that removes or compensates for the spatial and temporal separation to produce the same outcome as if a single full raster image was taken at that point in time. As such looking at a single still image seems a very valid approach."

I agree that the approach is useful, but I'm not sure I agree that it is "valid." I'm not trying to nitpick your choice of words; instead, I'm trying to make a broader point, namely that the important thing in ANY technical decision about video is:

how does it look when you watch it?

Specifically, what I have found is that some video fX, filtering, restoration, etc. may make individual frames look better, but the result looks like c*&p when you actually watch the video.

The best example is temporal filtering (and there is some of this going on in the advanced deinterlacing script that Nick describes in his tutorial). With temporal filtering, you can get some amazingly good-looking individual frames that, when viewed statically, one frame at a time, look far better than the original frame. However, when you view them one after another, depending on the filtering used, you often see grotesque "screen door" effects, or smearing, or ghosting, etc. These artifacts only show up when the frames are viewed sequentially because the artifacts are only contained in the differences between frames.

farss wrote on 3/7/2011, 2:14 PM
@John.
Now that is a very valid point. Yes, after ensuring that the frames are artifact-free, you really want to check how they play out "at speed".

One reason I would see that as vital is that all the "modern" compression schemes look at motion, and if the de-interlacing process introduces some weirdness then the encoder might make a hash of it.

I've been caught out with this using the smart de-interlacer. I found I could get it to deliver a good-looking result, but I didn't do as you've rightly suggested and look carefully enough at full fps. When it got to YT's encoder, what were seemingly minor artifacts that came and went between frames with motion became much worse blocking artifacts.

@Nick

I've read that same blog. Keep in mind it's someone's reverse-engineering of YADIF, and there's some code that the guy admits he doesn't know why it's there. The fine details of what it does are way over my head. I have just skimmed through it enough to get the gist of how it works.


------------------------------------------------------------------------------------

I suspect the chroma problems may come from how chroma subsampling works with interlaced footage. This is only a suspicion though. My very vague understanding seems to suggest there can be an issue where chroma samples may be taken from one field AND the other field. All the info I've found on the web explains how chroma subsampling works on a frame. I have no clue what happens with it and fields, and my head hurts thinking about the problem :(

Bob.
NickHope wrote on 3/7/2011, 10:34 PM
>> I suspect the chroma problems may come from how chroma sub sampling works with interlaced footage. <<

Thanks for that Bob. I had indeed been incorrectly leaving "interlaced=true" out of my AviSynth "ConvertToYV12" line (details in the tutorial thread). However, that error only makes a tiny difference to the colours and ultimate sharpness (Edit: Actually it fixes the problem of the shift of the yellow in the car's headlights). The bigger shift of colour in the QTGMC snapshots was caused by the VFAPIConv process I used to get the footage back into Vegas for HuffYUV rendering. Apparently we need something more sophisticated than a simple PC->TV conversion to compensate for the levels shift it makes. The RGB parade scope helps to see what's going on. The best place to follow that up, if it's an issue, is probably in John's slo-mo tutorial thread, where people are most likely to encounter VFAPIConv. Colours in the mp4 renders in the zip file are fine, as I went straight to MeGUI rather than via VFAPIConv.
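
(For reference, the "simple PC->TV conversion" referred to would be a one-liner in AviSynth; the point is that this alone doesn't compensate correctly for the shift VFAPIConv makes:)

    # Plain full-range to studio-range squeeze - not sufficient here
    ColorYUV(levels="PC->TV")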
Andy_L wrote on 3/9/2011, 7:13 AM
Nick, thanks for putting in the work on this!

Based on what I'm seeing, I think straightforward interpolate suits my needs. I can see, however, that with a camera on a tripod and/or a relatively static image, you could retain a good amount of information with some of the deinterlacers you tried.
NickHope wrote on 3/9/2011, 7:44 AM
With a relatively static image you may be better off with a blend. With a locked-down shot with a moving subject taking not much of the frame you might be better off with the smart deinterlacer, which blends the static part and deinterlaces the moving part. All depends on the footage.
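
(In AviSynth terms, a field-blend can be sketched with builtins: bob each field up to full height, then average the pairs. A sketch with a hypothetical filename, for illustration only:)

    AviSource("clip.avi")
    AssumeTFF()
    b = Bob()                        # 60 fields -> 60 full-height frames
    # Average each bobbed pair -> 30p blend deinterlace (smooth but soft)
    Merge(b.SelectEven(), b.SelectOdd())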
goodrichm wrote on 3/10/2011, 8:08 AM
Didn't see Dawdle's Interlace Control plugin mentioned:

http://www.sonycreativesoftware.com/forums/ShowMessage.asp?MessageID=671694&Replies=18
NickHope wrote on 3/10/2011, 10:53 AM
Ah, I forgot about that one. That's like a fields toolkit that lets you wrangle fields in all sorts of ways. It's a useful filter that I wish I'd remembered earlier; it would have helped me get my head around this stuff because it lets you really see what is going on with each field.

It has a cubic interpolation algorithm, the same as the Smart Deinterlacer, so it can give a slightly sharper result than Vegas' linear interpolation. It's also useful for deinterlacing selected clips within a project (e.g. a few interlaced clips in a project of progressive footage). And it can also swap fields, which is not something I have a need for, but others do. But it either deinterlaces a frame or it doesn't; there is no motion detection.

Here's a screenshot of a cubic interpolation preset:

And here's a screenshot of the settings I found to force Smart Deinterlacer to cubic interpolate a whole frame. Result much the same as above: