Educate me: Why/when is deinterlacing needed?

johnmeyer wrote on 7/18/2008, 10:21 AM
Lots of people regularly go through the step of converting video from interlaced to progressive

Question: Why do you do this?

I can think of a few reasons, but for me they are rare. For instance, if you have several cameras, and some record interlaced and some record progressive and you want the "look" to be consistent and you can't afford to rent or buy cameras which match, you would have to deinterlace (and probably change frame rates). I can also see how a person might want to deinterlace in order to achieve a certain "look."

But if I shoot with my DV or HDV camcorder, both of which record 29.97 NTSC and 29.97 HDV respectively -- and both record interlaced -- when would I ever want to deinterlace if I don't have the issues mentioned above?

I am really trying to learn here, not start some sort of argument.

The reason why I don't see the need is that interlaced video looks just fine on my TV when I play back from my camera or from a DVD. Also, on my computer monitor, if I use the proper playback software, interlaced video looks fine and doesn't have any "herring bone" artifacts. It is true that fast-motion action will show the interlaced "herring bone" if I use the wrong software that displays both fields at the same moment in time, but that is the fault of the software.

So help me out, when should I deinterlace?


[Edited for spelling]

Comments

kentwolf wrote on 7/18/2008, 10:49 AM
While I cannot address your root question, I know for a fact that when I am working with output from Boris Red (3 or 4), if I export it from Red as interlaced, it simply doesn't look good in Vegas.
GlennChan wrote on 7/18/2008, 11:03 AM
I don't think there is anything wrong with sending an interlaced image to a television. They should be designed to show interlacing correctly.

*Some/many non-CRT TVs have low quality de-interlacers. But then again, broadcast will remain interlaced for a long time, even if the source material is 24p.
Former user wrote on 7/18/2008, 11:11 AM
I can think of reasons NOT to deinterlace

1) If you are producing for normal TV, the signal will always be interlaced, regardless of what you produce. A standard TV signal is 29.97 fps, interlaced. A progressive signal will be converted to interlaced, but both fields will show the same image.

2) If you deinterlace an interlaced picture, you are basically throwing out 50% of the video image. You are either using only field one or field two, or using some software interpolation to blend the two fields. Either way, you are losing image information (see the rough sketch at the end of this post).

3) If you are playing a progressive video signal on a standard TV, then the signal is getting converted back to interlaced.

The only reason I see to deinterlace is to play on a computer or an LCD screen. If your original picture is already progressive (deinterlaced), then it won't make any difference on a standard TV.
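
To make #2 concrete, here is a rough Python/NumPy sketch of the two simplest approaches (my own toy illustration, not what any particular NLE does internally): throwing away one field, or blending the two fields together.

import numpy as np

def discard_deinterlace(frame):
    # Keep only the even (top) field lines and duplicate them downward;
    # everything the odd field recorded is simply thrown away.
    out = frame.copy()
    out[1::2] = frame[0::2]
    return out

def blend_deinterlace(frame):
    # Average each line with the line below it; both fields contribute,
    # but detail from two different instants in time is smeared together.
    out = frame.astype(np.float32)
    out[:-1] = (out[:-1] + out[1:]) / 2.0
    return out.astype(frame.dtype)

# A toy 8x8 "frame" whose even and odd lines differ, as they would with motion
frame = np.zeros((8, 8), dtype=np.uint8)
frame[0::2] = 200   # even field, sampled at time t
frame[1::2] = 50    # odd field, sampled 1/59.94 s later

print(discard_deinterlace(frame)[:4, 0])   # [200 200 200 200] - odd field gone
print(blend_deinterlace(frame)[:4, 0])     # [125 125 125 125] - fields mixed

Either way, half of the distinct samples you shot are gone or smeared.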

Dave T2
johnmeyer wrote on 7/18/2008, 12:10 PM
Dave,

#2 in your post is the thing that makes me think that deinterlacing is generally a bad thing.
riredale wrote on 7/18/2008, 12:40 PM
Well, in theory a very smart deinterlacer can use data from both fields in those portions of the image where there is little motion. In other words, still areas will show full vertical resolution, while moving areas will have full temporal resolution (60 images/sec) but only half the vertical resolution.
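
In very rough Python/NumPy pseudocode, the idea looks something like this (a sketch of the principle only, with an arbitrary motion threshold; real deinterlacers are far more sophisticated):

import numpy as np

def motion_adaptive_deinterlace(prev_frame, frame, threshold=12):
    # Where the odd lines barely changed since the previous frame, keep them
    # (weave: full vertical resolution). Where they moved, replace them with
    # an average of the even lines above and below (interpolate).
    out = frame.astype(np.float32)
    moving = np.abs(frame[1:-1:2].astype(np.float32)
                    - prev_frame[1:-1:2].astype(np.float32)) > threshold
    interp = (frame[0:-2:2].astype(np.float32)
              + frame[2::2].astype(np.float32)) / 2.0
    out[1:-1:2] = np.where(moving, interp, out[1:-1:2])
    return out.astype(frame.dtype)

# Static areas keep the detail of both fields; moving areas lose half the
# vertical detail but avoid the comb.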
Tim Stannard wrote on 7/18/2008, 1:01 PM
So help me out, when should I deinterlace?

From my experience one very good reason is if you want to zoom in (or out) using P&C.
My understanding is this (and it seems to be borne out by my experience), though I'm more than happy to be corrected.

The image from your camera is made up of pixels designed to work with 576 (PAL) or 480 (NTSC) horizontal lines. As we know, only half of those lines are displayed at any one instant (one field).

If you now zoom in, say 2x, you have the pixels intended for 288/240 lines spread across 576/480 lines.

This means that half of the shot intended to be viewed on the "odd" lines will now be visible on the "even" lines and vice versa.

I wish I could explain it more clearly (perhaps with pictures), but this is what results in the "jaggies" people often suffer.

craftech wrote on 7/18/2008, 1:11 PM
The short answer - for the web and the PC monitor.

Televisions match the refresh rate of the input for the most part. A field gets a matched refresh. Even and odd scans are shown in equal amounts. Anything missing may produce a slight flicker. On a computer, the video card regulates the refresh rate of the monitor (progressive by nature), and those rates (75 Hz, 85 Hz, etc.) are incompatible with the refresh rates of video. Deinterlacing methods try to compensate for this incompatibility. Better deinterlacing software theoretically produces better web video.
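
As a quick back-of-the-envelope illustration of the mismatch (my own numbers, using Python just as a calculator):

field_rate = 60000 / 1001    # ~59.94 interlaced fields per second
refresh_rate = 75.0          # a typical PC monitor refresh

print(refresh_rate / field_rate)   # ~1.2513 refreshes per field
# Not a whole number, so some fields are held for one monitor refresh and
# some for two; the cadence is uneven, which is part of the incompatibility.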

If you get a chance, watch this video by Jan Ozer entitled Choosing a Software Based Streaming Media Encoder. He does a nice job explaining some of this with respect to specific software for streaming.

John
Former user wrote on 7/18/2008, 1:29 PM
Tim,

I think your explanation is partly right, but here is where I might correct you.

Remember that a frame is made up of two fields, each 1/60th of a second apart (in NTSC): field 1, then field 2. If you place this on a timeline with the same properties, the fields play in the proper order. BUT if you zoom in so the image grows by a line, the field order gets messed up: field 1 of your original video will be on field 2 of the timeline video and thus will play out of sequence temporally and look jagged. Vegas is very good at correcting this error, especially if you use BEST quality rendering. I find Final Cut is very bad at this.


If you change the video to progressive then each frame is made up of two fields which are the same image. So when you zoom in or out, it doesn't matter which line it is on, it will still look correct temporally.

At least I think that is the effect you are describing.
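
Here is a rough NumPy sketch of what I mean (a toy example of my own, not how Vegas actually resamples):

import numpy as np

height, width = 8, 4
frame = np.empty((height, width), dtype=np.uint8)
frame[0::2] = 100   # field 1, sampled at time t
frame[1::2] = 200   # field 2, sampled 1/59.94 s later

# Zoom/reposition so the picture lands one line lower on the project's raster:
shifted = np.roll(frame, 1, axis=0)

print(frame[:4, 0])     # [100 200 100 200] - field 1 content on field-1 lines
print(shifted[:4, 0])   # [200 100 200 100] - field 1 content now sits on the
                        # field-2 lines, so it plays out of order in time
                        # unless the renderer resamples the fields properly.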

Dave T2
ScorpioProd wrote on 7/18/2008, 2:39 PM
Also note that when you downconvert interlaced HDV to interlaced SD in Vegas, you need to specify a form of deinterlacing, blend or interpolate, in the project properties. If you leave it set for none, it will not look good in the final interlaced SD version.

The reason is that Vegas needs to deinterlace before doing the scaling required to convert the HDV to SD; it will then re-interlace it for your final SD product.
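
Roughly speaking, the scaling has to happen per field, otherwise lines captured at two different instants get averaged together. Here is a simplified Python/NumPy sketch of the difference (my own illustration of the principle, not Vegas's actual resampler):

import numpy as np

def scale_height(frame, new_h):
    # Simple linear resample of the rows, treating the input as progressive.
    old_h = frame.shape[0]
    ys = np.linspace(0, old_h - 1, new_h)
    lo = np.floor(ys).astype(int)
    hi = np.minimum(lo + 1, old_h - 1)
    w = (ys - lo)[:, None]
    return frame[lo] * (1 - w) + frame[hi] * w

def scale_height_per_field(frame, new_h):
    # Separate the two fields, scale each on its own, then re-interleave.
    out = np.empty((new_h, frame.shape[1]))
    out[0::2] = scale_height(frame[0::2], out[0::2].shape[0])
    out[1::2] = scale_height(frame[1::2], out[1::2].shape[0])
    return out

hd = np.zeros((1080, 4))
hd[0::2] = 100   # field 1
hd[1::2] = 200   # field 2, 1/59.94 s later

print(scale_height(hd, 480)[:3, 0])            # values between 100 and 200:
                                               # the two instants are blended
print(scale_height_per_field(hd, 480)[:3, 0])  # [100. 200. 100.]: fields intact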
johnmeyer wrote on 7/18/2008, 3:16 PM
A field gets a matched refresh. Even and odd scans are shown in equal amounts. Anything missing may produce a slight flicker. On a computer, the video card regulates the refresh rate of the monitor (progressive by nature), and those rates (75 Hz, 85 Hz, etc.) are incompatible with the refresh rates of video.

That's something I hadn't thought of. However, wouldn't I still have a similar effect even with progressive? For instance, if I have 30 progressive frames per second, and the monitor refreshes 75 times a second, wouldn't some frames be refreshed slightly more often than others, and wouldn't this also lead to some flicker?
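
To put numbers on my own question (a quick Python check, assuming an exact 30 fps source on a 75 Hz monitor):

frame_rate, refresh_rate = 30, 75
holds = [int((n + 1) * refresh_rate / frame_rate)
         - int(n * refresh_rate / frame_rate) for n in range(6)]
print(refresh_rate / frame_rate, holds)   # 2.5 [2, 3, 2, 3, 2, 3]
# So yes: frames alternate between being held for 2 and 3 refreshes, meaning
# some frames stay on screen 50% longer than others.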

If you get a chance, watch this video by Jan Ozer entitled Choosing a Software Based Streaming Media Encoder.

For others who want to watch this, he begins his discussion of deinterlacing at 4:53 into the clip.

I have been critical of Ozer in the past, and based on what I saw in this video, I am going to be critical again. What he shows on the screen when he is attempting to show the difference in deinterlacing seems to me to have nothing whatsoever to do with deinterlacing (look at timecode 7:22 into the video). While he is showing single frame grabs, the areas of the frame he highlights where we are supposed to see the differences in deinterlacing quality are clearly not moving much. Unless I am wrong (and I sure have been wrong a lot lately), if the image doesn't move, you won't notice the interlacing artifacts and you don't need to deinterlace (at least not for the portions of the video which don't move). You have to have motion -- horizontal motion is often more noticeable than vertical -- so that when both fields are displayed at the same moment in time as they are in progressive video OR in a still frame capture, you see the odd field shifted horizontally compared to the even field. This is quite different than jaggies -- a differentiation which Ozer does make -- but his "guitar" example is actually something which shows jaggies on a non-moving image (unless the guitar was being played by Hendrix).

However, based on what I'm reading in this thread, I'm beginning to develop an idea of a few things I can do to test. I have some great AVISynth deinterlacing scripts and plugins, so I should be able to do some elegant deinterlacing, and I can create what engineers call pathological test cases (i.e., torture tests) to see what happens when I encode for the web with and without deinterlacing and then play each back with proper playback software.
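
For anyone who wants to try the same thing, here is the sort of torture test I have in mind (a quick Python/NumPy sketch; the frame size, bar width and speed are arbitrary):

import numpy as np

def torture_test_frames(n_frames=30, h=480, w=720, bar_w=40, speed=16):
    # Each frame weaves two fields sampled 1/59.94 s apart, with a white bar
    # moving fast horizontally - about the worst case for combing artifacts.
    frames = []
    for f in range(n_frames):
        frame = np.zeros((h, w), dtype=np.uint8)
        for field in (0, 1):   # field 0 = even lines, field 1 = odd lines
            x = ((f * 2 + field) * speed) % (w - bar_w)
            frame[field::2, x:x + bar_w] = 255
        frames.append(frame)
    return frames

frames = torture_test_frames()
# Grab any one of these as a still and the bar edges show the classic comb;
# played back properly, field by field, the motion should look smooth.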

Thanks for the excellent ideas.

[Edit]
Just after I posted, I read information at this site:

Why Deinterlace?

which provides more detail about what John (Craftech) talked about. This makes sense to me; however most other descriptions of why to deinterlace, including the one on Wikipedia, just don't seem right ...
ingvarai wrote on 7/18/2008, 3:50 PM
Johnmeyer,
thanks a lot for starting this thread, I have learnt a lot,

Ingvarai
farss wrote on 7/18/2008, 4:06 PM
You seem to have answered your own question.
There are big caveats in your reasoning as to why you shouldn't, and therein lies a lot of the problem, i.e. if you use the right software and if the TV is doing its job properly.

We don't have any control over what our clients will do with our footage, and trying to tell them our images look crappy because of what they own or what they do has its own issues.

Aside from that though, if you limit the discussion simply to what you deliver, then I kind of agree, there's not that much in it. If you're talking about what we might need to do when processing video, that's another discussion altogether. Progressive images are easier to work with in almost everything we need to do: motion tracking, compositing, scaling. All of these generally require the image to be de-interlaced and then re-interlaced, which is not a good thing unless you've got expensive hardware-based boxes to do most of those tasks, and even then perfection is hard to achieve.

At the end of the day, there's no perfect way to de-interlace, so it's best avoided if possible. However, given all the attendant technical issues of working with and displaying interlaced footage, I'm left with a different question: 'why would I ever shoot interlaced?'

Bob.
craftech wrote on 7/18/2008, 4:18 PM
[Edit]
Just after I posted, I read information at this site:

Why Deinterlace?

which provides more detail about what John (Craftech) talked about. This makes sense to me; however most other descriptions of why to deinterlace, including the one on Wikipedia, just don't seem right ...
===============
Thanks for that link. He did a lot better job than I did of explaining what I was trying to describe.

John
corug7 wrote on 7/18/2008, 4:18 PM
John,

What does not seem to have been mentioned yet is that many codecs do not allow for the encoding of discrete fields. Thus, each frame is encoded as a whole. A few of these are WMV9 (not including Advanced Profile), Sorenson 3, Apple's H.264, and On2 VP6.

Are you scaling that video? Good luck if you aren't deinterlacing, as you will most likely be introducing very ugly artifacting that playback software won't be able to remove.

Then there's the issue of playback software. You state that if one is using the "proper" software it shouldn't be a problem. But who's to say what is proper? Usually the end user, who may or may not be technology savvy.

Personally, I really like the look of 30p as opposed to 24p with pulldown for a filmic look. Unfortunately, there isn't a very good way to distribute 30p to PAL countries, as the conversion to 25fps, interlaced or progressive, is horrid looking.

Hope these replies helped a little.

Corey
johnmeyer wrote on 7/18/2008, 4:57 PM
At the end of the day, there's no perfect way to de-interlace, so it's best avoided if possible ...

That gets to the nub of the reason why I started this post: de-interlacing will always, to some degree, "damage" your video, and therefore that cost has to be worth the gain. So I need to make sure I fully understand what I am trying to gain, and make sure it is real.

however, given all the attendant technical issues of working with and displaying interlaced footage, I'm left with a different question: 'why would I ever shoot interlaced?'

OK, on that one I have an answer: because 29.97 interlaced has an "immediate" and "real" quality to it, whereas 24p or 25p has that "removed" feel to it. For sports, I don't want that feeling of being shot in another time and place (although I note that many sports stations do a conversion to what looks to be progressive when they do their highlights summary at the end of the game -- it makes it feel like it took place in the past, which of course it did).

While I have never seen 60p displayed, I suppose it might have a similar feel to 60i, but if the goal, as you say, is to deliver to a client and not have to worry about that client having the correct setup, the problem is that not many displays (today) can handle 60p. So, if I want the "video" look, I need to shoot interlaced.

I just got back from a run, and one thing that kept popping into my head is that all these sites I've been looking at always post still photos of interlaced video. A snapshot of moving interlaced video always looks like heck, and it seems that once people see that, they think it must look bad when the video is displayed. Of course the flaw in that thinking is that the fields are never shown at the same instant in time, as they are in the still photo, so the problem you see in the still photo never actually exists when one field is displayed an instant after the other. The fact that the odd and even fields occupy BOTH a different vertical space AND a different temporal location is something that trips me up almost every time I revisit this subject.
fldave wrote on 7/18/2008, 5:09 PM
I must say, johnmeyer convinced me long ago of the benefits of "leaving it alone".

Question: Can I use my FX1 and HDVSplit or Vegas Capture to do real-time capture of 24p, 30p or even 60p without the loss that field-to-frame conversion creates? Or is that camera dependent?

That would eliminate one of the biggest drawbacks of the deinterlace in post problems.
farss wrote on 7/18/2008, 5:43 PM
I've never seen 60p displayed either which is pretty stupid of me as my camera will shoot 50/60p.

You should spare a thought for the rest of the world; at least you've got the option of 30p, while we're sort of stuck with 25p. Although, if the content will never be broadcast, maybe I should shoot 30p.

Bob.
Joe Balsamo|LVX wrote on 7/18/2008, 8:10 PM
John,

Do you notice when shooting sports if you "catch the action" better with, say, 60i rather than, say, 24p or 30p...or is it strictly the "film" vs. "video" look?

Regarding all of the settings for various controls when we render and set our project parameters...is there a good single source that discusses the various settings...or is this just something one learns from study of various sources and practical experience?

Regards,

Joe
John_Cline wrote on 7/18/2008, 10:03 PM
If anyone has watched either ABC or FOX in HD in the U.S., then you have seen 60p. They are using 1280x720-60p for their broadcast standard. Most LCD, Plasma and DLP TVs can display 720-60p; only a few can display 1080-60p.

LVX, yes, absolutely you catch the action better using 60i as opposed to 24p or 30p.

This is all about "temporal resolution" which is how many individual images are captured and displayed over a given amount of time. Regular old NTSC SD television has a temporal resolution of 59.94 images per second. There are 59.94 fields per second and every two fields are interlaced into one frame. If you de-interlace 60i, then you end up with 30p which is exactly 1/2 the temporal resolution and, therefore, motion will be represented by half as many images per second and will not be as smooth.
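
To put those numbers in one place (my own quick illustration):

field_rate = 60000 / 1001        # NTSC: ~59.94 fields (distinct images) per second
weave_to_30p = field_rate / 2    # one frame built from every pair of fields
print(round(field_rate, 2), "images/s as 60i ->", round(weave_to_30p, 2), "images/s as 30p")
# Half as many motion samples per second, so motion is represented less smoothly.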

This was discussed about four years ago where I wrote a pretty lengthy post on the subject:
http://www.sonycreativesoftware.com/forums/showmessage.asp?forumid=4&messageid=285294
johnmeyer wrote on 7/18/2008, 10:29 PM
Do you notice when shooting sports if you "catch the action" better with, say, 60i rather than, say, 24p or 30p...or is it strictly the "film" vs. "video" look?

With the caveat that everyone has different tastes, for me there is absolutely no way I would shoot sports in 24p, UNLESS I was trying to achieve some specific effect in order to make some artistic statement. For conveying the action and maximizing viewer enjoyment, it is my opinion that 29.97 interlaced (60i) is vastly preferable. The main reason is that most sporting events require you to pan the camera. I learned back in the 1950s never to move a movie camera left or right: the "judder" and jump is just awful. Now of course that was a silent Super8 operating at 18 fps, so the problem was worse, but it is still very evident at 24p. By contrast, you can move an NTSC or HDV 29.97 video camera all around with 60i and not have any noticeable "judder" artifacts.

Regarding all of the settings for various controls when we render and set our project parameters...is there a good single source that discusses the various settings...or is this just something one learns from study of various sources and practical experience?

I am not a good one to answer. My expertise is almost entirely on the computer end of things, and not video (hence the need to ask questions like the topic of this thread). However, with only a few exceptions (like the "default" template for MPEG-2), I think the presets in Vegas, both for the project and rendering properties, generally do the job correctly, and usually all you have to do is set the average bitrate to match the length of your video and the visual quality you want to achieve (higher bitrate is better quality, but you decrease the minutes of video that will fit on a given disc).

I must say, johnmeyer convinced me long ago of the benefits of "leaving it alone".

I convinced myself as well, especially about deinterlacing, but so many people insist on doing it that I keep having second thoughts, hence this thread.

Can I use my FX1 and HDVSplit or Vegas Capture to do real-time capture of 24p, 30p or even 60p without the loss that field-to-frame conversion creates?

I own the FX1 and it is a 60i camera. The "Cineframe" settings are not true 24p or 30p.

Here's a link to a great forum about the FX1 where you can find out all sorts of things about what it can and cannot do:

Sony HDR-FX1
Rory Cooper wrote on 7/19/2008, 10:27 AM
Thank you so much for this thread johnmeyer

I did some research at various sites and there were contradictions between different experts.

Some said if you interlace you have to de-interlace to get rid of the combs.
So for PAL I shoot and work in progressive; I never interlace, so I never have to deinterlace.
Others said to interlace only for LCD and not plasma, which prefers progressive.

If I receive footage that has been interlaced, then I deinterlace and use replication animation as the mode,
and it definitely looks better than non-deinterlaced.

I am on the creative side, not technically minded at all, but would really appreciate getting the correct understanding.

Rory


megabit wrote on 7/19/2008, 10:39 AM
My opinion on this is simple (though biased as I own the EX1):

1. If I want to avoid the "dreadful" 24/25p stutter (e.g. with high speed action), I'll go 720/50p

2. Otherwise, I always shoot 1080/25p.

Why?

Because the 720/50p offers a spatial resolution which is still better than the 1080/50i after ANY deinterlacing procedure. Not to mention the temporal resolution....

And when stuttering is not an issue (see case 2. above), why interlace in the first place?

My worries with the V1E were related to whether or not I can deliver 25p on BD (as the 25p was really 25PsF, blah, blah). Now, with the EX1 (offering the "native" 24/25p), I know I can - at least with Vegas, I simply render to the 1920x1080/50i BR template, and all is well; no combing!

So, I never shoot interlaced!

AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP2933  | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)

johnmeyer wrote on 7/19/2008, 3:04 PM
Others said to interlace only for LCD and not plasma, which prefers progressive

Well, I am definitely learning a lot about this as well. However, one thing I was sure of prior to this thread -- and am still sure of now -- is that EVERY TV (in the US, and probably elsewhere) has GOT to display interlaced material and make it look good. This is true whether it is Plasma or LCD or DLP or anything else. If this weren't true, then all that interlaced SD programming still being broadcast over the air and on many basic-service cable systems would look horrible, and people with these new sets would be going bonkers.

Therefore -- QED, as the math people like to say -- deinterlacing is not necessary when you plan to watch the results on your TV set. And, since deinterlacing clearly ALWAYS degrades the original material (because it either eliminates lines, "bobs" lines, or does motion estimation in order to synthesize new lines -- a process which cannot be perfect), something will be lost.

Now, for posting on the web or watching on a computer monitor which has no direct provision for handling interlaced material, the artifacts introduced by deinterlacing may be preferable to the problems created by leaving the material interlaced. Still, as I read these posts I am not convinced that combing is going to be an issue if the playback software does the right things. On the other hand, the flicker and line twitter that John (Craftech) has described appears to me to be a plausible issue. How bad it is, I won't know until I do my own tests.

One thing for sure -- and I already posted about this, but it bears repeating: Those still photo snapshots showing both fields of interlaced material combined together into one still photo are completely and totally misleading and I think contribute more than anything else to people's desire to deinterlace.
farss wrote on 7/19/2008, 4:23 PM
"...has GOT to display interlaced material and make it look good"

Sorry but that is anything but true.
We have several early-series Bravia HDTVs. Connect a PD170 up to them and the image is absolute garbage; there are so many things wrong with the image that I have a hard time figuring out which artifact is produced by what. My mate, who's an old-school video person, says it looks like someone set the aperture correction to 11, but it's more than that.

On the other hand, if we take a J30 VCR and play back one of our DB dubs of the BBC's Blue Planet series, it looks vastly better. Both times we're feeding the signal into the composite input of the TV. What seems to happen is the TV dramatically enhances the slightest defect in the SD signal. Blue Planet looks better, but even that is far from artifact-free if you look closely.

It gets worse!
1080i has a limit of around 800 lines of vertical res; 1080p is good for 1080 lines. Except almost all TVs will think 1080PsF is 1080i and make quite a mess of the image if the V res is greater than 800 lines. This is easy enough to test. Create a test pattern of alternating black and white lines; this is easy to do with Vegas. What you should see is what you expect: alternating black and white lines. If the panel isn't a 1080 panel you should just see grey. Good luck finding anything that'll do either. Feeding the test pattern into a Dell 2407 as the secondary display with Vegas, you'll get the correct image. Most HDTVs will give you a jittery mess. I haven't gone as far as trying to push the H res to the limit of Nyquist.
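
If you want to generate the same pattern outside of Vegas, something like this will do it (a quick Python sketch; it assumes the Pillow imaging library is installed, and you would adjust the size to suit the panel):

import numpy as np
from PIL import Image   # pip install pillow

h, w = 1080, 1920
pattern = np.zeros((h, w), dtype=np.uint8)
pattern[0::2] = 255   # alternating single-pixel black and white lines

Image.fromarray(pattern, mode="L").save("line_pattern_1080.png")
# On a true 1080 progressive display you should see the alternating lines;
# if the set rescales, or treats the signal as interlaced, you'll see grey,
# flicker or a jittery mess instead.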

Your FX1 probably does quite well as its resolution is limited by the OLPF. The V1E/P got a lot of bad press due solely to the crap job almost all HDTVs do. The V1U, on the other hand, has lower V res.

Similar fun can be had with the EX1.

Probably worth a mention that the best rescaler chips were/are made by Faroudja. When the company was sold, the brains behind the business let the cat out of the bag at his farewell dinner: their designs didn't solve the problems, because the problems are unsolvable.

None of this answers the original question, but it does, I think, explain why there's so much FUD about this topic on the net; there isn't any definitive answer. My approach has mostly been to stop worrying about it and just get on with it; life is too short. The only exception is when I have control over the display system. Then I'll attempt to optimise the image for the display device. The only issue I've had with that is trying to get the client to understand why this matters. I showed him 1080p on a 2407 and it blew his socks off. Then he came back with advice from another "expert" that an SD DVD would be simpler and cheaper and only look 10% worse.

Bob.