HDV to SD deinterlace: surprise!

johnmeyer wrote on 12/8/2008, 11:13 AM
I was doing some tests to prepare a response over in this thread:

HDV to SD Workflow

In that thread -- and in many others in this forum -- people suggest that when rendering HDV footage to MPEG-2 in order to create a standard SD DVD, you MUST set the Vegas "Deinterlace Method" (found in the Project Properties) to something other than "none," or you will get horrible results.

This has never made sense to me, so I did some tests, and boy, did I ever get a surprise!!

I won't bore you with all the steps; I'll cut right to the chase:

Setting the deinterlace method to "blend" or "interpolate" and then rendering to MPEG-2 with the standard DVD Architect widescreen template (720x480) gives you interlaced footage. But if you set the deinterlace method to "none," you get progressive footage, and that footage has all sorts of bizarre artifacts that look like interlace combing, except that they are multiple scan lines thick!!

None of this makes sense to me, at any level. I used Vegas 7.0d for the test. Perhaps this behavior has changed in 8.x.

So, just to make sure you understand what I did, I captured some HDV footage directly from my FX1. I put it on the timeline, and set the Vegas project properties to the "HDV 1080-60i (1440x1080, 29.970 fps)" preset. I then set "Deinterlace Method" to none. I then selected Render As, and rendered using the "DVD Architect NTSC Widescreen video stream" MPEG-2 template, without any modifications.

I then did the same thing, except before rendering, I changed the Deinterlace Method (in the Project Settings) to "Blend."

I took the resulting two MPEG-2 files and put them into this AVISynth script, which I read into VirtualDub:
mpegsource("e:\test (HDV widescreen, BFF default SD, no deinterlace).mpg") # load the rendered MPEG-2 (requires an MPEG source plugin)
separatefields() # present each field as its own half-height frame
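
(If you want to sanity-check what the script is feeding VirtualDub, AVISynth's built-in Info() filter overlays the clip's size, frame rate and field parity. This is just a convenience, not part of the test:)

mpegsource("e:\test (HDV widescreen, BFF default SD, no deinterlace).mpg")
separatefields()
info() # overlay resolution, frame rate and parity on each field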

When I had deinterlace set to blend, each field was from a different instant in time than the previous field, i.e., it was interlaced. So, in fact, Vegas did NOT deinterlace anything!!!! At least not in the sense that most of us talk about when we use the word "deinterlace."

I can't show that in a still photo, but this photo does show the other result, namely that each field looked smooth and free of artifacts:

[image: a single separated field from the "blend" render; smooth, with no combing artifacts]

In the photo above, each press of the right arrow key in VirtualDub takes me to the next field (not frame, but field, because I'm reading the video via the AVISynth script), and each press produced horizontal movement, because each successive field was from a different moment in time.

By contrast, when deinterlace was set to none, each pair of fields was from exactly the same moment in time (i.e., it had been deinterlaced!!!!), and it looked absolutely terrible:

[image: a single separated field from the "none" render, showing thick herringbone artifacts]

This picture (above), like the previous picture, is one field of video, not a frame (that's why it appears vertically squished). Thus, we shouldn't be seeing interlaced "herring bones" at all. But also note that the herring bones are more than one scan line thick, so in fact I don't think they are interlace artifacts at all, but rather some sort of strange Vegas bug. If so, who knows how deeply this affects other video rendering situations.

Also, as I went from one field to the next, I got no movement (other than slight up/down) between each pair of fields. This is what you get with progressive video.

So I have no idea what is going on here, and this is one place where it sure would be wonderful if there were an actual live body in Madison who still cared about this stuff and could provide input and possibly a fix, if this is indeed a bug.

I think some of this was covered about a year ago in another thread, but the focus there was on getting rid of the horrible jaggies. I am not sure whether anyone at that point figured out that setting the deinterlace method to "blend" or "interpolate" actually results in interlaced output, whereas setting it to "none" results in progressive footage.

BTW, I redid these tests four times just to make sure I didn't screw up the first time.

Comments

fldave wrote on 12/8/2008, 11:33 AM
I thought the important point of setting a deinterlace method other than "None" was when you were downsizing or otherwise changing the frame size of the rendered footage?

The point being that there are some internal things that Vegas does that require temporary deinterlaced footage to act upon, thus the need to specify a type of deinterlace method regardless of the ultimate render target.
Former user wrote on 12/8/2008, 11:34 AM
Could this be part of the upper field to lower field conversion?

Dave T2
farss wrote on 12/8/2008, 11:35 AM
"I am not sure whether anyone at that point in time figure out that setting deinterlaced to "blend" or "interpolate" actually results in creating interlaced results, whereas setting it to none results in progressive footage."

That's about right, the results are so horrid further analysis is kind of morbid.

What you're seeing here is pretty common knowledge. You cannot scale interlaced footage without de-interlacing. The horrid results you get if you don't are something I see from time to time in OTA broadcasts, often much worse than your examples. Even converting NTSC <> PAL requires de-interlacing and re-interlacing. How good the de-interlacing is determines the quality of the result.
Somewhere here I have the manual for a Leitch standards converter. It uses motion compensated de-interlacing. The manual makes a strong point about how the systems will not work so well if noise levels are high. That's another reason why I'm a bit of a noise nazi.

Bob.
johnmeyer wrote on 12/8/2008, 12:00 PM
"The point being that there are some internal things that Vegas does that require temporary deinterlaced footage to act upon, thus the need to specify a type of deinterlace method regardless of the ultimate render target."

So I guess what you and Bob are saying is that the scaling from 1440x1080 at a 1.333 PAR down to 720x480 at a 1.2121 PAR cannot be done without first deinterlacing, and that in order to create the SD widescreen DVD, which should normally be interlaced, Vegas then re-interlaces.
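
(For reference, display aspect = width × PAR / height: 1440 × 1.3333 / 1080 ≈ 1.78, i.e. 16:9, while 720 × 1.2121 / 480 ≈ 1.82. The NTSC widescreen figure comes out slightly wider than 16:9 because the 1.2121 PAR strictly belongs to the 704-pixel active area, where 704 × 1.2121 / 480 ≈ 1.78.)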

I am still left, however, with the question of why I end up with progressive footage when I set deinterlace to none. This is true even though I used exactly the same Render As settings. In other words, if I do everything exactly the same but set deinterlace to none, I get progressive footage (even though the render settings call for interlaced), whereas if I set deinterlace to blend, I get interlaced footage.

Also, while I understand that interlacing can be difficult to deal with, I've done enough work with my own code that I am stumped as to how such severe artifacts can be created during the scaling process. That seems WAY beyond what I'd expect.

Oh, one of many things I didn't mention in my initial post (because I did a LOT of testing before I posted): if you set the render quality to Best, about 70% of the artifacts shown in my second still photo disappear!! This is another reason why I think something is not right in the way Vegas handles this scaling.

Finally, in the highly unlikely event that anyone at Sony should ever read this, I should think that they might want to change the UI so that this extremely common scenario that I am testing (i.e., making an "old fashioned" widescreen SD DVD from HDV footage) would be foolproof [I think we say "Grazie proof" in this forum :) ]. It sure isn't that way right now.



Marco. wrote on 12/8/2008, 12:08 PM
To determine whether a video is rendered deinterlaced you always need to look at two settings: the deinterlace method in the project properties, and the field order setting in the render dialog. The deinterlace method (project properties) only affects the way a frame will be processed in certain cases. The field order selection (render dialog) then determines whether the rendered video is interlaced or progressive.

The project properties should ALWAYS be set to either interpolate or blend fields. There are very, very rare cases where setting the deinterlace method to none makes sense. Better to leave it set to blend (or interpolate); that is the default setting. If you want deinterlaced output, just select that in the render dialog.

What happens when you scale a video without the default deinterlace method selected is simply an example of choosing the wrong frame processing. It is not a good example of what deinterlacing usually means. Deinterlacing in Vegas requires combining the correct frame processing with the correct render setting.
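
Putting the two settings together, based on the results reported in this thread:

Project "Deinterlace method"   Render "Field order"       Result
blend or interpolate           upper/lower field first    clean interlaced output
blend or interpolate           none (progressive scan)    deinterlaced progressive output
none                           upper/lower field first    progressive output with the artifacts shown above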

So - no, it's not a bug. It's the way frame processing works and affects some things like scaling.

Marco

.
johnmeyer wrote on 12/8/2008, 12:20 PM
"The project properties should ALWAYS be set to either interpolate or blend fields. There are very, very rare cases where setting the deinterlace method to none makes sense. Better to leave it set to blend (or interpolate); that is the default setting. If you want deinterlaced output, just select that in the render dialog."

Yeah, I think you are correct. That is my conclusion as well. Thanks!
johnmeyer wrote on 12/8/2008, 12:29 PM
Postscript to my last post:

When I frameserve from Vegas, should I set deinterlace to none, or set it to blend (or interpolate)?
johnmeyer wrote on 12/8/2008, 12:35 PM
Having a fine conversation with myself here ...

I went ahead and frameserved into my script and viewed the separated fields in VirtualDub. The deinterlace setting makes no difference to the frameserved result.
farss wrote on 12/8/2008, 12:46 PM
It doesn't matter what you're scaling, actually; any scaling of interlaced footage ideally requires de-interlacing as part of the process for best results. As I said, converting PAL <> NTSC requires de-interlacing. From what I know, all SD broadcast standards converters internally de-interlace. For HD, the S&W Alchemist doesn't have the processing power to do adaptive motion compensation and reverts to a simpler algorithm. Don't know about the rest.

The cause of the dogs teeth is simple aliasing. You're trying to sample 1080 lines down into 480 or 576 lines, and the original 1080 lines have temporal separation. The end result is a series of lines in the downscaled frame that come from what was the wrong field: with nearest neighbour you get a run of lines from field 1, then a run of lines from field 2. Switching to Best and bicubic sampling changes that, but it's still not good enough.
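
For anyone who wants to see the difference outside Vegas, here is a rough AVISynth sketch of field-aware downscaling (the source line is just a placeholder; use whatever loads your footage). Resizing the interlaced frame directly, e.g. Spline36Resize(720, 480) on the woven frame, mixes lines from two moments in time and produces exactly those dogs teeth:

AviSource("hdv_clip.avi")  # placeholder source filter
AssumeTFF()                # HDV 1080i is top field first
SeparateFields()           # 1440x540 fields at double rate
Spline36Resize(720, 240)   # scale each half-height field independently
Weave()                    # recombine into 720x480 interlaced frames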

There's a big trap in how one could think about all this. It's easy enough to think that each field of interlaced HD contains more than enough resolution. Why not interpolate each field, scale that frame, and use every second line to create a field? After all, 1080 is more than twice the resolution of 480, and so close to twice the resolution of 576 that it doesn't matter.

The problem is there's almost certainly NOT 1080 measurable lines of resolution in any HD image. Limitations of optics, imagers, focus etc. can easily mean the figure is closer to 600 lines. Halve that and you're left with 300 lines, and that's a pretty fuzzy picture on any TV. What I'm saying is that to get a good SD outcome from HD you might well need to start with ALL the resolution you had in the two HD fields. The problem then is that half the image was taken at a different point in time.
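
One way to use the detail of both fields while respecting the time difference is the classic bob-and-re-interlace idiom. In AVISynth it might look something like this (again only a sketch, with a placeholder source line):

AviSource("hdv_clip.avi")  # placeholder source filter
AssumeTFF()
Bob()                      # interpolate each field up to a full-height 59.94p frame
Spline36Resize(720, 480)   # scale the full frames
SeparateFields()           # split back into fields
SelectEvery(4, 0, 3)       # keep one field per original moment in time
Weave()                    # 720x480 interlaced at 29.97 fps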

I sure agree about the need to make this process more "Grazie proof" but it isn't just Vegas that has these issues. Watching an item on the local news with dogs teeth 10% of the frame height is pretty sickening.

Try connecting an SD monitor to Vegas and editing HD; you can get the same outcome there too, unless you run your project at Best and specify a de-interlace method.

Bob.
Marco. wrote on 12/8/2008, 12:47 PM
I think it is for internal frame processing only, and even there it does not affect the result that often. But IF it affects a result, in almost any case "blend" (or "interpolate") is the better choice over "none".

Marco
farss wrote on 12/8/2008, 1:30 PM
Some new thoughts:

1) I'm sure you realise this, but it's still worth a mention for others: using HDV from a camera will tend to mask what you're studying. The motion blur, and the tendency of HDV to turn fast motion into mush, hide artifacts.

2) In "None" mode Vegas seems to treat all footage as PsF when scaling. It simply merges the two fields, scales that and splits back into two fields. I think I can get the same outcome by taking a frame grab from Vegas and asking PS to scale it the same as if Vegas did it.
In Merge or Interpolate Vegas is forced into field by field processing. Although in Merge it samples the other field into the Bicubic calc it only does this for every line in field 1 and every line in field 2. In None each field in the interlaced output is derived from the same internally merged frame.
Wish I had the time and resources to draw some pictures to explain this, hard to get the words right.

Bob.
Laurence wrote on 12/8/2008, 1:44 PM
I first pointed this phenomenon out in this post: http://www.sonycreativesoftware.com/forums/ShowMessage.asp?ForumID=4&MessageID=376417 (which, in retrospect, I think was ignored because it was April Fools' Day of that year).

I have since pointed this out in the following threads:

http://www.sonycreativesoftware.com/forums/ShowMessage.asp?ForumID=4&MessageID=608288

http://www.sonycreativesoftware.com/forums/ShowMessage.asp?ForumID=4&MessageID=574536

http://www.sonycreativesoftware.com/forums/ShowMessage.asp?ForumID=4&MessageID=446795

http://www.sonycreativesoftware.com/forums/ShowMessage.asp?ForumID=4&MessageID=449059

http://www.sonycreativesoftware.com/forums/ShowMessage.asp?ForumID=4&MessageID=451857

http://www.sonycreativesoftware.com/forums/ShowMessage.asp?ForumID=4&MessageID=496058

http://www.sonycreativesoftware.com/forums/ShowMessage.asp?ForumID=4&MessageID=500423

http://www.sonycreativesoftware.com/forums/ShowMessage.asp?ForumID=4&MessageID=521318

http://www.sonycreativesoftware.com/forums/ShowMessage.asp?ForumID=4&MessageID=521830

http://www.sonycreativesoftware.com/forums/ShowMessage.asp?ForumID=4&MessageID=569615

http://www.sonycreativesoftware.com/forums/ShowMessage.asp?ForumID=4&MessageID=574236

http://www.sonycreativesoftware.com/forums/ShowMessage.asp?ForumID=4&MessageID=581822

http://www.sonycreativesoftware.com/forums/ShowMessage.asp?ForumID=4&MessageID=587044

http://www.sonycreativesoftware.com/forums/ShowMessage.asp?ForumID=4&MessageID=589571

http://www.sonycreativesoftware.com/forums/ShowMessage.asp?ForumID=4&MessageID=598157

The crazy thing is that this involves footage that is interlaced both before and after the resize. You wouldn't think that selecting a deinterlace method would be so important when you are working with a start-to-finish interlaced project, but it is.
Laurence wrote on 12/8/2008, 1:53 PM
The rule of thumb I live by when resizing footage is this:

If you are resizing interlaced footage, always select a deinterlace method. Setting this will cause Vegas to resize the even and odd fields separately before folding them back together at the new interlaced image size.

If you are resizing progressive footage, make sure the deinterlace method is set to "None." If you don't, Vegas will split the progressive image into even and odd fields before resizing, and you will lose about half your vertical resolution.

Also, don't forget that changing SD 4:3 to 16:9 is a crop plus a resize: the middle of the frame is stretched vertically after the top and bottom are cropped. You need to specify a deinterlace method any time you do this.
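
In AVISynth terms that 4:3-to-16:9 conversion might look roughly like this (a sketch only; the source line is a placeholder):

AviSource("sd_43_clip.avi")  # placeholder source filter
Crop(0, 60, -0, -60)         # keep the middle 360 of the 480 lines
AssumeBFF()                  # DV and DVD SD are bottom field first
SeparateFields()             # 720x180 fields
Spline36Resize(720, 240)     # stretch each field vertically
Weave()                      # 720x480 again, now anamorphic 16:9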
johnmeyer wrote on 12/8/2008, 3:08 PM
Laurence,

Thanks for all that. I knew I had read a lot about this in the past, but couldn't remember who posted it. The whole thing seems designed to cause failure, given that the "deinterlace method" in the project settings apparently, as you say, doesn't determine how the project is rendered, but how frames are processed prior to rendering. For someone like me who doesn't want to deinterlace my interlaced footage (see my recent posts), there is a strong temptation to set this to none.

I now know better.

Very strange stuff. Even though you are very clear on when and how to set it, without your explanation it sure isn't clear, either from the UI or from the help system.

Many thanks!

Laurence wrote on 12/8/2008, 3:43 PM
No problem. You've helped me out many more times than I could possibly hope to help you.
kb_de wrote on 12/8/2008, 11:02 PM
For someone like me who doesn't want to deinterlace my interlaced footage ....


I'm no expert.
On the topic of the deinterlace method I'd say:

The deinterlace method is there to reduce scan-line artifacts, especially with interlaced material, but also with any other material (even a still image) if it is edited, no matter what kind of editing.

If you just cut and export, let's say in DV-AVI format, the deinterlace method has no effect, because no pixel of the material has been changed.

Otherwise, pixels that have changed and no longer match the scan line must be:
forced to fit the scan line (blend), or
thrown away, with the needed pixels then "recreated/borrowed" from their neighbors (interpolate), or
left as they are (none).

An NLE always treats material in frames, not fields, no matter whether it is interlaced or progressive; so, roughly speaking, this by itself has nothing to do with deinterlacing/interlacing.

A codec can flag interlaced material as progressive and vice versa, but in fact it only tells the signal to act that way. That process has nothing to do with the deinterlace method.

Once we get an unwanted result there are only three reasons: we chose the wrong deinterlace method, the method couldn't cope, or the source material has a problem (including, sometimes, the NLE arranging the fields in the wrong order).

Again, I'm no expert.

Laurence wrote on 12/8/2008, 11:08 PM
That's what I thought as well. It turns out Vegas doesn't work that way, though. In order to resize interlaced footage, you need to separate the image into even and odd fields, resize the separated fields, then fold them back together into a new interlaced image at the new size. The only way to get Vegas to do this is to select a deinterlace method. It doesn't make sense, because both the original and the resized footage are interlaced, but that's how Vegas does it. I'm absolutely sure of this.
Marco. wrote on 12/8/2008, 11:13 PM
In fact, all you usually have to do is -- nothing. Just leave the default value untouched.

Marco
kb_de wrote on 12/8/2008, 11:22 PM
"In order to resize interlaced footage, you need to separate the image into even and odd fields, resize the separated fields, then fold them back together into a new interlaced image at the new size."

Perhaps then you'll get more surprises!

I find Mike Crash's Smart Deinterlace works great. The key is to set the motion threshold to 0.
Christian de Godzinsky wrote on 12/9/2008, 12:20 AM
Thanks, John, for bringing up the subject as its own thread. It certainly deserves it!!!

I did a similar test earlier and ended up with the same result. Downscaling (or any rescaling) of interlaced material MUST be handled properly by the codec. Therefore you MUST set a deinterlace method so that the process is done correctly.

It would be prudent if such information were included in the native documentation... It would reduce the fuss and experimenting, and prevent losing time on non-productive work, especially since for the foreseeable future we are forced to do lots of downscaling from HD to SD.

I certainly hope that the scaling of interlaced material with the same frame rate is done so that the source fields are scaled separately and then used directly as the source for the final output (with the correct field order), instead of FIRST deinterlacing the source material into one full-resolution frame that is then scaled and re-interlaced to the output format. Probably it is done the first way; otherwise you would lose temporal information.

Things become even trickier (and the results even worse) if the frame rates differ. That is a situation you should avoid like the plague, if quality is part of your vocabulary.

Christian


Lou van Wijhe wrote on 12/9/2008, 1:30 AM
"The problem is there's almost certainly NOT 1080 measurable lines of resolution in any HD image. Limitations of optics, imagers, focus etc. can easily mean the figure is closer to 600 lines. Halve that and you're left with 300 lines, and that's a pretty fuzzy picture on any TV."

Bob,

Aren't you mixing up lines of resolution with scan lines? IMO the number of (1080) scan lines isn't influenced by optics, focus, or what have you.

And I always shoot PAL at 25p. What I'm wondering is whether it is then still important to de-interlace, since I always have pairs of identical fields. What do you think?

Lou
farss wrote on 12/9/2008, 1:57 AM
"Aren't you mixing up lines of resolution with scan lines?"

Um, no! I'm saying just because there's 1080 scan lines it doesn't mean there's 1080 lines of resolution in the recorded image. We're saying the same thing I hope.

"What I'm wondering is whether it is then still important to de-interlace, since I always have pairs of identical fields. What do you think?"

If you have fields then I guess you're recording 25PsF. Using deinterlace method = None or Blend should give the same result, I'd think. The field order for the project should be None.


Bob.
Christian de Godzinsky wrote on 12/9/2008, 2:05 AM
Hi,

Whatever your focus (or optical limitations), you still have 1080 horizontal scan lines in the image, and they will produce interlace artefacts during rapid movement even if the original image is out of focus!!!

HD sensors typically have a very good MTF (modulation transfer function) in the vertical direction. The real vertical-resolution-reducing factors are the color matrixing at the sensor pixel level and the subsequent MPEG encoding. Even though these also tend to blur the vertical spatial resolution somewhat, you will still see interlace artefacts if the conversion from HD to SD is not done properly.

I've never shot 25p myself (I don't have such equipment), but theoretically there should be no reason to deinterlace 25p HD material when converting it to 50i SD. Can you actually de-interlace progressive material? Probably not! I'm not sure what the codec does if you select this option. Experimenting pays off sometimes... Still, I would rather read about this in the manual instead of doing the experimenting...

Christian


farss wrote on 12/9/2008, 4:21 AM
"Can you actually de-interlace progressive material?"

Problem is we don't have the terminology to differentiate two different processes. De-interlacing traditionally means combining two fields and reducing the temporal resolution, e.g. making 50i look like it was shot 25p.
What then do we call combining the two fields of 25PsF into a single frame?
We're not reducing temporal resolution; there was no time difference between the two fields to start with. However, we ideally still need to combine them in order to process them, e.g. to scale them. Then we may well need to split the frames back into fields again.
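
In AVISynth terms the PsF case is trivial, which is exactly the point (a sketch; the source line is a placeholder):

AviSource("psf_clip.avi")  # placeholder source filter
AssumeFrameBased()         # both fields are from the same instant, so treat the woven frame as one progressive frame
Spline36Resize(720, 576)   # scaling the whole frame mixes nothing temporally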

"The HD sensors have typically a very good MTF(modulation transfer function) in the vertical direction."

MTF seems to be more a function of the optics than the sensor. It's a good measure of how sharp an image appears. You can get quite a difference in MTF from the same camera with different lenses, or even with the same lens at a different aperture or focal length. There's quite a good article on this at Luminous Landscape.

I raised the issue in relation to scaling HD to SD. The resolution of the camera's sensor is fixed, but depending on what and where you're shooting, the lens may give you an image of barely adequate resolution and/or MTF for SD. Throwing anything away during scaling might not be a good idea. I'm cutting a stage production at the moment: two HDV cameras and one 4:3 SD camera. I can't crop the HDV frames much at all or the result looks worse than the SD footage when scaled to SD. If this footage had been shot on a sunny day, I'm pretty certain I could have cropped the HD much more.

Bob.