Aliasing, artifacts and moiré?

Comments

craftech wrote on 4/15/2011, 3:36 AM
John (craftech),
=====================
Actually you were right. It was 3:2 and not 4:3.
=====================
If you want anamorphic 16:9 for widescreen DVD then you'll want to leave it at 720x480 and set a widescreen aspect ratio flag when you encode to MPEG-2.
====================
If I am frameserving it from the Vegas SD timeline I guess I have to do that in the Procoder 3 program.
This workflow is completely new to me. Up till now I have been frameserving my HD timelines into Procoder 3 for converting to 16:9 for widescreen DVD, so AVI has not been in the workflow at all. That's probably why I was confused: 3:2 is an AR I haven't seen on any project I have ever done, and looking at it I assumed it was 4:3. I have only used VD once, and only to test the Lanczos3 resizing that some raved about. I decided it wasn't any better and never used VD for anything else. I did try nesting the HD project into an SD project in Vegas, using the SD project to frameserve to Procoder. It allowed me to sharpen after resizing, and that worked OK. But so does adding a sharpening filter in Procoder when it is creating the MPEG-2; it can be set to sharpen after resizing.

Either way, Nick, do you have any idea why I am getting strobing with ghost trails on any movement when using this script? It is sort of like judder as motion occurs. Although I like the results when no one is moving, I can't use this method for HD to SD if it comes out with those anomalies; they are unwatchable. And those anomalies aren't just in the preview window, they are there after rendering as well. Do you think it is a result of frameserving YUY2 instead of RGB32? I have never frameserved YUY2 up until now.

Thanks for the clarification regarding the AR Nick. Much appreciated.

John

EDIT: As a side note, as I follow this discussion and you and John Meyer compare results, are you using the same CBR of 8000 to compare? John uses that bitrate. Are you?
NickHope wrote on 4/15/2011, 4:07 AM
I can't help with the strobing and ghost trails yet, as I've barely used this script myself, but I'm currently embarking on a test of a minute of varied footage through various HD>SD DVD workflows, and one of them is via that script, so I'll let you know my results. And yes, I'm using 8000 2-pass CBR in CCE Basic for the video stream encoding.

Your project properties should match your HD media and that's what you should be frameserving into that AviSynth script for resizing. Just checking as what you've written sounds like you might be serving out SD from Vegas, which would defeat the object of the script.

I'm thinking that you should be able to keep VirtualDub and an intermediate AVI out of your workflow completely and go straight to Procoder 3 for MPEG-2 encoding (which, as you haven't confirmed, I'm assuming you're doing). Can Procoder 3 open an .avs file? If so then you can just open that in Procoder. If it can't then you can use VFAPIConv to convert the .avs to .avi, which can then be opened in Procoder. It's quite a chain of concurrent processes you'd then have going, but there's no reason it shouldn't work. A search for VFAPIConv on the forum will turn up a few fairly recent results from John M and myself.
craftech wrote on 4/15/2011, 4:14 AM
Your project properties should match your HD media and that's what you should be frameserving into that AviSynth script for resizing. Just checking as what you've written sounds like you might be serving out SD from Vegas, which would defeat the object of the script.
===============================
My project properties match my media, but thanks for the response. That would have caused problems if I had been doing that.
==================================
Can Procoder 3 open an .avs file? If so then you can just open that in Procoder. If it can't then you can use VFAPIConv to convert the .avs to .avi which can then be opened in Procoder. It's quite a chain of concurrent processes you'd then have going but there's no reason it shouldn't work. A search for VFAPIConv on the forum will turn up a few fairly recent results from John M and myself.
==================================
That's a good idea. Let me check Procoder to see if it can open an .avs. It just might; otherwise I may try that converter you mentioned. But I first have to figure out what is causing the anomalies with the script.

Thanks.

John
craftech wrote on 4/16/2011, 10:14 AM
Can Procoder 3 open an .avs file? If so then you can just open that in Procoder. If it can't then you can use VFAPIConv to convert the .avs to .avi which can then be opened in Procoder. It's quite a chain of concurrent processes you'd then have going but there's no reason it shouldn't work. A search for VFAPIConv on the forum will turn up a few fairly recent results from John M and myself.
==================================
That's a good idea. Let me check Procoder to see if it can open an .avs. It just might; otherwise I may try that converter you mentioned. But I first have to figure out what is causing the anomalies with the script.
====================================
I tried this. I frameserved a 1920x1080 square pixel / UFF MXF loop from the Vegas timeline using YUY2.

Ran the script and opened the .avs in Procoder 3. Procoder 3 saw it as a 4:3 LFF AVI, so I changed the Source parameters to square pixel (3:2) and UFF.
The Target was set to NTSC DV (AVI) 16:9 Widescreen. I also ran a second test changing the Source parameters to 16:9 and UFF to see if there was a difference in the results.

I brought both results into a Vegas project. In both cases the video was BLACK with Audio only.

So I tried changing the frameserver to RGB32 instead of YUY2 and ran both tests again. Same result: black video. (Actually the frame on the timeline looks orange, but it displays black.)

All of those AVI tests play just fine and look fine in a media player such as VLC Media Player, but the video will not display in Vegas. Vegas recognizes the video clip's properties correctly, however.

Next I tried using the frameserver signpost as the source, using YUY2, and got the same black video.

Next I ran VirtualDub and tried rendering the AVI with the Lagarith codec. Same strobing effect.

EDIT: OK. I think I know the reason for the strobing. If I set the project properties in Vegas to Progressive Scan (None) the strobing disappears. Maybe there is a field order reversal somewhere in this process. If I change the file properties of the clip to LFF and the Project properties to UFF or vice versa it also works without strobing.

John
NickHope wrote on 4/16/2011, 11:39 AM
John,

Sounds like there's something with that Procoder DV codec that Vegas doesn't like.

What version of Vegas are you running? There was a problem with non-Sony DV files in 10.0a that got fixed in 10.0b, but it was slow opening, not black video. But perhaps it's related.

What is your final destination for the SD video? DVD? DV? If it's DVD then why not go straight to MPEG-2 in Procoder?

Also, I assume you meant VLC media player, not Vimeo, right?

Can't help with strobing etc. yet but will share my results when I've got a bit further with my own testing.
craftech wrote on 4/16/2011, 11:49 AM
John,
==============
Vegas 8.0c
=================
What is your final destination for the SD video?
===========
DVD. I usually go right to MPEG-2 from the frameserver, but I wanted to try that script because of the talk above that it would produce better results going from HD to SD. It appears an AVI intermediate is required when using the script.
===============
Also, I assume you meant VLC media player, not Vimeo, right?
==========
Yes, I corrected it. The strobing is from a field order reversal. Maybe VD is doing something funny to it because the script says AssumeTFF. The master being frameserved from Vegas is 1920x1080 square pixel TFF. Either way, the same problem occurs whether I set the deinterlace properties in the master to Interpolate or Blend. It makes no difference when I make the VD AVI.
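For reference, the field order assumption can also be flipped in the script itself rather than in Vegas. A minimal sketch (the path is just whatever your frameserver signpost is):

AviSource("d:\fs.avi")   # assumed Debugmode frameserver signpost path
AssumeBFF()              # try BFF here if AssumeTFF gives strobing, or vice versa
# ...rest of the resize script unchanged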

Thanks for this help. I really appreciate it Nick.

John
NickHope wrote on 4/16/2011, 2:41 PM
John, you don't need an intermediate at all, and if you use a DV intermediate then it will be lossy. AviSynth is a frameserver. And from what you said earlier, it sounds like Procoder will accept .avs input. So you can just run this chain all in series and concurrently:

Vegas > Debugmode Frameserver > AviSynth > Procoder

This is exactly what I'm doing as I type this, except that I'm using CCE instead of Procoder.

That script will accept RGB24 input as well as YUY2. I'm in the thick of these tests right now, but I'm seeing that if I serve RGB24 into it then full-scale levels are unchanged, whereas if I serve YUY2 then levels get squeezed (0-255 => 16-235), so you need to beware of that and compensate for it in your workflow if necessary. You could do that in several ways in Vegas, a couple of ways in AviSynth, and very probably in Procoder too. It all depends where and how you like to conform your levels.
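For example, either of these AviSynth lines will expand the levels back out (the file path is just an assumed frameserver signpost; whether you want this at all depends on where you conform your levels):

AviSource("d:\fs.avi")                        # assumed YUY2 frameserver output
ColorYUV(levels="TV->PC")                     # expand 16-235 out to 0-255
# Levels(16, 1.0, 235, 0, 255, coring=false)  # an alternative that does much the same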

If you upload a few smart rendered seconds of your source footage somewhere (e.g. www.mediafire.com) I'll run it through the tests I'm doing here and report back. If you do so, choose a part where you've been getting strobing.
craftech wrote on 4/17/2011, 4:04 AM
Thanks for taking the time to help me with this Nick.

My interest in these experiments came after reading John's post above, which stated:
I took the extra step of rendering to high-quality SD MPEG-2, using Vegas, burning a DVD, and then viewing the resulting DVD on my WEGA Sony 30" CRT monitor. The Vegas downsized result was OK, but had significant moiré; the QTGMC result was unwatchable; but the video created with the script below looked almost perfect and had no moiré whatsoever.

Then he posted the script I tested and that we are discussing right after that statement.

I am always looking for better HD to SD (for DVD) methods. I have been frameserving HD projects like I described straight into Procoder 3, using Procoder to do the scaling and transcoding to MPEG-2. I have tried VD for resizing using the Lanczos3 scaler because some people thought it was better, and I have also tried nesting into a Vegas SD project.

Like many, I am never completely satisfied with the results. That script John posted sounded promising so I started experimenting with it. The strobing is present throughout the entire video and is a result of field order reversal somewhere in the process. If I change the field order of the avi I import into Vegas the strobing disappears.

But moreover, the only reason I am messing with it at all is because of this quest for the best HD to SD conversion for DVD I can get.

So maybe I'll try your method:

Vegas > Debugmode Frameserver > AviSynth > CCE, substituting Procoder 3 for CCE (although I have an older version of CCE myself).

I have four questions Nick:

1. Which AviSynth are you using for your workflow?

2. Why RGB24 over RGB32 for the frameserver?

3. In the web page that Musicvid put up that covered using HandBrake to produce nice web video, do you know if anyone has used HandBrake for producing MPEG-2 for DVDA?

4. For uploading a few smart rendered seconds of my source video to Mediafire, what do you want me to render it to?

Thanks again,

John
NickHope wrote on 4/17/2011, 5:21 AM
Then he posted the script I tested and that we are discussing right after that statement.

That's the correct script. The two similar scripts that I posted later contain ConvertToYV12(interlaced=true, matrix="PC.709"), which both John Meyer and I have since found causes issues such as ghosting, so avoid that YV12 conversion. In any case the script will accept RGB24 or YUY2. YUY2 input will result in less contrast than RGB24 input. If you don't want that and would prefer to maintain full scale levels (e.g. if you conformed to Studio-RGB 16-235 in Vegas prior to frameserving) then you can add ColorYUV(levels="TV->PC") to expand the levels out again. The whole script would then look like this:

source = AviSource("d:\fs.avi").AssumeTFF()
IResize(source, 720, 480)

function IResize(clip Clip, int NewWidth, int NewHeight) {
    Clip
    SeparateFields()
    Shift = (GetParity() ? -0.25 : 0.25) * (Height()/Float(NewHeight/2)-1.0)
    # Shift = 0 # John Meyer says also try it with this line instead of the one above
    E  = SelectEven().Spline36Resize(NewWidth, NewHeight/2, 0, Shift)
    O  = SelectOdd( ).Spline36Resize(NewWidth, NewHeight/2, 0, -Shift)
    Ec = SelectEven().Spline36Resize(NewWidth, NewHeight/2, 0, 2*Shift)
    Oc = SelectOdd( ).Spline36Resize(NewWidth, NewHeight/2, 0, -2*Shift)
    Interleave(E, O)
    IsYV12() ? MergeChroma(Interleave(Ec, Oc)) : Last
    Weave()
}

ColorYUV(levels="TV->PC") # Expand levels


Or you could expand levels in Vegas before frameserving by using a Studio RGB to Computer RGB preset in Levels.

There is an extra line in there, commented out, that John Meyer has suggested I try, but I haven't done so yet (at least I think that's what he was suggesting I do).

1. Which AviSynth are you using for your workflow?

2.5.8 single-threaded version from here.

2. Why RGB24 over RGB32 for the frameserver?

John Meyer tried RGB32 and found it halved the speed in a QTGMC script. Also, it's unnecessary as, according to Satish, the only difference is that RGB32 contains an alpha channel. See this old thread.

3. In the web page that Musicvid put up that covered using HandBrake to produce nice web video, do you know if anyone has used HandBrake for producing MPEG-2 for DVDA?

I haven't tried it but Musicvid or amendegw may have tried that, I guess(?). I see no direct MPEG-2 output, only MP4 or MKV, so I guess there would be further transcoding/reinterlacing to do.

4. For uploading a few smart rendered seconds of my source video to Mediafire, what do you want me to render it to?

Any format it will let you smart render to that can be opened with Vegas Pro 10.0c, so that I'm getting the original quality. If your camera is an EX1, would that imply MXF? I'm not experienced with that, but I'm sure someone else knows what format you could smart render EX1 footage to.
amendegw wrote on 4/17/2011, 5:49 AM
"I haven't tried it but Musicvid or Anendegw may have tried that, I guess(?). I see no direct MPEG-2 output, only MP4 or MKV so I guess there would be further transcoding/reinterlacing to do."Yeah, I know of no way for HandBrake to produce MPEG-2 output. AFAIK, it only exports to h.264 progressive.

However, I tried a test (and I think musicvid did as well) that exported the 60i source to 30p in HandBrake and then rendered back to the MainConcept MPEG-2 "DVD Architect NTSC Widescreen video stream" template (i.e. 720x480 29.97 fps interlaced).

The results came out pretty good, but I didn't test anything with motion - as I was somewhat concerned that the 60i->30p->60i may introduce stutter. Also, I was somewhat concerned that this procedure included two lossy renders.

All that said, as I recall I briefly experimented with farss' suggestion of using the Mike Crash Smart Deinterlacer to deinterlace 60i->60p, then HandBrake at 60p, then MainConcept MPEG-2 back to 60i. It didn't work very well, but I think the concept is valid - I just didn't have my deinterlacer params set properly. Maybe I'll do some more experimenting with this. Edit: I'm going to have to think more about this - why even get HandBrake involved here?

...Jerry

johnmeyer wrote on 4/17/2011, 8:24 AM
Nick,

The Shift=0 line that I commented out was for my own purposes to see if the half-pixel shift really did anything. I was not recommending that you use it. I simply forgot to delete it. BTW, I didn't see any differences with Shift=0, but I only tested by looking at still images. I might see more if I watched the actual video and did so on a CRT.

In my case, the reason for trying all three Satish frameserver settings is that when I changed from AviSynth 2.5.8 MT to 2.6 MT, I found that I could no longer serve RGB24 into an AviSynth script (2.6 MT broke that), but RGB32 still worked. That's when I found that RGB32 was much slower. YUY2 works, but I never did fully resolve some very small color shift issues that seem to creep in (which only affected extremely saturated reds, at least in my limited testing). These are different issues from the 16-235 / 0-255 levels issues.

All my work was using Vegas in 8-bit mode. I don't know whether RGB32 frameserving would make sense if I used the full-color Vegas mode.

Finally, the script posted over at doom9 which did such a great job with the doll torture clip uses a plugin that apparently preceded QTGMC and which I haven't yet had time to track down (tempgaussmc_beta2a). Also, I couldn't quite follow what it was doing. However, the result was by far the best I've seen. Intriguing, but I just don't have the time to figure it out right now.

The concept of scaling multiple times is interesting. The first scale goes to 2x the final resolution; the next to the final horizontal resolution; and the third to the final vertical resolution. QTGMC is applied AFTER the downsize. However, the tempgaussmc_beta2a is applied before all of this re-sizing. I don't have time to look into tempgaussmc_beta2a, so at the moment I don't know if it is just denoising, or whether it is performing some other function.
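For illustration only, here is a rough sketch of the staged-downscale idea with a single QTGMC bob standing in where tempgaussmc_beta2a sits. This is not the doom9 script; the file path, preset, ConvertToYV12 step and the intermediate sizes for a 1920x1080 to 720x480 conversion are assumptions.

AviSource("d:\fs.avi").AssumeTFF()   # assumed frameserver path
ConvertToYV12(interlaced=true)       # QTGMC needs YV12; see the matrix/levels caveats above
QTGMC(Preset="Slower")               # bob-deinterlace to 59.94p before any resizing
Spline36Resize(1440, 960)            # stage 1: 2x the final 720x480 size
Spline36Resize(720, 960)             # stage 2: down to the final horizontal resolution
Spline36Resize(720, 480)             # stage 3: down to the final vertical resolution
# re-interlace to 29.97i afterwards if the target is DVD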
craftech wrote on 4/17/2011, 9:03 AM
4. For uploading a few smart rendered seconds of my source video to Mediafire, what do you want me to render it to?

Any format it will let you smart render it to, that can be opened with Vegas Pro 10.0c, so that I am getting the original quality. If your camera is an EX1, would that imply MXF? I am not experienced with that but I'm sure someone else knows what format you could smart render EX1 footage to.

I uploaded about 8 seconds of the 1920x1080 60i square pixel UFF video, rendered from the Vegas 8.0c timeline as MXF, to YouSendIt.

Thanks,

John
NickHope wrote on 4/18/2011, 12:13 AM
why even get HandBrake involved here?

Potentially for its good decombing/resizing ability, if you prefer not to get involved with AviSynth. The part that I haven't got clear in my head yet is whether to always re-interlace or not for DVD, and how best to do that.

The Shift=0 line that I commented out was for my own purposes to see if the half-pixel shift really did anything.

OK, I'll drop that test. That makes things easier :)

I found that RGB32 was much slower. YUY2 works, but I never did fully resolve some very small color shift issues

I am successfully frameserving RGB24 into that IResize script. It's a bit slower than YUY2 but works well. The QTGMC scripts won't accept RGB though. By the way, I duplicate your red ghosting on the dancer's leg at 480i with either the IResize script or your simple (no-filtering) QTGMC script, but I DON'T see that problem with 720p output created as per my QTGMC web video method (in other words ConvertToYV12 is OK for that). Hoping to get to the bottom of that issue later today.

I don't have time to look into tempgaussmc_beta2a

Forget tempgaussmc_beta2a. -Vit- posted an exact equivalent that uses QTGMC instead. If you are concerned about 2 instances of QTGMC running concurrently then write a lossless intermediate after the first stage. I did a short test without the intermediate (i.e. double QTGMC usage) and didn't get a crash, but it's pushing things a bit. In any case, double-QTGMC is going to be SLOWWW and probably unnecessary. I'm planning tests with a simpler bob for the first stage using Yadif, TDeint and/or the built-in Bob instead, as per Didée's suggestion.

I uploaded about 8 seconds of the 1920 x 1080 60i Square Pixel UFF video

Thanks. Got it. That's a great deinterlacing test clip without (mentioning no names) being deliberately sadistic. It's a particularly great candidate for testing smart deinterlacers as the background is static. I'll include some of that in the timeline that I'm using for testing. Got a cracking project now that includes Jerry's doll clip, John M's spinning ballet dancer, Stringer's driving clip, some of my own HDV, and test charts etc.. I'm planning to make a DVD of various HD>DVD permutations in different chapters. Hopefully then I'll upload a DVD folder online for others to burn and test, and write another guide with the best method(s) using these tools. I'll of course ask all the media owners for permission first.
amendegw wrote on 4/18/2011, 9:47 AM
Okay, I ran a test using farss' suggestion of using the Mike Crash DeInterlacer and combined that with the musicvid/Nick_Hope suggestion of using HandBrake. For your viewing pleasure, download HERE. And, imho, the results came out pretty darn good. Are they the best so far? I dunno - beauty is in the eye of the beholder.

Here's my workflow.

1) Put the Hula Doll 1920x1080 60i clip on the Vegas timeline with the project settings at 1920x1080 59.94 fps progressive.
2) Apply the Mike Crash Deinterlacer.
3) Render to a 1920x1080 59.94 fps DNxHD intermediate.
4) Import to HandBrake. Export to a 720x480; match framerate; CQ:RF=20 h.264 media file.
5) Drop this clip on a Vegas timeline after a "Match Media Settings"
6) Right click video event: "Reduce Interlace Flicker"
7) Add the Sony Sharpen FX.
8) Render to MainConcept NTSC Widescreen Video Stream template.

As with all things video, YMMV.

...Jerry

craftech wrote on 4/25/2011, 12:07 PM
Jerry,

Looks nice.

What are the settings for the Mike Crash Deinterlacer?

Thanks,

John
amendegw wrote on 4/25/2011, 3:30 PM
"What are the settings for the Mike Crash Deinterlacer?"Copied these from someone's post somewhere - can remember where I got them, but "thanks" to whoever made the post.


...Jerry

craftech wrote on 4/25/2011, 4:10 PM
Thanks Jerry,

Regards,

John
NickHope wrote on 4/25/2011, 11:42 PM
This is one plugin where generic settings are difficult to recommend. You really need to check "Show motion areas only" and then experiment with the thresholds to see exactly which parts of the video it is going to deinterlace. Appropriate settings are really dependent on the nature of the clip in question. This deinterlacer is good for just deinterlacing the moving parts of the scene and leaving the static/near-static parts alone.
amendegw wrote on 4/26/2011, 3:57 AM
"This is one plugin where generic settings are difficult to recommend. You really need to check "Show motion areas only" and then experiment with the thresholds to see exactly which parts of the video it is going to deinterlace. Appropriate settings are really dependent on the nature of the clip in question"Which begs the question... if my render produced pretty darn good results using some settings I found on this forum, maybe we can get good results keeping Mike Crash out of the mix. That is, use Vegas to render to double NTSC (i.e. 59.94 fps) - then using HandBrake to resize to 720x480 59.94 fps progressive - and a final render in Vegas to get the footage back to 720x480 SD DVD format ?? I haven't thought thru the implications of that flow - maybe it doesn't make sense, but it might be worth at test.

I don't think I'll have much time this week for that test - we'll see.

...Jerry

farss wrote on 4/26/2011, 4:25 AM
"That is, use Vegas to render to double NTSC (i.e. 59.94 fps) - then using HandBrake to resize to 720x480 59.94 fps progressive - and a final render in Vegas to get the footage back to 720x480 SD DVD format ?? I haven't thought thru the implications of that flow - maybe it doesn't make sense, but it might be worth at test."

Sorry but it doesn't make any sense.
Converting fields to frames with nothing else achieves zip; it's exactly what any app dealing with interlaced material does anyway.

I use Mike's Smart De-Interlacer all the time, but as noted you do need to optimise the settings depending on your content. It works well for me because a lot of my content is shot with the camera pretty much locked off.

Bob.

craftech wrote on 4/26/2011, 7:03 AM
What I am a little confused about (forgive me if it's already been answered) is this: why do I want to take a timeline with HD 1080i footage and deinterlace first before downsizing, then reinterlace it later for the SD DVD?

The Mike Crash Smart Deinterlace plugin adds a tremendous amount of time to the rendering process.

Thanks,

John
johnmeyer wrote on 4/26/2011, 8:26 AM
why do I want to take a timeline with HD 1080i footage and deinterlace first before downsizing, then reinterlace it later for the SD DVD?

I know everyone understands that you have to re-size the odd fields as a separate operation from re-sizing the even fields: if you instead resize both fields at once, you end up with a mess, with warped vertical lines and bands, etc., because the re-sizing algorithm is creating video using pixels from different moments in time. So, the fields are separated into even and odd fields, re-sized, and then put back together.
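Stripped of the half-pixel shift and chroma handling that the IResize script above adds, the bare principle looks something like this (the file path and sizes are assumptions for a 1920x1080 to 720x480 job):

AviSource("d:\fs.avi").AssumeTFF()         # assumed 1920x1080 interlaced source
SeparateFields()                           # 540-line fields at 59.94 fields/sec
E = SelectEven().Spline36Resize(720, 240)  # resize the even fields on their own
O = SelectOdd().Spline36Resize(720, 240)   # resize the odd fields on their own
Interleave(E, O)
Weave()                                    # back to 720x480 interlaced frames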

Now, I'm not sure if you are asking about whether doing a de-interlacing operation before resizing makes this resizing better. If you are asking that, then the answer is definitely yes, but only if the de-interlacing is really good. If the deinterlacing were perfect, constructing an artificial set of fields that perfectly created a full progressive frame that looked exactly like what the video would be if it were shot at 60 frames per second instead of 60 fields per second, then you would end up with perfect resizing, with more details because the resizer would know exactly where to put each pixel in the reduced resolution image.

If instead you are asking why the heck you would, after going to all the trouble of doing this perfect deinterlacing, throw away half of the work (half of all the fields) and then re-interlace, the answer is that you can't create a DVD that is 60p, and even if you are not going to DVD, there are other display devices that can't keep up with 60p (e.g., most of my older computers). So, if you are delivering on DVD, you have to return the result back to interlaced 29.97 fps (60i), something that is -- for better or worse -- the lingua franca of video in North America.
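For reference, the usual AviSynth idiom for getting 59.94p back to 29.97i looks something like this (the source line is an assumption; swap AssumeTFF for AssumeBFF to suit the target field order):

AviSource("d:\sd_progressive.avi")   # assumed: a 720x480 59.94p clip
AssumeTFF()
SeparateFields()
SelectEvery(4, 0, 3)                 # keep the top field of one frame, bottom field of the next
Weave()                              # 720x480 interlaced at 29.97 fps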



craftech wrote on 4/26/2011, 4:47 PM
Thanks for the reply John.

I read lots of posts on the forum. HD to SD is a huge topic of discussion and it is frequent.

I probably missed some threads or posts, because I don't remember anyone suggesting deinterlacing for best results in a workflow for interlaced HD to SD downconversion.

I remember endless suggestions regarding setting the deinterlace method in the project properties to either Blend or Interpolate and unless you were editing progressive footage, not to set it to None.

I do not remember anyone suggesting using the Mike Crash Smart Deinterlace filter at the project output before either nesting an HD project into an SD project or frameserving to another program.

I am testing Jerry's method above and I have taken my 1080i MXF timeline and set the deinterlace method in project properties to NONE. Then I have applied the Mike Crash Smart Deinterlace settings Jerry posted to the output. I am rendering it to a DNxHD intermediate file.

With a few color enhancement filters and the Mike Crash filter I started the rendering of the 1 hour and 20 minute project last night. The rendering time has stabilized and I can safely project that the estimated rendering time is 50 HOURS!!

Why? The Mike Crash Smart Deinterlace filter. I frameserved the same timeline without that filter to Procoder for an m2ts HD project and it took 12 hours.

Is anyone else using that filter at the project output with the end goal being an SD DVD of interlaced HD material?

Thanks,

John

musicvid10 wrote on 4/26/2011, 6:25 PM
I wonder why the Mike Crash filter, which is along the lines of neuron2's older "bob" filter, is getting so much more attention than Yadif, which I understand is also available for Vegas. Is there something about the Yadif plugin that makes it a non-contender for this kind of work? I confess, I have not tested it in Vegas yet.