Converting 1440x1080i to 720x480

Greenlaw wrote on 6/28/2009, 12:34 AM
Hi,

When scaling down 1440x1080i footage for DVD, I've been deinterlacing the footage in Vegas before scaling it down to 720x480 and exporting the result as progressive for editing. My thinking is that this should result in sharper standard res footage.

This seems to be working for me most of the time but I was wondering if this was a proper thing to do and what the 'catches' might be. Opinions? Thanks in advance for any helpful advice.

Greenlaw

--
Greenlaw
Little Green Dog
www.littlegreendog.com

Greenlaw
Senior Digital Artist
Rhythm & Hues Studios - The Box
www.rhythm.com

Comments

farss wrote on 6/28/2009, 1:02 AM
I would not do a separate de-interlace on the HDV.

Set the de-interlace method in project properties to Blend or Interpolate, the latter if you have a lot of fast motion, and render to your NTSC 60i template at Best from your 60i HDV project.

Also watch your levels: HDV cameras seem to record well over 100%. Glenn Chan has a project you can download with a Color Curve FX in it that'll bring the 109% level down to 100%. It's not such a big thing, more a matter of taste, unless you're doing it for broadcast; you can simply leave the highlights where they are and let the player clip them if it wants.
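The 109%-to-100% correction Bob mentions amounts to a simple linear rescale of the luma. Here's a rough sketch in Python; the 8-bit studio-swing numbers (16 = black, 235 = 100% white, 255 ~ 109%) are standard assumptions, not Glenn Chan's actual curve:

```python
# Hedged sketch: compress "super-white" HDV levels (up to ~109%) back into
# the 0-100% broadcast range. Assumes 8-bit studio-swing video where
# 16 = 0% black, 235 = 100% white, and 255 is roughly the 109% level
# HDV cameras can record.

BLACK = 16
WHITE = 235
SUPER_WHITE = 255

def clamp_super_whites(y: int) -> int:
    """Linearly rescale luma so 255 maps to 235 while black stays at 16."""
    if y <= BLACK:
        return y
    scale = (WHITE - BLACK) / (SUPER_WHITE - BLACK)
    return round(BLACK + (y - BLACK) * scale)

# A 109% highlight comes down to legal 100% white:
assert clamp_super_whites(255) == 235
# Black is untouched and mid-tones shift only slightly:
assert clamp_super_whites(16) == 16
```

In practice you'd apply this kind of curve (or just leave the highlights alone, as Bob says) depending on whether legal broadcast levels matter for your delivery.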

As an aside you don't want it too sharp before you downscale or you can run into issues with line twitter.

Bob.
Greenlaw wrote on 6/28/2009, 1:21 AM
Thanks for the tips, Bob! I'll try your suggestions immediately.

Best,

Greenlaw

John_Cline wrote on 6/28/2009, 2:17 AM
HDV is interlaced, which means that there are 59.94 (or 50 in PAL) individual consecutive images per second. The images are called "fields." 59.94 (or 50) images per second is its "temporal resolution." When you deinterlace the video, you have thrown away exactly half of this temporal resolution forever. Motion in the video will be exactly half as smooth because you have reduced the number of images per second from 59.94 to 29.97 (or 50 to 25 in PAL.)
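The arithmetic here can be sketched in a couple of lines (a hedged illustration of the field-rate math, not anything Vegas does internally):

```python
# Interlaced NTSC HDV delivers 59.94 temporally distinct fields per
# second; deinterlacing to progressive frames before the resize collapses
# each pair of fields into a single frame, halving the temporal resolution.

FIELD_RATE_NTSC = 60000 / 1001   # 59.94 fields/s
FIELD_RATE_PAL = 50.0

def frames_after_deinterlace(field_rate: float) -> float:
    """Each progressive frame consumes two temporally distinct fields."""
    return field_rate / 2

assert round(frames_after_deinterlace(FIELD_RATE_NTSC), 2) == 29.97
assert frames_after_deinterlace(FIELD_RATE_PAL) == 25.0
```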

Vegas is smart enough to resize interlaced video correctly, assuming you have chosen a deinterlace method in the project settings; it also correctly compensates for pixel aspect ratio. (In fact, more correctly than other NLEs.) It takes each HDV field, which has an actual spatial resolution of 1440x540, resizes it to 720x240, and then reinterlaces the video into 720x480 60-field interlaced. (It actually resizes to 704x480 and pads the sides with a little bit of black.) You have gone from an HDV frame size of 1440x1080 with a Pixel Aspect Ratio of 1.3333 and a Display Aspect Ratio of 16:9 to an SD frame size of 720x480 with a PAR of 1.2121 and a DAR of 16:9 without losing any of the temporal resolution.
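As a toy illustration of the split-resize-weave pipeline described above (nearest-neighbor scaling on tiny frames, purely to show the shape of the operation; Vegas's actual resampler uses bicubic/bilinear kernels on full-size fields):

```python
# Minimal sketch (not Vegas's actual code) of a field-separate resize:
# split the interlaced frame into its two fields, scale each field on
# its own, then weave them back together. A frame is a list of rows.

def split_fields(frame):
    """Even rows form the top field, odd rows the bottom field."""
    return frame[0::2], frame[1::2]

def resize_field(field, new_h, new_w):
    """Nearest-neighbor resize of one field (stand-in for bicubic)."""
    old_h, old_w = len(field), len(field[0])
    return [[field[r * old_h // new_h][c * old_w // new_w]
             for c in range(new_w)]
            for r in range(new_h)]

def weave(top, bottom):
    """Reinterlace two fields back into a single frame."""
    frame = []
    for t, b in zip(top, bottom):
        frame.append(t)
        frame.append(b)
    return frame

# Toy 4x4 "frame" standing in for 1440x1080 (fields 1440x540 -> 720x240):
frame = [[y * 10 + x for x in range(4)] for y in range(4)]
top, bottom = split_fields(frame)
small = weave(resize_field(top, 1, 2), resize_field(bottom, 1, 2))
assert small == [[0, 2], [10, 12]]  # 2x2 interlaced result, both fields kept
```

The key property is that each field is resized independently, so no field is ever blended into its temporal neighbor and the full 59.94 fields/s survive the downscale.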

If there is any motion in the video, then selecting "Interpolate" in the project properties is preferable to the "Blend" method. Also, for highest spatial quality, set the render quality to "Best." Basically, just set these two things and render away; Vegas will handle all the rest of it for you. And don't manually deinterlace like you've been doing!
Greenlaw wrote on 6/28/2009, 10:08 AM
Thanks for all this insightful information guys!

It seemed like the more I read elsewhere about deinterlacing, the more confusing it got because there's a lot of conflicting advice out there that lacks any real explanation. I really appreciate learning exactly what Vegas is doing with the footage 'under the hood'.

Greenlaw
musicvid10 wrote on 6/28/2009, 10:28 AM
Despite its intuitive application, the actual deinterlace methods used in Vegas are rudimentary. Blend and Interpolate are older, static methods adapted from early Photoshop days, while there are now some outstanding smart-deinterlace technologies available. Without going into a lot of detail, it is my hope that sharper options like Smart-Bob / Adaptive Decomb will be available in Vegas soon.

Further Reading (although slightly dated):
http://www.100fps.com/
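To make the "bob" part of Smart-Bob concrete, here's a minimal sketch of a plain bob deinterlace in Python. A smart bob would interpolate the missing lines adaptively based on motion; this toy version just line-doubles each field, which is the simplest possible variant:

```python
# Hedged sketch of a plain "bob" deinterlace (the dumb cousin of the
# Smart-Bob idea mentioned above, and not what Vegas does): each field
# is line-doubled into a full-height progressive frame, so 59.94
# fields/s become 59.94 frames/s instead of being blended down to 29.97.

def bob(frame):
    """Turn one interlaced frame (list of rows) into two progressive frames."""
    top, bottom = frame[0::2], frame[1::2]

    def line_double(field):
        out = []
        for row in field:
            out.append(row[:])  # the real scan line...
            out.append(row[:])  # ...repeated (a smart bob would interpolate)
        return out

    return line_double(top), line_double(bottom)

frame = [[1, 1], [2, 2], [3, 3], [4, 4]]
first, second = bob(frame)
assert first == [[1, 1], [1, 1], [3, 3], [3, 3]]
assert second == [[2, 2], [2, 2], [4, 4], [4, 4]]
```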
TheHappyFriar wrote on 6/28/2009, 12:52 PM
Maybe it's just my circumstances, but I've never done anything fancy; I just render to an HD 720p format and SD DVD 16:9 without any issues. If I wanted it progressive in SD, I would render out to half resolution and use that one for the final render. (1080/2 = 540, which is 60 more lines than NTSC DV.)
John_Cline wrote on 6/28/2009, 2:56 PM
Smart de-interlace is only really useful if you are converting from interlace to progressive and you are not changing the spatial resolution. When converting HD to SD, separating the fields, resizing each field using a bicubic or bilinear algorithm and then reinterlacing, works perfectly well and this is exactly what Vegas does.
SuperG wrote on 6/28/2009, 8:46 PM
> Smart de-interlace is only really useful if you are converting from interlace to progressive and you are not changing the spatial resolution. When converting HD to SD, separating the fields, resizing each field using a bicubic or bilinear algorithm and then reinterlacing, works perfectly well and this is exactly what Vegas does.

Good point. I realized, after a long time unfortunately, that it was busy work to deinterlace in VirtualDub if the intended output was going to be reduced in size. I recall reading an article once that quoted Yves Faroudja himself as saying there was no such thing as true deinterlacing when referring to temporally spaced fields. Not that he hasn't made a lot of money selling stuff to attempt it...