Interlacing and Vegas question

Mike M. wrote on 8/7/2010, 5:06 PM
I'm a bit confused as to what happens when importing, editing and rendering interlaced files.

For example, let's say I have an interlaced file, import it into Vegas, edit it and then render it to be used as an MPEG-compliant DVD. Does Vegas do all the interlacing conversion behind the scenes, or do I have to blend the fields and then re-interlace?

Also, what would be the case if you had a combination of interlaced and progressive files and then rendered to interlaced?

Thanks,

Mike

Comments

kkolbo wrote on 8/7/2010, 5:46 PM
"For example, let's say I have an interlaced file, import it into Vegas, edit it and then render it to be used as an MPEG-compliant DVD. Does Vegas do all the interlacing conversion behind the scenes, or do I have to blend the fields and then re-interlace?"

If it is interlaced, imported as interlaced, and is SD, then going to an interlaced DVD does not require you or Vegas to do much. If you are going to a progressive DVD, then Vegas will de-interlace using the setting you have in the project properties. If you use SD progressive on the timeline and go out to interlaced DVD, then Vegas will take care of that.

Edit: Removed incorrect suggestion

KK
John_Cline wrote on 8/7/2010, 8:41 PM
If you are going from interlaced HD to interlaced SD, then set your project to interlaced and choose a deinterlace method. (I use interpolate.) If you set it to progressive as kkolbo suggests, you will throw away half your vertical resolution.

When resizing interlaced video and you have a deinterlace method selected in project properties, Vegas "unfolds" the frame into individual fields at double the frame rate, rescales the video and then "refolds" (reinterlaces) the fields back into interlaced frames at the new size. The rendering quality must be set to "Best" so that Vegas uses the Bicubic rescaling algorithm. This is exactly how interlaced video should be resized.
Mikey QACTV7 wrote on 8/7/2010, 8:57 PM
But then there is upper field or lower field or progressive. Do you know about upper field and lower field? Rendering an upper-field video to lower field can do some weird things. Maybe someone can explain the ups and downs (upper and lower fields) in interlaced video to mixer440.
musicvid10 wrote on 8/7/2010, 9:04 PM
Nope, upper/lower considerations are not an issue in Vegas 99% of the time, quite unlike P******e and other software.

The rare exceptions are an incorrect media flag or a truly reversed field order, which rarely happen anymore.
John_Cline wrote on 8/7/2010, 9:31 PM
All the interlaced HD I have ever run across has been upper field first. Standard definition DV is always lower field first. It doesn't really matter whether you use upper or lower when rendering to MPEG2 as there is a flag in the header to tell the player how to handle it. Typically though I will use the same field setting as the source footage.
Mike M. wrote on 8/7/2010, 10:15 PM
... Where it gets sticky is when you have interlaced HD and you are going to resize it to SD of any kind ...

What about going the other way for some reason (4:3 to 16:9 widescreen)?

Thanks folks for the information and help.
kkolbo wrote on 8/7/2010, 11:18 PM
Let me apologize and also thank John Cline. He is absolutely correct. The handling he describes is how it should be handled. I ran my "visual" tests several years ago when HDV was first out. Even though technically it should have behaved as John said, at that time it was not performing that way. Hence my workaround.

I ran the whole battery of tests again tonight knowing that John knows his stuff, and the results are very different. (For one thing, it's so much faster on multi-core than on the processors of old.) The strange artifacts I used to get are not there at all when things are set properly as John says. In fact, I pulled the old test files out to look at, and it was night and day.

Thank you John. One for the correction, and two for showing me that Vegas is now handling this so well. I am now confident to let Vegas do the heavy lifting again without me thinking about it.

KK
John_Cline wrote on 8/8/2010, 12:16 AM
"What about going the other way for some reason (4:3 to 16:9 widescreen)?"

As long as Vegas knows what the properties of the source video happen to be (and it usually does), then you will have no problem. Of course, taking 4x3 SD, cropping it to 16x9 and upscaling it to HD will look pretty soft, but it will do it correctly.

The "deinterlace method" under "Project Properties" serves two functions: 1) It tells Vegas how to deinterlace interlaced material to progressive when rendering. 2) It also tells Vegas how to deal with interlaced material when resizing, cropping or moving.

Since there are 59.94 individual fields per second (or 50 in PAL), Vegas takes the interlaced video and splits it into progressive fields at 59.94 fps. Let's say you have a 1920x1080 widescreen interlaced video and you want to rescale it to widescreen interlaced SD. Each frame consists of two fields taken 1/59.94th of a second apart in time, and each field is actually 1920x540 with a horizontal pixel aspect ratio of 1.0. Vegas rescales the 1920x540 fields to 720x240 fields with a pixel aspect ratio of 1.212121, then reinterlaces the fields back into frames, and you end up with perfectly interlaced 720x480 at 29.97 fps. Both 4x3 and 16x9 SD video are 720x480; it's just the pixel aspect ratio flag that is different.
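
To make the field geometry concrete, here is a rough Python/numpy sketch of that split-resize-reinterlace idea, using OpenCV only as a stand-in resizer. The function names and the placeholder frame are made up for illustration; this is not Vegas's actual code, and the 1.212121 pixel aspect ratio lives only in the file header rather than in the pixel data.

```python
# Illustrative only: split an interlaced HD frame into fields, rescale each
# field, and weave them back together as an interlaced SD frame.
import numpy as np
import cv2  # any resizer would do; cv2.resize is just a convenient stand-in

def split_fields(frame):
    """Return (upper, lower) half-height fields from one interlaced frame."""
    # Which lines belong to which field depends on the field order flag.
    return frame[0::2], frame[1::2]            # 1080 lines -> two 540-line fields

def reinterlace(upper, lower):
    """Weave two half-height fields back into one full-height interlaced frame."""
    frame = np.empty((upper.shape[0] * 2,) + upper.shape[1:], dtype=upper.dtype)
    frame[0::2] = upper
    frame[1::2] = lower
    return frame

hd_frame = np.zeros((1080, 1920, 3), dtype=np.uint8)    # placeholder 1920x1080i frame
upper, lower = split_fields(hd_frame)                   # two 1920x540 fields
upper_sd = cv2.resize(upper, (720, 240), interpolation=cv2.INTER_CUBIC)
lower_sd = cv2.resize(lower, (720, 240), interpolation=cv2.INTER_CUBIC)
sd_frame = reinterlace(upper_sd, lower_sd)              # one 720x480 interlaced frame
```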

Vegas will also handle interlaced to progressive and progressive to interlaced correctly. In almost all cases, just set the "Deinterlace method" under "Project Properties" to "Interpolate" and leave it set that way permanently. (If there is NO motion in the video, you might want to use "Blend fields" in that circumstance.)

Like I said, as long as Vegas knows the source file's image size, frame rate, pixel aspect ratio and whether it's interlaced or progressive (which it usually does without user intervention), it will automatically handle all format conversions correctly.
farss wrote on 8/8/2010, 1:00 AM
"Vegas will also handle interlaced to progressive and progressive to interlaced correctly"

I'm intrigued by how Vegas will handle interlaced to progressive correctly. I assume we're talking about "i" and not PsF.

One reason I ask is that in the past I've jumped through a number of hoops to go from 1080i to 720p for the likes of YouTube. A couple of days ago I tried a way simpler and faster approach and, well, the results, to me at least, looked pretty darn good.
All I did was drop the 1080i footage into a 720p project and encode to H264 using the Sony Media Encoder at 6Mbps. Deinterlace Method was set to 'Interpolate'. I would have thought that without some form of smart de-interlacing I was throwing resolution away. Maybe I am, maybe not. Short of conducting tests it's hard to know for sure simply relying on how footage looks.

Bob.
John_Cline wrote on 8/8/2010, 1:18 AM
"I assuming we're talking about "i" and not PsF."

Yes. I don't typically shoot PsF, but I would think as long as Vegas knew what it was, it would handle that correctly, too.

"All I did was drop the 1080i footage into a 720p project and encode to H264 using the Sony Media Encoder at 6Mbps. Deinterlace Method was set to 'Interpolate'."

That's exactly how I do it and I've been quite pleased with the results. It sure is easy, and the Sony AVC encoder is pretty darned fast and looks really good. I sometimes run it up to 10 Mbps or even higher if I want to eke the last little bit of quality out of YouTube or Vimeo.

"I would have thought without some form of smart de-interlacing I was throwing resolution away."

Well, you are throwing away a bit of vertical resolution when upscaling from 540 to 720, but it doesn't really seem to be all that noticeable. Every once in a great while, I get a strange artifact from a smart deinterlacer, so this method eliminates that possibility altogether.
farss wrote on 8/8/2010, 1:37 AM
" Every great once in a while, I get a strange artifact from a smart-deinterlacer, so this method eliminates that possibility altogether. "

I've had exactly that problem as well. By the time I encode to H.264 and YouTube does its thing, the small artifact gets quite a bit worse as well. I have found that by spending enough time tweaking the de-interlacer I can get rid of it; however, it almost seems to require adjustment on a case-by-case basis, so I'm not certain I could recommend some magic soup that'll just work.

Bob.
Randy Brown wrote on 8/8/2010, 7:05 AM
"All I did was drop the 1080i footage into a 720p project and encode to H264 using the Sony Media Encoder at 6Mbps. Deinterlace Method was set to 'Interpolate'."

What settings would you guys use in V8?
Mike M. wrote on 8/8/2010, 3:09 PM
Thanks so much for your expertise John, Bob and the group. I wish the forums had a place to "Sticky" these topics.
PeterDuke wrote on 8/9/2010, 6:23 AM
John
I understand that a double-rate deinterlacer (e.g. 50i to 50p) that works by interpolating both fields is sometimes called a "bob" deinterlacer, because a horizontal line may bob up and down in the deinterlaced video. The treatment is to apply a modest amount of vertical blur and follow that with modest sharpening. Do you know if Vegas does that?

Edit
Sorry, bob deinterlacing is apparently something else. Nevertheless, I understand that blurring and sharpening is still beneficial, so the basic question still stands, unless I'm wrong there too!
John_Cline wrote on 8/9/2010, 1:57 PM
If you were to set the project properties to 59.94 (or 50 for PAL) progressive and dropped an interlaced file into the timeline, Vegas seems to split the fields and double each field's vertical size numerically, which results in 59.94 progressive. To answer your specific question: I have never seen any evidence or documentation that Vegas is applying any vertical blur or sharpening in the process.
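
As a crude illustration of that split-and-double behaviour (not Vegas's actual code), something like the following numpy sketch turns one interlaced frame into two progressive frames at double the rate; a real converter would interpolate the new lines rather than simply repeat them.

```python
import numpy as np

def bob_to_double_rate(frame):
    """One 29.97 fps interlaced frame -> two 59.94 fps progressive frames."""
    upper, lower = frame[0::2], frame[1::2]      # the two half-height fields
    first = np.repeat(upper, 2, axis=0)          # line-double each field back
    second = np.repeat(lower, 2, axis=0)         # to the full frame height
    return first, second
```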

Vegas is more often used to convert 59.94 field per second interlaced to 29.97 frame per second progressive.

When resizing or using motion, Vegas splits the frames into separate fields which results in double the frame rate at half the frame's vertical resolution (since each field consists of only the odd or even lines.) The processing is done and the fields are reinterlaced. There is no bob, weave, blur or sharpening applied or needed.

When Vegas is actually converting interlaced to progressive and you have selected "Blend", it takes both fields in the frame and averages them together. Combing is avoided because the two images are laid on top of each other. Vegas then resizes the result back to the original vertical size, but this can cause "ghosting" on the left or right side of objects which are moving rapidly in the scene. This method results in half the temporal (time) resolution, at the cost of possible ghosting.

When Interpolate is selected, Vegas throws one of the fields away and interpolates the missing lines by using the line directly above and below each "missing" line. This also converts 59.94 fields per second to 29.97 progressive frames per second. This method results in losing up to half the vertical resolution and half the temporal resolution, but with no chance of ghosting.
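
For what it's worth, here is a rough numpy sketch of the core idea behind those two methods. It is a simplification for illustration (the blend version just line-doubles back to full height instead of doing a proper rescale, and which field is kept is arbitrary here), not the actual Vegas implementation.

```python
import numpy as np

def deinterlace_blend(frame):
    """'Blend': average the two fields, then return to full height (may ghost)."""
    upper = frame[0::2].astype(np.float32)
    lower = frame[1::2].astype(np.float32)
    blended = (upper + lower) / 2                       # half-height blend of both fields
    return np.repeat(blended, 2, axis=0).astype(frame.dtype)

def deinterlace_interpolate(frame):
    """'Interpolate': keep one field, rebuild missing lines from neighbours (no ghosting)."""
    out = frame.astype(np.float32)
    kept = out[0::2]                                    # keep the upper field only
    below = np.concatenate([kept[1:], kept[-1:]])       # line below each missing line
    out[1::2] = (kept + below) / 2                      # average of lines above and below
    return out.astype(frame.dtype)
```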

"Smart deinterlacing" or motion adaptive blending (which Vegas does not do natively), is a combination of weaving and blending techniques. (Weaving is accomplished by adding consecutive fields together. This is fine when objects have not moved between fields, but if they have it will result in "combing" artifacts, when the pixels in one frame do not line up with the pixels in the other, forming a jagged edge.) As areas that haven't changed from frame to frame don't need any processing, the frames are weaved and only the areas that need it are blended. This retains full vertical resolution, half the temporal resolution, and has fewer artifacts than weaving or blending because it's a combination of both. However, motion adaptive blending algorithms can be fooled.
farss wrote on 8/9/2010, 3:31 PM
There's one thing about interlaced video that needs to be kept in mind otherwise the mental arithmetic that one might apply can lead you to wrong conclusions.
By design, interlaced video has lower actual resolution than progressive. Although HD cameras record 1080 lines, that does not mean the image has 1080 lines of actual resolution. Many factors come into play here; however, the most relevant is that, by design, when a camera is set to record interlaced video it will use line-pair averaging, which reduces vertical resolution to around 70% of the nominal figure. Note carefully: this does NOT apply when a camera is set to record progressive and writes it into an interlaced stream.

What this means in a practical sense is that when John talks about losing half the vertical resolution, the real-world outcome is not as bad as half, because the original image did not have 1080 lines of resolution to begin with. It's probably lucky to be over 700 lines. The difference between 700 lines and 540 lines is not that dramatic, certainly not as big as the difference between 1080 lines and 540 lines.

Bob.
John_Cline wrote on 8/9/2010, 4:30 PM
Bob is absolutely correct; it is necessary to keep in mind the distinction between actual resolution and the numerical resolution defined by image size.
jabloomf1230 wrote on 8/9/2010, 4:58 PM
Doesn't the new Panny TM-700 output 1000 lines of actual resolution in 1080 60p mode, as opposed to most other consumer camcorders putting out 700-800 lines in 1080 30p mode?
PeterDuke wrote on 8/9/2010, 5:19 PM
There should be no reason why lines of resolution should depend on frame rate. Perhaps you meant to compare 60p (60 progressive frames per second) with 60i (60 interlaced fields per second). In that case the 60p should have more vertical resolution, as Bob has said.
jabloomf1230 wrote on 8/9/2010, 6:06 PM
Most of the consumer 30p modes are really 30PsF (both fields embedded in a 60i wrapper).
farss wrote on 8/9/2010, 6:12 PM
"Most of the consumer 30p modes are really 30psf (both fields embedded in a 60i wrapper). "

Indeed, and as I pointed out, technically in this case there is no need to use line-pair averaging in the camera. However, it may still be employed, or else the OLPF will reduce resolution, to limit problems with line twitter and aliasing if the consumer plays the video out of the camera into their TV.

At the end of the day, resolution figures can be grossly misleading. A 35mm print delivers only 700 lines of resolution onto a cinema screen, and yet that is the gold standard.

Bob.
jabloomf1230 wrote on 8/10/2010, 10:26 AM
Here's a review of the Panasonic HDC-TM700, covering its "sharpness":

http://www.camcorderinfo.com/content/Panasonic-HDC-TM700-Camcorder-Review-37681/Motion-amp-Sharpness-Performance.htm

Even with this camcorder, the actual resolution is ~1000x900.
PeterDuke wrote on 8/10/2010, 6:19 PM
"Even with this camcorder, the actual resolution is ~1000x900. "

I wonder how much the test equipment contributed to lowering this score. Their method: "After the shooting is complete, we connect the camcorder to our HDTV using the camcorder's highest quality connection, typically either composite-out, S-video, component-out, or HDMI." If they had enlarged the video digitally and then viewed it on the TV, we would have got a more accurate result.

I raised the question in another thread as to how many pixels are there in a 1920x1080 TV, given that a TV is designed for watching television signals and typically crops about 5% off the edges. If you try to use the TV as a computer monitor and set the display to 1920x1080, you will lose the edges. Are there only 1920x0.95 = 1824 horizontal pixels in a full HD TV, for instance?
PeterDuke wrote on 8/10/2010, 6:30 PM
I also noticed this comment in the review:

"So, if you want the sharpest image possible from the HDC-TM700, you should use its 1080/60p settings—the only problem is this footage is nearly impossible to work with or edit on a computer."

They might have added comments about viewing as well, given that Blu-ray and AVCHD discs don't support 1080/60p.