I’m trying to resize some interlaced DV footage in VirtualDub; when I perform this process within Vegas, the resizing degrades the footage discernibly. My intention is to reduce the 16:9 60i DV footage to 606x404 so that I can overlay a graphical border on top of it for the full-frame presentation (an approximately 16% reduction). The issue I’m having in VirD is that the levels of my processed footage are higher than the original’s. When I drop this VirD-processed footage into Vegas, I am unable to apply a proper secondary color correction mask to it. I posted to the VirD forums and the developer explained the issue as follows:
“There are several issues involved here.
First, codecs are annoyingly wishy-washy about levels, and programmers have unfortunately often imposed their own beliefs about best practices over any sort of consistency in levels between programs. The most common problem is a mismatch in levels between 16-235 and 0-255 in luma (Y) levels when using YCbCr; less common is a similar problem in RGB space. VirtualDub follows the general convention in Windows of Rec. 601 for YCbCr (16-235) and 0-255 for RGB, and you will get contrast problems when this is not followed by codecs. This is compounded again by the same kinds of problems in video display drivers. The usual way to diagnose these problems is by switching between 24-bit RGB and YUY2/UYVY in Video > Color Depth until the problem has been identified. If you are working with DV, you can force the internal DV decoder in VirtualDub in Options > Preferences > AVI to avoid at least one source of problems. I've been thinking of adding some helpers (vectorscope, colorbars) to help diagnose this, but haven't gotten around to it. (A vectorscope filter does exist, but it hasn't been updated to support the latest feature set.)”
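If I understand that correctly, the extra brightness would come from a 16-235-to-0-255 expansion being applied twice somewhere in the chain (once by a codec, once by VirtualDub or the display driver). Here is a toy numpy sketch of the two luma conventions, purely my own illustration and nothing from VirtualDub’s code:

import numpy as np

def studio_to_full(y):
    # Expand Rec. 601 studio-range luma (16-235) to full range (0-255).
    return np.clip((y.astype(np.float32) - 16.0) * 255.0 / 219.0, 0.0, 255.0)

y = np.array([16, 128, 235], dtype=np.uint8)  # studio black, mid-grey, white
print(studio_to_full(y))                      # -> [0, ~130, 255]

# If the data was already full range and the expansion runs a second time,
# mid-greys get pushed up while blacks and whites clip, so the clip brightens:
print(studio_to_full(studio_to_full(y).astype(np.uint8)))  # -> [0, ~133, 255]

That would at least explain why the processed footage reads brighter than the original.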
I’m in a bit of a fix time-wise; the somewhat terse (though much appreciated) explanation the developer offered leaves me with some questions. It seems to take days to get viable replies on that forum and I’m hoping one of the resident VirD experts might be willing to offer me some VirtualDub for Dummies info.
I followed the prescribed steps to force the internal DV decoder. Next I switched the 24-bit RGB color depth to YUY2/UYVY (input/output, respectively), as per the developer’s suggestion.
First: should I match the input and output color depths (e.g., YUY2/YUY2), or do I set the input to YUY2 and the output to UYVY?
I rendered an uncompressed .avi from VirD with the input/output color depth set to YUY2/UYVY. The preview within VirD shows the “after” pane remaining brighter than the original in the side-by-side comparison regardless of which color depth settings I’ve changed. In Vegas, the footage still looks a tad brighter than footage that was not processed in VirD; I was, however, able to create a decent SCC mask with this VirD-processed clip.
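In case it helps to quantify the shift rather than eyeball it, I’ve been thinking of comparing mean luma between the two clips with something like the sketch below (file names are placeholders, and this assumes OpenCV can decode the AVIs on your system):

import cv2
import numpy as np

def mean_luma(path, frames=30):
    # Average the Y channel over the first few frames of a clip.
    cap = cv2.VideoCapture(path)
    vals = []
    for _ in range(frames):
        ok, bgr = cap.read()
        if not ok:
            break
        ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
        vals.append(float(ycrcb[..., 0].mean()))
    cap.release()
    return float(np.mean(vals))

# Placeholder file names for the original and the VirD-processed clip:
print(mean_luma("original_dv.avi"), mean_luma("vdub_processed.avi"))

A consistent offset between the two numbers would confirm a levels mismatch rather than my eyes playing tricks.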
Any suggestions?
My second set of questions relates to resizing in VirD. First and foremost, is VirD the best tool for resizing interlaced footage? And if so, does anyone know the optimal settings within VirD for doing so?
Here’s the rest of the VirtualDub developer’s comments regarding resizing interlaced DV footage:
“Second, as for the interlaced resize, you can do that in VirtualDub, but it's tricky. A straight resize alone won't give you good results; I recommend trying the following set of filters as a start:
deinterlace (mode: yadif, double)
resize
interlace (frames)
You will need to set the field order correctly on the deinterlace and interlace filters.”
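For what it’s worth, here is how I picture that chain working, as a rough per-frame numpy sketch; the simplifications are mine (plain field-splitting instead of yadif, nearest-neighbour instead of a proper resize kernel):

import numpy as np

def nearest_resize(img, out_h, out_w):
    # Nearest-neighbour resize; the real filter would use bicubic or Lanczos.
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def deint_resize_reint(frame, out_h, out_w):
    # deinterlace (mode: yadif, double): pull the frame apart into its two
    # fields; real yadif also interpolates the missing lines adaptively.
    top, bottom = frame[0::2], frame[1::2]
    # resize: scale each half-height field straight to the target size on its
    # own, so lines captured at different moments are never blended together.
    top_r = nearest_resize(top, out_h, out_w)
    bot_r = nearest_resize(bottom, out_h, out_w)
    # interlace (frames): weave the two resized pictures back into a single
    # interlaced frame. Field order (DV NTSC is bottom-field-first) decides
    # which field comes first in time, which is why it has to match on the
    # deinterlace and interlace filters.
    out = np.empty_like(top_r)
    out[0::2] = top_r[0::2]
    out[1::2] = bot_r[1::2]
    return out

If that mental model is right, the point of the chain is that the fields get resized separately and only re-woven at the end.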
Does the developer’s suggestion make sense? As well, am I better off resizing the footage to 606x404 within a 720x480 frame and pinning it at the top-left corner, or am I OK floating the resized footage in the Vegas timeline?
Thanks in advance!!
Mov