Comments

Coursedesign wrote on 10/28/2006, 7:16 AM
Outstanding. Gotta love it when clear facts prevail over the marketing fantasies and Personal Pride stuff that should rightfully be sold in bags at Home Depot labeled "Steer Manure".

We should be grateful that Sony is actually pretty good about just presenting the facts.

GlennChan wrote on 10/29/2006, 9:57 PM
Does anyone have information on which exact 4:2:0 chroma scheme is being used? For MPEG-2, I know there is an interlaced sampling scheme and a progressive sampling scheme.

Presumably, 1080i HDV uses the interlaced sampling scheme. Rendering out of Vegas to MPEG-2 with the HDV 1080-60i template, you would get chroma results like the following:

http://glennchan.info/Proofs/forums/sony/420-HDV-demo.png (300% zoom applied, nearest neighbour resampling in Photoshop)

What happens in the interlaced scheme is something like this:
-The frame is split up into two separate fields.
-Each field has its own set of chroma samples!
-For the first field, a single chroma sample describes the chroma for a 2x2 array of samples in that field. Spatially, though, that 2x2 field area covers 2x4 samples of the FRAME. This is bizarre, but it's what happens. Because the fields play "hopscotch" with each other, consecutive lines of a field are separated by one line of the other field. You can see this in the image linked above: beneath the text there are "ghosts" of the black areas; that's the hopscotch happening.
-The 2x4 area is 2 pixels wide, 4 pixels tall.
-So if you were looking at a frame (both fields at once), and chroma samples A and B described different fields, then it would look like:
A A
B B
A A
B B

You see that each chroma sample describes chroma over a 2x4 area of the frame.
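To make the geometry concrete, here's a quick Python sketch of the mapping (purely illustrative; the tiny 8-line frame and the A/B labels are made up for the demo, and real encoders also filter the chroma rather than just grouping rows):

```python
# Sketch of MPEG-2 interlaced 4:2:0 chroma siting: which FRAME rows
# share a chroma sample. Toy 8-line frame; labels A*/B* are arbitrary.
FRAME_ROWS = 8

top_field_rows = list(range(0, FRAME_ROWS, 2))     # rows 0, 2, 4, 6
bottom_field_rows = list(range(1, FRAME_ROWS, 2))  # rows 1, 3, 5, 7

def chroma_groups(field_rows):
    """Each field is subsampled 2x vertically, so consecutive PAIRS of
    field lines share one chroma sample; those two lines are two frame
    rows apart, giving the 2x4 frame footprint described above."""
    return [field_rows[i:i + 2] for i in range(0, len(field_rows), 2)]

label = {}
for n, rows in enumerate(chroma_groups(top_field_rows)):
    for r in rows:
        label[r] = "A%d" % n   # top-field chroma samples
for n, rows in enumerate(chroma_groups(bottom_field_rows)):
    for r in rows:
        label[r] = "B%d" % n   # bottom-field chroma samples

for r in range(FRAME_ROWS):
    print("frame row %d -> chroma %s" % (r, label[r]))
```

The printout is A0 B0 A0 B0 A1 B1 A1 B1: the same hopscotch as the A/B diagram above, with each chroma sample's footprint spanning frame rows r and r+2.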

Other explanations of this:
http://www.hometheaterhifi.com/volume_8_2/dvd-benchmark-special-report-chroma-bug-4-2001.html HomeTheaterHiFi article; see the section on the interlaced chroma problem.

http://www.dv.com/news/news_item.jhtml;jsessionid=23YORKCTYGQL2QSNDLOSKH0CJUNN2JVN?LookupId=/xml/feature/2003/wilt0603 "4:2:0 Follies" by Adam Wilt (registration required); I find this article denser.

If you want to get rid of the combing/ghosting effect, you have to de-interlace the footage and then blur the chroma (or apply a comb filter); good de-interlacing is a very difficult problem.
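For what it's worth, the chroma-blur half of that fix is simple; here's a rough numpy sketch (the crude line-doubling "bob" and the 1-2-1 filter are stand-ins I picked for the demo; a real motion-adaptive de-interlacer does far better):

```python
import numpy as np

def bob_deinterlace(field, frame_height):
    """Crude 'bob': stretch one field to frame height by line doubling.
    Good de-interlacers (motion-adaptive, etc.) are much smarter."""
    rows = np.clip(np.arange(frame_height) // 2, 0, field.shape[0] - 1)
    return field[rows]

def blur_chroma_vertically(chroma):
    """1-2-1 vertical filter on a chroma plane, to smooth out the
    field-to-field chroma discontinuities described above."""
    p = np.pad(chroma.astype(np.float32), ((1, 1), (0, 0)), mode="edge")
    return ((p[:-2] + 2.0 * p[1:-1] + p[2:]) / 4.0).astype(chroma.dtype)

# Toy usage on one frame's Cb plane (even rows = top field):
cb = np.arange(32, dtype=np.uint8).reshape(8, 4)
top_field = cb[0::2]
print(blur_chroma_vertically(bob_deinterlace(top_field, cb.shape[0])))
```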

2- The progressive sampling scheme would seem a little less stupid. Chroma samples describe a 2x2 area, i.e.
A A
A A

However, the problem is that the luma for each field occurs at different points in time. The 4:2:0 chroma samples can only describe the chroma for one point in time, or some mix of both. In any case, there will be an inconsistency between the luma and the chroma.
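A toy example of that mismatch, with made-up numbers (this is just my illustration of the averaging; actual encoders may weight the lines differently):

```python
import numpy as np

# Progressive 4:2:0 siting on interlaced capture: even frame rows were
# captured at time t0, odd rows at time t1. One 2x2 block of Cb values:
cb = np.array([[100.0, 100.0],   # t0: colored object present
               [  0.0,   0.0]])  # t1: the object has moved away

# Progressive sampling collapses the 2x2 area to ONE chroma sample,
# e.g. an average: a value that belongs to neither t0 nor t1.
print(cb.mean())  # 50.0: half of the t0 chroma smeared onto the t1 line
```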

What this means (assuming that HDV uses the interlaced 4:2:0 scheme):
The effective resolution is actually less than what the number of samples suggests. Because the 4:2:0 chroma samples describe chroma over a 2x4 area, the effective chroma resolution is roughly halved in the vertical dimension.
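Rough numbers for 1080i HDV, assuming its 1440x1080 frame and the interlaced scheme (back-of-the-envelope only):

```python
# Nominal vs effective chroma resolution for 1080i HDV (1440x1080),
# assuming the interlaced 4:2:0 scheme described above.
width, height = 1440, 1080
print(width // 2, height // 2)       # nominal 4:2:0 chroma: 720 x 540
print((height // 2) // 2)            # chroma rows PER FIELD: 270
# Each field's 270 chroma rows are spread across all 1080 frame lines,
# so vertically one chroma sample spans a 4-line area of the frame:
print(height // ((height // 2) // 2))  # 4
```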

If the progressive sampling scheme is used, then you have motion artifacts.

This is the reason why DVCPRO PAL broke with the DV standard and implemented 4:1:1 chroma sampling instead of 4:2:0.

*Progressive footage doesn't suffer this problem.
Spot|DSE wrote on 10/29/2006, 10:32 PM
Glenn, there are differences in how this is managed in acquisition vs processing in post; the way the Sony V1 works, for example, involves a unique means of sampling the signal that results in about a 25% loss vs the 50% that you'd expect.
Anyway, I haven't heard directly from Sony engineers how they're doing this, but I'm told by product managers that it's progressive in progressive mode and interlaced in interlaced mode.
Steve Mullen makes some interesting suppositions in the linked article, and given that most of what he writes is identical to what Sony has said, I'd believe he's pretty close to the mark on what he says.
farss wrote on 10/30/2006, 12:59 AM
Having read through most of those articles, this isn't an issue only with HDV; it affects any interlaced video using 4:2:0 sampling. Mistakes in the implementation of early MPEG-2 encoders and decoders have made the problem worse, and that's just in SD.

However I suspect there's more at play than just this.
This article by Graeme Nattress explains 4:2:0 sampling in DV fairly well; although not in great detail, it does show that it's not a simple 2x2 matrix in interlaced video.

And I think perhaps there's a difference between what happens in a camera, where the source is 4:4:4 (well, RGB actually), and the issues that arise when attempting to push already-sampled video through another encoding pass.

Bob.
GlennChan wrote on 10/30/2006, 6:55 AM
Graeme Nattress' article talks about DV PAL, which uses a different 4:2:0 scheme than MPEG-2 (I believe). I think there are three or four different 4:2:0 schemes:
*One as used in JPEG.
*One as used in MPEG-2; there are interlaced and progressive sampling schemes.
*Another as used in DV PAL.

Graeme is talking about a different scheme. If you encode HDV out of Vegas, it doesn't seem like the DV PAL scheme at all.
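If you want to check what your own encoder does, one approach (in the spirit of the test image linked above) is to feed it single-line colour stripes and look at what comes back. A sketch; the file name and the external encode/decode step are placeholders for whatever encoder you're testing:

```python
import numpy as np
from PIL import Image

# Test pattern: alternating single-line red/blue stripes. Encode this
# with the encoder under test, decode it back to an image, then zoom in
# on the result (e.g. at 300% with nearest-neighbour resampling):
# - interlaced 4:2:0 keeps each field's colour but repeats each chroma
#   value on rows two apart (the A/B hopscotch above);
# - progressive 4:2:0 averages adjacent red/blue lines toward purple;
# - a DV-PAL-style scheme gives yet another pattern.
h, w = 64, 64
pattern = np.zeros((h, w, 3), dtype=np.uint8)
pattern[0::2] = (255, 0, 0)   # even lines pure red
pattern[1::2] = (0, 0, 255)   # odd lines pure blue
Image.fromarray(pattern).save("chroma_probe.png")  # hypothetical name
```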

2- The 50% loss I'm talking about is inherent in the 4:2:0 HDV format when using interlaced sampling; I'm not talking about the camera, which is what Steve Mullen is talking about.

3- "Mistakes in the implementation of early MPEG-2 encoders and decoders have made the problem worse and that's just in SD."
Just to clarify, the HomeTheaterHiFi article mentions two different problems:
A- Interlaced chroma problem.
B- Chroma bug.

To solve the first problem, you need to de-interlace the footage and then apply a vertical blur to the chroma.

The second problem is simply improper implementation. This shouldn't be a problem with software codecs (e.g. Vegas) if they are written correctly. In DVD hardware, the chroma bug exists partly because the hardware has a limited number of gates, which limits the number of features (AFAIK).

DJPadre wrote on 10/30/2006, 7:43 AM
so then, how do we know that the MPEG encoders we use (such as the MainConcept one found in Vegas) are using the correct colour sampling for our footage?
Let's say PAL DV to MPEG-2 progressive 25fps... would selecting "progressive" switch the colour sampling to the appropriate scheme? Or will it recognise progressive as 24p, in turn forcing an NTSC colour space on PAL footage...
OR
does it look at the frame rate and decide which scheme to use based on the frame rate selected?
Hmm...
I think, despite its total lack of speed, there is a reason why the Cinemacraft encoder is so popular among the pros, as it seems to be the only encoder I know of that offers a choice of colour sampling based on the required output AND/OR the source material.

interesting stuff... is this a conspiracy by encoding applications to lock these options off from us? And if it was such an issue, why were we not made aware of these nuances of the codec?
Considering the fundamental impact it has on our finished results, one would have thought this issue would be made clear and precise. hmm..

kkolbo wrote on 10/30/2006, 8:32 AM
At first I got bogged down in all of the tech specs and methods with HD and the various formats of acquisition and storage, then it hit me... I don't care about the specs anymore.

Now I look at if I like the results on tape/disc when I use a particular camera. I look at if I can use and maintain the workflow requirements for a specific format.

If I like the way it looks and I can use it, I have a winner. Oh, I forgot: can I afford it?

Let's face it, the quality of almost all of this is amazing. I don't care if they are using one-pixel sensors and voodoo to get the picture; when it looks this good and I can work with it on a tired old PC, what tricks they are using doesn't matter to me.

I have a next step up that I want to take, because I like the look that much better, but I can't afford it. That will always be the case.

Until I am filthy rich I will look at the output and choose the best one that I can afford. My eye is my tech-spec meter. Like it was said, HDV frankly looks better than it should. I have found that to be true of many pieces of hardware and formats lately. So specs be damned (except for engineers) and I will buy what looks pretty and has usability.

You may now return to the rather intelligent and informative discussion that was occurring. :)
farss wrote on 10/30/2006, 2:03 PM
So, encoding HDV out of Vegas might be problematic due to color sampling issues.
Is this really such a big issue? I'd always assumed using HDV as a mastering format wasn't a good idea regardless of this.
If this chroma sampling issue is for real with HDV encoders in NLEs, then I'd also suspect the problem is a compounding one, i.e. it gets worse with every generation.
Haven't we already been advised to use intermediate codecs such as CF DIs or Sony YUV at HD? I'm not being dismissive of the problem, but rather it seems to me like it's the icing on the cake. All the other issues of encoding to a bandwidth-constrained long-GOP format would seem more significant.
So long as HDV cameras can encode a correct image, HDV is a valid acquisition format. How good or bad it is as a mastering format is another issue. My gut feeling has always been that it wasn't a viable mastering format; even the audio side of HDV is very lossy and to be avoided if possible.
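To put a number on the compounding idea, here's a toy numpy model; the box-average downsample and the 1-2-1 "reconstruction" filter are stand-ins I've made up, not what any real HDV codec does, but the compounding behaviour is the point:

```python
import numpy as np

# Toy model of generation loss from repeated 4:2:0 chroma resampling.
# Filters are invented for illustration; real encode/decode chains
# differ, but every re-encode resamples the chroma yet again.
rng = np.random.default_rng(0)
chroma = rng.uniform(0, 255, size=(64, 64)).astype(np.float32)

def down2x2(p):   # box-average each 2x2 block to one chroma sample
    return (p[0::2, 0::2] + p[1::2, 0::2] + p[0::2, 1::2] + p[1::2, 1::2]) / 4

def up2x2(p):     # sample-and-hold back to full size
    return np.repeat(np.repeat(p, 2, axis=0), 2, axis=1)

def smooth(p):    # 1-2-1 reconstruction-ish filter, both axes
    q = np.pad(p, 1, mode="edge")
    q = (q[:-2] + 2 * q[1:-1] + q[2:]) / 4
    return (q[:, :-2] + 2 * q[:, 1:-1] + q[:, 2:]) / 4

current = chroma
for gen in range(1, 6):
    current = smooth(up2x2(down2x2(current)))
    rms = float(np.sqrt(np.mean((current - chroma) ** 2)))
    print("generation %d: RMS error %.2f" % (gen, rms))
```

The error climbs with each pass, which is why an intermediate codec that skips the repeated 4:2:0 step looks attractive.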

Bob.
Spot|DSE wrote on 10/30/2006, 2:13 PM
HDV isn't a mastering format, isn't/wasn't intended to be.
Much of the rest of it is measurebating rather than actual use. Some folks (including me from time to time) get caught up in numbers vs actual use and visual experimentation. That's not what we're all about, is it? Are we interested in math or movies?
HDV as an acquisition source is the most cost-effective means we currently have of obtaining great images with which we can tell our stories.
GlennChan wrote on 10/30/2006, 3:45 PM
1- Sorry if anyone was misled into thinking HDV was intended to be a mastering format. The only reason I brought it up was because I wasn't sure what the camera's MPEG2 encoder was doing. Looking at some sample 1080i footage from the Vegas 6 sample projects, it looks like the camera is using the interlaced 4:2:0 scheme.

On normal footage, you can spot the inherent artifacts (the interlaced chroma problem) if you have a good monitor and are close enough to the monitor. The picture I linked to shows an extreme example of this. It probably won't crop up much in real world shooting.

Is this artifact a big deal? Probably not.

2- Measurebation:
I definitely agree that real world tests and experience are the most useful.

There are some rare cases where numbers and technical understanding are useful:
- Avoiding subjective judgments being skewed by bias or other factors, e.g. snake-oil audio cables. Granted, "objective" results can be skewed/fudged, and there are ways to make subjective measurements free of bias.
- Sometimes measuring things in numbers is easier and/or more accurate than judging them subjectively.
- Cutting through marketing hype and number fudging.
- Describing differences without having to use vague subjective terms. Granted, not all subjective things can be stated as a number (e.g. is camera A 10% better than camera B?).
Coursedesign wrote on 10/30/2006, 9:24 PM
Subjective comparisons are particularly important when the underlying parameters are not well understood.

For example, I don't think there is any serious suggestion for an audio quality parameter for a length of cable, but this certainly doesn't mean that all cables sound the same.

I'm certainly not a proponent of buying "tweaky" cables made of exotic materials, but I seriously think I could make a living betting people that they could hear a very clear difference between different cables in reasonable price ranges. Notice I said "they," as I have actually tested this with a number of people. I'm sure there are quite a few people who are totally tone deaf, but I think most people are not.

Ditto with, say, the quality it's possible to get out of HDV. I think it was Spot who said, "it seems possible to get better quality out of HDV than perhaps it deserves."

Who'd a thunk? Nobody, unless they tested!

A lot of people never tested HDV cameras out of misdirected snobbery, while hardcore DPs like Jody Eldred simply tested them, working them into their workflow to do things they couldn't do with their $150,000 cameras (like shooting inside helicopter cockpits for JAG, and intercutting seamlessly with the big-camera footage).