The Ambiguously Progressive Vixias

Comments

bigrock wrote on 2/25/2010, 11:15 AM
You said "Notice that I say exposure (at the CCD)".

I thought all the latest Canons used CMOS sensors. With your motion test you would also be introducing rolling shutter distortion, would you not? Is progressive exposure just an oxymoron with a rolling shutter?
Laurence wrote on 2/25/2010, 11:31 AM
Just a bit more for anyone who doesn't know this.

1/ Most progressive footage is flagged as interlaced. This means that it is displayed odd field then even field, even though the footage itself may well be true progressive. This has no effect on the viewing experience, since your eyes don't care about the order in which the lines are drawn. You can also change the properties of each clip to progressive in this case if you want.

2/ Some video cameras have what is called "frame mode". What this means is that only one field is captured and then doubled for a progressive image with no interlace comb. This "progressive" image is then flagged as interlaced and split between even and odd fields.

3/ True progressive cameras often use modes that flag the footage as interlaced and split it between even and odd fields (adding pulldown if it's 24p). The footage is still true progressive though and the image can be corrected in your video editor.

4/ Some progressive cameras also have true progressive modes where the progressive image is put down on tape (or memory device). This is often in addition to the modes mentioned in the previous point. If you have a choice, these modes have the advantage of making better use of data compression than if the image is split into fields. The disadvantage is that the footage can't be captured off older cameras or decks.

5/ If you want a progressive image and only have a camera that shoots interlaced or frame mode, you are better off shooting interlaced because a good smart deinterlacer (like the Mike Crash one or DV Film) will be able to rescue resolution from the areas of the screen where an interlace comb does not exist. This will give you a noticeably better looking final render.

6/ One problem with progressive clips that are flagged as interlaced is that Vegas will want to deinterlace them if you render to a progressive format. This is something you really need to watch for, since it will throw away resolution. If you are using a mix of interlaced footage and progressive footage that is flagged as interlaced, you should reset the properties to progressive on all the mis-flagged progressive clips.

7/ Then there is also the issue of how Vegas resizes, zooms, and crops. Any footage that is flagged as interlaced will be split into even and odd fields before it is resized. These split fields are resized separately, then folded back together (odd fields from one resize, even from the other). This isn't a great way to resize progressive footage; while it doesn't look terrible, you are again losing resolution (see the sketch below).
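
To illustrate point 7, here is a rough numpy/PIL sketch of the field-split resize. This is only an illustration of the idea, not Vegas's actual code, and the function names are made up:

import numpy as np
from PIL import Image

def resize(arr, size):
    # size is (width, height); bilinear keeps the example simple
    return np.asarray(Image.fromarray(arr).resize(size, Image.BILINEAR))

def resize_as_fields(frame, size):
    # What an interlace-aware resize does: scale each field on its own,
    # then fold them back together. Assumes an even-height RGB frame.
    w, h = size
    upper = frame[0::2]                       # upper field: lines 0, 2, 4, ...
    lower = frame[1::2]                       # lower field: lines 1, 3, 5, ...
    out = np.empty((h, w, frame.shape[2]), dtype=frame.dtype)
    out[0::2] = resize(upper, (w, h // 2))
    out[1::2] = resize(lower, (w, h // 2))
    return out

def resize_as_frame(frame, size):
    # What you want for progressive footage: scale the whole frame at once.
    return resize(frame, size)

# For real interlaced material the field-wise path avoids mixing fields shot
# 1/60 s apart; for progressive material it halves the vertical detail before
# scaling, which is the resolution loss described in point 7.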
david_f_knight wrote on 2/25/2010, 12:32 PM
To Andy_L:

I don't need to tell Canon anything. I'm not terribly concerned what one of their people told you over the phone; I doubt he is authorized to change Canon's official view. You can read what Canon says about PF30 mode in their marketing material, which will be their official statement on the matter: "In addition to the standard interlaced video frame rate of 60i, you may choose to set the VIXIA HF200 to capture video in 30p (30 progressive frames, recorded to 60i)." Whatever the Canon person said, his words don't change reality.

The reality is what I see from my Canon HF200, and I can prove that it takes progressive scans when set to the PF30 mode. The proof consists of a combination of horizontal motion (as Melachrino suggested in his first post above) plus a subject that would reveal whether interpolation has taken place, such as a circle or diagonal drawn with a fine line. By doing a frame grab, and enlarging the frame by pixel replication, it is possible to clearly see every pixel. Such an inspection will reveal whether interpolation or replication was used to define alternating rows, and if not will also reveal whether the frame was interlaced. You can't have true 1080 (i.e., non-interpolated) lines of resolution AND no interlacing comb in a horizontally moving subject unless the frame was captured progressively.
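
For anyone who wants to repeat the inspection, here is a minimal Python sketch; the filename and crop coordinates are only placeholders:

from PIL import Image

# Crop a region around a moving, fine diagonal and blow it up by pixel
# replication (nearest neighbour), so every original pixel stays visible.
frame = Image.open("frame_grab.png")                 # hypothetical frame-grab file
crop = frame.crop((800, 500, 960, 590))              # left, top, right, bottom
zoomed = crop.resize((crop.width * 8, crop.height * 8), Image.NEAREST)
zoomed.save("frame_grab_8x.png")

# Interlacing shows up as a one-line comb on moving edges; line doubling or
# interpolation shows up as pairs of identical or averaged rows.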

It is essential to distinguish the image capture process from the image storage process. No one argues that the PF30 mode is stored in interlaced format. The only issue is how the image is captured. If you define "progressive" as a combination of capture and storage format, then PF30 mode isn't that kind of progressive, but it is also not that kind of interlaced. Perhaps the Canon person you spoke with was referring to a definition of progressive that included both the capture and the storage format.

Anyway, I offer proof:
Frames in Photobucket
I uploaded three consecutive frames taken with my Vixia HF200 in PF30 mode. The HF200 was nearly stationary while a snowmobile went from left to right at about 35mph. From the middle frame, I also uploaded a cropped portion showing the front part of the snowmobile, enlarged 8X by pixel replication. (Photobucket reduced the originals from their 1920x1080 resolution.) The sequence of frames proves the horizontal motion of the snowmobile, and the front of the snowmobile has diagonal lines (which can't be produced by interpolating or replicating the lines above and below).
david_f_knight wrote on 2/25/2010, 1:17 PM
To bigrock:

I believe all of Canon's Vixias use CMOS sensors. I think "progressive exposure" quite accurately describes what goes on, so it's no oxymoron. The word progressive doesn't mean simultaneous or instantaneous.

For nearly all practical circumstances, though, progressive frame capture is nearly simultaneous for all pixels, or at least close enough that rolling shutter distortion is not evident, especially with longer frame exposures. I don't know how long it takes to read the CMOS sensor, but the Vixia (or at least the HF200) offers 1/2000 second per frame exposures at the fastest shutter speed. Rolling shutter distortion is more evident the shorter the exposure time, that is, the higher the ratio of image capture (sensor readout) time to exposure time. If it took the Vixia camcorders anywhere near 1/2000 second to capture each frame (i.e., read the CMOS sensor), then the rolling shutter distortion of video shot with a 1/2000 second shutter would be unacceptable. But it isn't, so clearly the image capture time is much less than 1/2000 second per frame. The only practical way of producing noticeable rolling shutter distortion is to have high frequency noise (i.e., vibration) shaking the camcorder while filming, such as by rigidly mounting it to a moving vehicle.
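
A back-of-the-envelope sketch of that argument; every number here is an illustrative assumption, not a Canon spec:

# Rough skew estimate: how far the subject moves across the frame while the
# sensor is being read out. All figures below are assumptions.
frame_width_px = 1920
subject_speed_px_per_s = 2000     # a fast pan: subject crosses the frame in about a second
readout_time_s = 1 / 4000         # assumed full-sensor readout time (unknown for the Vixia)

skew_px = subject_speed_px_per_s * readout_time_s
print(f"top-to-bottom horizontal skew: {skew_px:.2f} px")   # ~0.5 px, invisible

# The skew is set by readout time and subject speed; a short shutter mostly
# just removes the motion blur that would otherwise hide it.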
Rob Franks wrote on 2/25/2010, 2:05 PM
"This choice of low frame rate was helped at the time by problems in economically getting bright enough projectors. In addition, and hand in hand with the former, it was acceptable to darken the projection room as much as possible to obtain better results{'"

Yes... and the 24 specifically was chosen because it's about the slowest speed you can run a film before the human eye starts to see the frame separations in that film.

Now 24p in the digital world (IMO) is a really BAD imitation of 24p film and does not even come CLOSE to what 24p film is all about. It's a gimmick and a total waste of time.
Andy_L wrote on 2/25/2010, 4:53 PM
David,

Maybe the Vixias are using a sophisticated hardware deinterlacer which is capable of passing your tests. I queried more than one Canon rep repeatedly on this subject. They unanimously said that no camera in the Vixia lineup, including those scheduled for release this spring, captures progressive frames. All of them said that the Vixias use a "progressive mode" which creates a whole frame from interlaced fields.

It's possible I horribly misunderstood the Canon people. It's possible they were all giving me bad information. But you'd think that, with such a significant advantage over the competition, Canon would get this question right. I'm going to remain a skeptic for now. And I'll try to think up some new tests.
Laurence wrote on 2/25/2010, 5:05 PM
I'll tell you this: If Canon (or anyone else) was to add a bit of smart deinterlacer technology (where it looked for combs and interpolated just those sections) to an interlaced frame capture, it would be very close to the quality of a true progressive frame capture. Maybe that is what is going on here.
david_f_knight wrote on 2/25/2010, 7:22 PM
Laurence has described an adaptive de-interlace algorithm. It is true that where the image doesn't change much between the exposures of the two fields, such a de-interlacer can produce a frame that has full resolution by copying the pixels from both fields. Where the de-interlacer detects interlace combs, it can interpolate between lines of just one of the fields. The combination can often produce very good results, especially because things in motion tend to have motion blur, so resolution isn't as significant there as where things are still.
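
A toy version of that idea, as a sketch only (real adaptive deinterlacers add motion compensation and edge-directed interpolation):

import numpy as np

def adaptive_deinterlace(frame, threshold=20):
    # frame: HxW or HxWxC uint8 array holding two woven fields.
    f = frame.astype(np.int16)
    out = f.copy()
    # Comb measure: how far each lower-field line sits from the average of
    # its two upper-field neighbours.
    for y in range(1, frame.shape[0] - 1, 2):
        neighbour_avg = (f[y - 1] + f[y + 1]) // 2
        comb = np.abs(f[y] - neighbour_avg)
        moving = comb > threshold                 # per-pixel "this looks combed" mask
        out[y][moving] = neighbour_avg[moving]    # interpolate only the combed pixels
    return out.astype(np.uint8)

# Still areas keep full resolution from both fields; only pixels flagged as
# combed are replaced by interpolated values.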

But interpolation has its limitations. For one thing, it isn't just horizontal motion that produces interlace combs; any change will, just not necessarily in as obvious a manner. No matter how you do it, interpolation always involves guessing or making assumptions about things. And it is impossible to always guess correctly.

That's what my uploaded frame grabs from my previous message were designed to reveal, and I thought they were conclusive. Pixels representing diagonal lines can't be interpolated correctly from between horizontal rows of pixels. It is especially pronounced with thin diagonal lines (like the front of the snowmobile ski), but also apparent for diagonal edges (like the body of the snowmobile). When attempted, the interpolated portion of diagonal lines will be as wide as the combination of both rows surrounding it and the color will be lighter. In other words, the diagonal line will be fatter wherever it is interpolated from the rows of pixels above and below, and the colors of all those interpolated pixels will be faded. It makes diagonals look banded and blocky. None of that is visible in the enlarged portion of the frame I uploaded to Photobucket.

I trust the evidence of my tests more than the secondhand words of some sales reps that contradict their employer's official statement.
Andy_L wrote on 2/26/2010, 6:23 AM
Well I guess the question is what does "progressive mode" actually mean in that 'official' statement?

I've been monitoring Canon's website ever since I got my Vixia last year. The description of the progressive mode has subtly changed over time, and it has always been slightly ambiguous.

I'm with Laurence on this one: if the camera is doing a good job of deinterlacing, it's probably hard to see a difference between that and a true progressive scan, which raises the question as to why anyone should care.

I guess the answer to that is, if you're a data freak (and I suppose I am), you think that tomorrow's deinterlacers will be better than today's, so recording in 60i mode preserves more data, whereas the 30p mode is discarding data.
Laurence wrote on 2/26/2010, 6:53 AM
Well, even if tomorrow's deinterlacers are better than they are today, there is still a good reason to use the camera's deinterlace (if that is indeed what it is), and that is that interlaced frames compress really badly. If the camera is doing a high quality deinterlace and that image is then compressed, the compression will do less damage.

Look at it this way: if you can zoom in on a diagonal line of a moving snowmobile and not see any jagged edge, for all intents and purposes the image is progressive. Use the progressive mode and don't worry about it.
Melachrino wrote on 2/26/2010, 8:18 AM
bigrock.
For the purposes of the question at hand, CMOS will produce the same results. As for the rolling shutter, that introduces a different, very observable artifact.
jabloomf1230 wrote on 2/26/2010, 8:52 AM
Maybe I'm missing something here. If the 1080 30p upper and lower fields are shot simultaneously (not shifted in time, as with 1080 60i), the "deinterlacing" consists of simply weaving the upper and lower fields back together on playback. There is neither discarding of a field nor interpolating between the two fields. There is no loss of information. It's just the way the progressive frame is stored. I suppose there could be some subtle issues with the way the video codec encodes the information, but I've never read anything about that.
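
In code terms, the reconstitution in that case is a pure weave, roughly like this sketch (assuming the two fields are already available as numpy arrays):

import numpy as np

def weave(upper_field, lower_field):
    # Interleave two half-height fields back into one full frame.
    # Nothing is discarded or interpolated.
    h, w = upper_field.shape[:2]
    frame = np.empty((h * 2, w) + upper_field.shape[2:], dtype=upper_field.dtype)
    frame[0::2] = upper_field     # lines 0, 2, 4, ...
    frame[1::2] = lower_field     # lines 1, 3, 5, ...
    return frame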

The Canon HV20 could only shoot 60i and 24p within 60i. The Canon HV30 could also shoot 30p. But interestingly enough, the HV20 could be "tricked" into shooting a "fake" 30p, by using a slow shutter speed (SLS mode) of 1/30th sec. At this shutter speed, the camera used frame accumulation, such that the upper and lower fields were identical.
Melachrino wrote on 2/26/2010, 10:09 AM
jabloomf1230.
That is exactly what we have been saying. If the image is EXPOSED all at once, there is no time difference between a (logical) Field 1 and a (logical) Field 2 and it needs no interpolation to reconstruct.

I just looked at the Canon website and their (marketing) description of the 30p mode supports this.

""... VIXIA HF200 to capture video in 30p (30 progressive frames, recorded to 60i), which is particularly useful for footage to be used on the Internet. In addition, this setting gives enhanced quality to still images captured after recording. It's also excellent for action shots and sports.""

The comparative pictures at Canon's website show exactly the artifact we have been describing above, which 30p gets rid of: serrations in a combined still of Field 1 and Field 2 with motion. (You will not see serrations on video displayed continuously.) In fact, their wording is clear that when one extracts a "still" from an interlaced video with motion, said "still" will have serrations unless they are reduced with very good interpolators. Or use 30p ....

But remember that nothing comes for free. In 30p you will have only 30 pictures per second, whereas in 60i you have 60 fields per second which the eye-brain combines rather well and provides good experience for sports and fast action.

The XXI Olympics in Vancouver look superb, eye-popping in 1080i High Definition.

Melachrino wrote on 2/27/2010, 2:53 AM
Andy:

You are correct.
After waking up my dormant neurons, I believe that, strictly speaking and in terms of what we have known "progressive" to mean, Canon's Vixia 30p is an emulation and not a true progressive system.
What is missing for it to be purely progressive is reading out the complete image to the display from top to bottom in one pass.
Perhaps a computer could display in true 30p once the video is captured to it.
But since the majority of TV displays are interlaced, the emulation works rather well if that is what you want. In other words, the final displayed image will look as if it was 30p progressive. At this moment I cannot think of a detectable artifact to the eye-brain which this emulation can cause. The only possible other effect, but not an artifact, is that motion may seem to stutter.
Nice wordsmithing by Canon: "30p progressive in 60i format"...let confusion begin ...
I think I now understand your viewpoint.

Andy_L wrote on 2/27/2010, 6:44 AM
Melachrino,

My understanding is that all LCD displays, including current HDTVs, cannot display interlaced footage. They all use hardware deinterlacers to display a progressive image.

Which has obvious implications for the desirability of a progressive scan off the sensor. Unless I'm mistaken ??
Hulk wrote on 2/27/2010, 7:03 AM
After investigating this issue a while ago, I am pretty sure that at least my Canon HF100 is true progressive, with the video, as David stated, only being stored in the 60i format for the sake of compatibility.

The problem is getting the toothpaste back into the tube. That is, putting the 60i fields back together to recover the original 30p video. There was a tutorial on this a while back that required 3 or 4 applications and about 25 steps, if I remember correctly.

- Mark
farss wrote on 2/27/2010, 1:03 PM
"The problem is getting the toothpaste back into the tube. That is putting the 60i frames back together to finally have the original 30p video."

Right-click the media, select Properties, and change the field order to None (Progressive). Yeah, that's it.

Bob.
Melachrino wrote on 2/27/2010, 2:03 PM
""My understanding is that all LCD displays, including current hdtv's, cannot display interlaced footage. They all use hardware deinterlacers to display a progressive image.

Which has obvious implications for the desirability of a progressive scan off the sensor. Unless I'm mistaken ??""

Andy.

Now we would be getting into a very deep subject, highly controversial, best left alone.

However, I can point out that LCDs, and plasmas for that matter, can and could (I have seen them) display native interlaced images. But cost and complexity for the extra bank switching do not warrant that nowadays.

Besides, the major original driving force for progressive displays was computers, not TV.

As I pointed out in an earlier post, consumer camcorders were meant to be displayed on television sets and therefore, for compatibility, they too (the camcorders) conformed in some way to the interlaced standard.

Yes, current flat screen displays convert to progressive scan in software, not in hardware, and some algorithms work very well because now they have the speeds in processing and in LCD pixel switching to do so with minimal artifacts.

So, yes, we could eventually change the transmission standards from interlaced to progressive to match the newer displays, BUT you have to give up something. Besides needing many years to establish a committee and come to an agreement on a new TV standard (look how long it took to move from NTSC to ATSC ...), progressive takes twice the amount of bandwidth (a very limited and costly resource) to carry and display the same amount of spatial information as interlaced. This is basic science and engineering, not a fable or invention. It even carries a scientist's name ... You are free to choose, but you do not get something for nothing.

Our TV and HDTV pioneers chose interlaced to conserve bandwidth and yet display the maximum spatial resolution. The ATSC standard permits broadcasters to transmit in progressive (such as ABC in 720p), but in doing so, since the bandwidth is fixed, the spatial resolution is cut about in half.

For illustration purposes, a 1920x1080i transmission in the current fixed channel bandwidth can display about 2 million spatial pixels per image (rocket science 101: 1920 multiplied by 1080 ...). A 1280x720p transmission in the same fixed channel bandwidth can transmit about 1 million spatial pixels per image (rocket science 102: 1280 multiplied by 720 ...).

You choose. Whatever choice is right for you, you get.



david_f_knight wrote on 2/27/2010, 5:47 PM
For any given resolution, progressive requires twice the bandwidth as interlaced only if there are as many progressive frames as there are interlaced fields. But if you compare 30 frames per second progressive to 60 fields per second interlaced (which is what we're doing here), then the bandwidth requirements are identical (neglecting differences in compression efficiency for 30p vs. 60i).
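
The raw, uncompressed pixel-rate arithmetic behind that point:

# Uncompressed pixel rates; actual bitrates differ because compression treats
# 30p and 60i material differently.
width, height = 1920, 1080

rate_30p = width * height * 30           # 30 full frames per second
rate_60i = width * (height // 2) * 60    # 60 half-height fields per second

print(rate_30p, rate_60i)                # both 62,208,000 pixels per second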

There is a tradeoff between interlaced and progressive, so no one can say that one is always superior to the other. 60i video has double the temporal resolution but half the spatial resolution, as compared to 30p video, in the same bandwidth. However, with an adaptive deinterlace algorithm, it is often possible to reconstruct a frame from two interlaced fields that has the same spatial resolution (but only in those areas of the image where field to field change is sufficiently small) as a frame captured progressively would have, sometimes giving the best of both worlds (neglecting the many extra headaches that processing interlaced video has).

Though adaptive deinterlace algorithms generally provide the best deinterlacing possible (for true interlaced video, but not for PF30!), it's important to realize that adaptive deinterlacing algorithms are not universally used! I have Vegas Movie Studio Platinum 9, which does not provide any adaptive deinterlace algorithm, for example. I don't know whether Vegas Pro 9 provides one or not. That's not an issue if, when editing, you take interlaced in and put interlaced out, without panning, cropping, scaling, stabilizing, or rotating your source video. But if you take interlaced in and put progressive out, or if you pan, zoom, scale, stabilize, or rotate your source video, then the editor's deinterlacer is a big issue.
Laurence wrote on 2/27/2010, 6:22 PM
One thing that not many people talk about, but which I use and like, is mixing 30p and 60i for a sort of "best of both worlds". You can't do this with tape because tape wants a single format, but if you are recording to memory card or hard disc, you can shoot your interviews and low motion footage in 30p, your fast motion and handheld stuff in 60i, and render your final project to 60i. Your 30p parts will have the detail of progressive and your eye couldn't care less that the even and odd lines are shown a sixtieth of a second apart. Your fast moving parts will sacrifice a little detail for temporal resolution. You should also shoot 60i for anything that you might want to slow down. You can also prerender your titles and photo animations to 30p if you want. Everything is fine doing it this way so long as nobody asks you for a 24p or PAL render.
Andy_L wrote on 2/28/2010, 8:59 AM
Thanks everyone for all this great info! For those of us who own Vixias, there may be a class action suit pending: http://eplaw.us/1080camcorder.html?gclid=CP33psPAlaACFQldagodjQgaUw

:)
Dreamline wrote on 2/28/2010, 10:57 AM
Network broadcasting standards will die. Computers will make H.264/AVC level 5.1 a reality for all. The current crop of HD cams are a waste of money with their BS specs.

Read this:

http://en.wikipedia.org/wiki/H.264/MPEG-4_AVC#Levels
david_f_knight wrote on 2/28/2010, 12:44 PM
Oh, boy....

To Andy_L:

There are a lot of manufacturers that sell camcorders marketed as Full HD; some of them might do what the lawsuit claims. I do hope no Canon Vixia owner is foolish/greedy enough to join in the lawsuit, though, because Canon hasn't done what the lawsuit claims. All those lawsuits accomplish is to increase costs for everyone, because legal defense costs are part of the cost structure priced into every product made. Not one penny of it goes towards making products any better. I guess, as with anything, customers should go beyond the bullets printed on the box and read the details. It's called "due diligence." You can tell, by the way, that that law firm is the equivalent of a hack ambulance chaser trying to get rich quick, IMHO. In the second sentence of the last paragraph, they used the word "television" rather than "camcorder." Apparently, they are an assembly line of lawsuits, and when they cut-and-pasted from one to the next, the lazy, careless, incompetent bast**ds didn't even bother to proofread it to see that they screwed it up. Somebody should apply the same standard to law firms that they apply to everyone else, and start a class action lawsuit against them. (Why don't lawyers sue lawyers?)
david_f_knight wrote on 2/28/2010, 1:01 PM
To FishEyes:

There will ALWAYS be something better. If and when that day arrives that H.264/AVC level 5.1 is a reality for all, there will surely be something better out there, and somebody on a forum like this will be telling all that the current standards will die and that the then current crop of cams are a waste of money with their BS specs.

The current crop of HD cams are a waste of money only if you buy one, stick it on a shelf never to be used for 10 years, and then expect it to compete against what is available then. But if you actually want to make video today, you can hardly use the camera that will be introduced years from now.

I don't think the current crop of HD cams have BS specs. And I don't think that the cams sold years from now will be marketed any differently than they are today. I don't know how old you are, but have you ever looked around at anything in your life? My god, marketers have always marketed the way marketers market today, and they always will. I don't care what the product is. Customers are attracted to the shiniest object they see, and it's the marketer's job to make every product shine. It's the customers' job to see through the hype. The information is available to those that look for it.

Speaking of looking at information, did you notice that the bandwidth for H.264/AVC level 5.1 is nearly 1Gb per second? Do you have any idea how expensive it would be to have such a standard in place? A 128GB flash memory card* would hold about 17 minutes of video. A dual-layer Blu-ray disk won't even hold seven minutes of video (which they couldn't play, anyway). You think editing video from current AVCHD camcorders is hard/slow? You ain't seen nothin' yet, compared to H.264/AVC level 5.1. Any idea how long it'd take to render footage shot with that standard? You'll be an old man before your computer today could finish one project. And for what purpose? A lot of users don't even shoot video at the highest bitrate available today because they can't see much difference. I think you'd be better served worrying about what you're shooting rather than what you're shooting with.
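
For what it's worth, the arithmetic behind those figures, taking the roughly 1Gb per second number at face value:

# Storage math for a sustained ~1 Gb/s stream.
bitrate_bps = 1e9                 # ~1 gigabit per second
bytes_per_sec = bitrate_bps / 8   # 125 MB per second

card_gb = 128
print(card_gb * 1e9 / bytes_per_sec / 60)       # ~17 minutes on a 128GB card

bluray_dl_gb = 50
print(bluray_dl_gb * 1e9 / bytes_per_sec / 60)  # ~6.7 minutes on a dual-layer Blu-ray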

* Of course, you wouldn't be able to use any 128GB flash memory card made today, because none can record at 1Gb per second. But if they could exist today, it's safe to say that they would cost well over $1000 each. Per 17 minutes. I guess you'll be buying them by the dozen, right? And then, there's the power requirements. Unless you have a car battery inside your camcorder, you'll drain your battery in minutes. But that's okay, because your camcorder will probably heat up to a couple hundred degrees due to all the processing required in that time anyway, so I don't think you'd really want to hold your camcorder much longer than that.