Shooting interlaced - what's the point?

Comments

GlennChan wrote on 12/14/2008, 3:09 PM
The LCD is transmissive, not emissive. So if you want it to display a black line, I believe you have to address that row and make those pixels non-transmissive/black. But now because you are constantly switching those rows from picture to black and black, that introduces issues (e.g. halving of contrast ratio).

I don't believe addressing is really the problem.

since I think that most modern cameras, other than those with "rolling shutters," grab all the even lines at the same moment in time, and then the odd lines, rather than actually scanning each line at its own discrete instant in time
Heh. I believe what most of them do is "row pair summation". Basically they take pairs of rows and average them together. For the next field, the pairs shift up/down one and those pairs are averaged together to form the field. This cuts resolution in half and doubles sensitivity (because you scan all the rows, not half of them). The hit in resolution helps deal with interlacing issues like line twitter... when the image is scanned this way, you can't get alternating 1-pixel black and white lines.
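A rough NumPy sketch of what I mean by row pair summation (the sensor array here is made up purely for illustration; real sensors obviously read out differently):

```python
import numpy as np

# Hypothetical full-resolution sensor readout
sensor = np.arange(8 * 4, dtype=float).reshape(8, 4)

# Field 1: average row pairs (0,1), (2,3), (4,5), (6,7)
field1 = (sensor[0::2] + sensor[1::2]) / 2.0

# Field 2: the pairs shift down one row -- (1,2), (3,4), (5,6).
# Every sensor row contributes to each field, so sensitivity
# doubles versus reading only half the rows, at the cost of
# vertical resolution -- alternating 1-pixel black/white lines
# get averaged away, which is what tames line twitter.
field2 = (sensor[1:-1:2] + sensor[2::2]) / 2.0
```

Note how each output row is a blend of two sensor rows, so no single-row detail survives into either field.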

I believe rolling shutter is something else.
Coursedesign wrote on 12/14/2008, 3:15 PM
It would be difficult for the broadcaster, since they would have to deal with timecode and timing issues.

Good point, but it looks like they are going to have to do that anyway in order to solve the soundsync mess in digital television.

Even the Varicam doesn't truly shoot at variable frame rates like a film camera can.

That is correct for the classic Varicam 2700, designed 10 years ago. The current Varicam 3700 has true variable "natural" frame rates.

World TV already has to deal with 25 fps, 30 fps, 50 fps, and 60 fps.

ScorpioProd wrote on 12/14/2008, 4:22 PM
And let's not forget how nice slow motion looks in Vegas with interlaced footage and not with progressive footage.

Having more temporal information available is a big help with slow motion smoothness.
Former user wrote on 12/14/2008, 4:56 PM
Analog TV is not disappearing. Low-power TV stations will still exist, transmitting analog.

And no, it is not a religion. It is a standard, one that has been around since TV began. It will not change overnight, but it will change. Once everything is digital, you will be able to shoot and edit at any resolution and frame rate that you want (as long as the electronics can support the bitrate and such).

And yes, many programs and commercials are still being done in SD.

Dave T2
John_Cline wrote on 12/14/2008, 5:02 PM
The ONLY reason that 24p even exists is that film was/is expensive and 24 fps was the absolute slowest frame rate that they could use and still have a semblance of fluid motion. The decision was based entirely on economics and was a complete compromise.

To me, hanging on to 24p is like the audio world being so used to the LP's ticks, pops, surface noise and inner-groove distortion that they carried them over to digital audio recording.

Thankfully, early on, the car manufacturers didn't put a large horse's butt on the front of our cars because that's what folks were used to seeing when driving their buggies around. Maybe that's why the Amish don't like cars; they can't see a horse's butt when they look out the windshield. Of course, I occasionally see one when I'm driving, but it's not quite the same; they're usually driving other cars.

Death to 24p!
kairosmatt wrote on 12/14/2008, 5:07 PM
John Cline,
I hear you on the history of 24fps-but what can I say? I still prefer it! Much more so in fact.

Maybe it's because I've been 'trained' (brainwashed?) by the movies. I've always preferred watching a movie to TV, so maybe that's where that comes from.

Or maybe...I prefer movies because of the stutter! (maybe not really...)
Laurence wrote on 12/14/2008, 5:54 PM
One of the things I find myself doing whenever I sit behind a new (to me) computer is to change the graphics card refresh rate from 60 Hz to something faster that isn't going to drive me mad with its constant flickering.

I really believe that different people have different sensitivities to fast motion and that my own eyes are more sensitive to motion than average. 24p drives me nuts. I can hardly watch PAL TV on a CRT. When I go to a movie theater, I am constantly distracted by the judder. My favorite image quality so far isn't any movie, but rather the look of the HD video on the Planet Earth series.
kairosmatt wrote on 12/14/2008, 6:03 PM
I just came across an interesting article on all this:

http://www.definitionmagazine.com/cameras/judder.htm
GlennChan wrote on 12/14/2008, 6:07 PM
To me, hanging on to 24p is like the audio world being so used to the LP's ticks, pops, surface noise and inner-groove distortion that they carried them over to digital audio recording.
I think I prefer the look of 24p. There are some consumer TVs out there with Motionflow or some technology like it that converts the 24p video to a higher frame rate. IMO it's disturbing to watch, as it doesn't look right.
fausseplanete wrote on 12/14/2008, 6:57 PM
Just shot my first progressive footage (720/25p) on an EX3. Normally I use 1080/50i and deinterlace and double the framerate (using AviSynth). Main reason (edit: for using 720/25p this time) is that on this camera it gives extra light sensitivity for a low-light situation - zebras visibly increase in the viewfinder. Fingers crossed for the eventual look on DVD. Depends on whether Vegas's re-interlacing is sufficient or whether motion-predictive re-interlacing is required - for this shoot and product. I'll report back if I get a problem with this.
fausseplanete wrote on 12/14/2008, 6:58 PM
I wonder if decoders of motion-predictive codecs could naturally deinterlace (and denoise) by motion-predictive means. Seems daft that apps have to do their own motion estimation afresh. Maybe it (interpretation of the coded video) could in principle be assisted by some kind of metadata. (?)
GlennChan wrote on 12/14/2008, 7:06 PM
The motion prediction vectors/information generated by these codecs may not be accurate. They might be optimized to lower bandwidth, rather than to provide accurate motion estimation. I'm not sure if it would work that well.
RalphM wrote on 12/14/2008, 8:34 PM
"Thankfully, early on, the car manufacturers didn't put a large horse's butt on the front of our cars because that's what folks were used to seeing when driving the buggys around. "

Thanks, John - best laugh I've had in a week.
Serena wrote on 12/14/2008, 11:50 PM
Often I think the horses butt is in fact there in relation to interlacing. Just as some people grew up watching film and find 24fps fine to watch, others born later are used to interlaced. Interlacing is a pain to watch on a simple progressive display (i.e. fields not converted to progressive). I suggest that progressive images are much superior to interlaced in all respects, especially in better depiction of motion, and that ought to be a given when discussing this issue. The issue is then about frame rate for satisfactory viewing, and if that rate is beyond current systems we start into finding compromises. Interlacing is a way to a pseudo higher frame rate. Gaining a stop is something rescued from that compromise, but no justification for doing it.
John_Cline wrote on 12/15/2008, 1:57 AM
I'm all for progressive recording and display, just not at 24p or 30p. Not enough temporal resolution. 60p would suit me just fine. We have 1280x720-60p, but that's not enough spatial resolution. 1920x1080-60p isn't far off and 3840x2160-60p is "right around the corner", too.
ingvarai wrote on 12/15/2008, 2:28 AM
Thanks, John - best laugh I've had in a week.

Ditto :-)
Rory Cooper wrote on 12/15/2008, 2:50 AM
"Thankfully, early on, the car manufacturers didn't put a large horse's butt on the front of our cars "

Nice hyperbole John

but I am not so sure!..... after reading your comment I suddenly realised where the inspiration for the Merc emblem could have come from
“could it be??“ and it's right in front for the driver to see all the time

Anyway I hope someone can clear this up for me

If I shoot 25p and place the content in Vegas, will it automatically be read as 50 frames? That is, do all NLEs double up the frame rate so that visually the movement will be correct? Is this true?

Rory

ingvarai wrote on 12/15/2008, 5:52 AM
This is an interesting thread, because what was unclear suddenly became crystal clear, only later to turn as unclear as a glass of muddy water...
I now wonder how the FX plug-ins work with interlaced video. What about the unsharp mask for example? Do all FX plug-ins work with interlaced material, do some or do none? And what if they don't, what happens behind the scenes, is there a deinterlace happening, and a "reinterlace" happening afterwards?
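For illustration, the deinterlace/reinterlace round trip I'm imagining would look something like this (a sketch of the general field-splitting idea only; I don't know what Vegas's plug-ins actually do internally):

```python
import numpy as np

def filter_interlaced(frame, fx):
    """Apply an effect per field: split the frame into its two
    fields, run the effect on each half-height field as if it
    were a progressive image, then weave the results back."""
    out = np.empty_like(frame)
    out[0::2] = fx(frame[0::2])  # even rows: one field
    out[1::2] = fx(frame[1::2])  # odd rows: the other field
    return out

# Usage: a trivial "effect" that inverts brightness
frame = np.random.rand(480, 720)
result = filter_interlaced(frame, lambda f: 1.0 - f)
```

Processing the fields separately avoids smearing two different moments in time together, which is what would happen if a spatial filter ran across the woven frame.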
kairosmatt wrote on 12/15/2008, 6:54 AM
Rory-
I'm not the final authority on this, but-

There is nothing in Vegas (or other NLEs) that automatically converts it or reads it as 50.

The doubling of the frames will only happen if you render as 50i. And strictly speaking, it will only double if resample is disabled. If not, it will interpolate the fields in between to get 50i.

If you render as 25p, there will be no doubling of the framerate.
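To illustrate why packaging 25p as 50i adds no temporal information, here's a rough NumPy sketch (the function name is just mine for illustration): both fields are cut from the same captured frame, i.e. the same instant in time.

```python
import numpy as np

def weave_25p_to_50i(frame):
    """Package one 25p frame as an interlaced frame pair: the
    upper and lower fields both come from the same instant, so
    50 fields/second still carry only 25 motion samples/second."""
    upper = frame[0::2]  # even rows -> upper field
    lower = frame[1::2]  # odd rows -> lower field
    return upper, lower

frame = np.random.rand(576, 720)  # one PAL-sized 25p frame
upper, lower = weave_25p_to_50i(frame)
```

Contrast this with true 50i from a camera, where successive fields are exposed 1/50 s apart and each carries fresh motion.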

kairosmatt
johnmeyer wrote on 12/15/2008, 7:26 AM
John is correct about 60p being the way to "get it all" (fluid motion and simpler editing). However, this will be a LONG time coming. I remember what it was like to edit my first DV video on a 450MHz PC. It wasn't bad at all. That was nine years ago. Now, in almost 2009, most people are still complaining -- even with the most modern computers -- about editing 1080i, because the timeline can't keep up. Reason? Computers are not getting faster anymore. In case you didn't notice, we've been stuck at about 3.0 GHz for a LONG time, and don't expect that to change. (I bought my last computer six years ago, and the CPU clock speeds are exactly the same now as they were then, and clock speed is the only thing that ultimately determines performance for linear operations).

Most of the "performance" improvements have come from parallelism in the form of instruction set enhancements, parallel threads, multi-cores, and multi-CPUs. This is great for rendering -- if the software handles things correctly -- but not for many other things. Thus, when you throw ten times the number of pixels at the software, it slows down by either 10x or 100x (depending on whether the operation gets slower linearly or by the square).
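To put rough numbers on that (using NTSC DV as the SD baseline; call it an order of magnitude either way):

```python
# Pixel counts per frame
sd = 720 * 480        # NTSC DV: 345,600 pixels
hd = 1920 * 1080      # full HD: 2,073,600 pixels

ratio = hd / sd       # ~6x the pixels: a linear operation
                      # gets roughly 6x slower
squared = ratio ** 2  # ~36x: an operation that scales with
                      # the square of the pixel count
```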

So, all this "new" (actually not so new anymore) stuff is a bitch to work with and this isn't going to change for a long, long, long time. 1080p can be done, for sure, but it will be expensive.

So far, out of my 200+ DirecTV channels, there is only one on the SD side that has changed to 16:9. The others are all still being delivered in 4:3, and I don't expect that to change after February.

Coursedesign wrote on 12/15/2008, 7:44 AM
Today we're looking at pillarboxed 4:3 film materials from 100 years ago, because that is how it was shot and we want to see the content.

I'm sure that 100 years from now, people will still be watching some material from 2008 in pillarboxed 4:3 for that same reason. It will probably be uprezzed and converted to a beefier color space though (with a debate to follow at that time about whether old materials should be seen as originally intended, whether monochrome or in ye olde NTSC color space).

There won't be any interlaced displays outside of particularly well stocked museums though.


johnmeyer wrote on 12/15/2008, 8:51 AM
In 100 years? Yes, we won't be watching 4:3 SD, I agree.
Grazie wrote on 12/15/2008, 9:01 AM
. . and that, dear hearts, is an example of laconic, with a sprinkling of sardonic and thoroughly doused in gallows!

Grazie
Coursedesign wrote on 12/15/2008, 10:07 AM
...and dry rather than sparkling...!

:O)