OT: Optimized camera settings for low-light video

TeetimeNC wrote on 10/22/2013, 1:40 PM
If my objective is to get the least noise in low light video, is it true that 1080i60 at the default 1/60th shutter speed will produce lower noise video than 1080p30 or 720p30 also shot at the default 1/60th shutter speed? If so, why is this?

Also, I assume 1080p24 shot at the default 1/48th shutter speed would be the best low-light footage of all these alternatives from a noise standpoint. Correct?

/jerry

Comments

mudsmith wrote on 10/22/2013, 2:59 PM
While I cannot say I am an expert on any of this, the basic rule here comes down to three things: how long the shutter is actually open (not how often it opens in any given second), how wide the iris is open (f-stop) while the shutter is open, and how much electrical gain is applied to the image to make up for not enough light coming in from the first two settings (the gain is what adds noise). Since the inception of digital, more gain can be applied with less noise than in the analogue days, but there are still limits.
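To put rough numbers on that, here is a toy Python sketch (not any particular camera's math; it assumes photon shot noise dominates): the light gathered scales with shutter time and aperture area, and gain amplifies signal and noise together.

[code]
import math

def relative_exposure(shutter_s, f_number):
    # Light reaching the sensor scales with shutter-open time and with
    # aperture area, which goes as 1 / f_number^2.
    return shutter_s / (f_number ** 2)

def shot_noise_snr(photons):
    # Photon shot noise: SNR grows as the square root of the signal,
    # and electrical gain applied afterwards cannot improve it.
    return math.sqrt(photons)

# Halving the shutter time at the same f-stop halves the light, so the
# shot-noise SNR drops by a factor of sqrt(2), gain or no gain.
full = relative_exposure(1 / 30, 2.8) * 1e6   # arbitrary photon scale
half = relative_exposure(1 / 60, 2.8) * 1e6
print(shot_noise_snr(full) / shot_noise_snr(half))   # ~1.414
[/code]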

In the stills world, this is all you would have to know. In the video/film world, the amount of time the shutter is open is not so clear to me, even though it is quite easy to alter the number of times per second it opens and make this quite different (in video, not film) from the actual frame rate. This is something that was frequently employed by ENG cameramen shooting computer monitors to get rid of the appearance of travelling lines on the screen.....

Considering how video is shot (lines are picked up over the full length of the time of the frame), it must be that the standard time for the shutter to be open would be the entire time of the frame, in order for all lines to be picked up by the sensor/scanner mechanism (except when you induce an artificial frame rate, like shooting 24p over a 29.97i tape format, which induces odd motion artifacts between frames). This is different from a stills (or film) situation, where the entire frame is exposed at the same time for whatever length of time the shutter is open.

So, in your question above, the 24p, with the same number of lines (i.e., the same size picture) as something shot in NTSC or PAL, would be open longer per line than any format with a higher frame rate. But it would not be open longer than an interlaced 24 frame rate, just as an interlaced nominal 30fps shutter would not be open a shorter time than a progressive one, since the same number of lines would be recorded during that 30th of a second the shutter was open.

Given that 60i actually seems to mean 60 fields, not 60 frames, per second in most instances, 60i would not mean lower noise; it would be exactly the same as 30p in terms of the time the shutter was open per line of video. In other words, even though there are 60 fields, there are only half the lines in each field. If you were actually shooting at 60 frames per second instead of 30, the time the shutter was open would be less, so there would be less light available per line, potentially causing more noise with everything else being equal.

......If all that is true, then the true noise component can only be determined based on the f-stop in use and the light collection capacity of the sensor, on top of any low-light gain available electronically. Changing a lens could also alter the light collection capability, and thus create a situation with more or less noise using the same camera, f-stop and shutter rate.

farss wrote on 10/22/2013, 3:43 PM
[I]"If my objective is to get the least noise in low light video, is it true that 1080i60 at the default 1/60th shutter speed will produce lower noise video than 1080p30 or 720p30 also shot at the default 1/60th shutter speed? If so, why is this?"[/I]

Yes. When shooting interlaced, as compared to progressive, the camera will use [I]line pair averaging[/I] to avoid line twitter, and this averaging process reduces noise.

Of course there's no free lunch here: the line pair averaging also reduces vertical resolution. That's exactly the intent of employing it.
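A minimal Python sketch of that trade, assuming the noise in adjacent lines is independent: averaging a line pair drops the noise standard deviation by roughly sqrt(2), at the cost of vertical detail.

[code]
import random
import statistics

random.seed(1)
width = 10000
# Two adjacent sensor lines: the same signal (100) plus independent noise (sigma 5).
line_a = [random.gauss(100.0, 5.0) for _ in range(width)]
line_b = [random.gauss(100.0, 5.0) for _ in range(width)]

# Line pair averaging, as used when shooting interlaced to tame twitter.
averaged = [(a + b) / 2 for a, b in zip(line_a, line_b)]

print(statistics.stdev(line_a))    # ~5.0
print(statistics.stdev(averaged))  # ~3.5, i.e. 5 / sqrt(2)
[/code]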

Bob.
farss wrote on 10/22/2013, 3:51 PM
mudsmith said:
[I]"Considering how video is shot (lines are picked up over the full length of the time of the frame), it must be that the standard time for the shutter to be open (except when you induce an artificial frame rate like shooting 24p over a 29.97i tape format, which induces odd motion artifacts between frames) would be for the entire time of the frame.....in order for all lines to be picked up by the sensor/scanner mechanism......This is different from a stills (or film) situation, where the entire frame is exposed at the same time for whatever length of time the shutter is open."[/I]

The global vs rolling shutter question is irrelevant here, and even still cameras can employ a rolling shutter; the "roller blind" shutters in old large-format stills cameras create some fantastic motion artefacts. Video cameras can and do have either a global or a rolling shutter: CMOS sensors typically have a rolling shutter, CCD sensors a global shutter. There's no imperative in video that the frame is scanned line by line.
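For anyone following along, here is the distinction in a few lines of Python (illustrative numbers only, not any specific sensor):

[code]
ROWS = 1080
FRAME_READOUT_S = 1 / 60              # assumed time to sweep the whole sensor
ROW_OFFSET_S = FRAME_READOUT_S / ROWS

def exposure_start(row, rolling):
    # Global shutter: every row starts exposing at the same instant.
    # Rolling shutter: each row starts later as the readout sweeps down.
    return row * ROW_OFFSET_S if rolling else 0.0

# With a rolling shutter the bottom row starts almost a full readout
# period after the top row, which is what skews fast-moving subjects.
print(exposure_start(ROWS - 1, rolling=False))  # 0.0
print(exposure_start(ROWS - 1, rolling=True))   # ~0.0167 s
[/code]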

Bob.
markymarkNY wrote on 10/22/2013, 5:14 PM
Re: interlaced vs progressive, my understanding is that many (or most) modern devices don't actually shoot interlaced footage; instead, it is recorded into a progressive container and then "tagged" as interlaced video. Therefore, 60i=30p in those cases.

Maximum aperture and a slower shutter speed will give the least noise, but the shutter shouldn't go too slow: it should follow the 1/(2x frame rate) rule for good motion blur, although this can be corrected with something like ReelSmart Motion Blur.
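That rule of thumb is trivial to write down (a Python one-liner, the so-called 180-degree shutter convention):

[code]
def rule_of_thumb_shutter(fps):
    # Shutter time = half the frame interval (the "180-degree" shutter).
    return 1.0 / (2.0 * fps)

print(rule_of_thumb_shutter(24))  # 0.0208... s, i.e. 1/48
print(rule_of_thumb_shutter(30))  # 0.0166... s, i.e. 1/60
print(rule_of_thumb_shutter(60))  # 0.0083... s, i.e. 1/120
[/code]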

Some cameras have dynamic range optimizers which work well to bring out details in the shadows without changing highlights.
mudsmith wrote on 10/22/2013, 5:57 PM
"There's no imperative in video that the frame is scanned line by line."

.....I'm not fully aware of the current mechanism, so you may be correct in that statement, but there certainly was this imperative in the pre-digital pickup mechanism prior to the current HD revolution. A real-time line scan was taking place at the camera sensor and being sent to the monitors/recorders and switchers in that form.

I could see that it would be possible these days for a full image scan to be put into a buffer and then converted to a line scan, and would love to know if that is really what is going on with cameras now.....but it certainly is not what was going on with SD video cameras when I was working as an EIC 12 years ago....or maybe I am missing something about the physics of the situation. I just don't see how a camera sensor could be scanning two sets of lines (interlaced pickup) from top to bottom twice while the shutter was closed during part of that 30th of a second. The light has to be hitting the whole sensor in the first and second half of the open shutter period/frame rate. I do see how a modern sensor could take a full-frame image during a lesser period, place it in a buffer and clock it out in the serial line format necessary for either progressive or interlaced output, but I don't really see any other way of doing what TV cameras used to do in real time.

I am open to the explanation......A rolling shutter at a fixed frame rate of either 30 or 60 (having slats at the appropriate intervals for either, and rotated to achieve this) will work to do what I describe above, and when you alter the shutter rate in an old-school TV camera away from the core line rate, you still get recordings at the core line rate, but get stutter across the shutter motion between the new artificial frames. In other words, the lines are still pumped out at the same rate from top to bottom, but the slats are creating break points in the motion that are noticeable when something is moving quickly.

When I was overseeing live concert recordings, my video operator would sometimes change the shutter rate on her cameras to match the 60-cycle output of the lights so that there would be no pulsing when she "chipped" the cameras. When we began noticing finger stuttering during the close-ups of the upright bass soloist's hands, however, I realized what was going on and had her ignore the 60-cycle pulsing during setups from then on. This worked and fixed the problem. The lines were still being pumped out for 29.97 frames and 59.94 interlaced fields while the 60fps shutter was working. The lines mattered and were imperative.
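For what it's worth, the slow drift in that setup can be put in numbers (a rough Python sketch assuming ideal clocks; the 60 and 59.94 figures are generic NTSC, not measurements from that session):

[code]
shutter_hz = 60.0             # shutter locked to the 60-cycle lighting
field_hz = 60000.0 / 1001.0   # NTSC field rate, ~59.94 Hz

beat_hz = shutter_hz - field_hz
print(beat_hz)       # ~0.0599 Hz
print(1 / beat_hz)   # ~16.7 s for the shutter to slip one full field
[/code]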

Even though this was digital video, it was still standard def, using the best technology from 1998, so I freely admit that this may have changed.....and I would love to know if it has.....or if my understanding of how everything was working in 1998 is inaccurate. I don't claim to be all that deep in my understanding of everything.

farss wrote on 10/22/2013, 6:17 PM
mudsmith,
this is way too big a topic for me to cover here, so I'm going to be brief, sorry.

CCD sensors transfer the charge from every photosite, all at once, into the charge-coupled shift registers in a very short period of time, which is why they behave as a global shutter.

CMOS-based sensors are different: each pixel, row, or group is read out sequentially. The time taken to read them out may very well have no relationship to the frame rate or shutter speed. In my EX1 the readout time is fixed at 1/60th, even when the "shutter speed" is 1/250.
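A sketch of how that can look (Python, using the EX1 figures above; the per-row mechanics are the usual rolling-shutter model, assumed rather than taken from Sony's documentation):

[code]
ROWS = 1080
READOUT_S = 1 / 60     # fixed full-sensor readout time, as on the EX1
EXPOSURE_S = 1 / 250   # electronic "shutter speed"
ROW_OFFSET_S = READOUT_S / ROWS

def exposure_window(row):
    # Each row is read when the fixed-rate sweep reaches it; a faster
    # shutter speed just shortens the integration before that moment.
    read_at = row * ROW_OFFSET_S
    return (read_at - EXPOSURE_S, read_at)

print(exposure_window(0))         # top row
print(exposure_window(ROWS - 1))  # bottom row: same 1/250 s window, later
[/code]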

The rolling band and other problems that you saw on cameras are not fixed by changing shutter speed; the Clear Scan mechanism works by controlling the phase between the shutter and the scan. My EX1 has a number of menu controls for this, but sadly I've struck a number of problems with projectors and lights that it cannot mitigate.

Video today is basically transmitted as a data stream; gone are the sync pulses and front and back porches. Everything you knew from back in the days of Plumbicon tubes is obsolete. Flush it out of your head or it'll really hinder you in coming to grips with how things work in the digital realm.

Bob.
mdindestin wrote on 10/22/2013, 6:28 PM
Yes, you're correct. 1080p at 24fps is best. That lets you slow the shutter down to 1/48th or 1/50th, whatever you've got. I have done 1/30th in order to get enough light, but only when I'd have been underexposed otherwise.
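A quick Python sanity check on how much light those shutter choices actually buy:

[code]
import math

def stops_gained(from_shutter_s, to_shutter_s):
    # Exposure gain, in stops, from moving to a longer shutter time.
    return math.log2(to_shutter_s / from_shutter_s)

print(stops_gained(1 / 60, 1 / 48))  # ~0.32 stops
print(stops_gained(1 / 60, 1 / 30))  # exactly 1 stop
[/code]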

At least get some camera-mounted Chinese LED lights for when you're in a jam. And NeatVideo is your friend.
Laurence wrote on 10/22/2013, 8:13 PM
On low-light stuff with a locked-down camera and not too much movement, I find I can get away with a longer shutter (1/30 for 30fps), and I have done this a couple of times with good results.
Serena Steuart wrote on 10/22/2013, 8:22 PM
On LEDs, be aware that they are not all equal. Some Chinese units produce noticeably green light, so check with other users before buying on the web. Even some more expensive units aren't free of the green cast, but it can be corrected with a magenta filter.
Laurence wrote on 10/22/2013, 9:49 PM
What I find with the cheap LEDs is that they are missing parts of the visible spectrum and thus are fine for augmenting natural light, but horrible for completely lighting an otherwise dark scene. I love my Z96s for augmenting natural light, but I would never completely light a scene with LEDs.
Grazie wrote on 10/23/2013, 3:14 AM
Two words on this: Stokes Gap!

OK, some more words....

I've been a follower of this chap for the past 10 years: Mr Jonathan Harrison. Here he is being interviewed at IBC 2013 about the Xmas present Bob is going to buy me, the CELEB400. Thanks Bob!



And in this other interview, also at IBC this year, he explains in much detail the "holes" in the LED spectrum. I've attended four of Jonathan's seminars, and if there are any holes (little pun there!) in what he is saying, I'd like to hear them.



Again, Stokes Gap! (o..k.. just for you Bob . .)

Toodles

Grazie

farss wrote on 10/23/2013, 4:20 AM
This is what we're getting: BBS Lighting AREA 48 LED

With a CRI of 97, I doubt there'll be too many complaints. Being able to switch among five different types of light in seconds is a big plus, and one of those options is chroma-key green.

My go-to light, though, is still my Z96s; I use them exclusively to light small objects, particularly candles. Two Z96s, a couple of mic boom stands, a roll of Cinefoil and gaffer tape, and that's it.

Bob.

mudsmith wrote on 10/23/2013, 3:58 PM
"CCD sensors transfer the charge from every photosite in a very short period of time to a Charge Coupled Device.

CMOS based sensors are different, each pixel or row or groups are read out sequentially. The time taken to read them out may very well not have any relationship to frame rate or shutter speed. In my EX1 the readout time is fixed at 1/60th, even when the "shutter speed" is 1/250."

Thanks....this parallels what I was thinking was possible, i.e., that the modern sensor is essentially "buffered" (the actual mechanism may be a bit different) into the sequential feed of line/pixel data which must be flowing through the output of the camera. Since this ultimate sequential feed takes place across the basic frame rate, there has to be a way for the bottom lines to transmit the light that hit the bottom of the sensor, even if the shutter was not open at that point in time because the shutter speed was so much faster than the frame rate.

The CCD, according to your description above, makes this possibility obvious, but the CMOS does not, and implies a similar buffered readout in order to make the quick shutter rates possible. Everything has to be clockable at the ultimate frame and line rate at the output.

Too much true buffering would be unworkable in a live/broadcast kind of situation, so I am curious as to how it works.

I appreciated being pointed in the right direction.
farss wrote on 10/23/2013, 6:19 PM
mudsmith said: [I]"Too much true buffering would be unworkable in a live/broadcast kind of situation, so I am curious as to how it works.

I appreciated being pointed in the right direction. "[/I]

As you've already deduced, in a CMOS sensor the readout time has to be less than the time between frames, and this is neatly seen in the EX1/3 cameras, where the maximum frame rate is 60fps and only at reduced resolution (720p60). The less-than-one-frame delay is not relevant in OB situations, and the SDI output is only one frame time behind real time. Modern vision switchers can also synchronise SDI feeds via frame buffers, so worst case there's going to be a two-frame delay between the switcher's output and reality. Not a problem for OBs, and not even a problem where vision is fed back to live projection screens.

In fact, wireless links used for OBs introduce even greater delays, of around 8 frames, due to the use of mpeg-2 compression. That's not a problem even for the Olympics, since every feed into the switcher is brought into sync by the use of digital delays, but 8 frames of delay is certainly very much a problem if the video is being fed to live screens in the venue.

The same goes for using FireWire outputs from cameras: again, the mpeg-2 encoding takes unavoidable time, and that poses challenges not for OBs but for live feeds into the venue.

When it comes to OTA broadcasting, digital video is probably a couple of seconds behind real time by the time we get to see the image on our TVs. This is not a problem in itself, but other issues do creep in for broadcasters running national networks. One I know of relates to the use of Dolby E encoding for the audio, which is one frame behind vision; switching from the national feed to a local studio is difficult without introducing an audio glitch.
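Tallying the frame counts quoted above into one small Python sketch (the stages and numbers are as stated in this thread; chaining them into a single path is my illustration):

[code]
FRAME_S = 1001.0 / 30000.0   # one NTSC frame, ~33.4 ms

chain_frames = {
    "camera CMOS readout (sub-frame, rounded up)": 1,
    "switcher frame sync (worst case)":            1,
    "wireless OB link, mpeg-2":                    8,
}

total_s = 0.0
for stage, frames in chain_frames.items():
    delay_s = frames * FRAME_S
    total_s += delay_s
    print(f"{stage:46s} {delay_s * 1000:6.1f} ms")
print(f"{'total before OTA encoding':46s} {total_s * 1000:6.1f} ms")
[/code]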


Bob.