I am trying to match the true scan lines between 3 cameras: 1080p straight from the camera versus 1080p produced by down-rezzing 4K video. I guess if I shoot the exact same thing for the exact same length of time and the file size is the same, that might be an OK proxy. Thanks for helping.
Ah, so you are after the native resolution the camera shoots at. There isn't much info I can find on how to do this with video, but you can do it with video games, and I assume the methodology should be much the same. That said, you can probably do it much faster by typing "(model of your camera) native resolution" into Google.
I've read in a few places that not all 1080p is equal, in that the real resolving power can be as low as roughly 800 lines; it is one reason why downscaling 4K to 1080p produces superior 1080p. I don't think any of the manufacturers would want you to know which cameras produce full-spec 1080p, which is why I was wondering if you can tell how many vertical pixels of actual detail are in a video file instead of trusting the stated "1080p".
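For the container side of that question, you can at least read the stored frame dimensions out of any file with ffprobe. A minimal sketch, assuming ffprobe is installed and on your PATH (the helper name `video_dimensions` is mine); note this only tells you what the file claims, not what the sensor actually resolved:

```python
import json
import subprocess

def video_dimensions(path):
    """Return (width, height) of the first video stream, per the container metadata."""
    # Ask ffprobe to print just width and height of video stream 0 as JSON.
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=width,height", "-of", "json", path],
        capture_output=True, text=True, check=True,
    ).stdout
    stream = json.loads(out)["streams"][0]
    return stream["width"], stream["height"]
```

Calling `video_dimensions("myclip.mp4")` on a 1080p file returns `(1920, 1080)` regardless of how the camera arrived at those pixels, which is exactly the limitation discussed below.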
Not all cameras capture natively at the resolutions they claim to, but the days of non-native 1080p are long past for the most part. Even most pro-sumer 1080p cameras capture 1080p pixel for pixel (many capture above 1080p and downsample the higher resolution image from the sensor to 1080p upon compression).
"Even most pro-sumer 1080p cameras capture 1080p pixel for pixel (many capture above 1080p and downsample the higher resolution image from the sensor to 1080p upon compression)."
Yes, between the sensor, the sensor read-out, frame processing and recording there is usually a lot of math involved to combine, replace, recreate and interpolate pixels. That can even make recordings shot with lenses that don't resolve an appropriate level of detail look good. So from a "real-life" recording it will be hard to tell whether it was "real" 1080p or not.
As mentioned above, I think the topic starter is more interested in learning how to identify what a camera does internally. We don't really talk about scan lines anymore, but there are lots of different types of sensors and different ways of reading them.
One of those differences is a global shutter (all pixels captured at the same instant) versus a rolling shutter (pixels are read row by row, top to bottom, each frame); the latter can cause skew issues with fast movement.
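The skew is easy to see in a toy simulation. A sketch under simplified assumptions (a tiny 8x16 "frame", a bar moving one pixel per row-readout interval; the numbers are illustrative, not from any real sensor):

```python
import numpy as np

H, W = 8, 16  # tiny toy frame so the effect is easy to see

def scene(t):
    """The scene at time t: a 2-pixel-wide vertical bar sitting at x = t."""
    frame = np.zeros((H, W), dtype=np.uint8)
    x = int(t) % W
    frame[:, x:x + 2] = 255
    return frame

# Global shutter: every row is sampled at the same instant (t = 0),
# so the moving bar is captured perfectly vertical.
global_frame = scene(0)

# Rolling shutter: row y is sampled one time-step later than row y - 1,
# so the bar has moved on by the time the lower rows are read -> skew.
rolling_frame = np.stack([scene(y)[y] for y in range(H)])

print(np.argmax(global_frame, axis=1))   # bar in the same column in every row
print(np.argmax(rolling_frame, axis=1))  # bar column drifts row by row (the slant)
```

The second print shows the bar's position increasing down the frame, which is exactly the leaning-verticals artifact you see when panning fast with a rolling-shutter camera.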
Or, for instance, some cameras have a 6MP sensor but only use 2MP in the middle while recording 1080p video; that's called pixel cropping. Other cameras will record the video with the full 6MP of the sensor and then internally downscale it to 2MP, producing a 1080p output. The latter is generally preferred because you are using more detail and light from the sensor, creating superior-quality footage. Combining neighbouring photosites like this is called pixel binning.
On the other hand, a really big sensor that you can shoot natively and output as, for instance, 6K footage will probably still be better; that's what RED cameras do.
Sadly, as also already mentioned, you can't really identify what the recording equipment was capable of from the output file. Even things such as too-low bitrates can completely destroy details that the sensor captured without issue.
So yes, a 1080p file has about 2 million pixels per frame (1920 x 1080 = 2,073,600), but I guess the topic starter is wondering how to identify how it was recorded internally inside the camera that shot it. Other than looking up how those cameras work internally, you won't really know.
"That can even make recordings shot with lenses that don't resolve an appropriate level of detail look good."
The subject of glass, which most people ignore, is often the determining factor in output resolution. Now that sensors and encoders can out-resolve almost any lens (since MPEG-2, really), the physical quality of the glass, and the money spent on it, is what makes cinema acquisition better than prosumer acquisition in almost every case.
Yep, I have 2 cameras that have to use sections of the sensor (thus cropping) and one that does a native capture. What I've decided to do is shoot in 4K and down-rez in the render to keep things consistent.
This is the Zeiss; Roger has been publishing the MTF charts for other manufacturers also, both photo and cine.
Glass for sure is also important...
A small extract from a comment by Roger Cicala ...
The highest-resolution cinema lenses are generally rehoused, recently made photo lenses. In the case of both Canon and Zeiss, the Cine version of a lens is often a generation behind the current photo version.
Classic, well-regarded cinema lenses are rarely high-resolution. Why should they be? Until 4K and up became common, resolution was of little importance.
Isn't there also the subject of native capture vs. interpolation when it comes to funny business with camera video resolution? I know the last time I was shopping for a camera, that is what some people were pointing out about certain cams which claimed to record at 4K. So I'm unsure, if the D7K records in 4K and downscales to 1080, whether that really helps or not.
I'm also unsure how interpolation works, but I would take a guess and say that if one pixel is green and another is yellow, and the pixel between them is not recorded natively but gets "interpolated" instead, then that pixel in the middle is some blend of the yellow and the green, and it seems like something that UniqueColorCounter plugin wouldn't detect.
I could be totally wrong; video overall is a pretty weak point of knowledge for me. I rely on people who are much smarter and more educated than me on the subject. 😋
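For what it's worth, the simplest form of interpolation just averages the neighbours channel by channel, so the made-up pixel comes out as a yellow-green rather than a completely new hue. A tiny sketch (the RGB values are my own illustrative choice, not from any real camera pipeline):

```python
# Two neighbouring recorded pixels: pure green and pure yellow.
green = (0, 255, 0)
yellow = (255, 255, 0)

# Simple linear interpolation: the in-between pixel is the
# per-channel average of its two neighbours.
mid = tuple((g + y) // 2 for g, y in zip(green, yellow))
print(mid)  # (127, 255, 0) -- a yellowish green
```

Real demosaicing and upscaling filters are more elaborate than a straight average, but they are all weighted combinations of nearby recorded values, so the interpolated pixel always lands somewhere between its neighbours.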