I vaguely recall you giving a name to the principle that HD encoding requires fewer bits per pixel than equivalent SD to achieve the same quality, or something along those lines. Do you recall what I might have read or imagined?
Thanks, Mark
For the same content, the version with the higher resolution will generally have more redundancy per pixel, which makes compression more efficient. I write "generally" because if you have a lot of noise, blooming, or other artifacts, then the encoder could actually spend the HD video's increased pixel count encoding those artifacts!
It has been my experience that, with a good-quality, clean source and a high-quality AVCHD encoder, SD video can be compressed down to 2 Mbps, while 1080p needs more like 4 or 5 Mbps.
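As a quick illustration of what those bitrates mean in bits per pixel (a sketch only; the 720x480 SD frame, the 29.97 fps rate, and the 4.5 Mbps midpoint are my assumptions, not figures from the post above):

    # Average encoded bits spent per pixel, per frame.
    # Frame sizes and frame rates are assumptions for illustration.
    def bits_per_pixel(bitrate_bps, width, height, fps):
        return bitrate_bps / (width * height * fps)

    sd = bits_per_pixel(2_000_000, 720, 480, 29.97)    # SD at 2 Mbps
    hd = bits_per_pixel(4_500_000, 1920, 1080, 29.97)  # 1080p at 4.5 Mbps

    print(f"SD:    {sd:.3f} bits/pixel")   # ~0.193
    print(f"1080p: {hd:.3f} bits/pixel")   # ~0.072

Note that the HD stream gets by on far fewer bits per pixel, which is exactly the principle Mark is asking about.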
There's a lot to be learned just by searching for "bits per pixel" on the internet.
Ben Waggoner's recent thoughts on the "^.75 Rule" (note that's "to the power of," not "times") can be found here:
Here's a brief example: the area of 1080p is 2.25 times that of 720p. If our 720p video looked great at 8 Mbps, the rule says we would need about 14.7 Mbps (8 x 2.25^0.75) for our 1080p version to look equally good, not the 18 Mbps that plain linear scaling would suggest.
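Here's a minimal sketch of that arithmetic, assuming the rule is simply "scale bitrate by the pixel-count ratio raised to the 0.75 power" (the helper name is mine, not Waggoner's):

    # "^0.75 rule": scale bitrate by the pixel-count ratio to the
    # 0.75 power instead of linearly.
    def scaled_bitrate(base_mbps, base_pixels, target_pixels, exponent=0.75):
        return base_mbps * (target_pixels / base_pixels) ** exponent

    p720  = 1280 * 720    #   921,600 pixels
    p1080 = 1920 * 1080   # 2,073,600 pixels (2.25x the 720p area)

    print(f"Linear: {8 * p1080 / p720:.1f} Mbps")                # 18.0
    print(f"^0.75:  {scaled_bitrate(8, p720, p1080):.1f} Mbps")  # ~14.7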