Sorry if this is a bit rambling. I'm not trying to draw any conclusions, just throwing a few things together for us to think about. The only reason I think any of this matters is that more and more of us are going to be dipping at least our toes into formats that offer a higher-quality image, be it HD or 4:2:2 SD. Some of my own observations come into this too, and at the time they didn't make a whole lot of sense to me.
I'll start with resolution. DV25 in PAL has 720x576 pixels, but from my understanding you cannot encode an image containing alternating color pixels at anything like that resolution, because the color information is subsampled (4:2:0 for PAL DV) and so is stored at a much lower resolution than the brightness. On top of that you actually need something to record the image, and that means a camera with a lens, image sensors and encoders. To make a quantitative evaluation of image quality we therefore use resolution charts: we measure how many lines (TV lines per picture height) the system can resolve.
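To make the chroma point a bit more concrete, here's a tiny Python/NumPy sketch. For simplicity it only shows the horizontal half of the 4:2:0 averaging, on a made-up one-line pattern; a real encoder filters the chroma rather than doing a naive pairwise average, but either way a color that alternates on every pixel cannot survive.

```python
# A minimal sketch of why pixel-by-pixel color alternation can't survive
# chroma subsampling. PAL DV stores chroma at half the luma resolution in
# each direction; this shows only the horizontal half of that reduction.
import numpy as np

# One row of 8 pixels whose Cr (red-difference) value flips sign every pixel,
# i.e. the color alternates at the full 720-pixel resolution.
cr_full = np.array([+0.5, -0.5, +0.5, -0.5, +0.5, -0.5, +0.5, -0.5])

# 2:1 horizontal subsampling, modelled here as averaging neighbouring pairs.
cr_sub = cr_full.reshape(-1, 2).mean(axis=1)

print("full-resolution Cr:", cr_full)  # alternates +0.5 / -0.5
print("subsampled Cr:     ", cr_sub)   # every pair averages to 0.0, the color detail is gone
```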
Except that doesn't quite tell us the whole story; humans don't quite work that way. To get a better idea of how we'll perceive the image we use a figure known as the Modulation Transfer Function (MTF). This tells us not only how well the system can resolve the lines but how much of the contrast between the lines is preserved. In other words, given two images of the same resolution, if one has lower contrast we'll perceive it as having lower resolution. As I understand it, this is where the term 'sharpness' comes into play.
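For anyone who wants to poke at the numbers, here's a rough Python sketch of how that figure is worked out. The Gaussian blur standing in for the whole lens/sensor/encoder chain, and its width, are assumptions of mine purely for illustration. Modulation is (Imax - Imin) / (Imax + Imin), and the MTF at a given frequency is the modulation that comes out divided by the modulation that went in.

```python
# A rough sketch of the modulation idea behind MTF. A made-up Gaussian blur
# stands in for the whole capture chain; the numbers are illustrative only.
import numpy as np

def modulation(signal):
    # Michelson contrast: (Imax - Imin) / (Imax + Imin)
    return (signal.max() - signal.min()) / (signal.max() + signal.min())

def gaussian_blur_1d(signal, sigma):
    # Simple 1-D Gaussian convolution; sigma is in pixels.
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode="same")

# A sine-wave "line chart": 60 line pairs across a 720-pixel width,
# swinging between 0.1 and 0.9.
x = np.arange(720)
target = 0.5 + 0.4 * np.sin(2 * np.pi * 60 * x / x.size)
captured = gaussian_blur_1d(target, sigma=2.0)

core = slice(30, -30)  # ignore the convolution's edge effects
mtf = modulation(captured[core]) / modulation(target[core])
print(f"input modulation:  {modulation(target[core]):.2f}")    # ~0.80
print(f"output modulation: {modulation(captured[core]):.2f}")  # ~0.46
print(f"MTF at 60 line pairs/width: {mtf:.2f}")                # ~0.58
```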
Now, as I understand it, there's an inverse relationship as well: our perception of contrast is highest at lower spatial frequencies. You can see this pretty easily. Look at a resolution chart with your naked eyes: the wider-spaced lines look clearly black and white, and as they converge (i.e. as the frequency increases) they lose contrast, looking closer to gray.
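To put some rough numbers on that 'going gray' effect: if you model the blur as Gaussian (again, purely an assumption for illustration), the surviving contrast has a closed form, exp(-2*pi^2*sigma^2*f^2), and a few lines of Python show how fast it drops away as the lines get finer.

```python
# Closed-form MTF of a Gaussian blur, evaluated at finer and finer line
# patterns. The blur width is invented; only the shape of the fall-off matters.
import math

sigma = 2.0   # assumed blur width, in pixels
width = 720   # PAL DV active width

for line_pairs in (20, 40, 80, 160):
    f = line_pairs / width  # spatial frequency in cycles per pixel
    mtf = math.exp(-2 * math.pi**2 * sigma**2 * f**2)
    # prints roughly 94%, 78%, 38%, 2%
    print(f"{line_pairs:3d} line pairs across the frame -> {mtf:4.0%} of the contrast survives")
```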
Now here come my thoughts on all this. The first time I saw HD properly I wasn't actually all that impressed; in fact, compared to the same scene through an SD camera it looked decidedly 'flat'. I was expecting something that was going to knock my socks off, and it was quite the opposite.
Given what I was rambling about before, this is starting to make sense: the same image at higher resolution appears to have less contrast, i.e. it looks less 'sharp', or 'flat'. Now this might lead to the odd conclusion that HiDef is a con, that higher-definition images are going to look flat and washed out. Well, not so fast! We don't 'see' resolution; what we perceive is detail.
This might explain why, when we see the shot of the vivid tropical parrot on a big screen in HiDef, we get the 'wow' factor. The higher definition gives us sharper edges and we can resolve/perceive the detail in the feathers. But when we see a more muted scene, the higher resolution works against our perception: the subtle transitions between the details are more accurately represented, so the image seems to lack sharpness. In fact, the same image recorded on a lower-resolution system may well appear better (even though it's less accurate).
Now here's an interesting story that relates to this. Today one of our clients rang us up, along the lines of: something might be wrong with our gear, because all these tapes from Europe that they'd captured looked 'blurry'. We assured them we didn't think anything was wrong with our gear, and they had the good sense to say they'd check it out further. They rang back to say they'd run some local footage through the same chain and it looked just great, and then we all realised what it was. The lighting conditions outdoors in Europe are vastly different to those down here. They'd been tricked by the natural difference in contrast into thinking the image had lost resolution.
Sorry that this has nothing to do with Vegas directly, but it might make for some interesting discussion. I've probably gone off half-baked on most of it, so I welcome anyone who can correct my understanding.
Bob.