Looking at this cam. What I want is 1080-60p and good low-light performance, though not necessarily at the same time. 30p would be fine with good low-light performance.
I have a lot of recording time with the VG10 and the VG20.
The kit lens is sharp and has good image stabilization. However, it's slow (f/3.5 at best). It's a great outdoor daylight lens, though.
Low-light performance is fantastic when you mount a fast prime. It's amazing how clean the gain is. This thing laughs at +15 dB. Even +18 and +21 dB are surprisingly clean. Noise gets noticeable at +24 dB, then somehow doubles at +27 and doubles again at +30.
If you stay at +18 dB or lower with a fast prime, you will get amazing video.
Don't bother with the VG10. The VG20 surpasses it in every single category.
The only real issue with these cameras (and ALL high-megapixel, large-sensor cameras) is the strong moire pattern problem. Any time you take a 16 MP image, throw out 14 MP, and keep only 2... you are going to have artifacts from that process. No way to escape that.
>The only real issue with these cameras (and ALL high-megapixel, large-sensor cameras) is the strong moire pattern problem. Any time you take a 16 MP image, throw out 14 MP, and keep only 2... you are going to have artifacts from that process. No way to escape that.
Moving over from the VG10's layout to the VG20 is a bit strange, as I got used to the VG10. I need to spend a lot of time with this cam to become efficient and quick. I missed a lot of good images trying to figure out the iris and shutter on the inside, the manual/jog control on the outside, and the focus on the outside.
I love the 50p upgrade; what a blast. The image stabilization on the kit lens comes in handy. I was crawling in the bush to get some close-up wildlife shots, and despite my lack of experience with the cam's layout, the stuff came out OK. Most of the animals looked confused, so they probably hung around thinking “what the L is this guy doing?” In fact, I had a Knysna Lourrie come in very close; I whispered and apologized for my incompetence, and he obliged me. Not so with some Impeti antelope; they are as jittery as cockroaches on crack.
I think I need some elephant paralyzer; that, combined with the stabilizer on the cam, should improve things a lot.
I used a slider on some shots; the rest were handheld, mostly crawling on my knees.
Thanks for posting, Rory. Are you seeing issues with moire? It disturbs me that the camera designers are not doing the appropriate filtering when subsampling the larger sensors. Moire does not have to be a problem if you take the trouble to design the camera properly.
I'll tell you this much: I will never buy a camera with moire issues again. My Nikon D5100 has this problem and I absolutely hate it. I hardly notice it indoors, and if I'm shooting a person outdoors and I can throw the background out of focus, again it looks great. But if I am trying to shoot buildings or trees or grass or whatever... it looks horrible. Way worse than a simple point-and-shoot in video mode. Way worse than my GoPro. I will never again buy a camera with a multi-megapixel sensor that doesn't do some sort of proper pixel averaging to downrez.
Imagine a 16 million piece jig-saw puzzle with a beautifully detailed image on it. (Imagine each piece with the same connectors too)
Take that giant image, pick out 14 million pieces from the image and toss them into the trash can.
Then, take the remaining 2 million puzzle pieces and connect them together.
What happens to that original beautiful image? There is a LOT of missing space, or "gaps," in the image. This is just simple science, and not much can be done to recover the original image resolution.
Yes, there are tricks they can do with the Bayer pattern, pixel shifting, and filters, but it's still a mathematical mountain to climb.
Cliff
(yes, I know the numbers are off because of the native 4x3 sensor and 16x9 readout)
2 million pixels are all you can use, since that is HD resolution. There are well-known ways to downsample the higher resolution and prevent moire. The simplest is to put a better optical low-pass filter in front of the sensor. Digital cameras already have this filter; it is essential, or all of the images would be full of artifacts.
It is well-known sampling theory, usually referred to as "Shannon sampling theory" and the "Nyquist limit." You need to limit the spatial frequency content to less than one half the spatial sample rate. It's more complicated with Bayer patterns, but there is more than enough data there to do it right; you just need the will and the engineering skill to do so.
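The Nyquist point can be sketched in one dimension. This is a minimal NumPy illustration with made-up numbers (a 180 Hz tone, decimation by 4), not anything camera-specific: detail above the new Nyquist rate folds down to a false low frequency, and filtering first (here just a crude block average) attenuates the alias instead of passing it at full strength.

```python
import numpy as np

fs = 1000                          # original sample rate (Hz)
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 180 * t)    # fine detail: a 180 Hz tone

M = 4                              # decimate by 4 -> new rate 250 Hz, Nyquist 125 Hz

naive = x[::M]                            # "skipping": just drop samples
filtered = x.reshape(-1, M).mean(axis=1)  # crude low-pass first, then keep one value

def amplitude_at(sig, rate, f):
    """Amplitude of the frequency-f component, via the FFT."""
    spec = np.fft.rfft(sig)
    k = int(round(f * len(sig) / rate))
    return 2 * abs(spec[k]) / len(sig)

# 180 Hz is above the new 125 Hz Nyquist limit, so it folds to 250-180 = 70 Hz
print(amplitude_at(naive, fs / M, 70))     # ~1.0  : full-strength false detail (moire)
print(amplitude_at(filtered, fs / M, 70))  # ~0.36 : alias attenuated, not eliminated
```

The block average is a poor low-pass filter, which is why the alias is only attenuated; a proper anti-aliasing filter before decimation would suppress it much further.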
Probably this is how they differentiate this camera from the more expensive ones.
Rocky, the clips above are from my first shoot, all with the kit lens.
The one issue, as you can hear by the waterfall, is the mic clang/bang, still the same as the VG10.
The images are very good. The only time I noticed moiré is when resizing; so far I haven't shot enough varied sequences to compare.
There was a clip of a guy from Sony discussing this issue and what they had planned for addressing it, but I can't find it.
Maybe someone can remember the clip and post a link.
I need to spend a lot more time with the VG20. The cam is capable of giving you good footage; I just need to get my side up to it.
For event stuff you have to be quick and try to get the best from the least amount of time. For me, filming should be like driving a car: you are doing the mechanics without thinking while your mind is focused on the creative side. To get to that level you need to film as much as you can and put in the time and effort.
If they lowered the low-pass filter frequency, that would smooth out some of the artifacts. However, the trade-off would be softer image detail. That would also affect the quality of the 16 MP still image capture.
Not exactly sure what the GH2 is doing to fight it, but the laws of math apply to all camera companies. Discard 80% of your image and you will get aliasing artifacts. The only techniques that help involve softening the image, either mathematically or optically.
What the GH2 is doing is collecting all the pixels then averaging groups of them as if they were a single larger pixel when shooting video. That way the still resolution isn't affected. I believe this helps considerably when using gain in video mode as well.
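The binning described above (averaging a group of photosites as if they were one big pixel) can be sketched in a few lines of NumPy. This is a generic illustration of block averaging, not the GH2's actual pipeline, and `bin_pixels` is a made-up helper name:

```python
import numpy as np

def bin_pixels(img, by, bx):
    """Average each non-overlapping by-x-bx block down to one output pixel."""
    h = img.shape[0] - img.shape[0] % by   # crop to a multiple of the block size
    w = img.shape[1] - img.shape[1] % bx
    return img[:h, :w].reshape(h // by, by, w // bx, bx).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
print(bin_pixels(img, 2, 2))
# [[ 2.5  4.5]
#  [10.5 12.5]]
```

A side effect consistent with the gain observation: averaging N pixels reduces uncorrelated noise by roughly the square root of N, so binned video should indeed look cleaner at high gain than skipped video.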
Yes, preventing moire with an optical filter kills the still image quality. I'm not interested in taking stills but want the best video image.
Yes, it's not about the cost to manufacture, but the cost to design, the will to do so, or, as I said, market differentiation. You turn off the best quality on lower-priced products based on the same technology as the higher-end ones, and turn it on for the higher-priced stuff. We do it in our products all the time.
Yes, the way the GH2 might be doing that sounds like a good approach. However, let's say you have 6 or 8 pixels' worth of image space, then collect that block and average those values down to one pixel. If you do this block by block across the grid and quilt the results together, you will still have "missing" spaces that are "smooshed" together. An 8-pixel-wide set of connecting image lines or smooth-edged shapes might have existed that is now represented by only one single "blurred" pixel. It's good, but it still creates sensor/image "gaps" and squeezes the remaining pixels together.
I have never seen resolution charts on a GH2. I'd be curious to see how they compare to an NEX7 or better yet, a NEX 5N.
I've read a few reports of aliasing problems with the GH2 and AF100 cameras.
Certainly they do a better job by not skipping lines; however, they apparently do skip pixels.
The user can get around these problems to some extent by using an anti-aliasing filter in front of the camera; however, the effect of the filter is determined by focal length and aperture. The filters are also expensive, and you need a collection of them.
I guess, in the end, an 8:1 or even worse a 12:1 ratio will just never really cut it. Throw in the Bayer pattern RGB layout and it just gets tougher for a single sensor.
I'll take a good three-sensor block to kill the Bayer problem, and a full 1920x1080 raster image to deliver that almost 1:1 image capture ratio (like an EX1r/EX3)!
I guess only a setup like that will give you the best resolution that 1080 has to offer (although that F3 is crazy good too).
>The user can get around these problems to some extent by using an anti-aliasing filter in front of the camera; however, the effect of the filter is determined by focal length and aperture. The filters are also expensive, and you need a collection of them.
Can you point me in the direction of finding such filters? I must not be Googling with the right words because I can find nothing.
>I guess, in the end, an 8:1 or even worse a 12:1 ratio will just never really cut it. Throw in the Bayer pattern RGB layout and it just gets tougher for a single sensor.
Okay, but think about this: take a 16 MP shot with a high-end DSLR, then resize it to 1920x1080 in Photoshop.
No scaling issues, no color issues to speak of; you can even correct out distortion and CA, and sharpen it if you want. So it's not that a single sensor can't do it; it's that a video camera can't read out all the data and then perform all those steps at 30 or 60 fps while writing it to a codec.
It's a processing problem, and at some point processor speeds will make it viable. So maybe single-chip cams will replace 3-chip models at some point in the not-too-distant future?
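The difference between skipping pixels at readout and Photoshop-style averaging can be shown on a toy 2-D image. This is a contrived checkerboard example, with detail at exactly the pixel pitch, chosen to make the failure maximal: skipping reports the pattern as solid black, while averaging renders the unresolvable detail as the correct mid-gray.

```python
import numpy as np

# A checkerboard with detail at the single-pixel level (values 0 and 255)
fine = (np.indices((8, 8)).sum(axis=0) % 2) * 255.0

# Pixel skipping: keep every 2nd pixel in each direction. Every kept pixel
# lands on the same phase of the pattern, so the detail aliases completely.
skipped = fine[::2, ::2]

# Averaging each 2x2 block before discarding renders the detail as gray.
binned = fine.reshape(4, 2, 4, 2).mean(axis=(1, 3))

print(skipped.max())   # 0.0   -> pattern falsely reported as pure black
print(binned.mean())   # 127.5 -> correct average brightness preserved
```

On real footage the aliasing shows up as moire fringes rather than a solid frame, but the mechanism is the same: the skipped samples all land on one phase of a repeating pattern.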
I have thought about that very same question myself in the past. I have no idea how Photoshop does it.
I think I'll take a shot of a resolution chart with my VG20 at 16 MP and try it out. I'm curious to see what artifacts it produces (or doesn't) when scaled down.
It's a good question.
Edit: I imagine that if you average the color values of blocks of pixels, and you are very careful to select an "even" number of them proportional to the 16x9 ratio, you might be able to faithfully reproduce the "normal" stair-stepping squares (pixel steps) that a standard 1920x1080 image would produce. I don't know for sure, though.
This is an apples-and-oranges comparison. In Photoshop you start with all the pixels of an image and then scale them down. With a multi-megapixel camera in video mode you never capture those in-between pixels to begin with. The information simply isn't there to downrez.