Ideal Bit Rate Converting AVI to MP4?

Jeff Cooper wrote on 1/18/2010, 12:10 PM
Hey folks,

I started a thread last week trying to resolve some quality problems when rendering AVI footage as an MP4 file. Increasing the bit rate and checking the Two Pass option seem to have greatly improved the image, but I'm wondering if there is an "ideal" bit rate setting that will render the best possible quality.

I'm editing with AVI Files - 720x480 Widescreen.

Rendering MP4 with MainConcept - Best - MainConcept AVC/AAC - 640 x 480

So far my best render has been with the bit rate set to 50 million (50 Mbps), but I'm still not 100% satisfied with the quality. I don't know much about how bit rates actually work, so I'm wondering if anyone knows of an ideal bit rate for this type of render...

Thanks!! Jeff

Comments

Chienworks wrote on 1/18/2010, 12:14 PM
It depends entirely on the desired quality level and the amount of disk space you're willing to devote to getting it. Higher bitrates generally equal higher quality and larger files. We couldn't really even begin to guess without knowing what the file is destined for. If you want to post it on a website then 50M is probably about 20 times too high. For that matter DVDs usually max out around 8M and even BluRay is generally under 25M. DV .avi files are also 25M. If 50M isn't enough for a 640x480 file then maybe we need to reexamine your goals and expectations.
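
For a sense of scale, here's a quick back-of-the-envelope sketch in Python (purely illustrative; the 10-minute duration and 192 kbps audio track are assumptions, not anything from Jeff's project) of what those bitrates mean in file size:

```python
# Rough file-size estimate for a given video bitrate and duration.
# Purely illustrative; real files also carry container overhead.

def estimate_size_mb(video_kbps: float, duration_s: float, audio_kbps: float = 192) -> float:
    """Approximate MP4 size in megabytes for a constant-bitrate encode."""
    total_kbits = (video_kbps + audio_kbps) * duration_s
    return total_kbits / 8 / 1024  # kilobits -> kilobytes -> megabytes

# A 10-minute clip at a few of the bitrates mentioned above:
for kbps in (2_000, 8_000, 25_000, 50_000):
    print(f"{kbps / 1000:>4.0f} Mbps -> ~{estimate_size_mb(kbps, 600):,.0f} MB for 10 minutes")
```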
craftech wrote on 1/18/2010, 1:26 PM
Jeff,

Is it possible that you are expecting too much from a single chip camera?

John
Jeff Cooper wrote on 1/18/2010, 2:59 PM
The MP4s that I'm rendering are for a website that broadcasts streaming videos online. I also have to send in the videos on CD in AVI format, which is what I'm required to edit in.

I'm just trying to get the best quality possible.

Thanks, Jeff
Jeff Cooper wrote on 1/18/2010, 3:03 PM
All my cameras are broadcast-quality three-CCD models, but these are animations, and for some reason the pixelation that occurs during compression seems much more obvious on them.

The original AVI videos are very clean, but when rendering as an MP4, the quality loss is obvious....

Thanks!! Jeff
John_Cline wrote on 1/18/2010, 3:19 PM
First of all, video compression is an art, and a black art at that. There are people in the industry who make monster salaries because they know the tricks to good video compression.

Saying that it's animations that are causing you trouble sent up a red flag. Are you sure that it's really pixelation and not an artifact of 4:2:0 chroma subsampling? If it's a 4:2:0 sampling issue, then a higher bitrate isn't going to make it better. Chroma subsampling artifacts show up the most on graphics and animations which contain solid color backgrounds with high-contrast, highly saturated text or other primarily diagonal elements. The effect is even worse when you start with 4:1:1 DV video compression and go to a 4:2:0 colorspace. The resulting sampling is effectively 4:1:0 and can/will look really bad depending on your source material.
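
To make that concrete, here is a tiny numpy sketch (a toy model, not what the MainConcept encoder actually does internally) of what 4:2:0 averaging does to a hard colour edge in a chroma plane; no amount of extra bitrate brings that lost resolution back:

```python
# Toy illustration of 4:2:0 chroma subsampling (numpy only, not a real encoder).
import numpy as np

def subsample_chroma_420(plane: np.ndarray) -> np.ndarray:
    """Average a chroma plane over 2x2 blocks, then scale it back up."""
    h, w = plane.shape
    small = plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

# A hard vertical colour edge, like saturated cartoon text on a flat background.
cb = np.zeros((4, 8))
cb[:, 3:] = 255
print(cb[0])                        # [0 0 0 255 255 255 255 255]
print(subsample_chroma_420(cb)[0])  # the sharp 0 -> 255 edge becomes a 127.5 halo two pixels wide
```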
craftech wrote on 1/18/2010, 3:22 PM
All my cameras are broadcast quality three CCD
=============
Your profile shows a JVC GR-HD1, which is a single-chip camera (AFAIK). I went by that.

John
John_Cline wrote on 1/18/2010, 3:54 PM
Do your artifacts look a little like this?

[image: example of chroma subsampling artifacts]

Here is a link to a discussion on color subsampling that took place on this forum back in 2005. Unfortunately, BillyBoy was involved and I got a little testy. Not my proudest moments here on the forum, but the technical discussion is still valid.

http://www.sonycreativesoftware.com/forums/ShowMessage.asp?ForumID=4&MessageID=390252
farss wrote on 1/18/2010, 4:49 PM
When you say "MP4" I assume you're using that container to hold H.264. MP4 (mpeg-4) can contain many codecs!
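
If you want to confirm what a given MP4 actually holds, one quick way is ffprobe from the free ffmpeg suite. Here's a minimal sketch, assuming ffprobe is installed and on the PATH; the file name is just a placeholder:

```python
# Report which video codec an MP4 container actually holds, via ffprobe.
# "render.mp4" is a placeholder file name.
import subprocess

result = subprocess.run(
    ["ffprobe", "-v", "error", "-select_streams", "v:0",
     "-show_entries", "stream=codec_name",
     "-of", "default=noprint_wrappers=1",
     "render.mp4"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. "codec_name=h264"
```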

H.264 is quite a complicated beast, more so than MPEG-2. Not only is there the base encoder at work, there's also an entropy encoder. The final entropy encoding phase has an impact on bitrate/filesize and ease of decoding, but no direct impact on image quality. The Sony AVC encoder (the Sony Media Encoder, in fact!) lets you select the entropy encoding scheme (CAVLC or CABAC).
What I think all this means is that bitrate is not the simple arbiter of quality that it used to be. This is probably why Handbrake lets you define encoding quality rather than bitrate.
All of this only goes to show what John Cline said above: people make careers out of specialising in encoding. It's a path that I have not followed, so anything I say is only random observations; you need to do your own trials.
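
One cheap way to run those trials outside Vegas is ffmpeg's libx264 encoder. This is a sketch only -- it is not the MainConcept or Sony AVC encoder the thread is about, and the file names and settings are made up -- but it shows the quality-based (CRF, the approach Handbrake exposes) versus fixed-bitrate trade-off:

```python
# Quality-based vs bitrate-based H.264 encoding with ffmpeg's libx264.
# Assumes ffmpeg is installed; "input.avi" and the output names are placeholders.
import subprocess

SRC = "input.avi"

# Constant quality (CRF): you pick a quality level and the encoder spends
# whatever bitrate each scene needs. Lower CRF = higher quality, larger file;
# roughly 18-23 is a common range for good-looking web video.
subprocess.run([
    "ffmpeg", "-i", SRC,
    "-c:v", "libx264", "-preset", "slow", "-crf", "20",
    "-c:a", "aac", "-b:a", "192k",
    "crf20.mp4",
], check=True)

# Fixed average bitrate: you pick the size and the encoder rations quality to fit.
subprocess.run([
    "ffmpeg", "-i", SRC,
    "-c:v", "libx264", "-preset", "slow", "-b:v", "5M",
    "-c:a", "aac", "-b:a", "192k",
    "abr5m.mp4",
], check=True)
```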

You said the problem you are having is with animations, but animations of what?
H.264 seems to be a wavelet codec, not unlike Cineform. This class of codec makes an effort to wrangle the problem of macroblocking by changing block size to better preserve detail. From what I have seen, this can mean that too much fine detail in combination with motion can cause bad outcomes. With an animation, as opposed to something shot with a physical camera, you might not have any motion blur. Motion blur reduces detail where there's motion, which would seem to make for better encoding, BUT you really should do some simple tests. From my very limited experience, when you really 'break' H.264 it becomes visually quite ugly; it's harder to break than MPEG-2 and holds up better under stress, but when it's pushed too hard it falls apart quite badly.

Sorry that I cannot give you a one-click solution, but only your eyes can judge the outcome, only you have the media in front of you, and you can eat up a lot of time getting the best result. I'd certainly give both of the H.264 encoders that ship with Vegas a test. For some reason the Sony AVC encoder fails at over 16 Mbps, but even at 10 Mbps I've had very good results out of it, as good as if not better than what I've seen come out of iMovie.

Bob.
John_Cline wrote on 1/18/2010, 5:02 PM
H.264 is not a wavelet codec, it's just a more efficient block-based codec like MPEG-2. It is a 4:2:0 codec, although there are versions of H.264 that are 4:2:2 and even 4:4:4.

Cineform and Motion JPEG 2000 are both wavelet-based. When pushed to extremes, they just "go soft" instead of going blocky.

I find the Sony AVC implementation to be quite decent looking and useful.
farss wrote on 1/18/2010, 6:00 PM
Thank you for pointing that out, you are indeed correct. I had buried in my aging grey cells a page that I thought was from Cineform showing the use of variable-sized blocks to handle fine detail. A quick look at their site, however, reveals that they apply the wavelet transform to the whole frame.
Later versions of the H.264 spec provide for variable block sizes, hence my confusion.

I don't know about anyone else, but I find the array of technology that we're expected to understand bewildering. I'm trying my best to ignore it and simply 'get on with it'. Problem is, without some understanding of how stuff works it might come back to bite me. I tried using the Sony AVC encoder and found I had a choice of entropy encoding. Search as I might, I found nothing in the Vegas help to even tell me what the choices were, much less what they did. Googling led me into an array of mathematics that left my head spinning. Seeing as how SCS have now written their own encoder, a bit more information for us mere mortals would seem in order.

Bob.
Widetrack wrote on 1/19/2010, 11:52 AM
farss:

you said:
". . .I find the array of technology that we're expected to understand bewildering. I'm trying my best to ignore it and simply 'get on with it'. Problem is without some understanding of how stuff works it might come back to bite me. . . ."

You got that right.

A friend used to enjoy saying that the great thing about computers is they let one person do jobs that used to require 5 people. The bad thing is that now one person is responsible for doing the work of 5 people.

My friend was a wise man.
John_Cline wrote on 1/19/2010, 4:29 PM
"We must embrace technology, else be trampled by it." If you can't keep up, maybe it's time to look into gardening as a career path.