2 hour Brick Wall - Fave DVD Templates? Please

Comments

Grazie wrote on 5/12/2006, 3:31 AM
"If you tell the encoder to encode at 8Mb/sec"

Great stuff Bob!! Eh? Who said CBR? Not moi! Here is my recipe; this is what I used. Dumb? Interesting . .


MY RECIPE (at top of thread):

VBR
MAX: 8 Mb/sec
AVE: 5.044 Mb/sec . . now doing 5
MIN: default
Now doing single pass, not 2-pass
Audio: AC-3

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


Dumb?

Best regards,

Grazie


farss wrote on 5/12/2006, 4:24 AM
Ah,
now the penny starts to drop!

Sorry, with so many posts it's hard to sort out what you did versus what people were telling you to do.

OK,
well, according to my bitrate calculator, with those numbers it should have fitted. However, remember that menus also eat up space, and one thing to watch for: if you create a motion menu, DVDA will render the audio as PCM (even if there's no audio) unless you make the default audio AC-3. That can gobble up a few Mbytes!
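Roughly, the sum a bitrate calculator does looks like this (a Python sketch only, assuming 192 kbps AC-3 audio and ignoring the muxing and menu overhead mentioned above):

DVD_CAPACITY_BYTES = 4.7e9           # "4.7 GB" discs are decimal gigabytes

def encoded_size_bytes(minutes, video_avg_bps, audio_bps=192_000):
    seconds = minutes * 60
    return (video_avg_bps + audio_bps) * seconds / 8   # bits -> bytes

size = encoded_size_bytes(110, 5_044_000)   # 1h50m at 5.044 Mb/sec average
print(f"{size / 1e9:.2f} GB of {DVD_CAPACITY_BYTES / 1e9:.1f} GB")
# -> about 4.32 GB, so with those numbers it should indeed have fitted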

So let's see if I get this right, you encoded using those numbers but the file was too big to fit the DVD?

So you ran the file through DVD shrink and then voila, it fitted?
And DVD shrink said it ditched nothing in the process?

If all this is correct then you've aroused my curiosity.
Typically when I use the figures from Bitcalc I only get the DVD around 90% full, which is fine as I don't like getting them right to the limit anyway, though that varies quite a bit. I'm wondering if this has something to do with the audio part of the encode.

Need to do some more sexperimenting - research.

Bob.
Grazie wrote on 5/12/2006, 4:59 AM


"Sorry, with so many posts it's hard to sort out what you did versus what people were telling you to do."

Bob? It was MY initial post. See above . . please?


"So let's see if I get this right, you encoded using those numbers but the file was too big to fit the DVD?"

. . no no no . . after running it through and into DVDA3 and seeing I COULD get this on, I then wanted to ADD a small amount more . . something like 2 sets of 8 mins. Not a big ask. So I did various lowered rates on these 2, and only these 2, sets. It was coming down to EITHER one OR the other.

THEN Mike and others invited me NOT to be too concerned and just get DVDS to, well . . Shrink the finished DVDA3 project into 4.7 . . . and THIS IT DID!!! It did it. This I like!!! This is what I want from software - not having to fossick about with maybe/can/sometimes/wind-in-correct-direction type adjustments to the fatal area between AVE and MAX. Yes? DVDS in fact gobbled up ANOTHER outtake and ANOTHER menu too!!! Whereas DVDA3 was stubbornly hell-bent on fixing me at 5.3GB!! . . BTW, Bob, when DVDS had finished it did it in only 4.1GB.

Believe me Bob, I did variations and so on, DVDA3 would not get me UNDER 4.7.

So knowing what I do - granted it isn't as thorough and as deep as your understanding - I did apply what John schooled me to look at and what a plethora of others invited me to do - BUT! But it wasn't until Craftech really bent my ear about DVDS that I downloaded this FREE s/w, and EVEN while I was downloading and watching the s/w doing its slippery best with my masterpiece, I remained skeptical. But it WORKS! How it does it, and just how DVDA3 isn't doing this for me, remains a mystery.

As I said in my POST above, I got the 1 hour 50 mins in, but I wanted to add some outtakes. I could do one set but NOT the other, and definitely not both. And here I am having not only ADDED the 2 sets of outtakes but added a third PLUS another menu!!! This was registering 5.3GB on the DVDA3 "Richter" scale.


Did you mean this .. lol . ..

"Need to do some more . . ."


Grazie


farss wrote on 5/12/2006, 6:18 AM
Good now I get the picture.

I too have and use DVD Shrink, but for another purpose, thanks to DSE who pointed me in the right direction. Vegas has a problem if you try to join mpeg-2 on the T/L; it loses the plot for a few frames at the join. With DVDS you can make them into one BIG file and then Vegas will handle it just fine.

Anyway back to your problem.

So you encoded at one set of parameters and then needed to add more content and it all wouldn't fit?
So you could have encoded the main title again at a slightly lower bitrate and it would have fitted???
But instead you used DVDS to make the file a bit smaller and all was well?

What I'm confused about now is this:
====================================
Believe me Bob, I did variations and so on, DVDA3 would not get me UNDER 4.7.
====================================

Are you using DVDA to encode?

What I would have done is gone back to Vegas and encoded the main title at just a slightly lower average bitrate.

Then again what you did using DVDS probably achieved exactly the same thing but faster.

Bob.
Grazie wrote on 5/12/2006, 6:47 AM
"Then again what you did using DVDS probably achieved exactly the same thing"

. . . TARAH!


Yes, yes, yes, . . . and all I wanted was the extras added. I didn't want to have to re-encode 1h50min again!!! I just wanted the extras crunched down.

But what I did get was a far BETTER result.

I got to know about DVDS and just what it and DVDA3 - IN MY EXPERIENCE - can't do.

NO, I did NOT use DVDA3 to encode. All from Vegas. As I indicated, I created and experimented with various templates. I wasn't aware I could create templates under DVDA3? But maybe you didn't pick up that I was asking for, and was experimenting with, various templates?

Best regards

Grazie
farss wrote on 5/12/2006, 7:13 AM
This is what has me confused:
================================================
Believe me Bob, I did variations and so on, DVDA3 would not get me UNDER 4.7.
================================================




No you cannot create templates in DVDA, you can only get it to do CBR at a bitrate you define.
johnmeyer wrote on 5/12/2006, 7:40 AM
"The mpeg-2 encoder encodes frames not fields, in my case the flash was only one field long, so we have two adjoining frames and there is zero in common between them, trust me, that will totally spin out the encoder."

Bingo, Bob. This is one of the most important insights into MPEG-2 encoding, namely that it has all sorts of problems with interlaced material that do not show up with progressive material. This is one of several reasons why 24p film encodes (or 25 in PAL world) look SOOoo much better than what you get from interlaced video. The other, of course, is that you only have 24 objects a second to encode instead of 60 (two fields per frame times 29.97 fps, roughly speaking).

As to all the back and forth about what DVD Shrink is good for and whether it is better than simply encoding, using two pass at a lower bitrate, I think, Bob, you've also got that right: namely, that if you could get total control over the variable bitrate, so you could allocate lots of extra bits at problem points like fades, smoke, strobe flashes, etc., then you'd get the best possible result. Even without that, my test (in the other post) was designed to prove that 2-pass VBR Vegas encoding will beat DVD Shrink in quality. While it failed to prove much of anything, I think a better test would probably show that simply encoding with the correct template in the first place is going to give better results or, in some cases like my test, no worse than equal results, and will do so with less effort.

In my earlier post in this thread, I linked to the explanation from the DVD Shrink author himself as to how DVD Shrink works. If you read it, you will find that for small shrinking percentages, it is likely that you will see very little degradation. However, at some point -- and that point depends on the nature of the material and how it is encoded, so it will be different each time -- in technical terms, all hell breaks loose. The thing falls apart. I have seen this on movies that have been shrunk too much, and the result is pretty darn horrible. Much worse than a low bitrate encode (done properly).

As to workflow, a two pass VBR is going to take exactly twice as long as doing a single pass CBR at 8,000,000 bps. However, to get the best results from DVD Shrink, you MUST use both the "deep analysis" AND the adaptive error compensation. This dramatically increases the shrinking time from about ten or twenty minutes on up to two hours or more. This two hours pretty much balances the second pass on the VBR, so the total time for both is about the same. However, DVD Shrink requires a completely new step which must be done after the authoring, so it is more "fiddly," a term my English wife uses.

So, for me, the way I would recommend using DVD Shrink is for those cases where your bitrate calculator gives you an encoding rate that turns out to give you a project that is a few percent too large. Rather than starting all over and killing hours to re-encode (or several days if you did the encode and rendering at the same time and you have all sorts of fX and compositing on the timeline), you can instead just shrink the result and no one will be the wiser. By contrast, using DVD Shrink on a regular basis to shrink by factors of 35-50% or more is, I think, asking for trouble, and that trouble will show up as some really ugly video if DVD Shrink happens to hit the point at which it has to start shrinking "I" and "P" frames rather than just "B" frames (which is the point at which all hell breaks loose).
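To put rough numbers on that threshold (a sketch: the GOP layout and the I:P:B size ratios below are typical assumptions, not figures from DVD Shrink itself):

GOP = "IBBPBBPBBPBBPBB"                        # a common 15-frame DVD GOP
SIZE_RATIO = {"I": 6.0, "P": 3.0, "B": 1.0}    # assumed relative frame sizes

total_bits = sum(SIZE_RATIO[f] for f in GOP)
b_bits = sum(SIZE_RATIO[f] for f in GOP if f == "B")

b_share = b_bits / total_bits
max_b_saving = 0.5                 # assume requantizing can halve a B frame
print(f"B frames hold {b_share:.0%} of the bits,")
print(f"so B-only shrinking tops out near {b_share * max_b_saving:.0%}")
# Past that point the I and P frames have to be touched,
# which is where "all hell breaks loose."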

Grazie wrote on 5/12/2006, 8:15 AM
"So, for me, the way I would recommend using DVD Shrink is for those cases where your bitrate calculator gives you an encoding rate that turns out to give you a project that is a few percent too large."

Well, guys, and for whatever reason, this is almost what I did, and will be doing in future. I will encode as per normal: VBR, MAX 8, AVE 5.044-ish and MIN 0.192, 2-pass, and of course AC-3. And IF DVDA3 is getting testy about adding another 10-20 minutes, I'll use DVDS to repeat the success I've had. Of course this is on a main feature of 1:50 > 2:00 hours!!!

Grazie

Grazie wrote on 5/12/2006, 8:22 AM
So Bob, here is your answer about whether I was encoding within Vegas or not . . "Are you using DVDA to encode?"

As I was experimenting with templates, your "No you cannot create templates in DVDA, you can only get it to do CBR at a bitrate you define" gave you your own answer! That is, I must have been encoding within Vegas, as I was experimenting with templates!

TARAH!!!

So, using DVDS has given me an easy way of breaking through the 2 hour brick wall without the need to re-encode, as I was wanting to add some extras and DVDA3 wouldn't allow me.

Best regards,

Grazie

plasmavideo wrote on 5/12/2006, 8:50 AM
Grazie,

Thanks for starting this thread. It's been very informative on several levels.

Tom
Grazie wrote on 5/12/2006, 9:06 AM
Tom, thanks . . And precisely on what level would that be? Constant Bit-ranting or what?


.. nah .. joshin' !

Have a good weekend!

G
plasmavideo wrote on 5/12/2006, 9:28 AM
Sexperimenting and constant bit-ranting fer two!
Jayster wrote on 5/12/2006, 10:04 AM
"Now at CBR the encoder can't make any use of the surplus bytes from the still part to handle the dissolve. Probably at 8Mb/sec it really doesn't matter. But at say 5Mbit/sec it will, two pass VBR gives the encoder the chance to better optimise where the bit budget gets allocated, even more passes can improve that."
Very informative thread! As I understand from this thread, the "bit budget" is the difference between AVE and MAX bitrates, since the encoder will never be allowed to exceed your setting for MAX. So it means the VBR pass can be cruising along at AVE bitrate, and then climb up to the MAX bitrate when it encounters some serious motion. Then it settles back down to AVE, and goes lower sometimes to put bits back into the bank. Thus it gets high bitrate when necessary, and encodes more efficiently (in terms of file size) when it doesn't need that much bitrate.
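A toy model of that "bit bank" idea (illustration only; a real encoder does this with motion statistics gathered on an analysis pass, not like this):

MIN_BPS, AVE_BPS, MAX_BPS = 2_000_000, 5_000_000, 8_000_000

def allocate(complexities):
    """Give each second a rate proportional to its complexity (0..1),
    clamped to MIN/MAX, while pulling the overall mean toward AVE."""
    raw = [MIN_BPS + c * (MAX_BPS - MIN_BPS) for c in complexities]
    scale = AVE_BPS * len(raw) / sum(raw)
    return [max(MIN_BPS, min(MAX_BPS, r * scale)) for r in raw]

# a still scene, then a fast dissolve, then still again
rates = allocate([0.1, 0.1, 0.9, 1.0, 0.2, 0.1])
print([f"{r / 1e6:.1f}M" for r in rates])
# -> low rates bank bits during the stills; the dissolve spends them at MAX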

Seems like it can be assumed that VBR is only useful if you need to squeeze in more video than would fit on a disc at high CBR bitrates. For a smaller project (where there's no need to budget the bitrate), it sounds like you can simply set a CBR equal to the MAX bitrate you would have set for VBR in the first place. Quality will never be anything less than what the VBR encode can achieve when MAX and CBR are the same. Encoding would be faster with CBR, too. Again, all this CBR stuff is only for the cases where you aren't forced to squeeze a lot of video onto one disc.

For VBR, it also sounds like (if I read this correctly) you don't want to have a really small gap between the AVE and MAX bitrates. Like, if AVE is 7 Mb/sec and MAX is 7.2 Mb/s, you aren't utilizing the real benefits of VBR.

"the bitrate went from 192K to 8M in a few frames and the players simply could not track the disk. So my first word of advice, don't set the minimum bitrate too low or you can have problems."

That sounds like a very legitimate issue, a reason to avoid having a huge gap between min and max bitrates.
farss wrote on 5/12/2006, 1:50 PM
John,
I don't think anyone would dispute that progressive material is easier to encode than interlaced, and let's not forget that MPEG is the Moving Picture Experts Group and not the Video Experts Group.

However it's arguable whether 24fps is easier to encode than 30fps, as between any two frames at 24p there'll be a greater difference than at 30fps.

The other big problem with 24p and mpeg-2 is that the standard only supports NTSC frame resolution. However 25p and 30p can still be encoded from 25PsF and 30PsF with the same benefits.

Bob.
johnmeyer wrote on 5/12/2006, 2:41 PM
Jayster,

I think you have it right on every point. As to not setting the minimum all the way down to 192, that advice comes from a very reputable source, and I don't have anything to refute it. However, I am not certain I would agree with it either, for two reasons. First, I doubt very much that the circuitry would have any problem making a transition from one bitrate to another. If it can handle 9,800,000 bps (which is required by the DVD spec) then if it is idling along at 192,000 and suddenly has to go back up to 8,000,000, I don't see how that is going to be a problem. This isn't a mechanical system where there is inertia involved.

My second reason for not wanting to agree with this is that 192,000 is the default minimum set by Sony/Mainconcept for the DVD Architect templates. I just opened up Vegas and selected this template, just to make sure, and these are the defaults (cutting/pasting directly from the dialog to this post):

Maximum: 8,000,000
Average: 6,000,000
Minimum: 192,000

It has been this way since Vegas 4, so if there was a big problem, I assume Sony/Sonic would have changed it.

Having said that, I just checked the Mainconcept external encoder, and the default minimum for their DVD template is 2,500,000. I just checked my old TMPGEnc, and the default minimum for the DVD template is 2,000,000.

I just did a quick Google search for more data, and the only thing I can find is lots of warnings about not using zero for the minimum.

I can tell you that I've always encoded with 192,000 minimum and sent out DVDs to hundreds of people, and never gotten a complaint. I DO use very good media, and this is clearly the single most important thing in getting discs that will play without problems.

riredale wrote on 5/12/2006, 2:48 PM
The MPEG2 standard can encode for either progressive or interlaced material. If interlaced, it breaks down the frame into two fields and treats them as separate and distinct images. So I see no reason why MPEG2 would theoretically suffer from interlaced material. In fact, one of the principal reasons for MPEG2 to exist at all was to be able to efficiently encode interlaced stuff.

Here's one reference site on the Internet that I found regarding this in general terms.

Secondly, I don't think going from very low to very high bitrates is much of a problem for most DVD players. My cheapo Apex has a slight issue with it, but it's the only player I've seen that has any stumble.

johnmeyer wrote on 5/12/2006, 3:15 PM
"However it's arguable whether 24fps is easier to encode than 30fps, as between any two frames at 24p there'll be a greater difference than at 30fps."

That is a very good point, but the problem is more subtle than that. See my answer to the post that followed yours (below).

"The MPEG2 standard can encode for either progressive or interlaced material. If interlaced, it breaks down the frame into two fields and treats them as separate and distinct images. So I see no reason why MPEG2 would theoretically suffer from interlaced material. In fact, one of the principal reasons for MPEG2 to exist at all was to be able to efficiently encode interlaced stuff."

The link you provided describes how MPEG-2 was designed to accommodate interlaced, whereas its predecessor (MPEG-1) was not. This is not to say that it excels at this.

I have just spent the past few months wrestling with this, and there are some subtle things that make the encoding of NTSC interlaced an issue. First, you have more "objects" to deal with per second. In round numbers, 60 compared to 24. It is true that the inter-frame differences are smaller, but there are more of them. If you encode an "I" frame (the complete, standalone frame) every 15 frames, that is once every 15 "objects" for progressive 24 fps, but only once every 30 objects for interlaced. With fewer objects, you can allocate far more bits to each object, and to the difference vectors. I am not enough of a math major to be able to explain the tradeoff, but empirically, the results sure seem better when you have more bits for each object, even if the difference vectors are greater. It is not a linear tradeoff, I don't think.
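The arithmetic behind the "more bits per object" point, at a typical DVD average (a rough illustration, not an encoder measurement):

AVG_BPS = 6_000_000    # a typical DVD average bitrate

for label, objects_per_sec in [("24p film (frames)", 24),
                               ("NTSC interlaced (fields)", 60)]:
    print(f"{label}: {AVG_BPS / objects_per_sec / 1000:.0f} kbits per object")
# 24p gets ~250 kbits per object versus ~100 kbits at 60 fields/sec --
# 2.5x the headroom, even before the larger inter-frame differences
# eat some of it back.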

The bigger deal is the subtle issue having to do with the spatial placement of fields. The two fields that make up a frame of interlaced video are not spatially in the same place. If you wanted to create a pathological case to show what this does to encoding, imagine taking a picture of thin horizontal alternating black and white stripes, aligned so that the white would be photographed by the even scan lines and the black by the odd scan lines. If you separated each frame of video into fields, the even fields would be entirely black, and the odd fields all white. If the encoder tries to treat each field as though it were a frame half the height of a normal frame, it will go nuts trying to figure out what to do as the "frames" go from black to white to black ...

If the encoding instead encodes from even to even and then does separate odd to odd encoding, and tries to combine them together, you end up with similar issues in the time domain. This is what makes interlacing so tricky: you have both spatial issues, because each field is one scan line higher/lower than the other; and you have temporal issues because each field is 1/60 of a second delayed. Fortunately, the additional issue that used to be true is no longer with us, namely that in older scanning type cameras (the original TV cameras up through the Vidicon) each individual scan line was created at a later moment in time than its predecessor. With the advent of CCD and CMOS imaging, all the scan lines in a field are captured at the same moment in time. This is how you can have a variable "shutter" speed in modern cameras, a concept that had no meaning with cameras that used scanning imaging. Of course the motion created by these modern cameras is subtly different from their older ancestors, but that's a whole 'nother issue.

This last point is admittedly not entirely fair because it involves the telecine process, but I can tell you that if you record a movie off the air and then encode it to DVD, you will not be happy with the result. By contrast, if you do Inverse Telecine (IVTC) to remove the pulldown, encode the resulting 24p using the progressive template, and then have the DVD player reinsert the pulldown during playback, the difference is night and day. Not subtle in any way. I learned this long before I ever owned my first DVD burner and was encoding onto CDs using VCD and SVCD templates in TMPGEnc. The stuff looked horrible, when the original source was film, until I learned how to properly use IVTC. Once I figured that out, I created SVCDs that, even when I look at them today, stand up very well indeed to that same material encoded on DVD. The point is, the lower the bitrate (and SVCD is very low), the more you are going to notice these differences.
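For reference, the 2:3 pulldown cadence the player reinserts looks like this (a sketch of the field pattern only):

def pulldown_23(film_frames):
    fields = []
    for i, frame in enumerate(film_frames):
        repeat = 2 if i % 2 == 0 else 3    # alternate 2 and 3 fields per frame
        fields.extend([frame] * repeat)
    return fields

print(pulldown_23(["A", "B", "C", "D"]))
# -> ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
# 4 film frames become 10 fields (5 video frames): the 24 -> 30 fps ratio.
# IVTC simply undoes this, so the encoder only sees the 24 original frames.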

farss wrote on 5/12/2006, 3:50 PM
John,
I've very much had a problem with 192K as the minimum.
I had several seconds of pure black and then a cut to in-camera black. The camera was in fact irised down and then the iris opened onto an almost dark stage. At least one NEW DVD player refused to play past this point. The real bummer was it was owned by the head of the establishment we were making these DVDs for. From memory, a Samsung VHS/DVD recorder combo unit.
Analysis of the encoded file revealed a transition from 192K to OVER 8M within one frame time (encoders regularly go over max!).
This is a mechanical process: when the bitrate is very low the disk slows down. Listen closely on some DVDs; it can be quite noticeable!
Re-encoding the whole program with the minimum at 2M fixed the problem.

I've noticed the same thing with Victor's 'Light It Right' DVD: it plays fine, but the sound of the player (a new and expensive Sony) ramping up and down can be distracting; mostly this happens at the static titles.

Bob.
johnmeyer wrote on 5/12/2006, 5:33 PM
"This is a mechanical process: when the bitrate is very low the disk slows down. Listen closely on some DVDs; it can be quite noticeable!"

Very interesting. I'll have to look into this further. I would have thought the DVD player's buffer would be big enough to avoid having to rely entirely on the motor speeding up and slowing down. Certainly it doesn't go up and down and up and down in speed as the bitrate constantly changes. It couldn't do that, because there are dozens of changes every few seconds. Not saying you're wrong, but just wondering if maybe there isn't something more to the story.

Grazie wrote on 5/12/2006, 9:50 PM
Eh? - Like a moth to the light I got attracted back, by some ideas.

Proposition: Beyond the 2-pass function, is there such an elegant solution that, after scanning a file, it could then "hold" a self-adjusting rate?

At the risk of also attracting my colleague's castigation or opprobrium, how about a variable average? Meaning, what I see appearing within this discussion is that there IS a need to adjust the figures to AND for each and every project during EACH and EVERY scan. Yes, we can use the bit calculator and get an initial fixed-in-stone value for the settings. A good start, and one that would appear to suit most instances. However, there does appear to me something that is coming to light, and that is we are "suggesting" the need for some complex and infinitely adjustable self-monitoring and self-adjusting rate index that WOULD allow for these variables. Presently the MAXIMUM is set, the AVERAGE is set and lastly the MINIMUM is set. Wouldn't it be good if there was some software that actually read/scanned, created a most appropriate variable profile, THEN went ahead and applied it?

What would the benefit be? Well, the file would be encoded/compressed using the most economical, self-adjusting template - it being DRIVEN by the content - and hopefully the result would be a reduced size.
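Sketched in Python, that "scan first, then apply a content-driven profile" idea might look something like this (a toy, with made-up per-scene difficulty numbers; as the next post notes, it's roughly what multi-pass encoding already attempts):

def scan(scene_motion):                    # pass 1: measure the content
    total = sum(scene_motion)
    return [m / total for m in scene_motion]

def apply_profile(profile, budget_bits):   # pass 2: spend the bit budget
    return [share * budget_bits for share in profile]

motion = [1, 1, 8, 10, 2, 1]               # per-scene 'difficulty' from the scan
bits = apply_profile(scan(motion), budget_bits=4.3e9 * 8)   # ~4.3 GB in bits
for i, b in enumerate(bits):
    print(f"scene {i}: {b / 8 / 1e6:.0f} MB")
# A real encoder would still have to clamp each scene to MIN..MAX,
# which is where the scheme runs into the limits described below.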

Grazie
farss wrote on 5/13/2006, 12:19 AM
As I understand it that's just what multi pass encoding does.
An 'average' is just an average, there's no such thing as an instantaneous average. The term average only applies over a population or time interval. So at any point in time along an mpeg-2 file there is simply a bit rate.
We tell the encoder to do the best it can without going below or over a certain value and to achieve an average value overall. The BIG question as I see it is what is the averaging period, a few frames or the whole movie?
To that I don't have an answer, but I'd assume the purpose of, say, running a 100-pass encode is to have a really good stab at it.

I suspect what makes the whole process very tricky is that there's more than one form of compression being used. Feel free to do some in-depth reading.
In summary though there's both spatial and temporal compression being used here, at times perhaps the encoder would do better to reduce spatial resolution to reduce macroblocking due to motion.

Overall I'd imagine some maths genius could write a program that in a single pass produced the optimum bitrate; I suspect on our lowly number crunchers it might still be running long after we've expired.

The thing I don't understand is this.

During a 2-pass encode Vegas and the MC encoder seem to traverse the file only once, so how is this two-pass? Or is it making two passes over small sections?
Grazie wrote on 5/13/2006, 12:47 AM
"The BIG question as I see it is what is the averaging period, a few frames or the whole movie?" . . . well yes, that IS my point - a MOVABLE average for this MOVABLE feast. I guess you don't get what I'm saying . . :-(


"Feel free to do some in-depth reading."

Really? .. Gee thanks Bob, you're too kind! ;-)

"During a 2-pass encode Vegas and the MC encoder seem to traverse the file only once, so how is this two-pass? Or is it making two passes over small sections?"

Well . . my experience, and I've had a lot of it of late, is that the Encoder takes twice as long AND I can see it repeat the whole process in the Preview AGAIN! Are you saying that you SEE this in the Preview but are not noticing a repeat of the same frame numbers? Please explain. Maybe you DON'T have 2-Pass invoked? Maybe your encoder is broke?

Grazie
farss wrote on 5/13/2006, 2:47 AM
1) You good, me bad. 2-pass VBR does go through the file twice; it's been a long time since I watched an encode despite doing 1000s of them. They're left to run at another location, thanks again to Peach Rocks Multirender - shameless plug, but I've made the cost back a 100 times over.

2) Re the variable average idea. Yes, sounds good except I just tried a test.
Created a .veg of 50 seconds of black followed by 10 seconds of really bad Vegas-generated noise. Encoded using one of my templates, first 1-pass and then 2-pass. The 2-pass file is bigger.
Then graphed the bitrate using Bitrate.

1-pass runs at the defined minimum for 50 seconds, then JUMPS to max with minor overshoot; there's a slight sag after the overshoot and it ramps up a little until the end.

2-pass is exactly the same for the first 50 seconds, then it shoots up a little faster, but it overshoots more than 1-pass and then sits at the max until the end.

Neither of these files comes even vaguely close to the defined average.

You see, when there's nothing happening the encoder is constrained by the minimum bitrate; it cannot save that part of the bit budget for a rainy day. When the poop hits the fan at 50 seconds there's plenty of space left in the file, but again the encoder is constrained by the maximum bitrate. All that'd improve the result is a higher max bitrate in this case, but we cannot go there due to physical limitations in the player. Some applications like D-Cinema use mpeg-2 at, I think I read, 100 Mb/sec; you can see why.

Now my test isn't very real world by a long stretch. However it does illustrate conventional wisdom. At average bitrates over 5 Mb/sec, 2-pass is unlikely to gain you anything; what'll kill you is the maximum bitrate, if anything. Hopefully some part of the video is 'slow' enough to perhaps leave some room for the fast action part, but if it's really bad stuff like heaps of noise or a dissolve through smoke, your problem is the maximum bitrate.
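The numbers behind that test make the point (a sketch; the min/max/average figures are typical template values, not the exact settings used above):

MIN_BPS, MAX_BPS, AVE_BPS = 192_000, 8_000_000, 5_000_000

black = 50 * MIN_BPS       # 50 seconds pinned at the minimum
noise = 10 * MAX_BPS       # 10 seconds pinned at the maximum
achieved = (black + noise) / 60

print(f"achieved: {achieved / 1e6:.2f} Mb/sec vs defined {AVE_BPS / 1e6:.0f} Mb/sec")
# -> about 1.49 Mb/sec: clamped at both ends, so no number of passes
#    can steer this file anywhere near the defined average.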
At low average bitrates, where the average is constraining, optimisation of the bit budget does have an effect. Now we're into a best-fit calculation, and from my knowledge these are not trivial calculations: an associate of mine wrote such a beast to crunch a relatively trivial best fit and it took serious CPU grunt; to run it on motion vectors etc. over long video would probably take forever.
I don't know if there's better algorithms today, but how his worked was to calculate the standard deviation for every possible combination and select the combination with the lowest deviation.
Multipass encoding is probably quicker, but beyond two passes I wonder how much is gained. Obviously someone thinks it worthwhile; they're paying $1000s for encoders that'll do it, to say nothing of waiting long times for the encoding.

As I think I mentioned before, though, you can hand-optimise the encoding: TMPGEnc (and other apps) will stitch mpeg-2 files. You just split the encoding task into parts, encode with different average values and then join the files. Sometimes grey matter works faster than the best CPUs.
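The arithmetic behind that hand-optimisation is just a duration-weighted average (illustrative figures below, video only; audio comes on top):

segments = [             # (minutes, average bps) -- illustrative values
    (90, 5_500_000),     # main feature, encoded higher
    (20, 3_500_000),     # extras, encoded lower
]

total_bits = sum(mins * 60 * bps for mins, bps in segments)
total_secs = sum(mins * 60 for mins, _ in segments)
print(f"overall average: {total_bits / total_secs / 1e6:.2f} Mb/sec")
print(f"video size: ~{total_bits / 8 / 1e9:.2f} GB")
# -> ~5.14 Mb/sec overall and ~4.24 GB: juggle the per-part averages
#    until the joined file lands where you want it.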

And by the way, you cannot have an instantaneous average; you can have an average using a sliding window.

Grazie wrote on 5/13/2006, 3:15 AM

"And by the way, you cannot have an instantaneous average . . ."

Fine!!

. . ". . you can have an average using a sliding window." OK, I'll go with this then . . 1st PASS works out a 'Sliding Window' and then the 2nd PASS applies it.

G