Question on Upscaling From 25 MPS to 50 MPS.

gsealy wrote on 5/6/2014, 8:17 PM
Just a general question -- If you have a project in which the video is shot at 25 MPS, what happens in SVP when it is rendered at a higher bit rate, say 50 MPS? Is the output truly 50 MPS? Does SVP perhaps "interpolate" so that the video is smooth? Or is the video just poor?

As a related question what happens when say two cameras deliver 50 MPS, but a third camera delivers 25 MPS. The first two are on the first two tracks and the third is on the third track.

The completed video is a composite of the 3 tracks and rendered at 50 MPS. Is it basically junk from a 50 MPS point of view?

Thanks.

Comments

Chienworks wrote on 5/6/2014, 8:46 PM
It will look better if you render at 50mbps than if you render at 25mbps. However, both will look worse than the original. It's just that the 50mbps version will look less worse than the 25mbps version.

In general, these three things are true:
- rendering to a higher bit rate will always look better than rendering to a lower bit rate
- no matter what the output bitrate is, it will look worse than the original
- both of the above are true no matter what the source bitrate is.

Takeaway is, unless you're in a smart-rendering situation, there's no "magical link" between input and output bitrates. The video editing/rendering engine always decompresses the source into an uncompressed stream, then recompresses to generate the output.
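Vegas doesn't expose those two stages, but you can see the same decompress-then-recompress pipeline with a minimal sketch using ffmpeg as a stand-in (assuming ffmpeg with libx264 is on your PATH; the file names and the 1080p25 geometry are hypothetical):

[code]
import subprocess

# Step 1: fully decode the compressed source into uncompressed frames.
# (Raw YUV is enormous; only try this on a short clip.)
subprocess.run([
    "ffmpeg", "-y", "-i", "source_25mbps.mp4",
    "-f", "rawvideo", "-pix_fmt", "yuv420p", "raw_frames.yuv",
], check=True)

# Step 2: compress the uncompressed stream at the chosen output bitrate.
# Nothing in this step "knows" what the source bitrate was.
subprocess.run([
    "ffmpeg", "-y", "-f", "rawvideo", "-pix_fmt", "yuv420p",
    "-s", "1920x1080", "-r", "25", "-i", "raw_frames.yuv",
    "-c:v", "libx264", "-b:v", "50M", "output_50mbps.mp4",
], check=True)
[/code]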
NormanPCN wrote on 5/6/2014, 9:12 PM
- no matter what the output bitrate is, it will look worse than the original

I would disagree with this in some specific conditions.

When comparing bitrates, one must compare the same codec for source and output. Different codecs have different characteristics. For example, a 25Mbps AVC stream should be significantly better than a 25Mbps mpeg-2 stream.

I have mostly worked with AVC and my comments are based on that experience.

In AVC once my encode bitrate is close to the camera bitrate I see pretty much identical visual quality. I have to pixel peep at static images to see differences. Mathematically they are certainly different.

I have never seen anything get better with a higher bitrate encode than camera source bitrate. Higher than camera bitrate seems wasteful, but harmless.

Typically, encodes from our computers can reach visually similar quality at lower bitrates than the camera source, because camera output is only minimally compressed by comparison. Cameras do not have the CPU power or the time to do the compression analysis that PC encoders do; I am specifically referring to the various kinds of inter-frame compression analysis. Having said this, material that benefits less from inter-frame compression needs a bitrate really close to the camera's to compare.

Your mileage may vary. Your eyes are always the best judge. You can do your own encode tests.
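If you want to run those encode tests, here is a minimal sketch (assuming ffmpeg with libx264 on your PATH; the source file name is hypothetical) that encodes a ladder of bitrates and scores each against the source with ffmpeg's ssim filter:

[code]
import subprocess

SOURCE = "camera_source.mp4"

for mbps in (12, 25, 37, 50):
    out = f"test_{mbps}mbps.mp4"
    subprocess.run(["ffmpeg", "-y", "-i", SOURCE,
                    "-c:v", "libx264", "-b:v", f"{mbps}M", out], check=True)
    # The ssim filter prints an "All:" score to stderr; ~1.0 means
    # visually near-identical to the source.
    result = subprocess.run(
        ["ffmpeg", "-i", out, "-i", SOURCE, "-lavfi", "ssim", "-f", "null", "-"],
        capture_output=True, text=True)
    print(mbps, "Mbps:", result.stderr.strip().splitlines()[-1])
[/code]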
musicvid10 wrote on 5/6/2014, 9:31 PM
Your terminology is totally unfamiliar.
Was your footage shot at 25 Megabits per second (Mbps)?
Or are you talking about FRAME RATE (fps)?
"Upscaling" refers to neither of those terms, but to frame resolution.

Using precise, standard terminology will save you and others a lot of confusion.
farss wrote on 5/7/2014, 2:30 AM
Assuming you meant Mbps when you said MPS: on the camera side of things, 25Mbps video is mostly 4:2:0 chroma sampling with compressed audio, while 50Mbps is 4:2:2 with uncompressed audio.

For sure, if you render from 4:2:0 to 4:2:2 your video will not magically look better; however, text and graphics may. The Vegas pipeline itself can be described as 4:4:4, as it's RGB. Visually it's hard to see the difference between 4:2:0 and 4:2:2 unless you're chroma keying or doing compositing.
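As a toy numpy illustration of that chroma-resolution difference (generic subsampling math, not anything Vegas-specific): 4:2:2 halves chroma resolution horizontally only, while 4:2:0 halves it both ways, so a sharp horizontal colour edge survives 4:2:2 but smears under 4:2:0.

[code]
import numpy as np

# One 8x8 chroma plane with a hard horizontal colour edge at an odd row:
# rows 0-2 are 0, rows 3-7 are 255.
chroma = np.zeros((8, 8))
chroma[3:, :] = 255.0

def subsample(plane, fx, fy):
    """Average fx-wide, fy-tall blocks down, then repeat back up."""
    h, w = plane.shape
    small = plane.reshape(h // fy, fy, w // fx, fx).mean(axis=(1, 3))
    return small.repeat(fy, axis=0).repeat(fx, axis=1)

err_422 = np.abs(subsample(chroma, 2, 1) - chroma).max()  # 4:2:2
err_420 = np.abs(subsample(chroma, 2, 2) - chroma).max()  # 4:2:0

print("4:2:2 max edge error:", err_422)  # 0.0, vertical chroma detail kept
print("4:2:0 max edge error:", err_420)  # 127.5, the edge smears
[/code]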

Bob.
gsealy wrote on 5/7/2014, 9:54 AM
Thanks. Yes, I was loose in my terminology. Sorry for that.

I was talking about a hypothetical situation in which the video was shot at 25 Mbps and yet the final video from Sony Vegas Pro is rendered at 50 Mbps. I wanted to know what actually happens.
Laurence wrote on 5/7/2014, 10:00 AM
At every generation of data compression there is damage. Having said that, some of the high end intermediate formats (like Cineform, DNxHD and ProRes) are designed to go many generations without significant damage. When your footage was shot and encoded at 25Mbps, there was a certain amount of damage done. Rendering it to a 25Mbps format will do this damage again. Rendering it to 50Mbps will do less damage on that successive generation. Rerendering into another format will never actually improve the footage.
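A rough way to watch that per-generation damage accumulate, sketched with ffmpeg (assumed installed; file names hypothetical): re-encode the same clip through several generations at a fixed bitrate and check PSNR against generation zero each time.

[code]
import subprocess

current = "generation_0.mp4"  # the camera original
for gen in range(1, 6):
    nxt = f"generation_{gen}.mp4"
    subprocess.run(["ffmpeg", "-y", "-i", current,
                    "-c:v", "libx264", "-b:v", "25M", nxt], check=True)
    # The psnr filter reports an "average:" figure on stderr; it should
    # creep downward as the generations accumulate.
    r = subprocess.run(["ffmpeg", "-i", nxt, "-i", "generation_0.mp4",
                        "-lavfi", "psnr", "-f", "null", "-"],
                       capture_output=True, text=True)
    print("generation", gen, "->", r.stderr.strip().splitlines()[-1])
    current = nxt
[/code]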
musicvid10 wrote on 5/7/2014, 10:14 AM
A higher bitrate can reduce encoding losses up to a point, but not eliminate them.
There is a point of diminishing returns, above which there is no further help possible. It is actually a rather steep dropoff in net gains, from psnr/ssim tests I ran a few years back.

What the optimal bitrate is for your video depends on a lot of things, but I suspect that doubling the bitrate is well into the overkill category.

You can never improve the quality of your source video by encoding at higher bitrates.

gsealy wrote on 5/7/2014, 10:16 AM
Thanks for your answer. I really appreciate the time you took in responding.

So it seems from your answer that Sony basically adds in the extra bits of information to take 4:2:0 to 4:2:2.

Our situation is that we have cameras that do 4:2:2 recording. However, sometimes we might want to mix in footage for a video that was shot at 4:2:0. It happens. It seems as though while that piece will not look better at 50 Mbps rendering, it won't look worse than it did at 25 Mbps.

Any further insights and comments are greatly appreciated!
Laurence wrote on 5/7/2014, 10:28 AM
Yes, you have got the idea. These video standards have a lot to do with engineers trying to make the same compromises as our eyes, so as to minimize the amount of data while preserving the things you actually see and notice. The 4:2:2 colorspace preserves more detail than you usually see, but there is an exception to this, and that is very sharp colored edges. This can matter on text overlays and on chromakeying, where sharp green edges are suddenly important. Footage shot in 4:2:2 won't necessarily jump out at you as being noticeably better quality, but when you go to chromakey it, that extra green information will make a world of difference in how the edges of the key look.

As was discussed in a thread not too long ago, there is also the issue of the amount of detail in RGB, which Vegas uses, versus the YUV encoding that most encoding formats use. 8 bits of RGB is approximately equal in resolution to 9 bits of YUV encoding, so there actually is an advantage to rendering to a ten bit format after any sort of color correction or grading in Vegas (even though Vegas is structurally limited to 8 bit resolution and rounds down to 8 bits on 10 bit renders). You will see this quite clearly in the smoothness of any gradients in the render.
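A toy numpy check of that range math (an illustration of studio-swing quantization generally, not of Vegas's actual pipeline): squeeze a full 0-255 grey ramp into 8-bit limited-range luma and back, and count the surviving levels.

[code]
import numpy as np

rgb = np.arange(256)                                # full-swing grey ramp
y8  = np.round(16 + 219 * rgb / 255).astype(int)    # 8-bit studio-swing luma
back = np.round(255 * (y8 - 16) / 219).astype(int)  # decode back to RGB

print("distinct levels in:", len(np.unique(rgb)))    # 256
print("distinct levels out:", len(np.unique(back)))  # 220 -> visible banding
[/code]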

Most of us don't worry about 4:2:2 unless we are chroma keying.
musicvid10 wrote on 5/7/2014, 2:49 PM
Bitrate and chroma subsampling are completely independent considerations.
Subsampling is one of many factors that influence bitrate, but not the other way around.
Red Prince wrote on 5/7/2014, 3:45 PM
I wanted to know what actually happens.

That depends. If all you do is import the footage at 25 Mbps and export it at 50 Mbps, not only will you get no improvement, you may actually lose some minor detail.

If, on the other hand, you import it and modify it somehow, e.g., by color grading it or adding some text or combining it with some other footage, etc, then and only then will 50 Mbps look better than 25 Mbps.

So the point is modification. Re-rendering it without modification is a complete waste. Rendering some modified footage, however, is actually new footage with additional detail, so the higher the bit rate the better.

He who knows does not speak; he who speaks does not know.
                    — Lao Tze in Tao Te Ching

Can you imagine the silence if everyone only said what he knows?
                    — Karel Čapek (The guy who gave us the word “robot” in R.U.R.)

Laurence wrote on 5/7/2014, 4:00 PM
I only use intermediates for editing when I have heavy noise, Beauty Box, or color correction which will make the timeline feel sluggish. When I do, I do my footage processing as I make the intermediates, since the things like noise and detail in the shadows are the things most damaged by the extra generation.
farss wrote on 5/7/2014, 4:56 PM
Laurence said:
[I]"even though Vegas is structurally limited to 8 bit resolution and rounds down to 8 bits on 10 bit renders"[/I]

Not so; Vegas offers a couple of options to use a 32-bit floating point pipeline which will not round 10-bit values to 8 bit. I tested this a few years back and it really does work.

We should all give [I]some[/I] consideration to acquiring 4:2:2; the BBC and others mandate 4:2:2 for acquisition of any content they're funding. Today one can buy a camera that'll record 4:2:2 for less than an HD camcorder cost when HD first started. The downside isn't the cost of the camera; it can be the cost of the recording media.

Bob.
Chienworks wrote on 5/7/2014, 8:21 PM
"If, on the other hand, you import it and modify it somehow, e.g., by color grading it or adding some text or combining it with some other footage, etc, then and only then will 50 Mbps look better than 25 Mbps."

Not entirely exactly true. Whether you modify it or not, rendering to 50mbps will look better than 25mbps. It may not look much better; in fact the difference may be invisible to the human eye, or it may be plainly visible. But the 50mbps version *will* be better.

The major point to understand is that neither will be better than the original, no matter what bitrate the original is.
Red Prince wrote on 5/7/2014, 10:47 PM
Not entirely exactly true.

Oh, yes, it is. And here is why:

Whether you modify it or not, rendering to 50mbps will look better than 25mbps.

vs.

The major point to understand is that neither will be better than the original

Sorry, you can’t have it both ways. :)

We are talking about a 25 Mbps original here. If you do not modify it and re-render it at 50 Mbps, it will not look any better and might even lose some minor details.

Since the original was rendered at 25 Mbps, it threw away some detail to achieve that compression. And that detail is gone forever. Importing it to some software will not magically restore whatever the original compression threw away, except on Law and Order. And since it is not there, it will still not be there if you now render it at 50 Mbps, but the 50 Mbps may throw away some other detail.

I stand by what I said: Re-rendering the unmodified 25 Mbps original at 50 Mbps will not only not improve it, it may actually hurt it. And if it does not, it is a complete waste because now you have a larger file with no additional image information.

P.S. Compressing to 25 mbps or even 50 mbps would be worse than compressing to 25 Mbps. 50 mbps = 50 millibits (0.050 bits) per second, which is on the order of half a billion times less information than 25 Mbps, which is 25 megabits (25,000,000 bits) per second: 25,000,000 / 0.05 = 5×10⁸. The units are case sensitive.


Chienworks wrote on 5/8/2014, 7:30 AM
Red Prince, when you say "We are talking about a 25 Mbps original here. If you do not modify it and re-render it at 50 Mbps, it will not look any better and might even lose some minor details.", you are actually agreeing with me completely, and in fact agreeing with the very thing to which you respond "Sorry, you can’t have it both ways. :)"

I think your difficulty is that you are misunderstanding me. Let me restate with a little more emphasis: "Whether you modify it or not, rendering to 50mbps OUTPUT will look better than 25mbps OUTPUT."

I never said 50mbps output would look better than 25mbps input. I stand by exactly what I said, which is that rendering to a higher bitrate output will always look better than rendering to a lower bitrate output. This holds true no matter what the input bitrate is, and whether you've modified the video stream or not.
Red Prince wrote on 5/8/2014, 9:03 AM
Unfortunately, the math says otherwise. :)

When compressing to 25 Mbps, certain, presumably visually less important, data is replaced with zeros. The zeros are not stored in the file/stream, the decompression software knows it and just uses zeros in its math. That data is no longer recoverable.

When you then re-compress it at 50 Mbps, the zeros are still zeros, but now you do not throw them away, just to fill the size out to 50 Mbps, which is a total waste of bandwidth with no improvement of the image. The data that was thrown away does not magically reappear. It is mathematically impossible.
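Here is a toy sketch of that quantization step on a generic 8x8 DCT block (illustrative only; H.264 actually uses an integer transform, and real encoders do far more than this). Coarse quantization zeroes coefficients, and re-quantizing with a finer step recovers nothing:

[code]
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(np.float64)

coeffs = dctn(block, norm="ortho")
q_coarse = np.round(coeffs / 40) * 40  # coarse step: many zeros appear
print("coefficients zeroed:", int(np.count_nonzero(q_coarse == 0)), "of 64")

# Re-quantizing the already-coarse data with a finer step changes nothing:
q_fine = np.round(q_coarse / 10) * 10
print("finer re-quantization identical:", np.allclose(q_fine, q_coarse))
[/code]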

Now, if your original was of great quality and was not compressed (or used non-lossy compression), then and only then would compressing that data at 50 Mbps give you more detail than compressing it at 25 Mbps. But that is not what the OP was asking about.


musicvid10 wrote on 5/8/2014, 2:04 PM
Red Prince,
Having spent many hours running PSNR/SSIM tests to find the optimum recompression bitrates for compressed source, calculated as a factor of the source, not once did I get results resembling anything like you are suggesting. The "math" you quote does not apply to predictive encoding, sorry to say.

Higher recompression bitrates improve quality up to a point, but never reaching that of the source.
The graph of diminishing returns is not unlike y = 1 - e^(-x) (in Quadrant I, of course).
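As a quick toy evaluation of that curve shape (purely illustrative; x is the output/source bitrate ratio and y a quality proxy, where 1.0 would be source parity and is never reached):

[code]
import math

# x = output bitrate as a multiple of the source bitrate,
# y = quality proxy; gains flatten quickly past 1x.
for ratio in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(f"{ratio:4.2f}x source bitrate -> quality proxy {1 - math.exp(-ratio):.3f}")
[/code]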

Red Prince wrote on 5/8/2014, 3:27 PM
Once again, that is the whole point. The OP was asking whether a 25 Mbps source will be improved by re-compressing it at a higher bit rate. It will not. You cannot go any better than the source.

I can’t believe we are even arguing over that.

As for the math of it, I suggest the book The H.264 Advanced Video Compression Standard, Second Edition, by Ian E. Richardson (published by Wiley). It discusses it in depth. You do lose information when you compress it. And that information is gone forever.

It would be a completely different matter if your source did not come out of the camera already compressed. Alas, let me recap the OP question:

If you have a project in which the video is shot at 25 MPS, what happens in SVP when it is rendered at a higher bit rate, say 50 MPS? Is the output truly 50 MPS?

I was answering that question, and I was told, “Not entirely exactly true.” Well, I’m sorry, but my reply, which said it will not improve and explained why, was entirely exactly true, because improving the original is not mathematically possible. What is unclear about that? As I pointed out, only in fictional shows, such as Law and Order, can they take a fuzzy image and enhance it to perfection. In real life it does not work that way.

I don’t understand how people can agree with me that you cannot go better than the source, then turn around and say that re-compressing a 25 Mbps source will improve it. That is why I said you can’t have it both ways. You just can’t say you cannot go better than the source and, at the same time, that re-compressing the source will improve it. Not in the same Universe! :)


musicvid10 wrote on 5/8/2014, 9:10 PM
There is only one person here arguing over that. It is not Kelly, it is not me. There is nothing in the quote from the OP you posted just above that would compel me to believe otherwise.

The correct response to the question you quoted is, "Yes, it will be 50Mbps, but the quality will not be as good as the source."
Capish?

Why don't you try actually reading the responses?
Or maybe just look at my graph above, which illustrates a function that approaches, but never actually achieves source parity??

We all know you can't improve on SOURCE quality by re-encoding at a higher bitrate, or any bitrate for that matter! I even wrote about it in 2009*. Whether or not you read that into the question, that was NOT proposed in any of the responses. I teach math and reading for a living; the responses above looked fine until, well . . . . . .

If English is not your first language, I can easily understand missing some of the subtleties and nuance in syntax. Kelly (Chienworks) is correct, as we have come to expect.
That being said, best of luck.

* http://www.sonycreativesoftware.com/forums/ShowMessage.asp?ForumID=12&MessageID=660127
Chienworks wrote on 5/9/2014, 9:44 PM
As near as I can tell, Red Prince is operating under two slight misconceptions.

1 - That some of us think that 50Mbps output will be better than 25Mbps input. No one has said that. We all know it will be worse because it will have gone through another compression stage. We're only saying that a higher-bitrate output is better than a lower-bitrate output, both of which are worse than the input.

2 - That somehow, Vegas copies the original compression into the new file. Aside from the extremely few and rare cases of smart-rendering where there is no recompression, this is never true. The source is decompressed into full discrete RGB frames, and then compressed again with absolutely no consideration for the format or compression of the original. This is why I add "this is true no matter what the bitrate of the original was."

Consider also that if smart-rendering were involved, there would be no choice of output bitrate. It MUST match the source, and this entire thread would never have occurred. If it isn't an exact match then no smart-rendering is involved.