Rendering BUG - CPU outputs smaller file vs. GPU?

VegasAndrew wrote on 1/3/2016, 9:14 PM
Seems to be a bug in GPU rendering. I have this same issue as posted here:
https://forums.creativecow.net/thread/24/995596#995599

Vegas 13 on an Intel Core i7-4790K at 4.0GHz
I have a small test clip I am rendering with the "Sony AVC (Internet 1920x1080-30p)" template. The first render, with CPU only, has a slightly lower bitrate than the GPU render and a file size of 41,285. If I simply switch the same template to GPU rendering, it comes out with a slightly higher bitrate and a file size of 47,908, and as expected it renders faster.

Some figures from MediaInfo:

GPU 51 secs Render Time
Bit rate mode : Variable
Bit rate : 15.4 Mbps
Maximum bit rate : 16.0 Mbps

CPU 56 secs Render Time
Bit rate mode : Variable
Bit rate : 13.9 Mbps
Maximum bit rate : 16.0 Mbps

It would be nice to hear from Sony on what could be causing this.

Thanks.

Comments

NormanPCN wrote on 1/3/2016, 9:43 PM
I don't know that that is a bug. A different encoder is allowed to render differently. The GPU and CPU code paths are certainly slightly different algorithms relative to each other. Obviously the rate control algorithm is different between the two.

Grazie wrote on 1/4/2016, 3:11 AM
Norman: "Obviously the rate control algorithm is different between the two.

What is the/a Rate Control? And why/how would it affect the final outcome of the rendered-out file? Shouldn't the User expect the same outcome?

Andrew has quoted the COW link where I did not a little work presenting what I feel are 2 useful BitRate Viewer graphs.

G
OldSmoke wrote on 1/4/2016, 5:23 AM
Shouldn't the User expect the same outcome?

For constant bit rate I would assume so, but with variable bit rate it can be different.

Chienworks wrote on 1/4/2016, 7:20 AM
Bit rate : 15.4 Mbps - vs. - Bit rate : 13.9 Mbps

That's the entire answer right there. File size is determined by length and bitrate.
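
As a rough sanity check (a sketch with my own assumptions: the clip duration below is a guess, and real files also carry audio and container overhead), average bitrate times duration accounts for roughly the size gap Andrew reported:

# Approximate video file size from average bitrate and duration.
# duration_s=25 is a hypothetical figure, not taken from the thread.
def approx_size_kb(bitrate_mbps, duration_s=25):
    return bitrate_mbps * 1_000_000 * duration_s / 8 / 1024

print(round(approx_size_kb(15.4)))  # ~47,000 KB, near the GPU render's size
print(round(approx_size_kb(13.9)))  # ~42,400 KB, near the CPU render's size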

You don't tell us what your template settings are. Are you entering a specific average bitrate, or are you using a default output setting that chooses its own bitrates?
NormanPCN wrote on 1/4/2016, 10:04 AM
What is the/a Rate Control

Rate control is the term I was using to describe the effective resulting bitrate. It is a dynamic frame-to-frame decision. At the end of the encode you have a resulting average bitrate.

Sony AVC is single-pass, variable bitrate. The value we give it in the template seems to be a max bitrate, and the average will end up slightly below that. That resulting average value is different between the CPU and GPU versions of the encoder. That is allowed and acceptable. They typically use different algorithms; it is not the same encoder/algorithm just running on the GPU.
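
A minimal sketch of the idea (purely illustrative; this is not Sony's actual encoder logic, and the quantizer model is a toy): a single-pass encoder adjusts quantization frame by frame so the running average stays under the configured maximum, and two different implementations of that feedback loop can settle on different averages.

# Toy single-pass rate control, NOT Sony's actual algorithm.
def encode_single_pass_vbr(frame_complexities, max_bitrate, fps=30.0):
    qp = 26                                   # starting quantizer (hypothetical value)
    total_bits = 0.0
    for n, complexity in enumerate(frame_complexities, start=1):
        frame_bits = complexity / qp          # toy model: higher QP -> fewer bits
        total_bits += frame_bits
        avg_bitrate = total_bits * fps / n    # running average so far
        if avg_bitrate > max_bitrate:         # over budget: quantize harder
            qp += 1
        elif avg_bitrate < 0.9 * max_bitrate and qp > 1:
            qp -= 1                           # well under budget: spend more bits
    return total_bits * fps / len(frame_complexities)  # resulting average bitrate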

Would one complain or claim a bug if Mainconcept AVC and Sony AVC did not give the same variable average bitrate? Probably not, since they are different encoders.

I am not saying that anything run on the GPU will come out different, only things where the algorithms are different. People think GPUs are "faster" than CPUs. Well, they are not at serial computation. What they are faster at is massively parallel computation. Most video effects are very amenable to parallel computation. Consider Sony Levels: in HD you have ~2 million pixels, and you can conceivably compute all of them in parallel at the same time.
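
To illustrate that point (a conceptual sketch using NumPy on the CPU, with made-up parameter values; not Vegas' actual Levels implementation): a levels adjustment touches each pixel independently, which is exactly the shape of work a GPU executes in parallel.

# Per-pixel levels adjustment: every output pixel depends only on the
# matching input pixel, so all ~2 million HD pixels are independent.
import numpy as np

def levels(frame, in_low=0.05, in_high=0.95, gamma=1.2):
    out = np.clip((frame - in_low) / (in_high - in_low), 0.0, 1.0)
    return out ** (1.0 / gamma)

hd_frame = np.random.rand(1080, 1920, 3)   # ~2 million pixels, 3 channels
adjusted = levels(hd_frame)                # each pixel could be computed in parallel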

Compressing/encoding a file, aka Render As, is not very amenable to massively parallel computation. You have to change many algorithms and make compromises to fit that square peg into a round hole.
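
A toy sketch of that serial dependency (conceptual only, not any real encoder): each frame is predicted from the previously reconstructed frame, so frame N cannot finish until frame N-1 has, which is what resists the massively parallel approach.

# Toy model: frames are single numbers, quantize() is a stand-in for real coding.
def quantize(x, step=8):
    # Round to the nearest multiple of step (toy quantizer).
    return round(x / step) * step

def encode_sequence(frames):
    reference = 0
    bitstream = []
    for frame in frames:
        residual = frame - reference      # predict from the previous reconstructed frame
        coded = quantize(residual)
        bitstream.append(coded)
        reference = reference + coded     # reconstruct, as a decoder would
    return bitstream

print(encode_sequence([100, 112, 118, 90]))  # each step depends on the one before it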