Plea to New Users -- Please Do Not set 32 Bit Pixel Format

Musicvid wrote on 2/5/2019, 3:48 PM

Three times in the same week, new users have come forward with everything from colorspace confusion and preview or output leveling noncompliance to very long render times, painfully slow previews, and a lot of stress, all because of a single incorrect assumption that the people here trying to help don't even hear about until too late in the discussion.

There's so much of this showing up suddenly that I suspect someone on the internet is spouting irresponsible hype and nonsense, again.

The default Pixel Format should not need to be changed. It should adopt the bit depth of the first Project media.

8-bit pixel format is correct for 8-bit source and output. That's 99% of all video.

32-bit pixel format is for 10-bit source AND 10-bit output together. If your source OR output is 8-bit, it won't accomplish a damn thing, except to screw with your native colorspace, cause unusually long render times, and make preview performance painfully slow.
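To make the 8-bit in / 8-bit out point concrete, here is a minimal numpy sketch (my own illustration, not anything Vegas-specific): promoting 8-bit code values to 32-bit float and quantizing straight back is a lossless round trip, so the extra precision buys nothing by itself unless both ends of the pipeline can actually hold more than 8 bits.

```python
import numpy as np

# All 256 possible 8-bit code values
src = np.arange(256, dtype=np.uint8)

# Promote to 32-bit float, as a floating-point pixel format does internally
as_float = src.astype(np.float32) / 255.0

# Quantize straight back to 8-bit for delivery
back = np.round(as_float * 255.0).astype(np.uint8)

# The round trip is exact: no new shades were created
print(np.array_equal(src, back))  # True
```

The gain from float only appears when intermediate values can exceed what 8 bits can represent and the output format can keep that extra information.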

Points to remember:

Comments

fan-boy wrote on 2/5/2019, 10:13 PM

I thought setting 32 bit provided more accuracy in interpreting the imported 8-bit video. When I do set it to 32 bit, the 8-bit video does look "better" in the Viewer.

Musicvid wrote on 2/5/2019, 10:17 PM

There are lots of people who think that; don't blame yourself. One way to test your "viewer" is to superimpose the two graded images with the Difference composite enabled. Or, for your convenience, I've already done that for you.

Or, you may run your own controlled test environment and post your conclusions, preferably in your own thread.
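For anyone wanting to run that controlled test outside the editor, the Difference-composite check amounts to subtracting the two renders pixel by pixel; a rough numpy equivalent (with synthetic stand-in frames, since the actual renders aren't attached here):

```python
import numpy as np

# Two hypothetical decoded frames (stand-ins for the 8-bit and 32-bit renders)
render_a = np.random.default_rng(0).integers(0, 256, (720, 1280, 3), dtype=np.uint8)
render_b = render_a.copy()  # an identical render, as the Difference test predicts

# "Difference composite": absolute per-pixel difference of the two frames
diff = np.abs(render_a.astype(np.int16) - render_b.astype(np.int16))

# A pure black result (maximum of 0) means the renders are pixel-identical
print(diff.max())  # 0
```

In practice you would decode both renders to arrays first; any nonzero maximum shows where, and by how much, the two pipelines actually diverge.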

Welcome to the forums.

xberk wrote on 2/5/2019, 10:30 PM

These posts are a real library of knowledge for now and for the archive .. I, for one, appreciate them.

Musicvid wrote on 2/5/2019, 10:44 PM

You and Eagle Six may be the only ones this winter, I fear. But never fret, one or two new editors always come around, and it's who they eventually become as contributors that keeps me trying.

I tagged you as the Solution, not because you agree with me, nor because I think I am 100% right, but because of your capacity for rational thought and a well-chiseled cortex. Best.

klt wrote on 2/6/2019, 12:44 AM

+100

Shouldn't this be stickied to the top?

Last changed by klt on 2/6/2019, 12:45 AM, changed a total of 1 times.

Camera: JVC GY-HM600

Desktop: AMD Ryzen 5 1600, 16GB RAM (dual channel 2400 MHz) - Videocard: Radeon R9 380 2GB

Laptop: i5 5200u, 8GB RAM (1600MHz single channel) Videocard: integrated HD5500

Musicvid wrote on 2/6/2019, 1:52 AM

Thank you, klt. You've been a solid resource on a lot of this legacy stuff.

PAP wrote on 2/6/2019, 1:58 AM

Great info thank you.

vkmast wrote on 2/6/2019, 4:41 AM

@klt as the recommendation is to keep the stickies to a minimum, I'll add a comment re the "Plea" to this thread. @Musicvid and others, note that he's klt, not kit :).

wwjd wrote on 2/6/2019, 7:34 AM

what if you are upscaling and using the "tween" colors?

OldSmoke wrote on 2/6/2019, 8:55 AM

what if you are upscaling and using the "tween" colors?

I only “know” of the reverse to “work”, down scaling 4K 420 to 1080 422 and that is not a true 422 either.

Proud owner of Sony Vegas Pro 7, 8, 9, 10, 11, 12 & 13 and now Magix VP15.

System Spec.:
Motherboard: Intel DX79SR
Ram: G.Skill 8x4GB DDR3 2133 (running at 1600 and lower latency)
CPU: 3930K @ 4.3GHz (custom water cooling system)
GPU: 1x ASUS Fury-X
Hard drives: 4x 2TB WD Red in RAID 5 (with Hot Spare), 2x Crucial 256GB SSD in RAID 0 (multicam project drive), 1x Samsung 850 Pro 256GB SSD (System), 1x Crucial 64GB SSD (temp files and swap file), 1x 3.5" Hotswap Bay, 1x LG BluRay Burner
PSU: Corsair 1200W
Monitor: 2x Dell Ultrasharp U2713HM, 1x Sony HDTV 32" preview monitor

Musicvid wrote on 2/6/2019, 10:29 AM

wwjd,

Being but a reluctant upscaler, I'm not connecting with your term. Can you show us?

Musicvid wrote on 2/6/2019, 10:37 AM

In my online world, I walk a fine line between perpetual OT and obscurity. Penning a sticky wasn't in my thinking, and so I am grateful that this was well-received.

Eagle Six wrote on 2/6/2019, 10:50 AM

Keep them coming old wise teacher, I yearn for learning. The more wisdom you post the more it will spread.

"Build it and they will come"......maybe!

System Specs......
Corsair Obsidian Series 450D ATX Mid Tower
Asus X99-A II LGA 2011-v3, Intel X99 SATA 6 Gb/s USB 3.1/3.0 ATX Intel Motherboard
Intel Core i7-6800K 15M Broadwell-E, 6 core 3.4 GHz LGA 2011-v3 (overclocked 20%)
64GB Corsair Vengeance LPX DDR4 3200
Corsair Hydro Series H110i GTX 280mm Extreme Performance Liquid CPU Cooler
MSI Radeon R9 390 DirectX 12 8GB Video Card
Corsair RMx Series RM750X 740W 80 Plus Gold power pack
Samsung 970 EVO NVMe M.2 boot drive
Corsair Neutron XT 2.5 480GB SATA III SSD - video work drive
Western Digital 1TB 7200 RPM SATA - video work drive
Western Digital Black 6TB 7200 RPM SATA 6Gb/s 128MB Cache 3.5 data drive

Bluray Disc burner drive
2x 1080p monitors
Microsoft Window 10 Pro
DaVinci Resolve 15.2.3
SVP13, MVP15, MVP16, MVMS15

Musicvid wrote on 2/6/2019, 11:32 AM

I'm willing to play devil's advocate for a bit to see if I've missed something that other people are able to quantify and measure. This one gets quoted a lot, despite the huge processing cost it visits on the unsuspecting and the apparent absence of validating data:

When using 8-bit input/output, the 32-bit floating point (video levels) setting can prevent banding from compositing that contains fades, feathered edges, or gradients.

This sounds reasonable on the surface, especially if for 10 bit delivery; however, I've never been able to see or measure a difference using 8 bit source, nor have I ever found substantiated testing to back up that claim; God knows I've tried. For years.

So, the one factor that cannot be ruled out is the observer's beliefs and expectations, which are a valid part of any inquiry.

I heartily support one's right to do all his editing in 32 bit float, despite multiple issues, if he believes it to be a worthwhile improvement and is mature enough to absorb the performance hit without claim to entitlements. I personally wish for once to be shown an advantage in cold numbers, rather than being told to accept it at face value.

Not reporting up front that you chose to stray from the defaults, while at the same time complaining about slow render times, preview performance, and levels, and showing general indifference to peers, is not a reasonable way to request support.

Verbose encode logging, pretty please, Magix?
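One way to put numbers on the quoted banding claim is to compare a fade computed entirely in 8-bit integer math against the same fade computed in float and then quantized for 8-bit delivery. A hedged numpy sketch (my own construction, not a dump of Vegas internals):

```python
import numpy as np

# A full-range 8-bit ramp: every possible source code value
ramp = np.arange(256, dtype=np.uint8)
fade = 0.2  # a 20% opacity fade

# Path 1: the fade applied entirely in 8-bit integer math (truncating)
int_path = (ramp.astype(np.uint16) * 51 // 255).astype(np.uint8)  # 51/255 == 0.2

# Path 2: the fade applied in 32-bit float, then quantized to 8-bit for delivery
float_path = np.round(ramp.astype(np.float32) * fade).astype(np.uint8)

# For a single fade the two paths differ by at most one code value,
# which is sub-visible on an 8-bit display
print(np.max(np.abs(int_path.astype(int) - float_path.astype(int))))  # 1
```

The caveat, and the part the banding claim really hinges on, is that a long chain of integer operations can accumulate more than one code value of error; a single operation, as shown here, cannot.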

OldSmoke wrote on 2/6/2019, 11:57 AM

I think one must also state if they work in "32bit float video levels only" or "32bit full"; there is a huge difference, isn't there?


Musicvid wrote on 2/6/2019, 12:06 PM

Yes, there would be.

You sure made a good call on that last forum rant, OldSmoke.
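For anyone unsure what separates the two 32-bit modes OldSmoke mentions: "video levels" keeps studio-range code values (16-235 for luma), while "full range" spans 0-255, and the mapping between them is a simple linear stretch. A small sketch of that conversion (an illustration of the level math, not the engine's actual code path):

```python
def studio_to_full(y):
    """Map a studio-range (16-235) luma code value to full range (0-255)."""
    return round((y - 16) * 255 / 219)

def full_to_studio(y):
    """Map a full-range (0-255) luma code value to studio range (16-235)."""
    return round(y * 219 / 255 + 16)

# Black and white map to the expected endpoints
print(studio_to_full(16), studio_to_full(235))   # 0 255
print(full_to_studio(0), full_to_studio(255))    # 16 235
```

Mixing the two settings without a matching levels conversion is exactly the kind of leveling noncompliance the opening post warns about: blacks lifted to gray or shadows crushed, depending on which direction the mismatch runs.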

Turd wrote on 2/6/2019, 1:37 PM

I understand and fully accept the premise here for video that's shot with a camera, but what about a final render that's graphics intensive or a mix of camera/graphics? Especially graphic backgrounds that typically show banding? What do you think about those cases? 32 or 8?

Note to self (everyone else please look away -- the note that follows is a reminder for mine eyes only): Figure out a clever, kick-booty signature that suggests I'm completely aware of how to properly and exhaustively party on and that I, in fact, engage in said act on a frequent and spontaneous basis.

Musicvid wrote on 2/6/2019, 3:03 PM

Good question. Grade and level for the output, not the source. With mixed source, I stick with 8-bit absolutely, unless rendering for a 10-bit delivery. Graphics are RGB, so I sometimes put them on their own track and conform them to the project at the track level.

fifonik wrote on 2/7/2019, 3:15 AM

> 32-bit pixel format is for 10-bit source AND 10-bit output together.

Some people will never agree with the statement.

However, I agree that for people who are just starting out in video/photo editing it might be good advice.

wwjd wrote on 2/7/2019, 7:06 AM

wwjd,

Being but a reluctant upscaler, I'm not connecting with your term. Can you show us?

my tests were years ago when HD was still a thing. Upscaling HD to 4K, then pixel peeping, revealed the new tween pixels were not simply copies of next-door pixels, but a new tween shade. I'd think wider 32-bit color possibilities would take advantage when generating the tween shades. Alas, this was long ago and I have no clue where yon picture examples are buried to show ye. :)

Musicvid wrote on 2/7/2019, 7:23 AM

Some people will never agree with the statement.

Hadn't planned on everyone agreeing; now that would get boring. Yet in your thread, there appear to be those who do agree.

Marco. wrote on 1/31/2018, 2:46 AM

fifonik, these are cases where I would also use floating point processing. Sometimes.

One thing to take care of: for 8-bit outputs it would only help if your signal exceeds the 0-1 range. But usually I carefully control the levels via scopes and try to avoid leaving the 0-1 range. Then there's no need for floating point processing when your project will finally be delivered as an 8-bit video.
(And just to avoid confusing others: if the output is 8-bit or any other integer processing type, this recovery of clipping only works inside the floating point project, before rendering.)

Welcome to the discussion. To keep it from derailing, I've once again asked for comparative graphic examples using camera source, which I know can be difficult to construct, but not impossible.

Meanwhile, I'm working on just such a model to validate the narrow gains suggested in the statement that you and I have now both quoted, but not ruled out. Care to join me? No harm in a parallel inquiry.

[HINT /]
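Marco's point about recovering overshoots only inside the float project can be illustrated in a few lines (a sketch with made-up values, not a Vegas internals claim): values pushed above 1.0 survive in float and can be pulled back down with a gain, but once quantized to 8-bit they are all pinned at 255 and the highlight detail is gone for good.

```python
import numpy as np

# A float signal with highlights pushed past legal range (above 1.0)
signal = np.array([0.5, 1.2, 1.4], dtype=np.float32)

# Quantize to 8-bit first: both overshoots clip to the same code value
clipped8 = np.clip(np.round(signal * 255), 0, 255).astype(np.uint8)
print(clipped8)  # [128 255 255] -- the 1.2 vs 1.4 distinction is lost

# Pull the levels down while still in float: the overshoot detail survives
recovered = np.round(np.clip(signal * 0.7, 0, 1) * 255).astype(np.uint8)
print(recovered)  # [ 89 214 250] -- the two highlights stay distinct
```

This is exactly why the recovery only works "inside this very float point project before rendering": the integer delivery format has no codes left above white to store the difference.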

Musicvid wrote on 2/7/2019, 7:58 AM

wwjd, yes, almost all upscaling uses some form of interpolation, which can be clearly seen at the pixel level. To double the dimensions, the number of pixels is multiplied by four.

Unfortunately, bit depth inflation doesn't work the same way in Vegas. Instead of filling in the gaps, it just leaves them empty, no new colors. A gallon of water in a five gallon bucket is still a gallon.
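Both halves of that post can be shown in a few lines of numpy (my own illustration, not a claim about Vegas's resampler): spatial upscaling interpolates genuinely new in-between values, while naively shifting 8-bit codes into a 10-bit container creates no new shades at all.

```python
import numpy as np

# Spatial upscale: linear interpolation between two pixels creates new "tween" shades
row = np.array([100.0, 200.0])
upscaled = np.interp(np.linspace(0, 1, 4), [0, 1], row)
print(upscaled)  # roughly [100, 133.3, 166.7, 200] -- two brand-new values

# Bit-depth "inflation": shifting 8-bit codes into a 10-bit container
codes8 = np.arange(256, dtype=np.uint16)
codes10 = codes8 << 2  # 0..1020 in steps of 4

# Still only 256 distinct levels out of 1024 possible: the gaps stay empty
print(len(np.unique(codes10)))  # 256
```

The gallon-in-a-five-gallon-bucket analogy in numbers: the container got bigger, but the amount of information in it did not.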

Tim L wrote on 2/7/2019, 6:47 PM

wwjd, yes, almost all upscaling uses some form of interpolation, which can be clearly seen at the pixel level. To double the dimensions, the number of pixels is multiplied by four.

Unfortunately, bit depth inflation doesn't work the same way in Vegas. Instead of filling in the gaps, it just leaves them empty, no new colors. A gallon of water in a five gallon bucket is still a gallon.

I've only ever used 8-bit projects, and 8-bit sources, but could you explain what the graphic here represents or how it was generated?  Unless I'm misunderstanding, it looks like it DOES show intermediate pixels.  Most of the gaps don't go all the way to the floor, indicating that at least some pixels are being generated and tabulated in the gap values.  (But frankly, there's a real good chance that I just don't understand this yet...)

Musicvid wrote on 2/7/2019, 8:23 PM

These were done in 2014, when I was 65. wwjd has already seen them, and we discussed them at SCS, so folks with questions should "run your own tests" and illuminate us all. My benchmark is the grayscale in my signature, but one could use any full-range source. Render bit depths are self-explanatory. The 8-8 and 8-10 renders were remarkably close in size.

;?)

8->8 Bit

10->10 Bit

8->10 Bit