It's worth checking into; however, unless you are working with 10-bit or greater input material and you intend to output to a 10-bit or greater format (HEVC encoded with the HDR10 preset), using 32-bit mode is pointless.
What I see in your project files is expected behaviour and correct video processing. You're placing a gamma 2.2, video-level alpha clip inside a linear-gamma, full-level floating-point project.
If your source is gamma 2.2 and video level, you need a gamma 2.2, video-level floating-point project to get matching results.
Marco, please make an effort to understand what I'm trying to show. Let's say I want to make a video ONLY with RGB and RGBA images of 32-bit color depth, HOW can I trust Vegas Pro? In my photo editing software I can create a 32-bit RGB image with a 50% opacity shadow, but it will not be a 50% shadow in Vegas Pro in 32-bit mode! Never! Do you understand where the root of the problem lies? Honestly, I'd really like Vegas Pro to be a WYSIWYG program (what you see is what you get). But it just isn't! If I make a PNG with 50% transparency, it should look like a PNG with 50% transparency in any possible universe, regardless of whether the project is 8-bit with 2.222 compositing gamma or a 32-bit full-range project.
"Let's say I want to make a video ONLY with RGB and RGBA images of 32-bit color depth, HOW can I trust Vegas Pro?"
By using the correct settings, which is up to the user.
"If I make a PNG with 50% of transparency, it should look like a PNG with 50% transparency in any possible universe, regardless of whether the project is 8-bit with 2.222 compositing gamma or a 32-bit full range project."
But this would be like slipping into a child's pair of shoes and wondering why those shoes don't fit. Or expecting an apple to appear the same green regardless of whether it is illuminated with white or red light.
That 50 % (or any other) transparency would be the same in an 8-bit 2.222-gamma project and in a 32-bit floating-point video-level project, because both projects share the same definition of black point, white point and the mapping curve between them. And for the same reason, that 50 % transparency cannot be the same in a 32-bit floating-point linear-level project: there, the black point, the white point and the mapping curve between them are defined differently.
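A rough sketch of the difference, on a single channel: blending 50 % white over black directly on the gamma-encoded values gives a different on-screen result than blending in linear light and re-encoding. (This is a simplified "over" blend with a pure power-law gamma 2.2; Vegas Pro's actual pipeline is more involved, and the function names here are mine.)

```python
GAMMA = 2.2  # simplified power-law transfer, not an exact sRGB curve

def to_linear(v: float) -> float:
    """Decode a gamma-2.2 encoded value (0..1) to linear light."""
    return v ** GAMMA

def to_gamma(v: float) -> float:
    """Encode linear light (0..1) back to gamma 2.2."""
    return v ** (1.0 / GAMMA)

fg, bg, alpha = 1.0, 0.0, 0.5  # white over black, 50 % opacity

# 8-bit style project: blend the gamma-encoded values directly.
gamma_result = alpha * fg + (1 - alpha) * bg

# 32-bit linear project: decode, blend in linear light, re-encode for display.
linear_blend = alpha * to_linear(fg) + (1 - alpha) * to_linear(bg)
linear_result = to_gamma(linear_blend)

print(gamma_result)                 # 0.5
print(round(linear_result, 3))      # ~0.73 — visibly lighter
```

Same 50 % alpha, two different pictures: that is exactly why the PNG looks "wrong" when the project's compositing gamma doesn't match the one the image was authored for.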
A gamma 2.2 source in a gamma 1.0 surrounding, and vice versa, needs correction. A video-level source in a full-level surrounding, and vice versa, needs correction. And while there is a slight chance that metadata could help identify the source levels and then apply an automatic level correction (which is not available in Vegas Pro, and is actually handled differently across many apps), the same does not hold for gamma, as there is no such metadata. If there is no information about which gamma, black point and white point a source file uses, how should Vegas Pro correct them automatically?
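For the levels part, the correction itself is just a linear remap of the 8-bit studio range (16–235) onto the full range (0–255). A minimal sketch (the function name is mine; real converters also handle chroma and out-of-range values):

```python
def video_to_full(v: float) -> float:
    """Remap an 8-bit studio/video-level luma value (16..235) to full range (0..255)."""
    return (v - 16) * 255.0 / (235 - 16)

print(video_to_full(16))    # 0.0   -> video black becomes full-range black
print(video_to_full(235))   # 255.0 -> video white becomes full-range white
```

Skip this remap (or apply it twice) and blacks look washed out or crushed, which is exactly the mismatch described above. The point stands: without metadata or user input, software cannot know whether a given file needs it.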