I've read Glenn Chan's article explaining that certain Vegas filters can yield different results under the 32-bit floating point pixel format than under 8-bit. That has certainly been my experience so far; in fact, some of the filters seem not to work at all under 32-bit. My question is: is this understandable and correct behaviour, or are these actually bugs?

Coming from the audio world, higher-resolution audio can preserve detail and fidelity better than lower resolution, but in my experience the outcomes are close. With Vegas plug-ins, on the other hand, my limited experience is that the 8-bit and 32-bit float results are nearly always significantly different, and often wildly, outrageously so. That seems illogical to me.
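
To make the question concrete, here's a toy Python sketch of my guess at one mechanism (this is not Vegas code, and the brighten/darken chain and function names are made up purely for illustration): if the 8-bit path clips and rounds intermediate values while the 32-bit float path keeps out-of-range values, the same two-step filter chain can give visibly different answers.

```python
# Toy illustration (not Vegas internals): how an 8-bit integer pipeline and a
# 32-bit float pipeline might diverge on the same filter chain, assuming the
# 8-bit path clamps and quantizes after every step while the float path
# preserves intermediate values above 1.0.

def brighten_then_darken_8bit(value):
    # 8-bit path: clamp to 0..255 after each operation, so any headroom
    # above white is thrown away before the darken step.
    bright = min(255, round(value * 2.0))
    return min(255, round(bright * 0.5))

def brighten_then_darken_float(value):
    # Float path: the intermediate value may exceed 1.0, so the darken
    # step can recover the original level.
    bright = (value / 255.0) * 2.0   # may go above 1.0
    dark = bright * 0.5
    return min(255, round(dark * 255.0))

for v in (100, 180, 240):
    print(v, brighten_then_darken_8bit(v), brighten_then_darken_float(v))
# e.g. 180 comes back as 128 in the 8-bit path (clipped) but 180 in float.
```

Is that roughly the kind of thing that explains the differences I'm seeing, or should identical settings really produce near-identical output in both modes?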