(Complete title was "Are GPUs non-deterministic and/or how can I get them to render the expected video" but it has been truncated by a forum limitation.)
Hello,
My question really has two sub-questions:
- How much of the impressions/assumptions below is right or wrong?
- Is there anything I'm doing wrong, or a setting I should change?
Context: I mostly use Vegas Pro, but I have noticed this most often when using plug-ins such as the Ignite Pro plug-ins and, more recently, NeatVideo.
Software and plug-ins very often offer to use the GPU for tasks such as video rendering, and GPU rendering is usually much faster than CPU rendering (say, for hardware of the same era and comparable "performance expectations").
What I observed is that with GPU rendering I do get my videos faster but, for the most demanding renderings (and sometimes even not especially demanding ones), I often get a bad result, which is then also different from the preview: dropped frames, frames turning black, the image disappearing for a while, and most recently some frames getting tinted red. And I'm really talking about random results: from one rendering to the next, the same frames are not necessarily the ones that go wrong.
When I turn GPU rendering off so that only the CPU is used, I never get that kind of problem: I get the same result, pixel for pixel, at every rendering.
By the way, I have observed this over the years with various GPUs, various computers, updated or non-updated drivers, …
My assumption is that GPUs must do non-deterministic calculations. What makes me think so is that consumer GPUs are mostly built for gaming, and the first thing gamers look at in a GPU is frame rate. So I suspect GPUs may tolerate some calculation errors or approximations as long as the game flow isn't interrupted. They might even favour "more/faster computation" over "perfectly accurate computation", since consumers will mostly judge them on frame rate.
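To make concrete what I mean by "not perfectly accurate" calculations, here is my own naive illustration (plain Python, no GPU involved, and purely my guess at the kind of mechanism): the same numbers summed in a different order can give a slightly different floating-point result, so I imagine that if a GPU accumulates its work in an order that varies from run to run, the output could vary too.

```python
# Naive illustration (plain Python, no GPU): floating-point addition is not
# associative, so summing the *same* numbers in a *different* order can give
# a slightly different result. My (unverified) guess is that if a GPU adds
# values up in an order that changes from run to run, small differences like
# this could appear even though no "error" is being made.
import random

values = [random.uniform(-1e6, 1e6) for _ in range(100_000)]

sum_in_order = sum(values)                              # one accumulation order
sum_shuffled = sum(random.sample(values, len(values)))  # another order

print(sum_in_order, sum_shuffled)
print("difference:", sum_in_order - sum_shuffled)       # usually tiny but non-zero
```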
Then again, I know people and companies who use GPUs for algorithms such as artificial intelligence, but, as I understand it, machine learning can be error-resilient too. (Or it gets biased by false negatives/false positives, but maybe not enough to be noticeable?)
And I actually found some references explaining how programmers can explicitly enable a deterministic mode (or not) when programming with TensorFlow; the snippet below shows the kind of thing I found.
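For reference, it looks roughly like this (I'm not a TensorFlow programmer myself, so take the exact calls as my assumption based on the docs for recent TensorFlow 2.x versions):

```python
# Sketch of the "deterministic mode" I read about, assuming TensorFlow 2.9+.
# The point is that determinism is something the software author has to ask
# for explicitly, and it can cost performance.
import tensorflow as tf

tf.keras.utils.set_random_seed(42)              # fix the random seeds
tf.config.experimental.enable_op_determinism()  # request deterministic GPU ops

# From here on, the same inputs are supposed to give bit-identical results
# from one run to the next (possibly at the cost of speed).
```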
So maybe it depends on how the software/plug-ins have been written in the first place?
So how much of this is right/wrong?
How could I use GPU rendering and still get an accurate result? Because I would like to benefit from the faster rendering too…