I've been saying this for years, ever since reading a long post by the developer of VirtualDub arguing that GPU encoding isn't the panacea everyone assumes it will be. Every GPU-accelerated encoder I've tried so far produces lower-quality results than CPU encoding alone. As far as I'm concerned, there is currently no substitute for tight code and a fast CPU.
Individual GPU cores are slower than CPU cores. The advantage GPUs have is massive parallelization, so only tasks that fit the massively parallel mold really benefit from them. Encoding a file to a format like AVC is not terribly parallel. You can force the issue, putting a square peg into a round hole, but that introduces compromises which typically hurt quality.
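To illustrate why encoding resists parallelization: a toy sketch (not a real codec, and nothing like x264's actual internals) of inter-frame prediction. Each frame is predicted from the *reconstructed* previous frame, so frame N cannot be finished until frame N-1 is done, creating a serial dependency chain. The `quantize` helper and the flat-list "frames" are hypothetical simplifications.

```python
def quantize(x, step=4):
    """Lossy rounding standing in for transform + quantization."""
    return [step * round(v / step) for v in x]

def encode(frames):
    """Toy inter-frame encoder: emits quantized prediction residuals."""
    recon_prev = [0] * len(frames[0])  # reconstructed reference frame
    bitstream = []
    for frame in frames:
        # Residual = difference from the previous RECONSTRUCTED frame,
        # so this iteration depends on the result of the previous one.
        residual = [a - b for a, b in zip(frame, recon_prev)]
        q = quantize(residual)
        bitstream.append(q)
        # Decoder-side reconstruction, needed before the next frame can start.
        recon_prev = [p + r for p, r in zip(recon_prev, q)]
    return bitstream
```

Because `recon_prev` is rewritten on every iteration, the loop can't simply be split across thousands of GPU threads; real encoders that force parallelism (e.g. by cutting the frame into independent slices) give up prediction opportunities, which is exactly the quality compromise being described.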
Here is a link I've posted here before from a lead x264 developer about issues with GPU use and file encoding.
GPU acceleration is absolutely ideal for things like video effects, masks, and compositing. These are inherently parallel tasks with little or no need for thread synchronization. Using the GPU for them is a no-brainer win.
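For contrast, a toy per-pixel effect (a simple brightness gain, my own example): every output pixel depends only on the corresponding input pixel, so all pixels could run as independent GPU threads with no synchronization at all, which is the "embarrassingly parallel" case that GPUs excel at.

```python
def apply_gain(frame, gain=1.5):
    """Brightness gain on an 8-bit frame given as a list of rows.

    Each pixel is computed independently; evaluation order doesn't
    matter, so this maps directly onto one-GPU-thread-per-pixel.
    """
    return [[min(255, int(p * gain)) for p in row] for row in frame]
```

Unlike the encoding loop, there is no carried state between pixels, so there is no square peg to force and no quality trade-off in moving it to the GPU.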
The brain's elements are not terribly fast, but they are massively parallel; that's the promise, if we can only reverse-engineer the brain. The trouble is that we focus either on certain parts of a scene in fine detail or on the whole scene in a general way, yet the whole scene has to be available in fine detail because there is no way of anticipating which detail we will want to look at next.
What happened to Artificial Neural Networks? I haven't heard people talking about them for some years.
<i>Do you actually have a choice to render without GPU in Resolve?</i>
No, but the render quality from Resolve with the GPU looks the same as from Vegas without it. I don't see the hit in quality that Vegas takes when I engage the GPU during render. I think it's pretty much a non-issue outside of the Vegas platform.