Hi all:
I spent part of the weekend researching how some of the FX in VP20 perform across different GPU vendors, and while I will eventually tie this in with an article at Techgage, I wanted to share the info here first, since it raises a number of interesting angles. I tested AMD's Radeon RX 6600, Intel's Arc A770, and NVIDIA's GeForce RTX 3060 - all at roughly the same SRP - with the latest available drivers. Sorry for the subpar formatting here:
Format: render/transcode time (GPU Usage % / CPU Usage %)
4K/60 AVC to AVC (VCE/QuickSync/NVENC Transcode)
AMD 6600: 1m 40s (39% / 15%)
Intel 770: 1m 33s (92% / 34%)
NVIDIA 3060: 3m 14s (30% / 17%)
4K/60 AVC to HEVC (VCE/QuickSync/NVENC Transcode)
AMD 6600: 1m 50s (39% / 15%)
Intel 770: 1m 25s (91% / 35%)
NVIDIA 3060: 4m 27s (23% / 15%)
Add Noise FX
AMD 6600: 28s (52% / 17%)
Intel 770: 33s (83% / 21%)
NVIDIA 3060: 41s (31% / 22%)
Black Bar Fill FX
AMD 6600: 1m 16s (74% / 8%)
Intel 770: 1m 6s (84% / 34%)
NVIDIA 3060: 3m 32s (83% / 28%)
Bump Map FX
AMD 6600: 28s (41% / 23%)
Intel 770: 35s (84% / 23%)
NVIDIA 3060: 1m 6s (20% / 8%)
Gaussian Blur FX
AMD 6600: 36s (62% / 11%)
Intel 770: 33s (82% / 16%)
NVIDIA 3060: 53s (33% / 18%)
Linear Blur FX
AMD 6600: 29s (61% / 13%)
Intel 770: 33s (81% / 25%)
NVIDIA 3060: 54s (43% / 15%)
Median FX
AMD 6600: 1m 21s (91% / 5%)
Intel 770: 1m 20s (96% / 17%)
NVIDIA 3060: 2m 41s (75% / 30%)
Min and Max FX
AMD 6600: 2m 27s (92% / 6%)
Intel 770: 2m 40s (95% / 21%)
NVIDIA 3060: 3m 28s (93% / 47%)
Pixelate FX
AMD 6600: 25s (58% / 18%)
Intel 770: 30s (81% / 20%)
NVIDIA 3060: 50s (44% / 18%)
Spherize FX
AMD 6600: 24s (51% / 18%)
Intel 770: 32s (79% / 15%)
NVIDIA 3060: 46s (26% / 12%)
PROBLEMATIC:
Defocus FX
AMD 6600: 7m 51s (4% / 87%)
Intel 770: 1m 16s (91% / 32%)
NVIDIA 3060: 2m 18s (73% / 22%)
Newsprint FX
AMD 6600: 37s (56% / 10%)
Intel 770: Causes PC to become unusable
NVIDIA 3060: 1m 8s (35% / 11%)
Starburst FX
AMD 6600: 30m 27s (2% / 90%)
Intel 770: 1m 30s (85% / 42%)
NVIDIA 3060: 3m 0s (77% / 21%)
The key takeaway I immediately see is that AMD and Intel are nearly matched - both trade blows depending on the test. Unfortunately, Intel couldn't use the Newsprint FX without lagging the PC to the point it needed a hard reboot, and AMD's GPU wasn't utilized at all in Defocus or Starburst, which means those two took ages to run because they were relegated to the CPU.
Meanwhile, NVIDIA was the slowest of the bunch, but that seems primarily due to its encoding being so much slower than the others even without FX (the first two results highlight that). I plan to test encoding with Voukoder when I can, but for now I wanted to stick to the built-in encoder; Voukoder may prove much kinder to GeForce. For anyone who wants to sanity-check raw encoder speed outside of VEGAS, I've sketched one way below.
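To be clear, this is a hypothetical harness rather than what produced the numbers above: it assumes an ffmpeg build with AMF/QSV/NVENC support on the PATH and a placeholder test clip named source_4k60.mp4, and you'd run only the encoder that matches the GPU actually in your system.

```python
import subprocess
import time

# Map each vendor's hardware HEVC encoder to its ffmpeg name.
encoders = {
    "AMD VCE": "hevc_amf",
    "Intel QuickSync": "hevc_qsv",
    "NVIDIA NVENC": "hevc_nvenc",
}

codec = encoders["NVIDIA NVENC"]  # pick the entry for the GPU you have
start = time.time()
# Transcode 4K/60 AVC to HEVC with the hardware encoder, copying audio.
# source_4k60.mp4 is a placeholder clip, not one of the files tested above.
subprocess.run(
    ["ffmpeg", "-y", "-i", "source_4k60.mp4",
     "-c:v", codec, "-c:a", "copy", f"out_{codec}.mp4"],
    check=True,
)
print(f"{codec}: {time.time() - start:.1f}s")
```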
When I compared the output quality of each encode, none stood out as different from the others. There have been times in the past where one vendor's output would be broken somewhere, but I couldn't spot any issues here. If you want a more objective check than eyeballing frames, there's a metric-based sketch below.
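Again just a rough sketch under the same assumptions (the file names carry over from the snippet above): ffmpeg's built-in SSIM filter compares an encode against its source and prints an average score, and libvmaf is an alternative if your build includes it.

```python
import subprocess

# Compare the encode (first input) against the source (second input) with the
# SSIM filter; the "All:" score ffmpeg prints to stderr is the overall average.
subprocess.run(
    ["ffmpeg", "-i", "out_hevc_nvenc.mp4", "-i", "source_4k60.mp4",
     "-lavfi", "ssim", "-f", "null", "-"],
    check=True,
)
```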
While all of these tested FX perform differently from vendor to vendor on comparably priced GPUs, it's really hard to tell right now which ones would actually benefit from faster models, in the same way faster GPUs definitively speed up 3D rendering. Few of the FX push all three vendors' GPUs to great effect; Median and Min and Max seem to come closest.
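And if anyone wants to capture similar utilization figures on their own system, Windows' built-in typeperf counters are one option. This is just a rough sketch; the counter paths are the standard WDDM ones, nothing VEGAS-specific.

```python
import subprocess

# Sample overall CPU usage and per-process 3D-engine GPU usage once per second,
# writing to a CSV. Stop with Ctrl+C once the render finishes, then average the
# columns; the GPU counter expands to one column per process/engine instance.
subprocess.run([
    "typeperf",
    r"\Processor(_Total)\% Processor Time",
    r"\GPU Engine(*engtype_3D)\Utilization Percentage",
    "-si", "1",
    "-o", "usage.csv",
])
```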