NickHope, did you actually watch the above video? It demonstrates 4K 60p with FX playing back at full frame rate. During playback, the 5700XT GPU load is consistently 68-70% the whole time. I don't understand why anyone keeps saying this card is not supported in Vegas.
@BruceUSA I did watch the video, and it appears to demonstrate that the card works well for GPU acceleration of video processing, which uses OpenCL. I assume that accounts for the high "Video Encode" figure.
However, as far as I have read and researched, it is not supported for hardware decoding of supported formats, which you would find on the File I/O tab of the VEGAS preferences. That requires UVD, but the 5700XT has the newer VCN, not the older UVD. That is presumably why the "Video Decode" graph in the video does not register (unless the media is an unsupported format). Lack of support for hardware decoding may not be such a big deal, as I don't think it adds a lot in terms of performance. Personally I'm not bothering to run it, even though my VEGA64 supports it. In fact I think it actually slowed my timeline down a little when I tested it. But it might help more with some other formats.
More importantly, as far as I have read and researched (example), the 5700XT is not supported for hardware rendering, again because in VEGAS that requires the older VCE, not the newer VCN. So you won't find AMD VCE as a render option with this card (right, @Marcin?). Which could be a big deal for someone who wants fast rendering of AVC/HEVC.
This is basically a repeat of what I wrote on the thread that I linked to in my last comment. Not sure how else I can express it without actually having the card to demo.
[Edit: It turns out that this is wrong, and that NAVI-based GPUs ARE supposed to be fully supported now. See comment below]
I tested those render options with and without decoding using Nvidia NVENC. It makes little or no difference in render time. The only big advantage I see is the preview speed of heavy source files.
When I did the benchmarking tests, I was getting an average of 5.8 fps preview with UVD on, and an average of 11 fps with UVD off. I just repeated that with the latest version of VEGAS and a different AMD driver (18.10.16) and it's about 10-13 fps whether UVD is on or off. If the VEGAS team can get VCN decode working, maybe it can bring a benefit.
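For anyone comparing their own numbers, the relative difference in those earlier tests works out like this (quick arithmetic sketch using the figures above; nothing VEGAS-specific):

```python
# Average preview fps from my earlier benchmarking tests (older driver)
uvd_on_fps = 5.8    # preview fps with UVD hardware decoding enabled
uvd_off_fps = 11.0  # preview fps with UVD hardware decoding disabled

speedup = uvd_off_fps / uvd_on_fps
print(f"UVD off previewed at about {speedup:.1f}x the rate of UVD on")
# prints: UVD off previewed at about 1.9x the rate of UVD on
```

In other words, with that driver and build, turning hardware decoding *off* nearly doubled my preview rate, which is why I said it slowed my timeline down.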
I just ran some benchmarks myself on an Intel NUC. It has AMD Vega-M and Intel 630 GPUs. I did 4K renders using all 3 codecs of Magix AVC with 4K footage from a Z-Cam E2, with decoding on and off for each GPU. I measured negligible differences when rendering h.264 footage, but it was a totally different story rendering an h.265 clip.
Both clips were shot with the same camera settings. FX: Zlog2 LUT, Color Grade Lift/Gain, 3 instances of Color Corrector (Secondary), and Sharpen. The h.264 clip rendered a little faster, but the h.265 clip was half the size. I was surprised to see Intel QSV taking longer than AMD VCE... last time I compared them on my heftier Radeon VII 9900K system it was the other way around. But that was before Magix added AMD decoding support. Going to have to rerun this on my workstation once I get home.
@NickHope alerted me to this thread, and I wanted to state that VEGAS Pro 17, with the latest update (b421) should support hardware decoding for most flavors of 8-bit 4:2:0 AVC and 8-bit/10-bit 4:2:0 HEVC media with the newer NAVI-based GPUs (such as the RX 5500, 5600, 5700). You will see a more distinct advantage with HEVC decoding in the NAVI GPUs. Do note that you still need to set it up correctly in the FileIO tab (change "Hardware Decoder To Use" to "AMD UVD").
AMD calls the new architecture "VCN" for both encode/decode, while it was "VCE" for encode and "UVD" for decode in the earlier generations VEGAS supported. We will make a note to update the terminology in the next version of VEGAS.
Just got my hands on an RX 5700 XT and benchmarked it against the RX 580 it's replacing. It moderately outperformed it on Vegas 17 build 421 across the board, with about 10% improvement in VCE rendering. Display performance was pretty much equal on AVC, but maybe a tad less playing 4K HEVC clips. One standout, however, was its performance on the Sample Project benchmark from the Benchmarking thread. It ran that faster than my Radeon VII. That benchmark is dominated by FX processing performance, so if that's what you're into, the RX 5700 XT seems to be the GPU to get. I have an html table with full comparative stats with my other systems here.
Just did tests with v16 of Vegas, and the bad news is that it does not support rendering to a 4K frame size with the VCE codec on this card. It only renders 4K with the MainConcept codec, with the exact same times as the RX 580. Rendering to a 1080p frame, however, is supported for both MC and VCE codecs. Note that v16 does not support decoding for AMD cards, and the numbers reflect that. Apparently the lack of decoding load benefits the Sample Project benchmark, because the v16 timing is even faster than with v17, while Red Car benchmarks, which are dominated more by camera footage load, run slower. Drilling down on the html table looks like this: