Comments

Reyfox wrote on 5/9/2020, 7:48 AM

Still waiting for it to be supported before buying a RX5000 series card. Hoping it will be soon.

In the meantime, still using my RX480 8GB, which is working in VP17.

BruceUSA wrote on 5/9/2020, 8:16 AM

Does the 5700XT graphics card work well in VP17 build 421? Rendering Magix AVC UHD 2160p (AMD VCE) is working.

The 5700XT works very well. During Vegas playback it is hitting 60-70%. What's not to like here, eh? When the new NAVI 2 releases, it will be even more impressive.

NickHope wrote on 5/9/2020, 8:23 AM

It may work well for GPU acceleration of video processing but hardware decode/encode are not supported on it yet: https://www.vegascreativesoftware.info/us/forum/gpu-5700xt-not-showing-in-gpu-acceleration-vegas-17--120563/

BruceUSA wrote on 5/9/2020, 8:35 AM

NickHope. Did you actually watch the above video? It demonstrates 4K 60p with FX playing back at full frame rate, with the 5700XT GPU load consistently at 68-70% throughout playback. I don't understand why anyone keeps saying this card is not supported in Vegas.

NickHope wrote on 5/9/2020, 3:13 PM

@BruceUSA I did watch the video, and it appears to demonstrate that the card works well for GPU acceleration of video processing, which uses OpenCL. I assume that accounts for the high "Video Encode" figure.

However, as far as I have read and researched, it is not supported for hardware decoding of supported formats, which you would find on the File I/O tab of the VEGAS preferences. That requires UVD, but the 5700XT has the newer VCN, not the older UVD. That is presumably why the "Video Decode" graph in the video does not register (unless the media is an unsupported format). Lack of support for hardware decoding may not be such a big deal, as I don't think it adds a lot in terms of performance. Personally I'm not bothering to run it, even though my VEGA64 supports it. In fact I think it actually slowed my timeline down a little when I tested it. But it might help more with some other formats.

More importantly, as far as I have read and researched (example), the 5700XT is not supported for hardware rendering, again because in VEGAS that requires the older VCE, not the newer VCN. So you won't find AMD VCE as a render option with this card (right, @Marcin?). Which could be a big deal for someone who wants fast rendering of AVC/HEVC.

This is basically a repeat of what I wrote on the thread that I linked to in my last comment. Not sure how else I can express it without actually having the card to demo.

[Edit: It turns out that this is wrong, and that NAVI-based GPUs ARE supposed to be fully supported now. See comment below]

BruceUSA wrote on 5/9/2020, 3:34 PM

NickHope. Thank you for the thorough explanation. I appreciate it, as I did not dig into the research. :)

j-v wrote on 5/9/2020, 3:40 PM

I tested those render options with and without decoding with Nvidia NVENC.
It makes little or no difference in render time. The only big advantage I see is the preview speed of heavy source files.

Kind regards,
Marten

Camera: Pan X900, GoPro Hero7 Black, DJI Osmo Pocket, Samsung Galaxy A8
Desktop: MB Gigabyte Z390M, W10 home version 2004 build 19041.264, i7 9700 4.7GHz, 16 GB DDR4 RAM, GeF. GTX 1660 Ti.
Laptop: Asus ROG GL753VD, W10 home version 2004 build 19041.264, CPU i7 7700HQ, 8 GB RAM, GeF. GTX 1050 (2 GB) + Int. HD Graphics 630 (2 GB). VP 16, 17 and VMS PL 16, 17 are installed, all latest builds
Both Nvidia GPUs have driver version 451.48 Studio Driver, desktop the Studio DHC driver
TV: LG 4K 55EG960V

My slogan is: BE OR BECOME A STEMCELL DONOR !!!

NickHope wrote on 5/10/2020, 1:40 AM

When I did the benchmarking tests, I was getting average 5.8 fps preview with UVD on, and ave 11 fps with UVD off. Just repeated that again with the latest version of VEGAS and a different AMD driver (18.10.16) and it's about 10-13 fps whether UVD is on or off. If the VEGAS team can get VCN decode working, maybe it can bring a benefit.
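For a sense of scale, the figures above can be turned into a relative gain. This is just arithmetic on the numbers quoted in the comment (~5.8 fps average with UVD on vs. ~11 fps with it off on the older driver):

```python
# Rough speedup implied by the preview-fps figures above. Values are
# taken straight from the comment; on driver 18.10.16 the retest gave
# ~10-13 fps whether UVD was on or off, i.e. roughly no difference.
def relative_gain(fps_off, fps_on):
    """How much faster preview runs with the hardware decoder off."""
    return fps_off / fps_on

old_driver = relative_gain(fps_off=11.0, fps_on=5.8)
print(round(old_driver, 2))  # roughly 1.9x faster with UVD off
```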

Howard-Vigorita wrote on 5/10/2020, 3:12 AM

I just ran some benchmarks myself on an Intel NUC. It has AMD Vega-M and Intel 630 gpus. Did 4k renders using all 3 codecs of Magix AVC with 4k footage from a Z-Cam E2 and with decoding on and off for each gpu. Measured negligible differences when rendering h.264 footage. But it was a totally different story rendering an h.265 clip.

Both clips were shot with the same camera settings. FX: Zlog2 LUT, Color Grade Lift/Gain, 3 instances of Color Corrector (Secondary), and Sharpen. The h.264 clip rendered a little faster but the h.265 clip was half the size. I was surprised to see the Intel Qsv taking longer than the AMD VCE... last time I compared them on my heftier Radeon VII 9900k system it was the other way around. But that was before Magix added AMD decoding support. Going to have to rerun this on my workstation once I get home.

Last changed by Howard-Vigorita on 5/10/2020, 12:39 PM, changed a total of 3 times.

Cameras: Canon XF305, JVC GV-LS2, Canon 6D w/L-glass line. (Mar 2020: testing Zcam E2)
Laptop: Dell XPS15-9570; i7-8750h 32gb (integrated Intel UHD-630 & Nvidia GTX-1050Ti)
Road: Intel NUC i7 8809g 32gb (integrated AMD VegaM 4gb graphics and Intel HD630)
Workstation: i9 9900k 32 gb (Sapphire AMD Radeon VII 16gb graphics and integrated Intel UHD630)
Workstation2: e5 1650v4 128 gb (Sapphire Nitro+ RX5700xt Navi 8gb graphics)
Workstation3: i7-980X 24gb (Sapphire Nitro+ Vega64 8gb graphics)
currently Vegas 17.452

VEGASHeman wrote on 5/13/2020, 10:42 AM

@NickHope alerted me to this thread, and I wanted to state that VEGAS Pro 17, with the latest update (b421), should support hardware decoding for most flavors of 8-bit 4:2:0 AVC and 8-bit/10-bit 4:2:0 HEVC media with the newer NAVI-based GPUs (such as the RX 5500, 5600, 5700). You will see a more distinct advantage with HEVC decoding on the NAVI GPUs. Do note that you still need to set it up correctly on the File I/O tab (change "Hardware Decoder To Use" to "AMD UVD").

AMD calls the new architecture "VCN" for both encode/decode, while it was "VCE" for encode and "UVD" for decode in the earlier generations VEGAS supported. We will make a note to update the terminology in the next version of VEGAS.

NickHope wrote on 5/13/2020, 1:29 PM

To be totally clear, VCN encoding is now supported as well as decoding, but the Encode Mode in the render settings is still referred to as "AMD VCE" for now. Is that correct, @VEGASHeman?

VEGASHeman wrote on 5/13/2020, 1:31 PM

Yes. We will address the naming confusion in VP18.

NickHope wrote on 5/13/2020, 1:39 PM

Thanks VEGASHeman.

Tagging @Reyfox & @BruceUSA 👆👆👆 Sorry for the out-of-date info 😳

Reyfox wrote on 5/13/2020, 1:45 PM

@NickHope no problem! Glad to hear that!!

Howard-Vigorita wrote on 5/25/2020, 12:29 AM

Just got my hands on an rx-5700xt and benchmarked it against the rx-580 it's replacing. It moderately outperformed it on Vegas 17 build 421 across the board, with about a 10% improvement in VCE rendering. Display performance was pretty much equal on AVC, but maybe a tad slower playing 4K HEVC clips. One standout, however, was its performance on the Sample Project benchmark from the Benchmarking thread. It ran that faster than my Radeon VII. That benchmark is dominated by FX processing performance, so if that's what you're into, the rx-5700xt seems to be the GPU to get. I have an html table with full comparative stats against my other systems here.

Just did tests with v16 of Vegas, and the bad news is that it does not support rendering to a 4K frame size with the VCE codec on this card. It only renders to 4K with the MainConcept codec, with exactly the same times as the rx-580. Rendering to a 1080p frame, however, is supported for both MC and VCE codecs. Note that v16 does not support decoding on AMD cards, and the numbers reflect that. Apparently the lack of decoding load benefits the Sample Project benchmark, because the v16 timing is even faster than with v17, while the Red Car benchmarks, which are dominated more by camera-footage load, run slower. Drilling down on the html table looks like this:

Last changed by Howard-Vigorita on 5/25/2020, 11:06 AM, changed a total of 5 times.


Howard-Vigorita wrote on 6/12/2020, 1:23 PM

I recently picked up a Sapphire Vega64 for my e5-1650 Xeon system thinking it would dramatically outperform the rx-5700xt. Wrong. Surprisingly, the two boards had only small render-time differences on Red Car, some identical, others plus and minus. Navi still dominated the Sample Project benchmark. But the Vega64, with its faster HBM2 memory, was better on playback, so that board stays in. Somewhat perplexed, I decided to swap the Navi board into my 9900k system to see how it did there compared to the Radeon VII. Same deal. Gotta say, the Navi is a heck of a performer for the money with Vegas 17 (not so much with earlier versions). Looking forward to Big Navi. I've updated the roundup table to include the new timings with the latest drivers from AMD and Intel.

Last changed by Howard-Vigorita on 6/12/2020, 1:34 PM, changed a total of 1 times.


fr0sty wrote on 6/12/2020, 1:52 PM

I'm waiting on Big Navi myself, though I am also looking forward to how well the 3000 series Nvidia RTX GPUs do.

Systems:

Desktop

AMD Ryzen 7 1800x 8 core 16 thread at stock speed

64GB 3000mhz DDR4

Radeon VII

Windows 10

Laptop:

ASUS Zenbook Pro Duo 32GB (9980HK CPU, RTX 2060 GPU, dual 4K touch screens, main one OLED HDR)

fifonik wrote on 6/12/2020, 6:02 PM

My understanding is that AMD does not improve VCN/VCE much. Unfortunately, for video editing this is the most important part of the chip (for decoding and encoding streams), while the texture/shader blocks can mostly only be used by some plugins (like NeatVideo, if they are implemented that way).

Last changed by fifonik on 6/12/2020, 6:03 PM, changed a total of 1 times.

Camcorder: Panasonic X920

Desktop: MB: MSI B450M MORTAR TITANIUM, CPU: AMD Ryzen 3700X (not OC), RAM: G'Skill 16 GB DDR4@3200 (not OC), Graphics card: MSI RX580 8GB (factory OC), SSD: Samsung 970 Evo+ NVMe 500MB (OS), HDDs: Seagate & Toshiba 2TB, OS: Windows 10 Pro 1909

NLE: Vegas Pro [Edit] 11, 12, 13, 15, 17

matthias-krutz wrote on 6/13/2020, 2:59 AM

I have installed an RX5700. Although the performance is very good, I currently cannot give a clear recommendation for it, since some functions of Vegas cannot cope with the card. The new video stabilization, for example, always causes Vegas to hang, and PiP is not working properly. I hope the card will be fully supported in the near future. But only if it is a driver issue would earlier versions of Vegas benefit from a fix.

Desktop: Ryzen R7 2700, RAM 2 x Ballistix DIMM 16 GB DDR4-2666, X470 Aorus Ultra Gaming, Radeon R9 380 4GB, Win10

Laptop: T420, W7 SP1, i5-2520M 4GB, SSD, HD Graphics 3000

VEGAS Pro 14-17, Movie Studio 12 Platinum, Vegasaur, HitfilmPro

Editor_101 wrote on 6/14/2020, 12:17 PM

@BruceUSA I did watch the video, and it appears to demonstrate that the card works well for GPU acceleration of video processing, which uses OpenCL. I assume that accounts for the high "Video Encode" figure.

However, as far as I have read and researched, it is not supported for hardware decoding of supported formats, which you would find on the File I/O tab of the VEGAS preferences. That requires UVD, but the 5700XT has the newer VCN, not the older UVD. That is presumably why the "Video Decode" graph in the video does not register (unless the media is an unsupported format). Lack of support for hardware decoding may not be such a big deal, as I don't think it adds a lot in terms of performance. Personally I'm not bothering to run it, even though my VEGA64 supports it. In fact I think it actually slowed my timeline down a little when I tested it. But it might help more with some other formats.

More importantly, as far as I have read and researched (example), the 5700XT is not supported for hardware rendering, again because in VEGAS that requires the older VCE, not the newer VCN. So you won't find AMD VCE as a render option with this card (right, @Marcin?). Which could be a big deal for someone who wants fast rendering of AVC/HEVC.

This is basically a repeat of what I wrote on the thread that I linked to in my last comment. Not sure how else I can express it without actually having the card to demo.

[Edit: It turns out that this is wrong, and that NAVI-based GPUs ARE supposed to be fully supported now. See comment below]

Hardware Decoding adds performance by offloading decoding of compressed formats like H.264 and HEVC from the CPU to a dedicated core in the GPU that is much faster at decoding (and encoding) these formats. There is no reason - at all - not to use it, and some people with weaker machines may depend on it for usable performance while editing. There is no way you're LOSING performance using GPU Decoding over the CPU when editing these formats.
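One way to sanity-check, outside any NLE, whether your GPU can actually hardware-decode a given clip is to ask ffmpeg to decode it through a hardware accelerator and discard the frames. This is a hedged sketch: it assumes ffmpeg is on PATH, `clip.mp4` is a placeholder file name, and `d3d11va` is the generic Windows accelerator (`cuda` for NVIDIA or `qsv` for Intel would be alternatives):

```python
# Probe hardware decoding by decoding a clip through a GPU accelerator
# and throwing the output away ("-f null -"). A zero exit code means
# the whole clip decoded successfully on the chosen accelerator.
import subprocess

def hw_decode_cmd(path, accel="d3d11va"):
    # -v error keeps the output quiet unless something actually fails
    return ["ffmpeg", "-v", "error", "-hwaccel", accel,
            "-i", path, "-f", "null", "-"]

def can_hw_decode(path, accel="d3d11va"):
    return subprocess.run(hw_decode_cmd(path, accel),
                          capture_output=True).returncode == 0

# Example: can_hw_decode("clip.mp4", "d3d11va")
```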

VEGAS is used by a lot of consumers/prosumers who record video with devices that only capture these formats (Smartphones, DSLRs, etc.).

Encoding is the one you may want to skip, as AMD doesn't have the best H.264 CODEC (their HEVC CODEC is really good in NAVI, though). I would not, personally, use VCE/VCN to Encode H.264 for delivery - but I'd probably consider it for HEVC ;-)

Editor_101 wrote on 6/14/2020, 12:27 PM

I recently picked up a Sapphire Vega64 for my e5-1650 Xeon system thinking it would dramatically outperform the rx-5700xt. Wrong. Surprisingly, the 2 boards had only small render time differences on Red Car, some identical, others plus and minus. Navi still dominated on the Sample Project benchmark. But the Vega64 with its faster hbm2 memory was better on playback so that board stays in. Somewhat perplexed, I decided to swap the Navi board into my 9900k system to see how it did there compared to the Radeon VII. Same deal. Gotta say, the Navi is heck of a performer for the money with Vegas 17 (not so much with earlier versions). Looking forward to Big Navi. I've updated the roundup table to include the new timings with the latest drivers from amd and intel.

You're going to get weird results with these cards, because the application used doesn't really tax the GPU. An NLE like Resolve, which does image processing on the GPU, will see the 5700XT fall behind due to its limited compute cores as well as its slower RAM. The Radeon VII will blow it away.

In Resolve, a 5700XT is worse than an O.G. RTX 2060 - by a significant margin. So, if you use Resolve (among other software applications), the 5700XT is a card to avoid, not buy. Get the 2060 SUPER, instead, or wait for "Big Navi." In addition to that, Nvidia's H.264 CODEC is significantly better than AMD's, so it's a lot more usable for rendering deliverables than AMD's card (which pumps out lower quality footage). AMD's newer HEVC CODEC is pretty good, though (and fast).

The 5700XT is a card optimized for gaming. It has good GPU performance, but slower RAM and far fewer compute cores than the other Radeons. In applications that lean on compute and shuffle tons of data to/from/on the GPU, this puts it at a pretty hefty disadvantage.

It's a really bad investment vs. Nvidia for content creators. I'd wait on it, personally.

Lots of applications seem fairly buggy with VCN, as well. I am not sure why AMD cannot just stabilize one technology. They seem to always want to change things up every few years. Intel/Nvidia never have this issue!

JN- wrote on 6/15/2020, 9:26 AM

@TheRhino I know it's not directly related to the topic, but for those who like to run multiple sessions, Nvidia has increased the NVENC concurrent-session limit from 2 to 3. https://developer.nvidia.com/video-encode-decode-gpu-support-matrix
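For batch-encoding outside VEGAS, that cap can be respected by simply bounding how many encodes run at once. A sketch under stated assumptions: file names are hypothetical, ffmpeg is assumed to be on PATH, and `h264_nvenc` is ffmpeg's NVENC H.264 encoder:

```python
# Cap concurrent NVENC encodes at the driver's session limit (3 on
# consumer GeForce cards per the matrix linked above). The pool makes
# sure a fourth encode only starts once one of the first three exits.
import subprocess
from concurrent.futures import ThreadPoolExecutor

NVENC_SESSIONS = 3  # GeForce limit after the driver change noted above

def encode_cmd(src, dst):
    return ["ffmpeg", "-y", "-i", src, "-c:v", "h264_nvenc", dst]

def encode_batch(jobs):
    # jobs: iterable of (source, destination) pairs
    with ThreadPoolExecutor(max_workers=NVENC_SESSIONS) as pool:
        rcs = pool.map(
            lambda job: subprocess.run(encode_cmd(*job)).returncode, jobs)
        return list(rcs)
```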

Last changed by JN- on 6/15/2020, 9:27 AM, changed a total of 1 times.

---------------------------------------------

Benchmarking thread

Codec Render Quality tables

---------------------------------------------

PC ... Corsair case, own build ...

CPU .. i9 9900K, iGpu UHD 630

Memory .. 32GB DDR4

Graphics card .. MSI RTX 2080 ti

Graphics driver .. latest studio

PSU .. Corsair 850i

Mboard .. Asus Z390 Code


Laptop ... (Acer Predator G9-793-77AC)

CPU .. i7-6700HQ Skylake-H

Memory ..32 GB DDR4, was previously 16 GB

Graphics card .. Nvidia GTX 1070

Graphics driver .. latest studio

TheRhino wrote on 6/15/2020, 10:42 AM

@JN- Thanks. I have an RTX 2060 on my laptop that is now able to render (3) at once, but at that point the 6-core CPU & 16 GB of RAM are also maxed out, whereas my 9900K / VEGA 64 / 32GB workstation can handle a little more load... On earlier, more stable versions of Vegas, I used to set up batch processing overnight so my files were ready the next morning... However, with V13-V17 I would find that it sometimes crashed within 1 hour of leaving the studio, so NOTHING was ready to go the next day... So currently I have a 2nd workstation rendering 3-4 instances while I start a new project on a different system... This way I can babysit Vegas by watching the renders on multiple screens...
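The overnight-crash problem above can also be worked around with a minimal "babysitter" that retries a failed render instead of leaving nothing done by morning. This is only a sketch: the render command is a placeholder for whatever launches your job (a script, a CLI encoder, etc.):

```python
# Rerun a render command if it exits with an error, up to a few
# attempts, with a pause between tries so the GPU/driver can settle.
import subprocess
import time

def babysit(cmd, attempts=3, cooldown=30):
    for _ in range(attempts):
        if subprocess.run(cmd).returncode == 0:
            return True        # render finished cleanly
        time.sleep(cooldown)   # wait, then retry
    return False               # still failing after all attempts
```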

Workstation D with $1,350 USD of upgrades in April, 2019
--$500 9900K @ 5.0ghz
--$140 Corsair H150i liquid cooling with 360mm radiator (3 fans)
--$200 open box Asus Z390 WS (PLX chip manages 4/5 PCIe slots)
--$160 32GB of G.Skill DDR4 3000
--$350 refurbished, but like-new Radeon Vega 64 LQ (liquid cooled)

Renders Vegas11 "Red Car Test" (AMD VCE) in 13s when clocked at 4.9 ghz
(note: BOTH onboard Intel & Vega64 show utilization during QSV & VCE renders...)

Source Video1 = 4TB RAID0--(2) 2TB M.2 on motherboard in RAID0
Source Video2 = 4TB RAID0--(2) 2TB M.2 (1) via U.2 adapter & (1) on separate PCIe card
Target Video1 = 16TB RAID10--(4) 8TB SATA hot-swap drives on PCIe RAID10 card

10G Network using used $30 Mellanox2 Adapters & new $135 10G Switch
Copy of Work Files, Source & Output Video, OS Images on QNAP 653b NAS
Blackmagic Decklink PCie card for capturing from tape, etc.
(2) internal BR Burners connected via USB 3.0 to SATA adapters
Old Cooler Master CM Stacker ATX case with (13) 5.25" front drive-bays holds & cools everything.

Workstations A, B & C are older 6-core 4.0ghz Xeon 5660 or I7 980x on Asus P6T6 motherboards.

$999 Walmart Evoo 17 Laptop with I7-9750H 6-core CPU, RTX 2060, (2) M.2 bays & (1) SSD bay...

JN- wrote on 6/15/2020, 11:24 AM

@TheRhino Indeed, stability is everything really.
