OpenCL benchmarks and preview performance?

cliff_622 wrote on 7/23/2020, 6:50 PM

Quick question. I have been told that OpenCL is what Vegas 17 uses for its hardware playback acceleration. If that is the case, does it mean I can just pick a new graphics card that has the highest OpenCL benchmarks? There are OpenCL benchmark charts all over Google.

I run an AMD Vega 56 card and a 12-core Ryzen 9 with 80 gigs of RAM. 8-bit files play as smooth as butter, but 10-bit GH5 files are much slower and choppy.

Do I just pick the fastest OpenCL rated card I can afford?

CT

Comments

john_dennis wrote on 7/23/2020, 7:52 PM

@cliff_622

"Do I just pick the fastest OpenCL rated card I can afford?"

It once was that simple, but with the inclusion of hardware decode (QSV, AMD UVD, NVIDIA NVDEC) in Vegas Pro 17, it's more complicated.

 

john_dennis wrote on 7/23/2020, 8:10 PM

"The Radeon RX Vega 56 is the weakest member of AMD's Vega GPU family. The Vega architecture is built on 14 nm silicon and contains next-generation compute units (nCUs). Each NCU houses 64 steam processors, of which the Vega 56 has 3584 vs. 4096 in the Vega 64. The new architecture employs 8GB of second generation high-bandwidth memory (HBM2)."

Read Here: https://www.vegascreativesoftware.info/us/forum/amd-radeon-vll-or-nvidia-2070-super--118415/#ca739363

Musicvid wrote on 7/23/2020, 8:23 PM

I am less than knowledgeable, but isn't OpenCL kind of an "also-ran" horse in today's acceleration race?

john_dennis wrote on 7/23/2020, 8:27 PM

@Musicvid

"I am less than knowledgeable..."

I am less than interested, but I would guess that it is still quite viable, since it's analogous to adding CPU cores.

Former user wrote on 7/23/2020, 11:13 PM

I run an AMD Vega 56 card and a 12-core Ryzen 9 with 80 gigs of RAM. 8-bit files play as smooth as butter, but 10-bit GH5 files are much slower and choppy.

Do I just pick the fastest OpenCL rated card I can afford?

CT

No, you'd be wasting your money. Vegas Pro does not use any GPU acceleration for playback of your files, and there is no GPU decode for your files either; that is a limitation of the GPUs, not of Vegas.

This shows playback of GH5 10-bit 4:2:2 4K30p in VP17, DaVinci Resolve, and a media player. In Vegas: 100% CPU and dropped frames. Resolve allows smooth playback, but CPU use is still high. The media player is able to utilise the GPU the best, and very little CPU is used.

cliff_622 wrote on 7/23/2020, 11:13 PM

Oddly enough, these same exact 10-bit files play very smoothly in another famous Windows editor on the exact same PC. I prefer Vegas, and I'm willing to spend more money on hardware to get playback here like it is on the other NLE. I have a lean and clean Windows 10 install with very few apps, 80 gigs of RAM and a 12-core Ryzen 9.

TheRhino wrote on 7/24/2020, 2:51 PM

Search the forum for "10bit GH5" to see what others have experienced... Before buying a GPU, since we're so close, I would wait for the Vegas 18 release in August and for Nvidia's Ampere & AMD's Big Navi GPU releases in September before making a decision. Vegas 18 may give Nvidia or AMD a bigger edge, & the new GPUs will drive the last generation to lower prices...

I chose an open-box $350 VEGA 64 liquid-cooled for my 9900K build 16 months ago because the Radeon VII was $700 & the Nvidia 2080 Ti was over $1000... I then installed a used $200 VEGA 56 in an older 6-core Xeon to bump up my HEVC & MP4 render speeds in that system, which I mostly use for rendering large projects so they don't tie up system resources while I am editing on my 9900K...

Workstation C with $600 USD of upgrades in April, 2021
--$360 11700K @ 5.0ghz
--$200 ASRock W480 Creator (onboard 10G net, TB3, etc.)
Borrowed from my 9900K until prices drop:
--32GB of G.Skill DDR4 3200 ($100 on Black Friday...)
Reused from same Tower Case that housed the Xeon:
--Used VEGA 56 GPU ($200 on eBay before mining craze...)
--Noctua Cooler, 750W PSU, OS SSD, LSI RAID Controller, SATAs, etc.

Performs VERY close to my overclocked 9900K (below), but at stock settings with no tweaking...

Workstation D with $1,350 USD of upgrades in April, 2019
--$500 9900K @ 5.0ghz
--$140 Corsair H150i liquid cooling with 360mm radiator (3 fans)
--$200 open box Asus Z390 WS (PLX chip manages 4/5 PCIe slots)
--$160 32GB of G.Skill DDR4 3000 (added another 32GB later...)
--$350 refurbished, but like-new Radeon Vega 64 LQ (liquid cooled)

Renders Vegas11 "Red Car Test" (AMD VCE) in 13s when clocked at 4.9 ghz
(note: BOTH onboard Intel & Vega64 show utilization during QSV & VCE renders...)

Source Video1 = 4TB RAID0--(2) 2TB M.2 on motherboard in RAID0
Source Video2 = 4TB RAID0--(2) 2TB M.2 (1) via U.2 adapter & (1) on separate PCIe card
Target Video1 = 32TB RAID0--(4) 8TB SATA hot-swap drives on PCIe RAID card with backups elsewhere

10G Network using used $30 Mellanox2 Adapters & Qnap QSW-M408-2C 10G Switch
Copy of Work Files, Source & Output Video, OS Images on QNAP 653b NAS with (6) 14TB WD RED
Blackmagic DeckLink PCIe card for capturing from tape, etc.
(2) internal BR Burners connected via USB 3.0 to SATA adapters
Old Cooler Master CM Stacker ATX case with (13) 5.25" front drive-bays holds & cools everything.

Workstations A & B are the 2 remaining 6-core 4.0ghz Xeon 5660 or I7 980x on Asus P6T6 motherboards.

$999 Walmart Evoo 17 Laptop with I7-9750H 6-core CPU, RTX 2060, (2) M.2 bays & (1) SSD bay...

Musicvid wrote on 7/24/2020, 3:06 PM

@cliff_622

Some "other famous Windows editors" use background rendering, a feature that carries its own price on system performance. Attributing that to OpenCL is probably a mistake.

cliff_622 wrote on 7/24/2020, 8:56 PM

Good to know!

OK, so improving OpenCL in hardware might be a busted idea.

CPU? Do I go from 12 cores to 16 cores? I mean, all I want is for 10-bit files to play as smoothly as they do in the big company's NLE. BTW, I'm playing off four 1TB SSDs in a RAID 0 config. It's got 2,600 megabytes per second of read speed, so drive latency is nothing. 80 gigs of RAM is a non-issue too.
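
(For what it's worth, even the GH5's heaviest 400 Mbps All-Intra 10-bit mode works out to only about 50 megabytes per second, so a 2,600 MB/s RAID 0 leaves roughly 50x headroom. The choke point with these files is CPU decode, not storage.)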

Vegas guys, will V18 have preview playback improvements for 10-bit GH5 files?

TheRhino wrote on 7/24/2020, 9:27 PM

@cliff_622

Some "other famous Windows editors" use background rendering, a feature that carries its own price on system performance. Attributing that to OpenCL is probably a mistake.

The other NLEs were getting complaints about 10-bit GH5 before they released updates to address it, so let's hope V18 does the same... @cliff_622 's AMD 3900X & VEGA 56 & M.2 storage should be able to handle the hardware requirements... That is a good bang/buck setup... You could spend 2X as much & only see marginal 10-15% performance gains... Adding more cores beyond 8-10 does not increase Vegas performance as much as higher clock speeds do... I'm actually thinking of upgrading a 2nd workstation to a 5.2 ghz 10-core 10900K, but I'm waiting to see how V18 performs on my 9900K... Currently my 8-core 4.9 ghz 9900K outperforms a $1800 < 4 ghz 32-core 3970X Threadripper because Vegas, like Photoshop, After Effects, games, etc., does not utilize the extra cores...

Former user wrote on 7/25/2020, 1:54 AM

Good to know!

OK, so improving OpenCL in hardware might be a busted idea.

There was some talk about certain MXF formats being GPU-accelerated in the future, maybe in VP18. Your GPU use would be much higher and CPU use lower for 4K playback, and you would possibly benefit from having a more powerful card. For your GH5 4:2:2 video files there's been no mention of GPU acceleration.

Howard-Vigorita wrote on 7/28/2020, 11:37 AM

@cliff_622 I think the biggest obstacle is GPU decoding of 4:2:2 color. I know Nvidia CUDA does not support it, and I don't think AMD OpenCL/Navi does either. Intel hardware OpenCL/oneAPI definitely does not, but its runtime implementation does... though that's CPU-based right now.

If you really think the extra color depth will show in the finished product, you could employ a proxy for editing. If seeing 4K resolution is important to getting the edit right, just see if you can make a 4:2:0 4K transcoded mp4 copy, change its file type to sfvp0, and put it in the same folder as the original clip... that'll make the 4K 4:2:0 an edit proxy for the 4K 4:2:2 file, giving you the benefit of GPU decoding during edit and playback. When all's said and done, you could even swap the file types of the original and the proxy, then re-render to see what the difference actually looks like. I'd be interested in seeing that myself.
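
A rough sketch of that transcode-and-rename step, in case it helps. This assumes ffmpeg is installed, that Vegas will pick up a proxy named by tacking .sfvp0 onto the original filename, and that the paths below are just placeholders... if Vegas ignores it, compare against the name of a proxy Vegas generates itself and adjust:

import subprocess
from pathlib import Path

# Original 10-bit 4:2:2 clip (placeholder path)
clip = Path(r"D:\footage\P1000123.MOV")
# Assumed proxy naming: the original filename with .sfvp0 tacked on, in the same folder
proxy = clip.with_name(clip.name + ".sfvp0")

subprocess.run([
    "ffmpeg", "-i", str(clip),
    "-c:v", "libx264", "-preset", "fast", "-crf", "18",  # 8-bit H.264 at the same 4K resolution
    "-pix_fmt", "yuv420p",                               # knock the chroma down to 4:2:0 so GPUs can decode it
    "-c:a", "aac", "-b:a", "192k",                       # mp4 doesn't take the GH5's PCM audio, so re-encode it
    "-f", "mp4",                                         # name the container, since .sfvp0 hides the extension
    str(proxy),
], check=True)

That keeps the full 4K frame for judging the edit; only the chroma is reduced for playback.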

Former user wrote on 7/28/2020, 9:53 PM

I think the biggest obstacle is GPU decoding of 4:2:2 color. I know Nvidia CUDA does not support it, and I don't think AMD OpenCL/Navi does either. Intel hardware OpenCL/oneAPI definitely does not, but its runtime implementation does... though that's CPU-based right now.

The Ice Lake Intel CPUs with Gen11 graphics have HEVC 4:2:2 decode, which makes me think the new Nvidia GPUs will also have HEVC 4:2:2 decode. So for a camera such as the A7S III you can either encode in HEVC XAVC-L 422 and potentially get hardware decoding via Intel Quick Sync, or encode H.264 XAVC-L 422 and not get hardware decoding. Both are poor options. Also, currently Vegas doesn't use hardware decode even for H.264 8-bit 4:2:0 XAVC-L, but I am guessing they are fixing that for VP18, which might mean that as long as the GPU decoder supports a format, it could work for other formats too.

Also, CUDA/OpenCL acceleration is something other editors and players have that does reduce the CPU workload even for 4:2:2 playback on the timeline; Vegas is not so good at that.
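
If anyone wants to check which of those buckets a given clip actually falls into before buying anything, here's a quick sketch (assuming ffprobe, which ships with ffmpeg, is installed; the path is a placeholder) that prints the codec, profile and pixel format the decoder would have to handle:

import subprocess

# Placeholder path to a GH5 / A7S III clip
clip = r"D:\footage\P1000123.MOV"

out = subprocess.run([
    "ffprobe", "-v", "error",
    "-select_streams", "v:0",                             # first video stream only
    "-show_entries", "stream=codec_name,profile,pix_fmt",
    "-of", "default=noprint_wrappers=1",
    clip,
], capture_output=True, text=True, check=True)

print(out.stdout)
# e.g. codec_name=h264 with pix_fmt=yuv422p10le -> 10-bit 4:2:2, no GPU decode on current cards
#      codec_name=hevc with pix_fmt=yuv420p10le -> 10-bit 4:2:0, a candidate for NVDEC/QSV decode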