System build advice for Vegas Pro 14

werner-v wrote on 11/27/2018, 12:30 AM

Hi. I am upgrading my system for VP14 and would like to know if any of these combinations would be OK or should be avoided.

1) Low budget - Ryzen 5 2400G APU

 

2) Higher budget - Ryzen 5 2600 with either Radeon Rx 570 or Nvidia 1060 GPU.

Any other recommendations for this level of system would be appreciated as well.

Thanks.

 

Comments

NickHope wrote on 11/27/2018, 1:25 AM

Get an AMD Radeon, not NVIDIA, for Vegas Pro 14 & earlier.

werner-v wrote on 11/27/2018, 2:19 AM

Thanks Nick. Will do. I presume the 2400G APU is no good then, or is it just a lot slower? From the reviews I've looked at, it would seem to be about 30% slower than something like a Ryzen 5 2600.

I have also seen recommendations in previous posts of going for something like an Intel Core i7 8700 because of advantages in rendering derived from its onboard graphics in conjunction with a discrete GPU.

Is that true for VP14? If so, would a Radeon RX 570 be better than an Nvidia 1060 with an i7 8700 CPU?

Thanks.

NickHope wrote on 11/27/2018, 4:24 AM

My comment was only about the discrete GPU in the 2nd option. I'm afraid I don't know much about the current market for CPUs. Plenty of other forum members can give you better info than me about that.

OldSmoke wrote on 11/27/2018, 7:24 AM

Hi. I am upgrading my system for VP14 and would like to know if any of these combinations would be OK or should be avoided.

1) Low budget - Ryzen 5 2400G APU

 

2) Higher budget - Ryzen 5 2600 with either Radeon Rx 570 or Nvidia 1060 GPU.

Any other recommendations for this level of system would be appreciated as well.

Thanks.

What are your projects like? 1080 60p or all the way to 4K 60p? Multicam? What is your budget like?

Last changed by OldSmoke on 11/27/2018, 7:24 AM, changed a total of 1 times.

Proud owner of Sony Vegas Pro 7, 8, 9, 10, 11, 12 & 13 and now Magix VP15&16.

System Spec.:
Motherboard: ASUS X299 Prime-A

Ram: G.Skill 4x8GB DDR4 2666 XMP

CPU: i7-9800x @ 4.6GHz (custom water cooling system)
GPU: 1x AMD Vega Pro Frontier Edition (water cooled)
Hard drives: System Samsung 970Pro NVME, AV-Projects 1TB (4x Intel P7600 512GB VROC), 4x 2.5" Hotswap bays, 1x 3.5" Hotswap Bay, 1x LG BluRay Burner

PSU: Corsair 1200W
Monitor: 2x Dell Ultrasharp U2713HM (2560x1440)

werner-v wrote on 11/27/2018, 9:08 AM

My camera is a Lumix DMC-G6, so max quality will be 1920x1080 50p 28Mbps MP4 or 1920x1080 50p 28Mbps AVCHD Progressive.

Not multicam.

Budget is around $1300.

OldSmoke wrote on 11/27/2018, 12:03 PM

I would think the i7-8700K with the RX 570 should be a good build for your kind of work. Make sure you have at least 32GB of DDR4-2666 and a good motherboard. I am an ASUS fan, but I believe some have reported that MSI is better for this CPU.

Last changed by OldSmoke on 11/27/2018, 4:37 PM, changed a total of 1 times.

Reyfox wrote on 11/27/2018, 4:13 PM

If you plan on overclocking, ASUS VRMs are not as good as MSI's or ASRock's.

Happily editing on my "old" Ryzen 1700X, MSI RX480 8GB, 16GB DDR4 3200 system. AMD VCE uses 100% of the GPU.

Last changed by Reyfox on 11/27/2018, 4:15 PM, changed a total of 1 times.

Newbie😁

Vegas Pro 22 (VP18-21 also installed)

Win 11 Pro always updated

AMD Ryzen 9 5950X 16 cores / 32 threads

32GB DDR4 3200

Sapphire RX6700XT 12GB Driver: 25.5.1

Gigabyte X570 Elite Motherboard

Panasonic G9, G7, FZ300

Rainer wrote on 11/29/2018, 12:08 AM

Does your budget include a monitor or two? If not, and you're talking US$, you should be able to go a step higher, say a Ryzen 7 2700 and RX 580, and still have money left over. DDR4-2666 RAM is OK, but I don't think you'll get any benefit from more than 16GB.

werner-v wrote on 11/29/2018, 1:44 AM

I am partial to the Ryzen CPUs, but I have seen comments on other threads saying that something like the Intel 8700 can use its integrated graphics to process something (I think it was VCE) or something like that, in addition to using the discrete GPU.

Apparently something the Ryzen chips can't do.

Any opinion on that? Thanks.

NickHope wrote on 11/29/2018, 2:49 AM

The Intel 8700 has integrated Intel HD graphics which supports encoding via QSV. The Ryzen 5 2400G has integrated Radeon RX Vega 11 graphics which (I'm fairly sure but not 100%) supports encoding via VCE 4.0. The Ryzen 5 2600 does not have integrated graphics, so no VCE, but the Radeon RX 570 has VCE 3.4. The NVIDIA 1060 has NVENC.

I can't speak for the speed/quality of one hardware encoder vs the others.
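If it helps, here's that summary as a quick lookup table (just a sketch of the figures quoted in this post, nothing authoritative - the VCE generation numbers in particular should be verified against current spec sheets):

```python
# Quick lookup table of the hardware encoders mentioned above.
# Figures are as quoted in this post; check spec sheets before buying.
hw_encoders = {
    "Intel i7-8700 (integrated HD graphics)": "QSV",
    "Ryzen 5 2400G (integrated RX Vega 11)":  "VCE 4.0 (fairly sure, not 100%)",
    "Ryzen 5 2600 (no integrated graphics)":  None,
    "Radeon RX 570":                          "VCE 3.4",
    "NVIDIA GTX 1060":                        "NVENC",
}

for part, encoder in hw_encoders.items():
    print(f"{part}: {encoder or 'CPU encode only'}")
```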

Trensharo wrote on 12/7/2018, 9:09 AM

The Intel 8700 has integrated Intel HD graphics which supports encoding via QSV. The Ryzen 5 2400G has integrated Radeon RX Vega 11 graphics which (I'm fairly sure but not 100%) supports encoding via VCE 4.0. The Ryzen 5 2600 does not have integrated graphics, so no VCE, but the Radeon RX 570 has VCE 3.4. The NVIDIA 1060 has NVENC.

I can't speak for the speed/quality of one hardware encoder vs the others.

I think a lot of people don't understand how this stuff works:

Intel QuickSync Video is Intel's Solution for Hardware Decoding and Encoding.

VCE is AMD's solution for Encoding. They also have UVD for Decoding. VP 16 does not use UVD.

NVENC is NVIDIA's solution for Encoding. They also have NVDEC for Decoding. VP16 does not use NVDEC.

The NLE only Accelerates Decode with QSV. Decode is part of the rendering pipeline... The NLE has to Decode the video before rendering the frames that have to be encoded into the final product ;-)

Again: Vegas Pro 16 does not use AMD UVD or NVIDIA NVDEC for Accelerated Decoding of supported formats. It only uses Intel QuickSync Video for this. If you work primarily with DSLR, Smartphone, etc. footage in supported formats (H.264/5, VP8/9, etc.) then an Intel CPU with QSV is optimal for VEGAS Pro 16. It will Accelerate Decode with the Intel GPU, speeding up timeline editing and "renders", while still using QSV, VCE or NVENC to Accelerate the Encode.

The GPU Acceleration that is done while rendering is fairly barebones. You can probably get by with only an Intel iGPU if you use QSV everywhere it can be used in the NLE (Decode and Encode Acceleration); particularly if you have an Iris Plus or Pro.

A more powerful GPU will speed up the middle [render] process, though. VP16 seems to work properly with GPU Accelerated [Effects] Rendering on dGPUs… So, the better your GPU is, the faster it will render those effects. Some of them are more GPU-heavy than others. You just set it to use your dGPU for Accelerated Processing (which it did automatically on my system - and worked properly - when I tried the trial... hours ago).

This is the rendering pipeline in the NLE:

Decode Video -> Render Effects on the Video Frames -> Encode Video

  • Decode: CPU or QSV only
  • Render effects: CPU or GPU
  • Encode: CPU, QSV, VCE, or NVENC
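A minimal sketch of that split, if it helps to see it laid out (the stage functions are hypothetical placeholders for illustration, not any real VEGAS or SDK API):

```python
# Minimal sketch of the pipeline above. The stage functions are hypothetical
# placeholders for illustration, not a real VEGAS or SDK API.
def decode_frame(packet, use_qsv=True):
    """Decode stage: CPU, or QSV if an Intel iGPU is present (VP16 uses no UVD/NVDEC)."""
    return f"decoded({packet}, qsv={use_qsv})"

def render_fx(frame, use_gpu=True):
    """Effects stage: CPU, or whichever GPU is selected for accelerated video processing."""
    return f"fx({frame}, gpu={use_gpu})"

def encode_frame(frame, encoder="NVENC"):
    """Encode stage: CPU (software), QSV, VCE or NVENC."""
    return f"encoded({frame}, {encoder})"

def render_project(packets):
    # Decode -> effects -> encode, frame by frame. Each stage can run on different
    # hardware, which is why QSV decode and NVENC/VCE encode can work concurrently.
    return [encode_frame(render_fx(decode_frame(p))) for p in packets]

print(render_project(["frame_0", "frame_1"]))
```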

The hardware encoders aren't as good as CPU only. You should not render your masters with QSV, VCE, or NVENC IMO (then again... you should probably render those to an Intermediate/Mastering CODEC, which those encoders don't support, anyways). In that case, the Intel CPU is still advantageous if you're coming from a supported CODEC due to the fact that it can Accelerate the DECODE of those files during the Render pipeline.

Other Professional NLEs are getting around this quality disparity by implementing CUDA-based Encoders, which are hardware accelerated but independent of the NVENC hardware on the Nvidia GPU package (for example). Resolve Studio is an example of this. Some Intermediate/Mastering CODECs like Cineform (and even DNxHR, IIRC) are Hardware Accelerated in NLEs like Premiere Pro CC.

I'm going to reiterate this, bolded, for emphasis: If you edit predominantly H.264, HEVC, and other CODECS supported by VEGAS Pro and Intel QuickSync Video... going with an AMD CPU is going to nerf your Timeline and Render performance in comparison to Intel. The extra cores in Ryzen are nice, but they are no match for the dedicated Decoder Hardware module, and VEGAS Pro uses that to decode supported formats... both on the timeline and in the Render pipeline.

Assuming you have at least a Kaby Lake CPU with QSV Support, these are the applicable CODECs that QSV Can Encode and Decode (Exceptions noted) - support in the VP16 NLE is obviously needed even to use these, naturally :-P There's a small lookup-table sketch after the list.

  • MPEG-2
  • H.264
  • VC-1 (Decode Only)
  • JPEG
  • VP8
  • HEVC
  • HEVC 10-Bit
  • VP9
  • VP9 10-Bit (Decode Only)
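Here is the same list as a small lookup table, if that's easier to scan (a sketch based only on the list above; real support depends on your exact CPU generation and drivers):

```python
# The QSV support list above as a lookup table (Kaby Lake or newer assumed,
# exceptions as noted in the list; real support depends on CPU gen and drivers).
QSV_SUPPORT = {
    "MPEG-2":      {"decode": True, "encode": True},
    "H.264":       {"decode": True, "encode": True},
    "VC-1":        {"decode": True, "encode": False},  # decode only
    "JPEG":        {"decode": True, "encode": True},
    "VP8":         {"decode": True, "encode": True},
    "HEVC":        {"decode": True, "encode": True},
    "HEVC 10-bit": {"decode": True, "encode": True},
    "VP9":         {"decode": True, "encode": True},
    "VP9 10-bit":  {"decode": True, "encode": False},  # decode only
}

def qsv_can(codec: str, operation: str = "decode") -> bool:
    """Return True if the codec appears in the list above for the given operation."""
    return QSV_SUPPORT.get(codec, {}).get(operation, False)

print(qsv_can("HEVC", "decode"))   # True
print(qsv_can("VC-1", "encode"))   # False
```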

I'd consider this a prerequisite if you want to edit any 4K+ HEVC on the timeline without Transcoding - unless you are running VP on the IBM Watson supercomputer.

Former user wrote on 12/7/2018, 9:40 AM

@Trensharo Good information indeed, but ... With my own testing re: hardware rendering with either Nvenc or Qsv I found that the difference in render times was very slight. For my setup, on balance, the best combination for render time was Nvidia as HW Acc. and Qsv for hardware render. That was with the Red Car and Running Man tests, the first has some fx, the latter none. Thing is if what you’re saying is correct then Intel graphics as HW acc. + Qsv as render should produce better results, but it doesn’t.

OldSmoke wrote on 12/7/2018, 10:16 AM

I think a lot of people don't understand how this stuff works: [...]

I only partially agree with what you are stating, especially when you talk about AMD cards. I came from NVIDIA cards, including a 1080Ti, and it was in no way faster than my current Fury X. Vegas uses OpenCL/GL to process FX and other things for timeline acceleration. Recently, in another thread, I did a render test of my aging i7-3930K with my Fury X using VCE and it was as fast as others with their Nvidia cards and/or Intel QSV on an 8700K. Also keep in mind that QSV is only available on mainstream desktop CPUs; HEDT CPUs don't have an iGPU… not yet.

Last changed by OldSmoke on 12/7/2018, 10:18 AM, changed a total of 1 times.

Trensharo wrote on 12/7/2018, 2:01 PM

OldSmoke…

You're ignoring some things...

1. Nvidia has improved their OpenCL performance since the days when "AMD was optimal" for VEGAS Pro - a lot.

You can harp on about OpenCL all you want (OpenGL is not a factor, since a 1080Ti will completely destroy a Fury X in OpenGL performance), but it simply isn't the factor that you seem to think it is. VEGAS' use of OpenCL is fairly generic and similar to the average cheap Consumer Editor. The reason AMD worked well in older versions of VEGAS (like VP14) is that GPU Acceleration was pretty much broken in that NLE - particularly for later GTX cards (i.e. Pascal cards). This is not the case for VP16. It works as expected with NVIDIA GPUs.

In fact, if I try to run VEGAS Pro 14 on one of my laptops with a GTX Card with the latest Nvidia drivers, it causes a BSOD if I try to use the GPU Acceleration with that card. It's completely unusable on that setup. This is, in fact, why I was checking out the Trial of VP16.

Effects like Ignite accelerate with OpenCL and OpenGL on other NLEs as well, and Nvidia GPUs are still superior there - even when the NLE itself isn't biased towards CUDA (e.g. Edius Pro, Avid Media Composer). The effects still have to be rendered, and the rendering horsepower of the better GPU outpaces the disparity in OpenCL performance, because the OpenCL stuff isn't as big a factor as you seem to think it is. VEGAS is not a special case, and its "historically" bad Timeline Playback performance is part of the reason why many users have switched off of it. Why push users in a direction where they will have to deal with even more of that just because "more cores?"

2. You seem to have missed the point. If you are using a better NVIDIA GPU with NVENC on an Intel Platform, then VEGAS Pro can use BOTH QSV and NVENC - CONCURRENTLY - during the "Render" Pipeline. I made this clear in my post above... The people upthread are talking about editing H.264/HEVC and AVCHD from a DSLR. VEGAS' QSV Decode Acceleration is in play for this scenario... Eliminating it will degrade Timeline/Edit Performance with those CODECs, as well as destroy your render speed, because you will be decoding on the CPU. You will also get worse Real-Time Playback performance (forced to go down to Preview/Draft where QSV Acceleration would allow you to play back at higher quality settings, due to offloading Decode Tasks to the hardware SIP).

QSV will be used to Accelerate Decode of the Media in Question. NVENC will be used to Accelerate Encode of the media. Or VCE, if you have an AMD GPU. But nothing replaces QSV Hardware Decode Acceleration in this NLE when it is eliminated... Just the CPU, and it's almost never faster than QSV.
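VEGAS handles that split internally, but as a rough outside illustration of the same idea - Intel hardware decode feeding an NVENC hardware encode - here's how you could express it with ffmpeg (assuming an ffmpeg build with QSV and NVENC support; the file names are placeholders):

```python
# Illustration only (not VEGAS itself): hardware decode on the Intel iGPU (QSV)
# feeding a hardware encode on the NVIDIA GPU (NVENC), via ffmpeg.
# Requires an ffmpeg build with QSV and NVENC support; file names are placeholders.
import subprocess

cmd = [
    "ffmpeg",
    "-hwaccel", "qsv",
    "-c:v", "h264_qsv",      # decode H.264 with Intel QuickSync
    "-i", "input.mp4",
    "-c:v", "h264_nvenc",    # encode with NVIDIA NVENC
    "-b:v", "28M",
    "output.mp4",
]
subprocess.run(cmd, check=True)
```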

Comparing VCE and NVENC needs to be done on a level beyond "which is faster," because quality is also a huge factor. File Sizes, Bitrates, etc. need to be examined... although I PERSONALLY don't care which of those is faster because I'd never render anything final with the GPU Encoder, anyways.
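On file sizes, the back-of-envelope arithmetic is simple; taking the 28 Mbps figure from the footage mentioned earlier in the thread as an example:

```python
# File size from bitrate: bytes = (bits per second / 8) * seconds.
# Example: 28 Mbps footage (the figure quoted earlier in the thread), 10 minutes long.
bitrate_mbps = 28
duration_min = 10

size_gb = (bitrate_mbps * 1_000_000 / 8) * (duration_min * 60) / 1_000_000_000
print(f"{duration_min} min at {bitrate_mbps} Mbps is about {size_gb:.1f} GB")  # ~2.1 GB
```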

- The Fury X is an AWFUL GPU compared to the GTX 1080Ti. The difference in Graphics Performance between a 1080Ti and a Fury X (in the GTX's favor) is MUCH larger than the disparity in OpenCL performance between these cards. This applies in VEGAS Pro and every other NLE, Graphics Application or Game. Objectively... it is awful (compared to a 1080Ti), but keep convincing yourself that it is as good as you think it is. There is a reason why other NLEs that use OpenCL (and not CUDA) still bias to Nvidia cards, and not AMD - because the GPUs are better, faster, and more efficient.

And you may have other Media Applications on the machine that actually are optimized for CUDA. If you finish in Resolve, the Nvidia card will blow away the Radeon, for example.

3. VEGAS Pro 16 does not support the equivalent DECODERs from AMD (UVD) or NVIDIA (NVDEC), period. My post had more to do with the CPU choices than the GPU choices. IF I were building a $2,500+ workstation for video editing, with Intel Xeon CPUs and high end GPUs... I would not be doing it to run VEGAS Pro, as this NLE doesn't scale well and a lot of that hardware power is wasted running it. It would be to run something like Resolve, Premiere Pro, or Media Composer.

Trensharo wrote on 12/7/2018, 2:08 PM

@Trensharo Good information indeed, but ... With my own testing re: hardware rendering with either Nvenc or Qsv I found that the difference in render times was very slight. For my setup, on balance, the best combination for render time was Nvidia as HW Acc. and Qsv for hardware render. That was with the Red Car and Running Man tests, the first has some fx, the latter none. Thing is if what you’re saying is correct then Intel graphics as HW acc. + Qsv as render should produce better results, but it doesn’t.

Your test ignores the fact that QSV Decodes supported media in hardware, if it's available, as it's enabled by default in VP16.

Nvidia is going to outpace Intel for GPU Acceleration of Effects. A GTX 1050 is like 10x (or more) better at this than Intel's UHD 630.

Whether QSV or NVENC is faster at encoding the media is really a wash. You still need the better GPU for rendering the frames faster (otherwise that will kill your "Render Times"), and QSV is still needed to Accelerate Decode of supported formats on the Timeline and during the Render Pipeline (otherwise you get longer "Render Times" and worse playback performance, as it's WAY faster than the CPU for this).

Whether or not you Accelerate the encode with QSV or NVENC (or VCE) is a wash, really... and depends largely on the quality of the output file and any outstanding disparities that exist. People who encode everything in hardware are likely not to care about such disparities, as it's probably less than the disparity between CPU and Hardware SIP Encoding, and they're choosing the Hardware SIP anyways :-P

Intel does have a REALLY good Hardware Decoder & Encoder (this has always been the case), so it would not surprise me if QSV was faster than NVENC and/or VCE. This is why so many vendors default to QSV for Timeline & Decode Acceleration (Apple (iMovie/Final Cut Pro X), Grass Valley (Edius), VEGAS Pro, Video Pro X, etc.).

Ultimately, it doesn't matter, because the real "advantage" of having QSV for VP16 is the Decode and Timeline acceleration, not the Encode - since Vegas will Accelerate Encode with any vendor's hardware SIP (Intel, NV, AMD)!

DECODE Acceleration is not the same as "GPU Accelerated Processing" (or whatever it's called) on the Video tab in settings. That's choosing which GPU you render effects on. Any discrete budget GPU is going to outpace Intel's iGPUs there ;-) People just tended to set it to Intel or turn it off because the past few releases of VEGAS have been very buggy with GPU Acceleration.

From what I've experienced with VP16, it works properly - at least with Pascal-era Nvidia GPUs. On VP14, it can BSOD my PC (and with the latest driver release, the GTX card no longer even shows up in Vegas Pro 14 to select).

I would not use VEGAS Pro 14 at this point in time. Just upgrade to VP16 Edit for the cheap'ish $149. VP14 will only do CPU renders unless you have an ancient GPU in your machine - a GPU that is going to bottleneck every other media application that has proper support for newer GPUs. VP14 has some of the worst playback performance and rendering times on the market for Video Editors that cost more than $0.

Former user wrote on 12/7/2018, 3:16 PM

@Trensharo

“Your test ignores the fact that QSV Decodes supported media in hardware, if it's available, as it's enabled by default in VP16.” ???

I suspect this is merely obfuscation, at best, and maybe the rest of your above post also. A lot of theory.

I gave a real, practical example, of two different types of projects, the result was the same, a small advantage, on my system, to Nvidia as HW Acc. combined with QSV as render choice. The results are in the long i9 9900K thread, pages 2 and 4.

I mentioned it so that others might benefit, rather than be misled by your meandering theorising, some of which may indeed be true, but frankly it's too much detail and no practical examples. Why not give us some real-world playback and render examples to assist in making an informed decision, based on any of your system(s)?

I based my tests on the two benchmarks mentioned above, there’s also a Magix sample project supplied with VP16 that you could use, or is it just easier to theorise?

OldSmoke wrote on 12/7/2018, 3:17 PM

@Trensharo You can write stuff until the cows come home. I did my testing with a GTX 1080Ti and I sent it back as it wasn't any better than my Fury X. While Nvidia has improved their OpenCL support, it still isn't as good as AMD's. I also don't care much for benchmark tests, but I do very much care about my tests, on my system, with my projects and with the official "Red Car" project. In those tests, the GTX 1080Ti with NVENC was behind my old Fury X and VCE, and I know that users in here use a Frontier Edition and that runs even better.

When did you make your comparison of the two cards in your system?

Last changed by OldSmoke on 12/7/2018, 3:18 PM, changed a total of 1 times.

BruceUSA wrote on 12/7/2018, 3:28 PM

Wow wow wow... I don't know where to start because there is so much to read with all the technical stuff. On paper, of course, a 1080Ti is a more powerful GPU than, let's say, an AMD GPU. But how do the two perform in Vegas? The 1080Ti still sucks. I have not seen any 1080Ti card in Vegas that can outperform my AMD Vega 10 on the timeline and in rendering.

CPU:  i9 Core Ultra 285K OCed @5.6Ghz  
MBO: MSI Z890 MEG ACE Gaming Wifi 7 10G Super Lan, thunderbolt 4
RAM: 48GB RGB DDR5 8200mhz
GPU: NVidia RTX 5080 16GB Triple fan OCed 3100mhz, Bandwidth 1152 GB/s     
NVMe: 2TB T705 Gen5 OS, 4TB Gen4 storage
MSI PSU 1250W. OS: Windows 11 Pro. Custom built hard tube watercooling
BruceUSA wrote on 12/7/2018, 4:18 PM

I just want to add these questions. Can your 1080Ti render the official Red Car in 14s? Can your 1080Ti card render a 1080p project to 1080p at 150+ frames per second? Can your 1080Ti card render multiple 4K tracks with FX applied at almost 2x the speed? Can your 1080Ti card handle 32-bit project settings with Best/Full in multicam editing and cutting in real time ([2 tracks] 10-bit GH5 footage)?

 

Trensharo wrote on 12/7/2018, 5:04 PM

@Trensharo You can write stuff until the cows come home. I did my testing with a GTX 1080Ti and I sent it back as it wasn't any better than my Fury X. While Nvidia has improved their OpenCL support, it still isn't as good as AMD's. I also don't care much for benchmark tests, but I do very much care about my tests, on my system, with my projects and with the official "Red Car" project. In those tests, the GTX 1080Ti with NVENC was behind my old Fury X and VCE, and I know that users in here use a Frontier Edition and that runs even better.

When did you make your comparison of the two cards in your system?

You're clueless. And you should try to be less abrasive. You chimed in and replied to me. Don't be salty just because I replied back in a comprehensive manner. It avoids misinterpretation to be thorough, so I will continue to do it this way.

This is basic stuff. VCE isn't going to render significantly faster than NVENC or QSV. All three of them will accelerate at comparable speeds; frankly, QSV may actually be faster. The difference is probably < 10% in most cases, and that's ONLY for H.264 and HEVC (and the formats listed above). There is also the question of quality when using an older GPU. NVENC supports 4:4:4 and Lossless Encoding, as well.

Formats like ProRes, DNxHR, Cineform, etc. won't use those Encode Accelerators at all, anyways. This is why my initial comment didn't focus on the GPU. I only touched on that because you seem to believe that your Fury X is as good as a GTX 1080 Ti for Render Performance, and it is not. You also completely overrate the OpenCL advantage of AMD because you seem to completely ignore just how many frames need to be rendered when you are producing a video file. VCE does not render frames, the GPU does this. You also talk as if VEGAS has some secret sauce OpenCL optimizations... and mention OpenGL as if AMD is some performance leader in that metric...

The onus is not on me to validate my tests. I didn't make the claim. You did. I tested VEGAS Pro 16 last night. I have AMD and Intel/Nvidia machines here, as well as Optimus Laptops with high-end Discrete Mobile GPUs.

Encode Performance for NVENC and VCE is basically a wash, and not worth caring about. The only thing worth caring about is whether or not the NLE actually supports what your GPU (or CPU, in the case of Intel) makes available to you. OpenCL has nothing to do with VCE or NVENC. OpenCL use is common in practically all Professional NLEs, and its use in VEGAS Pro 16 is fairly generic.

The biggest Performance Gains in VEGAS Pro 16 come from:

1. Decode Acceleration with QSV for Supported CODECs (like H.264). This vastly increases scrubbing and playback performance.

2. dGPUs like Nvidia Pascal Cards actually work properly for Accelerated Effects. This didn't work properly in either VP14 or 15 on my machines. Nothing got Accelerated on my machines with the Nvidia cards selected, and Changing Effects parameters required me to close down the Effects dialog just to update the preview. VP16 feels 3x faster than VP14 or 15 with the Nvidia GPU enabled; because it actually works properly now. This also helps playback performance.

3. QSV, NVENC and VCE render faster than the CPU (albeit at lower quality), which is good for people who are in the "below professional" market and don't need the highest quality master renders. VEGAS CPU Rendering speeds are still industry-trailing, but QSV Decoding helps that a lot by offloading the media decode from the CPU to the QSV Decoder SIP.

The QSV Decode Acceleration is the one feature that makes this upgrade a no-brainer for many people. It has a huge impact on making the NLE feel significantly less sluggish than it has in past releases.

And keep in mind, my initial reply did not really focus on the GPU beyond the obvious bits, largely because the GPU was irrelevant as I was replying to a recommendation for a Ryzen CPU over Intel. QSV is the only thing that Decode Accelerates in VEGAS Pro 16, and that doesn't come on AMD or NVIDIA GPUs. The difference between VCE and NVENC in Encode speeds is ignorable. Much of the performance gap in OpenCL is going to be completely overrun by the superior power of the 1080 Ti. It's about the Decode Acceleration.

Without that, H.264 is a massive PITA to edit on the timeline, and HEVC is practically off limits, especially at 4K+ Rasters.

Kinvermark wrote on 12/7/2018, 5:08 PM

Wow. I see Mr. T is having a meltdown again. Please write some more … we all love reading your opinions over and over and over....

Trensharo wrote on 12/7/2018, 5:34 PM

Wow. I see Mr. T is having a meltdown again. Please write some more … we all love reading your opinions over and over and over....

Meltdown? I think the ridiculous defensiveness and aggressive manner in which you people are responding is more indicative of that than my replies.

I mean, do you have any actual information to add, or just weak insults and trolling?

Former user wrote on 12/7/2018, 6:06 PM

Hi @Trensharo, I don't wish to labour the point, but without practical examples, some benchmarks, a lot of what you say is, well, theoretical - just your opinion. I do believe that some of it is probably based on established facts of the technology you mention.

What may be an obvious truth to yourself may to others appear as just another opinion, so there’s a need to back up the opinion with hard evidence if you genuinely want to show the relative merits of a particular system.

In reading through your long posts I find it hard to grasp all of it. This is certainly partly due to my own lack of knowledge of some of this, but I also feel it's due to a lack of coherence in your posts?

Anyway, I'm sure you mean well and, in your own way, wish as I do to give a steer to the original poster.

Kinvermark wrote on 12/7/2018, 6:32 PM

@Trensharo

You called Oldsmoke "clueless." You do recognize that this is a clear insult, do you not?

Anyway, your posting history here and on other forums where you use the same display name demonstrates a tendency towards aggressive, long-winded posts where you harangue anyone who has an alternative opinion.

If you want to discuss properly, then limit your posts to a paragraph and give others a chance to express themselves - it's just good manners.