AMD. I am using a Ryzen 7 1800X on my desktop and an i9-9980HK on my laptop, and I don't get much, if any, performance advantage in VEGAS over my desktop's 3-year-old CPU.
While I might not disagree with your conclusion, I don't think using a processor targeted at low-power work (TDP = 45 W) as an example in a comparison of two desktop CPUs supports your case.
There is no "standard" for coming up with a TDP for a processor; it's left up to each company how they determine the figure. That said, laptop chips certainly run on the less powerful side to preserve battery life.
Still, Techgage has already done the CPU testing....
The problem with Techgage's V18 CPU testing is that they turn off GPU rendering, so that particular benchmark doesn't give you the big picture. For VEGAS, you really want to know how well a particular CPU & GPU work together, unless you intend to turn off GPU rendering because you believe there is a quality difference, which I personally have not been able to notice visually after 1,000's of MP4 renders... (My clients receive the high-quality intermediates along with an easy-to-view MP4 file set, so if they don't like my MP4 renders they can make their own...)
In the case of my 9900K, when GPU rendering is turned ON, I get the benefit of the 9900K's onboard Quick Sync (iGPU) video PLUS my PCIe GPU, a VEGA 64 LQ. The end result is MP4 & HEVC render speeds that rival a much pricier AMD 3950X & 2080 Ti combo... For instance, my 9900K @ 5.1 GHz does the RedCar MP4 benchmark in 13 seconds... If I unplug the monitor connected to my iGPU, renders slow a bit, so I know the iGPU is contributing...
Personally, I'm waiting to upgrade a 2nd workstation until new CPUs are released, but if I needed something ASAP, especially with TB3, I would likely consider the <$450 10-core 10850K and one of the new Nvidia GPUs, provided one can be purchased at normal retail pricing....
The AMD Ryzen 9 3900X is the best one for Vegas Pro....
Another issue I have with Techgage's benchmarks is that we cannot replicate the tests on our own systems. However, these forums have benchmarks available to all, & we're getting similar performance from similar configurations, drivers, etc... The fastest times are from a 9900K & 5700XT combo running V18... Show me a 3900X & currently available GPU (5700XT, VEGA 64, Radeon 7) that completes the RedCar benchmark in <11 seconds or the RedCar 4K benchmark in <25 seconds and I will believe you... Also, I can render (3-4) instances of Vegas at once, render to/import from Thunderbolt 3 RAID devices, etc. with 100% stability... When multiple instances are running, all three (CPU, iGPU & PCIe GPU) are contributing to each render instance...
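If you want to see the multi-instance effect outside Vegas, here's a minimal Python sketch (assuming ffmpeg on the PATH; the clip name is a placeholder): run the same CPU-bound encode as one job and then as three parallel jobs, and compare aggregate throughput.

```python
import subprocess, time

SRC = "testclip.mp4"  # placeholder: any local test clip

def encode():
    # CPU-bound x264 encode; output is discarded (null muxer).
    return subprocess.Popen(
        ["ffmpeg", "-v", "error", "-i", SRC,
         "-c:v", "libx264", "-preset", "medium", "-f", "null", "-"])

for n in (1, 3):
    start = time.time()
    jobs = [encode() for _ in range(n)]
    for j in jobs:
        j.wait()
    elapsed = time.time() - start
    print(f"{n} parallel job(s): {elapsed:.1f}s total, "
          f"{n / elapsed:.3f} clips/sec aggregate")
```

If the aggregate throughput scales with job count, a single job wasn't saturating the machine, which is the same reason multiple Vegas instances help.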
Considering a Legion 5 gaming laptop to replace my current machine (Core i7-7700HQ @ 2.8 GHz, Intel graphics + AMD Radeon RX560 w/8GB + 16GB laptop RAM)... two options, both of which should offer a CPU boost of over 3x:
Ryzen 7 5800H, RTX 3060 6GB, 32GB RAM OR
11th Gen Intel Core i7-11800H, RTX 3060 6GB, 16GB RAM
I'm leaning towards the Ryzen config... thoughts?
Former user wrote on 3/25/2022, 9:46 PM
I'd get an Intel 12700H, if it's priced reasonably and is also an option with your preferred laptop. For this benchmark he says the main difference is due to the CPU; you might be able to gauge the GPU factor by comparing the two 12700H results and extrapolating to your CPU choices with the lower-powered GPU.
Just for fun, I also run an online benchtest now and then.
I think such comparisons can go on without end. It depends on how you combine and configure the hardware, assuming you understand something about PCs.
I'm planning to replace my GPU with an RX 6800 XT, but as time goes on I'm not so sure about investing more than 1000 Euro in such a replacement.
Here is my online benchtest from just minutes ago:
@RedRob-CandlelightProdctns I'd definitely choose Intel over AMD, especially in a laptop, because IMO Intel CPUs handle heat better/last longer, support Thunderbolt 3/4 connections, and Vegas can utilize BOTH the Intel iGPU and the additional Nvidia GPU at the same time.
A while back I got a $999 Walmart Evoo 17" laptop with the i7-9750H CPU & Nvidia 2060 GPU that has (2) M.2 slots and (1) 2.5" bay. It's based on the Eluktronics 17" case.
The 12700H is an even better CPU option, but if you pair it with a lesser GPU to keep the price down, the CPU/GPU combo will not perform as well in Vegas as a slightly lesser CPU with a better GPU. A lot of times Vegas doesn't even max out the CPU unless I have (2-3) instances of Vegas running at the same time...
For Vegas I'd rather have the Intel QSV decoding as it is a known quantity and performs well.
+1 on what @RogerS said. The built-in iGPU added to a PCIe GPU yields a double-barrelled video processing setup. I have 9900K & 11900K systems; the 11900K is the faster and cooler-running of the two. Newer 12th gen should be even better.
Here's an example to put the whole (current) Intel vs. AMD CPU matter to rest if your primary NLE is Vegas...
My current client's work is based on 10+ hours of 16mm films that were transferred to AVI and then stabilized in Mercalli 5.0 SAL to lossless AVI. He wants 4K HEVC, 1080p AVC, and 4K ProRes. If I render the 4K HEVC using an AMD VCE template & the 1080p using an Intel QSV template, Windows Task Manager reports 91% CPU, 87% VEGA 64 & 82% Intel iGPU usage. The VCE files are created at 60 fps while the QSV files are created at 75 fps, so 135 fps combined.
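You can approximate that split outside Vegas with ffmpeg, assuming a build with both QSV and AMF enabled (encoder names h264_qsv and hevc_amf); this is a sketch of the idea, not the actual Vegas render path, and the file names are placeholders:

```python
import subprocess

SRC = "stabilized.avi"  # placeholder for the lossless AVI source

# Launch both hardware encoders at once: 4K HEVC on the AMD card
# (VCE/AMF) and 1080p AVC on the Intel iGPU (Quick Sync).
jobs = [
    ["ffmpeg", "-y", "-v", "error", "-i", SRC,
     "-c:v", "hevc_amf", "out_4k_hevc.mp4"],
    ["ffmpeg", "-y", "-v", "error", "-i", SRC,
     "-vf", "scale=1920:1080", "-c:v", "h264_qsv", "out_1080p_avc.mp4"],
]
procs = [subprocess.Popen(cmd) for cmd in jobs]
for p in procs:
    p.wait()
```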
While both MP4 file sets are batch-rendering, I have my 2nd workstation (11700K / VEGA 56) creating intermediates (ProRes / AVID DNxHR) from a copy of the source video & final VEG shared across an affordable 10G network (using $30 Mellanox PCIe cards...)
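For a rough sense of why 10G is plenty for this, some back-of-envelope math (the ProRes bitrate is an approximate published figure, not a measured value):

```python
# Rough bandwidth check for sharing intermediates over 10GbE.
link_mbps = 10_000           # 10GbE line rate, ignoring protocol overhead
prores_hq_uhd_mbps = 734     # ProRes 422 HQ @ UHD 29.97p, approximate
streams = link_mbps // prores_hq_uhd_mbps
print(f"~{streams} simultaneous ProRes HQ UHD streams per 10G link")
# -> ~13 streams, so two workstations reading & writing is easy headroom.
```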
I was able to upgrade (2) older i7-980X / Xeon workstations to the 9900K / VEGA 64 & 11700K / VEGA 56 for about $2,000 combined. Therefore, IMO, Intel makes the best bang/buck CPU for Vegas... IMO the newer 12700K combined with an affordable DDR4 motherboard & memory would make the current best bang/buck combo...
However, IMO Intel currently also offers the fastest Vegas-performing CPU with the recent release of the 12900KS, clocked at 5.5 GHz. Since Vegas thrives on core speed, not core count, and benefits from the internal iGPU, the 12900KS is gonna have some great Vegas numbers, especially when paired with an AMD 6800 XT GPU...
I actually tested a $1300 AMD 6800 XT & was very happy with the increased performance over my VEGA 64 & 56 GPUs. However, I returned it & will wait until the price drops back to the initial retail release price of $700-ish... If you are building a new workstation, you could get by with the 12900KS's iGPU for now until GPUs drop in price. The money saved by waiting would actually offset the higher price of the 12900KS & DDR5 memory, etc...
Former user wrote on 4/2/2022, 11:04 AM
@RedRob-CandlelightProdctns I can't go into details like the comments above, but I bought an AMD CPU and I'm stuck with it now. I sort of wish I'd got an Intel; it seems a lot of FX/software are built with Intel in mind and AMD comes second, so Intel is probably a safer bet. And as the benchmark results show, an RX GPU beats my RTX 3090; at the moment the best I can get is about 0:43 secs.
I jumped the gun and purchased the Lenovo with the lovely screen, 32 GB, RTX 3060 and Ryzen 7 5800H. What I found is performance about the same as my 3-4 year old desktop. Not bad... but not "HOLY (creative) COW!" I think that's going back to Amazon... still have 10 days.
Considering a DELL G15 i7-12700H w/ RTX 3060 w/16GB RAM, and swapping out one or both RAM modules with Crucial RAM to boost it to 24 or 32 GB (Thunderbolt... yay!). OR... getting the better display (2560x1440, 400 nit) and RTX 3070 Ti (same processor) would cost $400 more... wondering if the 3070 Ti would really be worth the extra $$, or only the better display.
Looked at the Eluktronics MAX... don't want the 17" and definitely would still want to upgrade the RAM... never heard of Eluktronics before now... looks interesting, but I don't love having to wait until mid-May.
The Lenovo (32 GB DDR4, RTX 3060, Ryzen 7 5800H) actually performed *excellently* for playback of an MV that contained 8 cameras -- 4@1080p + 3@4K + 1@2K. Full 29.97 fps even at 3x speed! It rendered out a 1-minute segment of my footage (some w/4K, some 1080p) using the Magix NVENC codec in 0:33 (avg 53.1 fps).
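Those numbers hang together, by the way -- a 1-minute 29.97 fps segment rendered in 33 seconds works out to roughly the average Vegas reported:

```python
# Sanity-check the reported average render fps.
frames = 60 * 29.97      # 1-minute segment at 29.97 fps (~1798 frames)
render_seconds = 33      # wall-clock render time (0:33)
print(f"{frames / render_seconds:.1f} fps average")
# -> ~54.5 fps, close to the 53.1 fps VEGAS reported
# (the small gap is presumably render start-up/finalize overhead).
```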
I returned it and just got in my Dell G15 Special Edition (32GB DDR5, i7-12700H, RTX 3070 Ti). Ran identical tests and found that the new config:
Renders 2x faster -- the same render took 0:18... similar results with the other render codecs I tested too.
FFMPEG ran 2x faster and used the full 100% CPU (latest version of ffmpeg; a repeatable version of this test is sketched after this list).
Neat Video was SUPER happy using CPU cores + GPU, giving a 14 fps preview!
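If anyone wants to reproduce the ffmpeg comparison above, here's the kind of minimal timing harness I mean (assuming ffmpeg on the PATH; the clip name is a placeholder -- use the identical file on both laptops):

```python
import subprocess, time

SRC = "benchclip.mp4"  # placeholder: same test clip on both machines

# CPU-only transcode with output discarded, so the timing reflects
# decode + x264 encode throughput rather than disk speed.
start = time.time()
subprocess.run(
    ["ffmpeg", "-v", "error", "-i", SRC,
     "-c:v", "libx264", "-preset", "medium", "-f", "null", "-"],
    check=True)
print(f"CPU-only x264 transcode: {time.time() - start:.1f}s")
```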
BUT -- playback of that same MV isn't as smooth, bouncing around 27-28 fps and much slower in fast forward. And I only got that speed when Dynamic RAM Preview was 200 -- if I set that to 0 (like I usually do) playback was horribly slow (7fps-ish).
I tried different combinations of Video -> GPU acceleration of video processing (NVIDIA vs. Intel Iris Xe) and File I/O -> Hardware decoder to use (NVIDIA vs. Intel QSV). I watched Performance Monitor for the CPU and GPUs to settle before each test, since media seems to be cached and decoded even while sitting idle on the timeline, and I didn't want that to skew results. Accepting the defaults (NVIDIA for GPU acceleration and Intel for decode), Performance Monitor confirms the Intel is doing the decoding for playback... but I didn't see better performance when the NVIDIA was doing both, either.
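To take Vegas out of the equation entirely, a decode-only ffmpeg test isolates decoder throughput. This sketch assumes an ffmpeg build with both the QSV and CUDA/NVDEC hwaccels enabled, and the clip name is a placeholder:

```python
import subprocess, time

SRC = "multicam_4k.mp4"  # placeholder: one of the actual camera files

# Decode and discard frames (no encode) to time each decoder path.
for label, hwaccel in [("CPU (software)", None),
                       ("Intel QSV", "qsv"),
                       ("NVIDIA NVDEC", "cuda")]:
    cmd = ["ffmpeg", "-v", "error"]
    if hwaccel:
        cmd += ["-hwaccel", hwaccel]
    cmd += ["-i", SRC, "-f", "null", "-"]
    start = time.time()
    subprocess.run(cmd, check=True)
    print(f"{label}: {time.time() - start:.1f}s")
```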
Gotta say -- I'm surprised and disappointed by the playback, and that the Ryzen+RTX 3060 seemed to be smoother. How can this be? Is the Intel Iris Xe a problem?
The Ryzen 5800H's integrated GPU is likely faster for playback decoding than the Intel's. Maybe that is what contributed to the better performance.
Former user wrote on 5/16/2022, 1:45 AM
Quoting @RedRob-CandlelightProdctns:
"BUT -- playback of that same MV isn't as smooth, bouncing around 27-28 fps and much slower in fast forward. And I only got that speed when Dynamic RAM Preview was 200 -- if I set that to 0 (like I usually do) playback was horribly slow (7fps-ish)."
"Gotta say -- I'm surprised and disappointed by the playback, and that the Ryzen+RTX 3060 seemed to be smoother. How can this be? Is the Intel Iris Xe a problem?"
It doesn't sound right. Your encode speed being 2x the AMD laptop's means Vegas can utilize the extra resources of the Intel laptop, yet you still have this playback problem. Assuming all your camera files are AVC and compatible with Legacy AVC, what happens when you choose that for the decoder?
You won't have GPU decode, but the 12700H may be strong enough to take on the extra processing load of those 8 cameras, and Legacy AVC is more efficient at using the CPU than SO4. Do you get your full 30 fps now, and what about playback at 2x or 3x?
This isn't a fix; I'm just interested in the result without GPU decoding or SO4. (If any of your AVCs are 10-bit they will force SO4.)