OT: Cores vs. GHz

Dach wrote on 1/25/2013, 8:47 AM
One objective I have this year is to build a new system. Does anyone have some insight into Vegas when it comes to making a decision on the CPU? I know I will be building an AMD system.

What will Vegas appreciate more: a CPU with up to 8 cores, or a faster clock? I am currently running a 3.0 GHz Phenom II chip and am looking at the newer 4.0 GHz (8-core) FX chip, but there is also a 4.2 GHz (4-core) option.

It's my opinion that money should be put into the processor versus the GPU.

Thanks,
Chad

Comments

musicvid10 wrote on 1/25/2013, 8:51 AM
Cores vs. GHz is not a very good comparison model for video rendering.
Look at the CPU charts on Tom's Hardware (for video rendering, not gaming), pick your price point, and go from there.

GPU assist, as you've undoubtedly read on these forums, is a WIP. Start with a video card that is known to work, and defer purchasing a monster card until the technology is truly ready for prime time, maybe in 18-36 months.
TheHappyFriar wrote on 1/25/2013, 10:10 AM
I've never considered the advantage of multiple cores to be lightning-fast renders. I've considered the advantage to be that I can run multiple programs with less of a performance hit.

That being said, the PPro charts on Tom's Hardware say the new AMD 8-core renders in 212 seconds for $200 (Newegg). The Intel chip Newegg has in stock (i7-3770) is 15 seconds faster @ 197.

$100 isn't worth 15 seconds to me. $100 can get me a new drive, a better monitor, more RAM, an upgrade to the next tier of the OS I want, etc.
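
To put that trade-off in plain numbers, here is a minimal Python sketch. The ~$300 i7-3770 price is my assumption, inferred from the $100 premium mentioned above; the render times are from the Tom's Hardware chart:

```python
# Back-of-the-envelope: dollars per second of render time saved.
# Render times are from the Tom's Hardware PPro chart cited above;
# the i7-3770 price is an assumption ($100 over the FX's $200).
amd_price, amd_time = 200, 212      # FX-8350: $200, 212 s render
intel_price, intel_time = 300, 197  # i7-3770: ~$300, 197 s render

extra_cost = intel_price - amd_price  # $100
time_saved = amd_time - intel_time    # 15 s per render
print(f"${extra_cost / time_saved:.2f} per second saved, per render")
```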
rmack350 wrote on 1/25/2013, 2:31 PM
My understanding has been that speed trumps cores when it comes to rendering. Most codecs don't make great use of multiple cores.

CPU frequency isn't an apples-to-apples comparison, though. One CPU may be faster at a given frequency than another. But within a CPU family, a faster clock is more useful than throwing more, slower cores at the problem.

In the end, you'll be getting a multicore CPU no matter what. As HF notes, this will help with overall responsiveness. I'd think a fast quad would be more useful than a slower six- or eight-core within the same CPU line.

<edit>As for GPU...No GPU and a successful render in 60 seconds trumps a fast GPU and 10 failed 6-second renders.</edit>

Rob
rfpd619 wrote on 1/26/2013, 3:24 PM
I upgraded my system this week and had some good results. I was using an AMD FX-4100 quad-core @ 3.6 GHz on Win 7 64-bit with 8 GB RAM; I upgraded to an AMD FX-8350 8-core @ 4.0 GHz and added another 8 GB of RAM (16 GB total).

Before installing the new proc and RAM, I did a test render of a small project to WMV 11, 8 Mbps HD 1080-30p.

With the old proc and RAM it took 7:38.
With the new proc and RAM it took 3:08.
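
For reference, that works out to roughly a 2.4x speedup. A minimal sketch of the arithmetic:

```python
# Convert the before/after render times into a speedup factor.
def to_seconds(mmss: str) -> int:
    minutes, seconds = map(int, mmss.split(":"))
    return minutes * 60 + seconds

old = to_seconds("7:38")  # FX-4100 quad-core, 8 GB RAM
new = to_seconds("3:08")  # FX-8350 8-core, 16 GB RAM
print(f"speedup: {old / new:.2f}x")  # ~2.44x
```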

Also, scrubbing on the timeline seems smoother, with less stuttering.

I set GPU rendering to off; I never had any luck using GPU rendering. My video card is a GeForce GTX 550 Ti.

Rick

TheRhino wrote on 1/26/2013, 7:22 PM
I say get the best bang-for-the-buck CPU/MB/RAM you can afford, because you can always add hard drives or a faster video card later, but you have to reinstall your OS, programs & settings every time you update the CPU/MB/RAM trio.

For instance, nearly 3 years ago I bought a $1000 6-core 980X which easily overclocks to 4.0 GHz on air. Almost all of my HD projects render in near real-time: a one-hour VEG takes about one hour to render with 24 GB of RAM and (2) fast RAIDs.

Three years later, the only single CPUs to beat it are the $600 3930K or the $1100 3960/70K, which only perform about 20% faster. (This is the first time I can remember that (3) years have produced such limited gains in single-CPU speed...)

Since my 6-core CPU is fast, I can still get by with my 4+ year-old GPU, which was only $100 at the time... IMO, spending $300+ on a GPU (for Vegas) is wasting money, not only because GPU rendering is currently unreliable but because not all codecs utilize the GPU anyway...

That said, I suggest the 3770K for an affordable home NLE and the 3930K (overclocked to 4.5 GHz) if you edit for $$$. The next step up is a dual-CPU server, which will cost 4X as much. Even if you have the funds, IMO it is better to spend that kind of money on (2) workstations vs. all of it on one. For instance, I currently have two workstations sharing the same dual-monitor setup. This has allowed me to utilize my time much better: I let one system render out various file types of finished projects while I start the next project on the adjacent system...

TheHappyFriar wrote on 1/26/2013, 8:31 PM
You can add RAM w/o reinstalling anything. That & extra drives/expansion cards seem to be the only things. Replacing the CPU or GPU (a completely new & different generation) always caused me trouble if I didn't reinstall the OS.
JohnnyRoy wrote on 1/27/2013, 9:03 AM
> "What will Vegas appreciate more. A CPU with up to 8 cores or faster GHz? I am currently running a 3.0 GHz Phenom II chip and am looking at the newer 4.0 GHz (8 cores) FX chip, but there is also a 4.2 GHz (4 cores). "

Since Vegas Pro can use all of the cores and video rendering is highly multithreaded, I would compare raw power:

8 cores x 4.0 GHz = 32.0 GHz total
4 cores x 4.2 GHz = 16.8 GHz total

I would definitely buy the 8-core (besides, 0.2 GHz is not that much faster).
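
A minimal Python sketch of that raw-power comparison (with the caveat, raised below, that cores x clock is only meaningful within the same CPU family):

```python
# Naive "raw power" figure: cores x clock speed.
# Caveat: only comparable within the same CPU family, since
# per-core efficiency (IPC) differs across architectures.
candidates = {
    "FX 8-core @ 4.0 GHz": 8 * 4.0,
    "FX 4-core @ 4.2 GHz": 4 * 4.2,
}
for name, total_ghz in candidates.items():
    print(f"{name}: {total_ghz:.1f} GHz aggregate")
```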

~jr
TheRhino wrote on 1/27/2013, 12:45 PM
Remember, you cannot simply multiply cores x MHz to determine a processor's rendering potential... For instance, AMD uses different standards to advertise how many "cores" their CPUs have. Therefore a 4-core Intel 3770K @ 3.5 GHz will outperform an 8-core AMD FX-8350 @ 4.0 GHz.

I have found that the Cinebench CPU benchmark does a pretty good job of showing how different processors' rendering speeds will compare within Vegas. For instance, my aging 6-core 980X overclocked to 4.0 GHz scores a 10 in Cinebench. In comparison, the new 4-core Intel 3770K @ 3.5 GHz scores a 7.5, and the 8-core AMD FX-8350 @ 4.0 GHz scores even less.
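
As a rough sketch, you can turn those Cinebench scores into relative render-time estimates (this assumes render time scales inversely with the score, which is only an approximation):

```python
# Estimate relative render time as inversely proportional to the
# Cinebench R11.5 score -- an approximation, not a guarantee.
scores = {
    "980X 6-core @ 4.0 GHz (OC)": 10.0,
    "3770K 4-core @ 3.5 GHz": 7.5,
}
baseline = scores["980X 6-core @ 4.0 GHz (OC)"]
for cpu, score in scores.items():
    print(f"{cpu}: ~{baseline / score:.2f}x the 980X's render time")
# Predicts ~1.33x for the 3770K, close to the ~35% longer render
# time reported in the next paragraph.
```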

Not long ago I rendered a typical Vegas project of mine on a 3770K machine built for a relative. The 3770K took 35% longer to render the project vs. my (3)-year-old 980X. I rendered the same project on a new 6-core 3930K running at 4 GHz and it only performed 15%-20% faster than my 980X, so I decided to hold off on upgrades until faster single-CPU solutions are available.

IMO this is the first time in recent history that Moore's Law has not applied to single-CPU performance. (Moore's Law says transistor counts double roughly every 2 years, which has historically translated into a doubling of processor speed in the same price range...) I have had my 980X for (3) years and the fastest affordable single-CPU solution is only 20%-30% faster. The only way I can double my speed after (3) years is to spend 4X the price I paid and move to a dual-Xeon platform.
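
To put a number on that: a doubling every 2 years would predict roughly a 2.8x speedup over 3 years, versus the ~1.2x-1.3x we actually got. A quick sketch:

```python
# Expected vs. observed single-CPU speedup over 3 years.
years = 3
expected = 2 ** (years / 2)  # doubling every 2 years -> ~2.83x
low, high = 1.20, 1.30       # the 20%-30% gains cited above
print(f"expected: {expected:.2f}x, observed: {low:.2f}x-{high:.2f}x")
```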

IMO, loss of revenue at the high end (like with Apple's Mac Pro lineup) combined with a global recession has forced everyone to focus their R&D on the products the masses are buying, like smartphones & smart HDTVs. We see a flurry of new mobile processors every 6 months for smartphones & smart HDTVs, but very little changes at the high end.

So with that said, it would be nice if Sony's V12 would better support the latest GTX 6xx GPUs, which have been out now for over a year...

Mindmatter wrote on 1/27/2013, 12:55 PM
I'm not so much concerned about rendering speed; I'd rather finally get a decent preview in Vegas. Will faster/more cores make that possible? I can't help thinking my investment in the GTX 570 didn't bring much improvement.

JohnnyRoy wrote on 1/27/2013, 1:07 PM
> "Remember, you cannot simply multiply Cores X Mhz to determine a processor's rendering potential... For instance, AMD uses different standards to advertise how many "cores" they have in their CPUs. Therefore a 4-core Intel 3770K @ 3.5 ghz will outpeform an 8-core AMD FX-8350 @ 4.0 ghz."

Yeah, I knew you couldn't compare Intel specs to AMD specs, but these are two AMD processors we are talking about, so I thought it was safe to compare their raw power available for rendering. It's a whole other thing as to whether Vegas will use all that power or not.

I did not, however, realize that a 4-core Intel 3770K @ 3.5 GHz will outperform an 8-core AMD FX-8350 @ 4.0 GHz. I knew AMD was really bad, but I didn't realize they were that bad. Wow!

> "I have found that the Cinebench CPU benchmark does a pretty good job of showing how different processors' rendering speeds will compare within Vegas. For instance, my aging 6-core 980X overclocked to 4.0 ghz scores a 10 in Cinebench. In comparison, the new 4-core Intel 3770K @3.5 ghz scores a 7.5 and the 8-core AMD FX-8350 @ 4.0 ghz even less. "

Interesting. My new Intel Core i7-3930K Sandy Bridge-E 3.2 GHz scores 10.01 pts in Cinebench 11.5, and my Quadro 4000 scores 60.97 fps. I wonder what I can get if I overclock a bit? ;-)

~jr
Terje wrote on 1/28/2013, 5:09 PM
>> I know I will be building an AMD system.

This one caught me by surprise. I have to ask: why? At this stage in the CPU war, and for a few years now, Intel has had a rather significant speed advantage over AMD. Even taking $$s into account, you'd generally be better off with Intel (and probably nVidia for the GPU).