Download: 3D project for testing your CPU and GPU

Marton wrote on 10/26/2011, 3:45 PM
Hi.

I made a 3D project from two HDV files for testing and comparing
preview and rendering speeds with different CPUs and GPUs.
It's just 95 MB.
You can download it here:
http://www.relaxvideo.hu/3d-project.zip

After unpacking you will have three files:
- the separate left and right videos
- the project file. Open it!

You will see three parts on the timeline: the first is just the 3D video without any FX, the second has some color correction, and the third adds a Levels filter on top of the color correction. The project is set to Anaglyph Red/Cyan.
Set your preview quality to Preview (Full), enable GPU acceleration if your card supports it, and see how many fps you get when you play back the timeline.

My current system is a 3.3 GHz dual-core CPU with 4 GB RAM, a GeForce 7600GT, XP SP3 and Vegas 10.
Tomorrow I will change my video card and install Win7 and Vegas 11, so for now I can only share my results without GPU acceleration.
But I would like to see others' results.

Oh, one interesting thing: under Options > Preferences, on the Video tab, if I set "Maximum number of rendering threads" to 1, I get a better result than with 2, even though I have a dual-core CPU. Why is that? Is Vegas not good with multiple processors? Is it not optimized?

So, my playback fps results:

timeline part 1: 18
timeline part 2: 5.5
timeline part 3: 4.7

I also tried some rendering, always with timeline part 2, the color-corrected clip!

Select the 15-second part and render to:

Mainconcept AVC, 1440x1080, 25 fps interlaced, 24 Mbit, Best quality, Stereoscopic mode: Left only
2min 14sec

Sony AVCHD 1440x1080 50i template, 15 Mbit, AC3 audio, Best quality, Stereoscopic mode: Left only
1min 12sec

And finally, render to a 3D MVC file:
Sony AVC/MVC
MVC 1920x1080-24p template, 25 Mbit, Best quality
3min 30sec

As you can see, with my 3.3 GHz CPU I need 14x realtime for 3D MVC rendering, which is too much. I'm curious which GPU can shorten this time significantly.
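
(For anyone comparing their own numbers: the "x realtime" figure is just render time divided by clip length, so 3 min 30 sec for the 15-second selection is 210/15 = 14x. A small sketch in Python, if you want to convert the m:ss times in this thread the same way; the helper name is just illustrative:)

    # Rough sketch: turn an "m:ss" render time for a 15-second selection
    # into a realtime multiplier (render time / clip length).
    def realtime_multiplier(render_time, clip_seconds=15.0):
        minutes, seconds = render_time.split(":")
        return (int(minutes) * 60 + int(seconds)) / clip_seconds

    for label, t in [("Mainconcept AVC", "2:14"),
                     ("Sony AVCHD", "1:12"),
                     ("Sony AVC/MVC", "3:30")]:
        print(label, round(realtime_multiplier(t), 1), "x realtime")
    # Mainconcept AVC 8.9 x realtime
    # Sony AVCHD 4.8 x realtime
    # Sony AVC/MVC 14.0 x realtime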

Thank you to everyone who shares their GPU-accelerated results!
Marton

Comments

Marton wrote on 11/6/2011, 4:22 AM
OK, I finally installed Win7 64-bit and Vegas 11.
Here are the results with a passively cooled GT430 1GB DDR3 card:

Timeline anaglyph playback speed (part 1, full quality):
Vegas 10: 18 fps
Vegas 11 CPU: 25 fps (realtime)
Vegas 11 GPU: 5 fps (!!!)

As you can see, the GT430 does NOT help playback performance at all; it actually makes it worse. Interestingly, GPU utilization is only 40%. Why?

Rendering test (part2, always best quality, left only):

HDV (quality 31, prioritize quality over speed)
Vegas 11 CPU 1:08
Vegas 11 GPU 1:04 (not much help)

Mainconcept AVC (1440 variable bitrate 24/12mbit, Best/Left)
Vegas10 2:14
Vegas11 CPU 1:41
Vegas11 GPU 0:28

Here the GT430 helps a LOT!

Sony AVCHD (1440 15mbit AC3/192, Best/Left)
Vegas10 1:12
Vegas11 CPU 0:42
Vegas11 GPU 0:36 (a little help from the card)

And finally the 3D MVC rendering with part1
(1920x1080 24fps, 25mbit, Best)
Vegas10 2:52
Vegas11 CPU 2:43
Vegas11 GPU 2:35

Again, a little help from the card, but GPU utilization is still only 20-30%. Why doesn't it use more GPU power?

Too bad this passively cooled card can easily exceed 100 °C at 100% GPU load, even with an open computer case, so good airflow is important.
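
(If anyone wants to see what the card is actually doing during playback or a render, NVIDIA's nvidia-smi can log utilization and temperature. A minimal sketch in Python, assuming nvidia-smi is on the PATH and the driver is recent enough to support query mode; the sample count, interval and file name are just illustrative choices:)

    import subprocess, time

    # Rough sketch: poll GPU utilization (%) and temperature (C) once per
    # second and write the samples to a CSV file.
    def log_gpu(samples=60, interval=1.0, path="gpu_log.csv"):
        with open(path, "w") as f:
            f.write("utilization_percent,temperature_c\n")
            for _ in range(samples):
                out = subprocess.check_output(
                    ["nvidia-smi",
                     "--query-gpu=utilization.gpu,temperature.gpu",
                     "--format=csv,noheader,nounits"],
                    text=True)
                f.write(out.strip() + "\n")
                time.sleep(interval)

    if __name__ == "__main__":
        log_gpu()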

Please download my test project and share your results; I'm especially interested in 3D MVC rendering times!

thank you!


Marton wrote on 11/7/2011, 1:57 AM
Check this:
Vegas 11 on XP with a GeForce 210 card
Mainconcept MPEG-4 rendering (part 2, Best, Left only):

CPU only 2:10
with GPU: 0:53 (!!!!!)
GPU under Win7 1:10

So Vegas 11 rendering is faster under XP :-)
And even a dirt-cheap ($40) card can speed up Mainconcept MPEG-4 rendering!
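
(Turning those times into a speedup factor, as a rough Python sketch with the numbers from this post: 2:10 is 130 s and 0:53 is 53 s, so the GeForce 210 gives roughly 2.5x under XP and roughly 1.9x under Win7.)

    # Rough sketch: GPU speedup factors from the "m:ss" times above.
    def to_seconds(t):
        m, s = t.split(":")
        return int(m) * 60 + int(s)

    cpu_only = to_seconds("2:10")   # 130 s
    gpu_xp   = to_seconds("0:53")   # 53 s
    gpu_win7 = to_seconds("1:10")   # 70 s

    print("speedup under XP:  ", round(cpu_only / gpu_xp, 2))   # ~2.45
    print("speedup under Win7:", round(cpu_only / gpu_win7, 2)) # ~1.86
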
Marton wrote on 11/9/2011, 10:00 AM
Too bad I'm only talking to myself! :-(
Wolfgang S. wrote on 11/9/2011, 5:29 PM
I have some numbers for you. I took the third clip and did some rendering, always with the Sony AVC/MVC encoder to 720 50p.

Processor: i7-2600K, overclocked to 4.4 GHz


Render setting: Preferences/Video preview acceleration set to "no acceleration"
CPU only: 00:31
GPU if available: 00:29

Render setting: Preferences/Video preview acceleration set to GeForce 570
CPU only: 00:23
GPU if available: 00:22

Render setting: Preferences/Video preview acceleration set to Quadro 2000D
CPU only: 00:35
GPU if available: 00:50

Explanation: at one time I had a Quadro 2000D in the system, and at another time a GeForce 570. The i7-2700K performs nicely - it renders part 3 of your project, even with the deinterlacing and resizing to 720 50p, at about 2x realtime. The GeForce 570 improves that by roughly 20%, while the Quadro 2000D actually decreases render performance.
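
(Sanity-checking that figure from the "GPU if available" times above, as a small Python sketch: 0:29 with no preview acceleration versus 0:22 with the GeForce 570 selected is about a 24% reduction in render time, so "roughly 20%" fits; the Quadro 2000D's 0:50 is about 72% longer.)

    # Rough sketch: relative change in render time versus the
    # no-acceleration baseline (29 s), using the times above.
    def change_pct(baseline_s, new_s):
        return (baseline_s - new_s) / baseline_s * 100

    print("GeForce 570: ", round(change_pct(29, 22)), "% render time saved")  # ~24%
    print("Quadro 2000D:", round(change_pct(29, 50)), "% render time saved")  # ~ -72%, i.e. slower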

Preview fps are always 25.


Desktop: PC AMD 3960X, 24 x 3.8 GHz * RTX 3080 Ti (12 GB) * Blackmagic Extreme 4K 12G * QNAP Max8 10 Gb LAN * Resolve Studio 18 * Edius X * Blackmagic Pocket 6K/6K Pro, EVA1, FS7

Laptop: ProArt Studiobook 16 OLED * i9 12900H with iGPU Iris Xe * 32 GB RAM * GeForce RTX 3070 Ti 8 GB * internal HDR preview on the laptop monitor * Blackmagic UltraStudio 4K Mini

HDR monitor: ProArt Monitor PA32 UCG-K, 1600 nits, Atomos Sumo

Others: Edius NX (Canopus NX) card in an old XP system. Edius 4.6 and other systems

Wolfgang S. wrote on 11/10/2011, 9:34 AM
Well, it could be worse! :))

Marton wrote on 11/10/2011, 9:51 AM
LOL
Marton wrote on 11/14/2011, 1:19 AM
Wolfgang:
"alway with the Sony AVC/MVC encoder to 720 50p."

To AVC or to MVC format?
Please render it to the 3D MVC 24p format.
Thanks
Wolfgang S. wrote on 11/14/2011, 2:34 AM
That makes no sense. The conversion from 50i to 24p is of poor quality if you simply render the footage out. That is why I chose 720 50p with the Sony AVC/MVC encoder (and yes, it was an MVC template).

Marton wrote on 11/14/2011, 3:35 AM
OK, thanks. So what takes around 2:30 minutes on my E5200
takes just 0:30 on your i7, right?

You have a great CPU :-)
Wolfgang S. wrote on 11/14/2011, 9:33 AM
Maybe - did you render part 3 too? I took the third clip.

Rendering to 1080 24p and to 720 50p will also give different results.

Wolfgang S. wrote on 11/15/2011, 2:32 AM
Maybe I will find the time on the weekend to render one of the short test clips to 1080 24p, just to have a comparison.

Marton wrote on 11/15/2011, 4:04 AM
Thanks.
No, I rendered part #2,
but in that case your CPU is even better!
Wolfgang S. wrote on 11/15/2011, 4:48 AM
Yes, but that is mainly because the CPU is a newer one. The task here should be to generate some benchmarks for CUDA.

hthfriese wrote on 11/15/2011, 9:56 AM
System:
Vista 32-bit
4 GB RAM
Intel Q9505 (quad-core, 2.83 GHz)
PowerColor AMD Radeon HD 6850 (passive, with auxiliary fan)

Timeline with Vegas 10:
Part 1: 25
Part 2: 5
Part 3: 3

Timeline with Vegas 11:
Part 1: 25
Part 2: 25
Part 3: 25

GPU usage (measured with GPU Shark): about 40%
GPU temperature: about 44 °C



Tom

Wolfgang S. wrote on 11/15/2011, 12:51 PM
Render time of part 3, to 1080 24p:

CPU only, CUDA preview disabled: 38 seconds
CPU only, CUDA preview enabled (GTX 570): 33 seconds
Use GPU if available, CUDA preview enabled (GTX 570): 31 seconds

So it makes some difference whether you render to 720 50p or to 1080 24p, which is to be expected, since Vegas has to calculate every frame completely anew.

pebcac wrote on 11/16/2011, 11:35 PM
I just put together a new PC with Vegas 11. This seemed like a good opportunity to run some stress tests/benchmarks and this thread seems to fit the bill.

All segments seemed to preview at 25fps (how can you tell for certain?)

Hopefully I set these up right -

Mainconcept AVC 1440x1080 25i, 24mbit, Best Q, Stereo: left only
(EDIT: actually progressive and w/GPU acceleration)
"clip 2, 15s portion" rendered in 6s

Sony AVCHD 1440x1080 50i template 15mbit, AC3 audio, Best Q, Stereo: left only
"clip 2, 15s portion" rendered in 10s

Sony AVC/MVC, MVC 1920x1080-24p template, 25mbit, Best Q
"clip 2, 15s portion" rendered in 21s

For those who are curious: MSI X79A-GD65 (8D), i7-3930K @ 4.3 GHz, 24 GB RAM, NVIDIA GTX 480, Windows 7 x64, Vegas Pro 11 x64
Wolfgang S. wrote on 11/17/2011, 2:34 AM
@ pebcac,

What settings did you use for
- CPU/GPU rendering
- CUDA preview

You see, depending on the settings I use here, my render time changes from 38 to 31 seconds, but that is for event 3. It could be compared with your third test, even though you took clip 2 while I took clip 3.

Hulk wrote on 11/17/2011, 8:22 AM
@Wolfgang,

I have an i5-2500K at 4.2 GHz and was wondering what hyperthreading does for preview. Here are my playback CPU utilization results in VP10. Of course, all three clips play at full frame rate.

Best/Full preview quality
clip 1 - 32%
clip 2 - 58%
clip 3 - 63%

These are the utilization figures Vegas seems to "sit" at most of the time. Like I said, I'm wondering whether the 2600K with HT would help my preview performance, so I'm curious about your CPU utilization.
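
(In case it helps compare utilization figures, here is a minimal sketch for logging overall and per-core CPU usage while the timeline plays back; it assumes the third-party psutil package is installed, and the sample count and one-second interval are just illustrative choices.)

    import psutil

    # Rough sketch: sample per-core CPU utilization once per second
    # during playback and print the overall average alongside it.
    def log_cpu(samples=60):
        for _ in range(samples):
            per_core = psutil.cpu_percent(interval=1.0, percpu=True)
            total = sum(per_core) / len(per_core)
            print("total %5.1f%%  per-core %s" % (total, per_core))

    if __name__ == "__main__":
        log_cpu()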
Wolfgang S. wrote on 11/17/2011, 9:36 AM
@ Hulk,

I think that can be more complex, given the fact that Vegas Pro 11 will also use the GPU. So what would make sense, I think, is to measure both CPU and GPU utilization, with different combinations of the CUDA preview preference enabled and disabled. And if you have different GPUs in your system, you will have even more combinations.

But I can try to do that - though to be able to compare the results, you should tell us whether you had CUDA enabled or disabled in the preview preference for your figures.

Hulk wrote on 11/17/2011, 9:54 AM
I should have been clearer in my post. I don't have GPU assist; I'm just using the HD 3000 graphics built into the 2500K. Can you disable your graphics card for the preview and just use your 2600K?

Also, I'm curious: what is your Vcore for the 2600K to be stable at 4.4 GHz? I'm at 4.2 GHz and under load I'm at 1.28-1.30 V. I generally run Prime95 for a few hours, and if there are no errors I'm good. I could probably lower the voltage a bit, but my temps are good, so I'd rather know everything is solid than try to push to the edge. My chip seems to need about 1.4 V to reach 4.5 GHz, and that voltage is higher than I want to go. There is no consensus on "safe" voltages for Sandy Bridge: some people say not to worry so much about Vcore since it's more about temperature, while others incorrectly quote Intel's 1.51 V figure, which is really an overshoot voltage spec. Anyway, I've decided on 1.3 V as a conservative number that should give this CPU a long life.

- Mark
TheRhino wrote on 11/17/2011, 10:50 AM
i7-980X 6-core overclocked to 4.0 GHz. H4600 video, so no GPU assist...

EDIT: Vegas Windows - Full Preview: 1=25fps, 2=25fps, 3=25fps
External Monitor Full Preview: All 25fps, cpu = 11-33%

Mainconcept AVC 1440x1080 25i, 24mbit, Best Q, Stereo: left only
"clip 2, 15s portion"
Vegas 11 set to 3 threads = 0:05, cpu=87%

Sony AVCHD 1440x1080 50i template 15mbit, AC3 audio, Best Q, Stereo: left only
"clip 2, 15s portion"
Vegas 11 set to 3 threads = 0:14, cpu=33%

Sony AVC/MVC, MVC 1920x1080-24p template, 25mbit, Best Q
"clip 2, 15s portion"
Vegas 10 set to 16 threads =2:01, cpu=60%
Vegas 10 set to 2 threads = 0:38, cpu=70%
Vegas 11 set to 3 threads = 0:32, cpu=80%
EDIT: Vegas 11 set to 16 threads = 0:29, cpu=95%

3rd Clip - Sony AVC/MVC encoder to 720 50p
Vegas 11 set to 3 or 16 threads = 0:26, cpu=87%

This verifies that the overclocked 6-core Socket 2011 Sandy Bridge i7-3930K provides approximately a 35% increase over the previous Socket 1366 generation of 6-core processors.
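
(As a rough check in Python against the MVC render times posted so far: pebcac's i7-3930K did the clip-2 MVC render in 21 s versus 29-32 s here on the 980X, which works out to roughly 28-34% less render time, so ~35% is at the optimistic end but in the right range; the numbers below are simply copied from the two posts.)

    # Rough sketch: render-time saving of the i7-3930K (21 s, pebcac's post)
    # over this i7-980X (29-32 s) on the Sony AVC/MVC clip-2 render.
    time_3930k = 21
    for t_980x in (29, 32):
        saving = (t_980x - time_3930k) / t_980x * 100
        print("980X %ds -> 3930K %ds: %.0f%% less render time" % (t_980x, time_3930k, saving))
    # ~28% and ~34%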

We will continue using our (3) socket 1366 workstations for many years to come. However, any new builds will be based on Socket 2011.

EDIT: It appears Vegas 11 handles multi-threaded rendering better than V10... However, on our renders to Avid DNxHD, Vegas 11 appears 25% slower. In V10 I get nearly 90% CPU usage, whereas with V11 only 70% - both using 16 threads, and no improvement using fewer threads...

Workstation C with $600 USD of upgrades in April, 2021
--$360 11700K @ 5.0ghz
--$200 ASRock W480 Creator (onboard 10G net, TB3, etc.)
Borrowed from my 9900K until prices drop:
--32GB of G.Skill DDR4 3200 ($100 on Black Friday...)
Reused from same Tower Case that housed the Xeon:
--Used VEGA 56 GPU ($200 on eBay before mining craze...)
--Noctua Cooler, 750W PSU, OS SSD, LSI RAID Controller, SATAs, etc.

Performs VERY close to my overclocked 9900K (below), but at stock settings with no tweaking...

Workstation D with $1,350 USD of upgrades in April, 2019
--$500 9900K @ 5.0ghz
--$140 Corsair H150i liquid cooling with 360mm radiator (3 fans)
--$200 open box Asus Z390 WS (PLX chip manages 4/5 PCIe slots)
--$160 32GB of G.Skill DDR4 3000 (added another 32GB later...)
--$350 refurbished, but like-new Radeon Vega 64 LQ (liquid cooled)

Renders Vegas11 "Red Car Test" (AMD VCE) in 13s when clocked at 4.9 ghz
(note: BOTH onboard Intel & Vega64 show utilization during QSV & VCE renders...)

Source Video1 = 4TB RAID0--(2) 2TB M.2 on motherboard in RAID0
Source Video2 = 4TB RAID0--(2) 2TB M.2 (1) via U.2 adapter & (1) on separate PCIe card
Target Video1 = 32TB RAID0--(4) 8TB SATA hot-swap drives on PCIe RAID card with backups elsewhere

10G Network using used $30 Mellanox2 Adapters & Qnap QSW-M408-2C 10G Switch
Copy of Work Files, Source & Output Video, OS Images on QNAP 653b NAS with (6) 14TB WD RED
Blackmagic DeckLink PCIe card for capturing from tape, etc.
(2) internal BR Burners connected via USB 3.0 to SATA adapters
Old Cooler Master CM Stacker ATX case with (13) 5.25" front drive-bays holds & cools everything.

Workstations A & B are the 2 remaining 6-core 4.0ghz Xeon 5660 or I7 980x on Asus P6T6 motherboards.

$999 Walmart Evoo 17 Laptop with I7-9750H 6-core CPU, RTX 2060, (2) M.2 bays & (1) SSD bay...

Hulk wrote on 11/17/2011, 11:14 AM
@TheRhino,

It is curious that your system is not able to preview at full frame rate. My 2500K at 4.2 GHz can preview this project at Best/Full with an external display using no more than 62% CPU.

It's hard to believe that Sandy Bridge-E provides ~35% over Sandy Bridge. Various hardware websites show modest increases except for applications like Cinebench, where the increased memory bandwidth and PCIe bandwidth can be utilized, and Vegas is not one of those memory-starved applications: http://www.anandtech.com/show/5091/intel-core-i7-3960x-sandy-bridge-e-review-keeping-the-high-end-alive/5 . I see that you ran the test with only 3 cores. Perhaps you could run them with all 6 cores operating?

- Mark
TheRhino wrote on 11/17/2011, 12:25 PM
Hulk-

Sorry I didn't clarify better... I meant to say "threads" instead of "cores"... The post has been edited... For projects like benchmarks, Vegas seems to render faster with fewer threads - usually 2 or 3. However, the 1080p-to-720p resize benefits from more threads, as do most of the videos we edit...

My benchmarks are on a pre-Sandy Bridge Socket 1366 980X. Thanks to CPU optimizations, the overclocked 4-core Sandy Bridge 2600K (Socket 1155) performs nearly as well. This means that two more cores added to Sandy Bridge (Socket 2011 only) achieve about a 35% gain.

When the die size shrinks this spring/summer and heat becomes less of an issue, Intel will likely release an 8-core Sandy Bridge to fit Socket 2011 only. That is why I have been encouraging folks to invest in Socket 2011 vs. Socket 1155. It's much easier to replace a CPU than to build a system from the ground up. Our last upgrade was 1.5 years ago, and all we did was replace the 920 with a 980X and we saw a 40-50% increase in performance...

Wolfgang S. wrote on 11/17/2011, 1:59 PM
I did some additional tests with the second clip:

CPU only, CUDA preview disabled: 36 seconds
CPU only, CUDA preview enabled (GTX 570): 33 seconds
Use GPU if available, CUDA preview enabled (GTX 570): 32 seconds

So very similar to my test with the third clip.

Yes, your machine is significantly faster, but it is a newer generation, so that is to be expected.

My system behaves quite stably at 4.4 GHz; I do not really see crashes from overclocking. I have not invested a lot of time to investigate that - my ASUS board has a nice feature here and performs the overclocking by itself.
