NEW: Rendertest-2010

Comments

fordie wrote on 2/8/2013, 12:41 PM
i7 3770K @ 4.4 GHz, 16GB DDR3, Vegas 12 build 486

CPU only: render time 160 s, CPU usage 70%

GPU selected: render time 46 s, CPU usage 15%

Nvidia GTX 560 Ti 448-core, driver 310.90.

Running problem-free... so far.
john_dennis wrote on 2/8/2013, 2:35 PM
@fordie

Looks as if your overclock is working for your CPU-only measurement. The GTX 560 Ti doesn't suck, either.
OldSmoke wrote on 2/8/2013, 3:50 PM
3930K @ 4.3GHz
Win 7 64-bit
VP 12, build 486
16GB RAM
All SSDs: System, Render & Projects
Storage is on a RAID 1
GTX 570 O.C. to 875MHz, driver 275.33 (the fastest in my system)

33 sec with GPU on
160 sec CPU only

How about that:
https://dl.dropbox.com/u/39278380/render-test2010.png

Proud owner of Sony Vegas Pro 7, 8, 9, 10, 11, 12 & 13 and now Magix VP15&16.

System Spec.:
Motherboard: ASUS X299 Prime-A

Ram: G.Skill 4x8GB DDR4 2666 XMP

CPU: i7-9800x @ 4.6GHz (custom water cooling system)
GPU: 1x AMD Vega Pro Frontier Edition (water cooled)
Hard drives: System Samsung 970Pro NVME, AV-Projects 1TB (4x Intel P7600 512GB VROC), 4x 2.5" Hotswap bays, 1x 3.5" Hotswap Bay, 1x LG BluRay Burner

PSU: Corsair 1200W
Monitor: 2x Dell Ultrasharp U2713HM (2560x1440)

rstrong wrote on 2/9/2013, 7:41 PM
VP 12, 486
Windows 7 64-bit
RAM: 24 GB
Intel i7 970 3.2 GHz, 12 MB cache
GeForce 9600GT

GPU on, 5 secs.
CPU only, 31 secs.

R. Strong

Custom remote refrigerated water cooled system for CPU & GPU. Intel i7- 6950X, 10 Core (4.3 Turbo) 64gb DDR4, Win7 64 Bit, SP1. Nvidia RTX 2080, Studio driver 431.36, Cameras: Sony HVR-Z5U, HVR-V1U, HVR-A1U, HDR-HC3. Canon 5K MK2, SX50HS. GoPro Hero2. Nikon CoolPix P510. YouTube: rstrongvideo

Grazie wrote on 2/10/2013, 12:57 AM
WOW, Robert, how on Earth did you get that??? What is it in the build that gets you those blistering results?

Stunned . . .

Grazie

Barry W. Hull wrote on 2/10/2013, 4:45 AM
Are you kidding me? Is that real? Yeah, what on earth...
FilmingPhotoGuy wrote on 2/10/2013, 5:26 AM
@rstrong, please go to Tools > Clean Up Prerendered Video and run the test again.
Grazie wrote on 2/10/2013, 6:35 AM
Here are some interesting results, well I think so:

1] GPU OFF : Set-aside RAM 8 GB = 2:59

2] GPU OFF : Set-aside RAM 0 GB = 13:45

3] GPU OFF : Set-aside RAM 0.1 GB = 3:47

4] GPU ON : Set-aside RAM 0.1 GB = 53 secs

5] GPU ON : Set-aside RAM 8 GB = 46 secs

And now for SHIFT+B:

6] GPU ON : Set-aside RAM 8 GB = 50 secs

7] GPU OFF : Set-aside RAM 8 GB = 3:01

So, GPU ON and set-aside RAM at 8 GB for the optimum.

Render = 46 seconds

SHIFT+B = 45 seconds

It would appear everything "render" is now being handled by my Nvidia GTX 560 Ti card. Is this possible? Have SCS got it right?

Cheers

Grazie

ps: Would still love to hear from Rob?
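For anyone comparing runs like Grazie's, here is a minimal Python sketch (the timings are hard-coded from the post above; the parse_time helper and labels are purely illustrative, not part of any Vegas tooling) that normalises the mixed m:ss / seconds figures and reports each configuration's speedup over the slowest run:

```python
# Sketch: compare render timings such as those reported above.
# The figures are copied from the post; parse_time and the labels are
# illustrative only, not part of any Vegas tooling.

def parse_time(t: str) -> int:
    """Convert 'm:ss' or plain seconds to whole seconds."""
    if ":" in t:
        minutes, seconds = t.split(":")
        return int(minutes) * 60 + int(seconds)
    return int(t)

runs = {
    "GPU off, 8 GB preview RAM":   "2:59",
    "GPU off, 0 GB preview RAM":   "13:45",
    "GPU off, 0.1 GB preview RAM": "3:47",
    "GPU on, 0.1 GB preview RAM":  "53",
    "GPU on, 8 GB preview RAM":    "46",
}

baseline = max(parse_time(t) for t in runs.values())  # slowest run as reference
for label, t in runs.items():
    secs = parse_time(t)
    print(f"{label:32s} {secs:4d} s  ({baseline / secs:4.1f}x faster than slowest)")
```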

FilmingPhotoGuy wrote on 2/10/2013, 9:26 AM
I think it's high time we got a new Rendertest-2013, otherwise we'll be measuring in milliseconds.

C'mon John, do the honours.
OldSmoke wrote on 2/10/2013, 9:46 AM
If your entire project fits into the preview RAM, then the second time you render it, it will be rendered out of RAM. For real testing, you need to clear your preview RAM each time you render, or set the preview RAM to the default of 200 MB, to have comparable results. But it is fun watching the render finish in seconds, with the only bottleneck being the speed of your drive. I wonder how fast it would render to a RAM disk.
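On the drive-speed point, a rough back-of-envelope sketch in Python (all figures below are assumed example values, not measurements from this thread) of the sustained write speed a near-instant render would demand from the target drive:

```python
# Rough sketch: is the target drive the bottleneck once rendering itself
# is nearly instant? All numbers below are assumed examples.

output_size_mb = 450   # assumed size of the rendered test file
render_time_s = 3      # assumed render time when frames come from preview RAM

required_mb_s = output_size_mb / render_time_s
print(f"Sustained write speed needed: {required_mb_s:.0f} MB/s")

for drive, speed_mb_s in {"7200 rpm HDD": 120, "SATA SSD": 450, "RAM disk": 5000}.items():
    verdict = "OK" if speed_mb_s >= required_mb_s else "likely the bottleneck"
    print(f"{drive:12s} ~{speed_mb_s:5d} MB/s -> {verdict}")
```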

Hulk wrote on 2/10/2013, 10:07 AM
Old 2500K with a slight overclock to 4.2 GHz
No GPU in the system.
VP10e

2:36
ingeborgdot wrote on 2/10/2013, 4:04 PM
I guess I am baffled, and maybe some of you experts on here can tell me what my problem is.
When I run this test with strictly my CPU and the preview RAM at default, my time was 211 sec.
With GPU and RAM at default it was 38 sec.
With GPU and RAM at 800 MB, which seems to be my sweet spot, it was 31 sec.
My problem is that when I do normal rendering of my video, using the GPU is my slowest option. What gives here? In fact it is sometimes three times as slow. At this point I am taking HDV 1080 down to DVD, which is what everyone still wants. When I do this, my best option is to use the CPU. I do like the results of the GPU with this, though. Any advice on what to do or what the problem is? Thanks.
OldSmoke wrote on 2/10/2013, 4:30 PM
Which version of Vegas and which template you use make a difference too. Some templates and codecs don't do GPU acceleration. In the AVC templates, for example, GPU acceleration must be enabled explicitly: if I leave the default AVC Internet template as it is, on Automatic, it will not use the GPU and render times go up. The GTX 5xx series are currently the best ones to use for Vegas.

ingeborgdot wrote on 2/10/2013, 6:27 PM
I do most of my rendering to the DVD Architect NTSC Widescreen video stream template. Are there any settings I need to change or can change?
OldSmoke wrote on 2/10/2013, 7:11 PM
I just rendered the rendertest-2010 using VP12 and the MPEG-2 DVD Architect NTSC Widescreen video stream template, and it took 20 sec. Are you rendering video and audio together, or the audio separately? You should actually render the audio separately using the AC-3 Pro Stereo DVD template, which will import into DVDA without recompression. Audio rendering takes its time too, and it can't be GPU accelerated.

Is Hyper-Threading enabled in your BIOS?

TheRhino wrote on 2/10/2013, 7:15 PM
The reason I revived this thread is that I am disappointed in the rendering performance of V11 & V12 vs. V10e, and I am disappointed in the lack of high-end CPU & GPU developments in the last (3) years. It seems like all of the R&D is going into low-cost, low-power CPUs & GPUs for tablets, smartphones, media centers, etc. Even Apple has not updated the Mac Pro line in a timely fashion...

About (3) years ago I purchased a 6-core 980X that, when overclocked, allows me to render a typical (2) hour HD project in about (2) hours using Vegas 10e, which utilizes 100% of my CPU during renders. The exact same project in V11 & V12 takes much longer with the same CPU, and the results from GPU rendering (GTX 570) are hit & miss but never as fast as 10e...

It's time to upgrade my oldest workstation & make it my flagship vs. the 980X (which will serve well for many years - just not as the primary editing rig...). HOWEVER, there just isn't a single-CPU option right now that is all that much faster. I have been using Vegas for almost 15 years, and this is the first time that a span of (3) years does not allow me to double my rendering speeds for the same/less money than I spent previously. My only option to double rendering speeds is to move to a costly dual Xeon setup which would cost 5X as much...

The flip side is that I guess I can keep my money & not upgrade for a while... However, I miss the days when I edited in SD only and a simple processor upgrade doubled or tripled my rendering speeds... It was great the first time a 2-hour SD project rendered in under 45 minutes. Those were the days...

Workstation C with $600 USD of upgrades in April, 2021
--$360 11700K @ 5.0ghz
--$200 ASRock W480 Creator (onboard 10G net, TB3, etc.)
Borrowed from my 9900K until prices drop:
--32GB of G.Skill DDR4 3200 ($100 on Black Friday...)
Reused from same Tower Case that housed the Xeon:
--Used VEGA 56 GPU ($200 on eBay before mining craze...)
--Noctua Cooler, 750W PSU, OS SSD, LSI RAID Controller, SATAs, etc.

Performs VERY close to my overclocked 9900K (below), but at stock settings with no tweaking...

Workstation D with $1,350 USD of upgrades in April, 2019
--$500 9900K @ 5.0ghz
--$140 Corsair H150i liquid cooling with 360mm radiator (3 fans)
--$200 open box Asus Z390 WS (PLX chip manages 4/5 PCIe slots)
--$160 32GB of G.Skill DDR4 3000 (added another 32GB later...)
--$350 refurbished, but like-new Radeon Vega 64 LQ (liquid cooled)

Renders Vegas11 "Red Car Test" (AMD VCE) in 13s when clocked at 4.9 ghz
(note: BOTH onboard Intel & Vega64 show utilization during QSV & VCE renders...)

Source Video1 = 4TB RAID0--(2) 2TB M.2 on motherboard in RAID0
Source Video2 = 4TB RAID0--(2) 2TB M.2 (1) via U.2 adapter & (1) on separate PCIe card
Target Video1 = 32TB RAID0--(4) 8TB SATA hot-swap drives on PCIe RAID card with backups elsewhere

10G Network using used $30 Mellanox2 Adapters & Qnap QSW-M408-2C 10G Switch
Copy of Work Files, Source & Output Video, OS Images on QNAP 653b NAS with (6) 14TB WD RED
Blackmagic Decklink PCIe card for capturing from tape, etc.
(2) internal BR Burners connected via USB 3.0 to SATA adapters
Old Cooler Master CM Stacker ATX case with (13) 5.25" front drive-bays holds & cools everything.

Workstations A & B are the 2 remaining 6-core 4.0ghz Xeon 5660 or I7 980x on Asus P6T6 motherboards.

$999 Walmart Evoo 17 Laptop with I7-9750H 6-core CPU, RTX 2060, (2) M.2 bays & (1) SSD bay...

OldSmoke wrote on 2/10/2013, 7:43 PM
I feel totally the opposite. The rendertest-2010 took 105 sec in VP10e versus 34 sec in VP12. I still have 10, 11 and 12 on my PC. I don't see a GTX 570 in your system specs. Did you recently upgrade your computer? For me GPU acceleration is stunning in terms of improvement; I guess that shows how different systems can be.

rstrong wrote on 2/10/2013, 8:11 PM
LightADs: "@rstrong, please go to Tools > Clean Up Prerendered Video and do the test again."

Okay, so I did clean up prerendered video this time...

GPU......18 secs
CPU......31 secs
Preview RAM was set to 1024 MB

Win7 64-bit
RAM: 24 GB
Intel i7 970 3.2 GHz
GeForce 9600GT

R. Strong

john_dennis wrote on 2/10/2013, 8:16 PM
I agree with TheRhino on the CPU front and I'm totally frustrated on the GPU front. While I like my new system, I didn't feel the bang was worth the trouble. I felt as if I was going through the motions on my four-year upgrade cycle. My expectation is to double my rendering performance.

I wish Intel had been successful with the Prescott and could have pushed the clocks much higher without burning the house down. (I also wish I hadn't bought 10,000 shares of that dot-com stock in 2001.)

Parallelism, though it has been around for a long time, has not been the perfect solution to increased rendering performance. When the parallelism is spread across dissimilar processors (e.g. GPU cores) it becomes even more problematic for the end user. It sometimes reminds me of the days when we could first buy CD burners and each drive manufacturer had its own interface card.
JohnnyRoy wrote on 2/11/2013, 6:17 AM
> "GPU......18 sec's"
> "CPU......31 sec's"

Robert, are you following the instructions and rendering to MainConcept MPEG-2 / HDV 1080-60i with Video Rendering Quality set to BEST?

I find it hard to believe that your Intel i7 970 6 Core 3.2 GHz is TWICE as fast as my Core i7-3930K Sandy Bridge-E 3.2 GHz! Also I can't believe that a GeForce 9600GT doubles your performance.

Can you explain a bit more about your setup? Are you overclocking or did you not follow the render test instructions?

~jr
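As a rough sanity check on that doubt, here is a naive Python estimate that assumes throughput scales with cores x base clock and ignores IPC differences (which would, if anything, favour the 3930K):

```python
# Naive throughput estimate: cores x base clock. This ignores IPC, memory
# bandwidth and turbo behaviour, so it only shows why a 2x gap is implausible.

cpus = {
    "i7-970 (Gulftown)":         {"cores": 6, "ghz": 3.2},
    "i7-3930K (Sandy Bridge-E)": {"cores": 6, "ghz": 3.2},
}

scores = {name: spec["cores"] * spec["ghz"] for name, spec in cpus.items()}
ratio = scores["i7-970 (Gulftown)"] / scores["i7-3930K (Sandy Bridge-E)"]
print(f"Naive i7-970 / i7-3930K throughput ratio: {ratio:.2f} (nowhere near 2x)")
```
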
TheRhino wrote on 2/11/2013, 1:13 PM
I don't have a GTX 570 installed on my 980X system, but I recently built a new workstation for a client/friend. He liked my 980X setup, so we went with the newer 3930K & GTX 570. After it was all said & done, his system is only about 20% faster than my 3-year-old 980X. (I went light on the overclocking but chose the best GPU drivers based on others' results...)

Until he is able to find a license for V10 we can only test V12. We found that it is actually SLOWER than my (3) year-old 980X system running V10e. V10e uses 100% of my CPU during rendering whereas V12 only uses 60%... His GPU does OK, but it does not work with every codec & he has to overcome the learning curve of when to use the GPU and when not to. IMO this is frustrating...

I just thought that (3) years later I could spend the same $2,000 on a system and have 1.5X to 2X the performance. I have (4) workstations in our studio and the slowest one is only a Q6600. Every few years I remove the guts from the slowest workstation & make it my main editing rig. This is the longest I have gone between upgrades.

rstrong wrote on 2/11/2013, 10:09 PM
Hi JR,

Yes, I followed the instructions exactly as per the readme that DSE provided, which seems fairly simple. But am I missing something?
I copied the file from the timeline in VP10e and pasted it into VP12. Was that okay?
When I rendered in VP10, I got a GPU time of 130 secs. Something must be different when I render in VP12.

As far as my system goes, it's a home build, and very simple. I don't do any overclocking (I really don't know how). The only thing really custom is the water cooling system, which is refrigerated. I water cool the RAM, the GPU and chipset, as well as the CPU & HDDs. Water temps are approx. 50 degrees F. But I really don't know if cooling makes a difference.
I delete all pre-rendered files and minimize the preview window before rendering, as well as doing a reboot before each test.
I'm surprised the video card performs so well; I thought it would need something newer to make a difference.

R. Strong

ritsmer wrote on 2/12/2013, 5:34 AM
As far as I remember, the Max Number of Rendering Threads setting has a significant influence on the rendering time.

Roughly, a high number (some 10-12) was best for CPU alone, while a low number (around 2-4) gave an optimum for GPU-assisted rendering.

So, when the new Rendertest is done, the testers should maybe also note the Max Number of Rendering Threads that gives the best result for their setup.
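No such column exists in the current test, but as a minimal sketch (Python; the sample timings are invented placeholders, not real measurements) of how testers could log runs per thread-count setting and report the best configuration for each mode:

```python
# Sketch: record manual render timings per Max Rendering Threads setting and
# report the fastest configuration for CPU-only and GPU-assisted modes.
# All timings below are invented placeholders.

results = [
    # (mode, max_render_threads, seconds)
    ("CPU only", 4, 210), ("CPU only", 8, 175), ("CPU only", 12, 168),
    ("GPU",      2,  48), ("GPU",      4,  46), ("GPU",      12,  55),
]

for mode in ("CPU only", "GPU"):
    runs = [(threads, secs) for m, threads, secs in results if m == mode]
    best_threads, best_secs = min(runs, key=lambda r: r[1])
    print(f"{mode:8s}: best = {best_secs} s with Max Rendering Threads = {best_threads}")
```
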
dxdy wrote on 2/12/2013, 6:57 AM
V12 sets the project properties to those of the first clip dragged to the timeline. Have you looked at the V10 project properties as you do your comparison?