Vegas Pro 12 - Render Tests GTX 680 vs GTX 580

xdcamer wrote on 6/27/2013, 9:03 AM
I thought I would post a new topic for this all important render testing.

With all the info on GTX Kepler vs GTX Fermi, I decided to do some render tests to see exactly what was going on. As mentioned, I am very satisfied with my GTX 680 on driver 314.22 (just stay away from driver 320.18).
Vegas Pro 12 (Build 563) is pretty rock solid with no major crashes or issues to report.

My system: Asus P6X58 MB, i7 930 @ 3.8Ghz, 24 GB Ram, GTX 680

The footage is HDV 1440x1080 25p
The project is 10 min 47 sec in duration, with an intro title, crossfades, cookie cutters, and a few other simple effects.

These renders are 'Video Stream Only'.
Here are the results in minutes:seconds.

Rendered to Sony AVC 1440x1080 50i 15Mbps:
GPU OFF = 14:56
GTX 580 GPU ON = 9:50
GTX 680 GPU ON = 9:45

Rendered to MC MPEG-2 720x576 25p:
GPU OFF = 3:21
GTX 580 GPU ON = 2:39
GTX 680 GPU ON = 2:26

So I can confirm that my GTX 680 performs slightly better than the GTX 580, despite reports that Kepler cards underperform Fermi cards.

I know that the GTX 680 should outperform it by far, with 1536 CUDA cores vs 512 in the 580, but I am satisfied that I have not suffered any longer render times.
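
Out of curiosity, here is a quick Python sketch (using nothing but the times posted above) that converts the mm:ss values and works out the actual speedups:

def to_seconds(mmss):
    m, s = mmss.split(":")
    return int(m) * 60 + int(s)

# Render times from the tests above.
tests = {
    "Sony AVC 1440x1080 50i": {"GPU OFF": "14:56", "GTX 580": "9:50", "GTX 680": "9:45"},
    "MC MPEG-2 720x576 25p":  {"GPU OFF": "3:21",  "GTX 580": "2:39", "GTX 680": "2:26"},
}

for name, times in tests.items():
    baseline = to_seconds(times["GPU OFF"])
    for card in ("GTX 580", "GTX 680"):
        speedup = baseline / to_seconds(times[card])
        print(f"{name} | {card}: {speedup:.2f}x vs CPU only")

Both cards come out at roughly 1.5x on the AVC render, with the 680 ahead of the 580 by only a couple of percent.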

Perhaps with improvements to Vegas Pro 12 and NVIDIA drivers with more compute support (rather than focusing on gaming all the time), things may get even better.

Comments

OldSmoke wrote on 6/27/2013, 9:38 AM
Have you tried the GTX 580 with driver 296.10? That is still the fastest driver for my GTX 570; any driver newer than that is a lot slower.

While it is good to know that render times are similar, why spend the money if you can get the same performance for less? As you said, the 680 should be twice as fast. The only advantage I see in the 680 is that it could drive all three of my monitors.

Have you tried rendering the SCS benchmark project or rendertest-2010? That would give us some numbers we can relate to.

Proud owner of Sony Vegas Pro 7, 8, 9, 10, 11, 12 & 13 and now Magix VP15&16.

System Spec.:
Motherboard: ASUS X299 Prime-A
RAM: G.Skill 4x8GB DDR4 2666 XMP
CPU: i7-9800X @ 4.6GHz (custom water cooling system)
GPU: 1x AMD Vega Pro Frontier Edition (water cooled)
Hard drives: System Samsung 970 Pro NVMe, AV-Projects 1TB (4x Intel P7600 512GB VROC), 4x 2.5" hotswap bays, 1x 3.5" hotswap bay, 1x LG Blu-ray burner
PSU: Corsair 1200W
Monitors: 2x Dell UltraSharp U2713HM (2560x1440)

NormanPCN wrote on 6/27/2013, 9:51 AM
"I know that the GTX 680 should outperform it by far, with 1536 CUDA cores vs 512 in the 580."

Previous-gen Nvidia cards ran their shader cores at double the core clock, so you cannot compare raw CUDA core counts between Kepler and anything earlier.
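
To put rough numbers on it, here is a back-of-the-envelope Python sketch using NVIDIA's published reference clocks (approximate figures, not measured data):

# Peak FP32 throughput = cores x shader clock x 2 (an FMA counts as 2 FLOPs).
# Fermi ran its shaders at 2x the core clock ("hot clock"); Kepler dropped that.
cards = {
    "GTX 580 (Fermi)":  (512,  1.544),  # 772 MHz core, 1544 MHz shader clock
    "GTX 680 (Kepler)": (1536, 1.006),  # 1006 MHz base, no separate shader clock
}

for name, (cores, shader_ghz) in cards.items():
    gflops = cores * shader_ghz * 2
    print(f"{name}: ~{gflops:.0f} GFLOPS peak FP32")

That works out to roughly 1580 GFLOPS for the 580 vs 3090 for the 680: about 2x on paper, not the 3x the bare core counts suggest, and a render pipeline bottlenecked elsewhere won't show even that.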
xdcamer wrote on 6/27/2013, 9:56 AM
I have tried different drivers in the past, including 296.10 (though not with this test), but found 314.22 the most stable, with little or no crashing. VP12 had a habit of crashing a lot on other drivers, and many users had to disable GPU in VP12 for certain render templates, but not me on 314.22.

I think if you have a stable driver that performs well, stick with it. No sense upgrading unless there is a benefit.

The main reason for my tests was to show that the GTX 680 is stable and performs well. There has been a lot of talk that the 6xx series was way below the performance of the 5xx series. Not so in my case.

I am happy that my system is rock solid, and yes, while the render times should be better given the specs of the GTX 680, at least they are not worse than on the Fermi architecture, as has been claimed, and things can only improve. The Fermi 5xx cards are still great performers but 'old hat' now (with the 6xx and now 7xx series out), and a 5xx card is difficult to buy if you're in the market for a GPU.

And yes the benefit of three monitors on a single card is a dream for us video editors.
Seth wrote on 6/27/2013, 5:41 PM
You're right; there has definitely been a collective pooh-poohing of the Kepler cards, and it kept me from getting one last year. Who knows, maybe in the next build of Vegas we'll see OpenCL renders outperform CUDA <crosses fingers>. Thank you for sharing your experience.
OldSmoke wrote on 6/27/2013, 6:11 PM
@mphelan

Did you select OpenCL or CUDA under the render template?


xdcamer wrote on 6/28/2013, 7:06 PM
When I click the 'Check GPU' button, it tells me CUDA is available. In the Sony AVC template you can select the Encode Mode: Automatic, CPU, GPU, etc.
For these renders I just used the standard AVC Blu-ray 1440x1080 50i @ 15Mbps template (which is much faster than MC AVC), with the Encode Mode left at the standard setting of Automatic.
dirtyklingon wrote on 11/25/2013, 8:17 AM
The Sony AVC plug-in is faster than MainConcept in GPGPU rendering. Unfortunately, YouTube really does not like it at all.

So you're pretty much stuck with the MainConcept plug-in, which definitely sees poor performance on Kepler vs Fermi.

I was wondering if there is any hint or word that the MainConcept plug-in will be optimized for Kepler? Probably too late in the game to expect it, considering how long it took them to get it running on the 6xx series at all and how close the 8xx series is now.

I'd love to use the Sony plug-in for the render speed, but like I said, YouTube does not like it at all: I basically get garbage data on the bottom few lines. It looks perfectly fine when played from my local HDD, though. Just YouTube being crap, unfortunately.

I looked at the OP again, and it seems you do have MainConcept render times.

That's not at all typical of what I'm seeing on my end, though.

CUDA rendering with the MainConcept Internet HD 720p template on a GTX 680 FTW with a 2600K (both stock) is about 5-10 seconds faster than with CUDA disabled (i.e. the gain is marginal in the extreme and probably within the margin of error).

Source is 1920x1080 29.97fps video, Fraps codec.

Whereas the Sony AVC plug-in is noticeably faster with CUDA on the same source material, output resolution, and hardware. But again, YouTube dislikes it.