Increase CUDA Cores? Good Idea? Y/N?

Grazie wrote on 1/24/2014, 4:42 AM
My Tower constructors tell me that I could have the Titan GTX.

OK, this is me, so please keep it as simple as you can:

1] I presently have an nVidia GTX560ti. It has 352 CUDA Cores

2] I COULD get a GTX Titan. It has a massive 2688 CUDA Cores, roughly 7.6 times as many.

So:-

Question 1: Will Vegas Build 770 benefit from this? For Previewing? For Rendering?

Question 2: Would building a RAM Preview be faster?

Question 3: I use Prerender and Render to New Track, which require me to select a Render Template. Would Vegas benefit here from the 7.6 times more CUDA cores?

Question 4: What Plugs make use of CUDA? Do any Plugs use CUDA at all?

Well, that's a small list of 4!

Please assist me in making a decision.

Grazie

Comments

DiDequ wrote on 1/24/2014, 6:33 AM
Grazie, I cannot answer your questions, but...

Could you visit your tower constructors with a Vegas project and a backup of your system?
They install the GTX Titan video card in your PC.
You render your project and see how much faster it is (or isn't).
Only then do you decide whether to buy it.
I am quite sure they will understand if you say "No, it does not work any better. I am not buying it."

Didier.
Carlos Werner wrote on 1/24/2014, 7:04 AM
Isn't that the Kepler architecture? I don't think it is (so far) fully supported by Vegas.
rs170a wrote on 1/24/2014, 7:43 AM
Grazie, if your computer can handle it, you might want to consider the new AMD 290. Reviews so far have been extremely positive with respect to use in Vegas.
GPU Info for Vegas
anyone using an Amd 290 gpu?

Mike
Stringer wrote on 1/24/2014, 7:49 AM
The AMD R9 cards appear to be about 30% faster than the best Nvidia cards.

An R9 280X in the $400 range does better than the $1000 Titan in an XDCAM render:
http://www.anandtech.com/show/7492/the-geforce-gtx-780-ti-review/14

Even the $200 R9 270X does better than the GTX 780 Ti (or the Titan, for that matter).

The AMD advantage shows up in the benchmark database put together by Hulk..

http://www.sonycreativesoftware.com/forums/ShowMessage.asp?ForumID=4&MessageID=880334

http://www.hyperactivemusic.com/vegaspro/vegaspro.html

If you go with Nvidia, more CUDA cores are better in general, but there are diminishing returns. The pricier Titan does not appear to do better than the cheaper 780 Ti.

i.e. 2,000 cores will have a significant advantage over 1,000, but 2,800 over 2,600, not so much.

I personally know that the difference between a 560 Ti (448 cores) and a GTX 580 (512 cores) is negligible.

OldSmoke wrote on 1/24/2014, 8:44 AM
I wouldn't spend my money on a Titan, as it doesn't do as well as an old 580. The R9 seems to do quite well, but I would wait for SCS to make a new statement in that direction. Also, the Titan will be "old" soon, with the 800 series on the horizon.

Proud owner of Sony Vegas Pro 7, 8, 9, 10, 11, 12 & 13 and now Magix VP15&16.

System Spec.:
Motherboard: ASUS X299 Prime-A

Ram: G.Skill 4x8GB DDR4 2666 XMP

CPU: i7-9800x @ 4.6GHz (custom water cooling system)
GPU: 1x AMD Vega Pro Frontier Edition (water cooled)
Hard drives: System Samsung 970Pro NVME, AV-Projects 1TB (4x Intel P7600 512GB VROC), 4x 2.5" Hotswap bays, 1x 3.5" Hotswap Bay, 1x LG BluRay Burner

PSU: Corsair 1200W
Monitor: 2x Dell Ultrasharp U2713HM (2560x1440)

Jerry K wrote on 1/24/2014, 9:04 AM
I saw this today on Newegg: six video cards on one M/B.
I guess this is for some very serious gaming...

http://www.newegg.com/Product/ComboBundleDetails.aspx?ItemList=Combo.1532095&cm_sp=DailyDeal-_-1532095-_-Combo

Jerry. K
NormanPCN wrote on 1/24/2014, 9:57 AM
You cannot directly compare CUDA cores from current Nvidia (Kepler) architectures to previous architectures.

The shader clock in previous architectures ran at double the GPU core clock; it now runs at the same rate. In other words, the newer architectures need more cores just to stay even.
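NormanPCN's point can be roughed out with a back-of-envelope calculation. The clock figures below are published reference specs for a GTX 580 (Fermi) and a GTX Titan (Kepler), and "throughput ~ cores x shader clock" is a deliberate oversimplification that ignores all other architectural differences:

```python
# Naive shader-rate comparison across architectures (illustrative only).
# Fermi ran its shaders at 2x the core clock (the "hot clock");
# Kepler runs them at the core clock.
fermi_cores, fermi_core_mhz = 512, 772    # GTX 580 reference specs
fermi_shader_mhz = fermi_core_mhz * 2     # 1544 MHz hot clock

kepler_cores, kepler_mhz = 2688, 837      # GTX Titan base clock

fermi_rate = fermi_cores * fermi_shader_mhz
kepler_rate = kepler_cores * kepler_mhz

# Well under the 2688 / 512 = 5.25x that raw core counts suggest.
print(f"naive throughput ratio: {kepler_rate / fermi_rate:.2f}x")
```

Either way, the takeaway is the same: comparing bare core counts overstates the gap between a Fermi card and a Kepler card.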
astar wrote on 1/24/2014, 1:04 PM
From my research you might want to consider a FirePro. NVIDIA gimps its gaming cards for compute. You could buy two V8800s for the price of a Titan, and the V8800 is on the supported list. The W9000 is the card Sony is demoing for 4K support.
JohnnyRoy wrote on 1/25/2014, 11:38 AM
> Posted by: Jerry K "I saw this today on Newegg, 6 video cards on one M/B. I guess this is for some very serious gaming..."

I think you missed the significance of the title of the product:

Mining Kit UKO-K140K

That's a mining kit for people who want to mine BitCoins.

~jr
bigrock wrote on 1/28/2014, 1:06 AM
The 290 smokes the Nvidias for compute at half the price.
Grazie wrote on 1/28/2014, 1:51 AM
DiDequ: "Grazie, I cannot answer your questions, but..." O...K.... ?

DiDequ: "Can you visit your tower constructors with a Vegas project, and a backup of your system." Didier, I've come here, not to my suppliers, to ask a question I'd like answered. It would appear from the "informed" replies that this is not as straightforward as it would seem, which is valuable in itself. If I've understood you correctly, you're implying that there is no straightforward answer other than doing what you suggest - system > suppliers > install > a real-world "Test and Measure" > purchase decision. If so, that's just plain dim - not you, but the fact that it's the only way forward.

Actually, it was more about improving PREVIEW via anything I could Prerender in order to Preview; the rendering times on this present machine are more than acceptable for what I do. No, this is purely about Previewing.

Cheers,

Grazie

ushere wrote on 1/28/2014, 2:10 AM
me too please regarding PREVIEW only....

afaic rendering is an overnight thing, i'm not really interested in shaving seconds off an hour render (if, on the other hand, i could render my hour in half the time i'd be interested ;-))
Christian de Godzinsky wrote on 1/28/2014, 4:19 AM
In the middle of all this confusion it would be prudent for SCS to FINALLY come forward and publish some REAL, up-to-date information about compatible, well-performing, AVAILABLE hardware.

The GPU info page should be updated ASAP. Or is GPU rendering still so flawed with the newest GTX GPUs that it's better just to forget it and let it be?

I have a GTX 660 Ti that performs extremely well with all my applications - except Vegas Pro 12. I would like to use the much-advertised GPU acceleration during editing AND rendering, but must disable both to preserve the stability of VP (and my nerves).

I'm so frustrated that I'm willing to invest money in a new (or OLD!) GPU if that solves the problem. I'm running three monitors, and switching to a second-hand GTX 570/580 would be a step backward since those only support two simultaneous monitors. Having two GPUs would need a 1000W PSU... Not an attractive option.

If money is not the issue (my pain threshold is at about $700), what would be a good GPU investment that would also be future-proof with Vegas? Only SCS can answer that. AMD or NVIDIA? Is the only reliable path a PRO GPU - Quadro/FirePro? Someone must have this information - we cannot possibly experiment with all available combinations...

Agreed, preview is of utmost importance; rendering is usually an overnight business anyhow, but it would be nice to speed that up as well...

Cheers,

Christian

WIN10 Pro 64-bit | Version 1903 | OS build 18362.535 | Studio 16.1.2 | Vegas Pro 17 b387
CPU i9-7940C 14-core @4.4GHz | 64GB DDR4@XMP3600 | ASUS X299M1
GPU 2 x GTX1080Ti (2x11G GBDDR) | 442.19 nVidia driver | Intensity Pro 4K (BlackMagic)
4x Spyder calibrated monitors (1x4K, 1xUHD, 2xHD)
SSD 500GB system | 2x1TB HD | Internal 4x1TB HD's @RAID10 | Raid1 HDD array via 1Gb ethernet
Steinberg UR2 USB audio Interface (24bit/192kHz)
ShuttlePro2 controller

Grazie wrote on 1/28/2014, 4:48 AM
Christian, you speak nothing but the Truth!

Often, with regard to this, I feel I'm groping darkly in the darkest of alleyways, on Planet Fog, wearing sunglasses with my head sewn on backwards.

Can somebody please "Plough the Road" and get us a way forward.

Grazie

farss wrote on 1/28/2014, 5:08 AM
Just to be clear here: to preview you've got to render anyway, so if render speed improves, preview should improve as well. It's [I]encoding[/I] that you skip in order to preview.

As to the original question, I agree with the others who are saying you really need to test this with a typical project. It's quite possible your vendor is basing their advice on experience with a different NLE, or with different kinds of projects from the ones you typically do. In general, though, the problem with using the GPU is the amount of data that has to be moved to and from the CPU and GPU; that's more likely what's holding back real-time preview. A faster CPU, more and faster RAM, and a higher bus speed are more likely to yield an improvement than more GPU cores.
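Bob's data-movement point can be roughed out with illustrative numbers. Everything below is an assumption, not a measurement: 1080p 8-bit RGBA frames, roughly 8 GB/s effective PCIe 2.0 x16 bandwidth, and a hypothetical FX chain that uploads and downloads each frame once per pass:

```python
# Back-of-envelope: cost of shuttling frames between CPU and GPU.
frame_bytes = 1920 * 1080 * 4    # 1080p, 4 bytes/pixel: ~8.3 MB per frame
pcie_bytes_per_s = 8e9           # assumed effective PCIe 2.0 x16 rate
fx_passes = 5                    # hypothetical FX chain length

# One upload plus one download per pass.
transfer_s = fx_passes * 2 * frame_bytes / pcie_bytes_per_s
frame_budget_s = 1 / 25          # 25 fps playback budget per frame

print(f"transfer ~{transfer_s * 1e3:.1f} ms of a "
      f"{frame_budget_s * 1e3:.0f} ms frame budget")
```

Under these assumptions the bus traffic alone eats a quarter of the per-frame budget before any actual processing happens, which is why adding cores alone may not speed up preview.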

Bob.
Grazie wrote on 1/28/2014, 5:19 AM
Understood, Bob. Now, would FEWER CUDA cores have an effect on your analysis?

Grazie

DavidMcKnight wrote on 1/28/2014, 9:31 AM
I would trust OldSmoke's advice on this subject.

6 cards on one MoBo!! I have no idea what it means to "mine" BitCoins but I'm gonna find out.
Mark_e wrote on 1/28/2014, 9:58 AM
I'm interested in this as well. I'm currently running a GTX 570, which works great with Vegas, but I'm doing a bit more with Blender and HitFilm and have been looking at the Titan because of its 6 GB of RAM, or the 780 Ti, which has 3 GB but is a bit quicker and cheaper.

I'm not that worried about getting faster times with Vegas; I just don't want to break it :-). I've not seen much posted about people's experiences so far and couldn't see any in the benchmarking tests earlier in this thread.
farss wrote on 1/28/2014, 2:16 PM
[I]"Now, would LESS CUDAs have an effect on your analysis?"[/I]
I cannot see how fewer would be better any more than more would be.
The old "a chain is no stronger than its weakest link" applies to a significant extent.
Actually, that's not a terribly good analogy, because in this case each link in the chain contributes a "cost" to throughput, so any improvement in any link could yield [I]some[/I] improvement in performance. Up against that, though, is the consideration of real dollar costs.

Bob.
Grazie wrote on 1/28/2014, 2:49 PM
Please simplify.

Grazie
John Lewis wrote on 1/28/2014, 3:08 PM
I have a Titan in one PC and a GTX 690 in the other.
The 690 performs better than the Titan.
I think in the NLE world only Adobe After Effects will support the two GPUs, and that is only for some functions.
The 690 performs better with Vegas.
However, you still won't get full RT from some plugins, e.g. NB; it depends on which filter you use.
You may, again depending on complexity, get RT from NBT2.
I use an i7 3770K with 32 GB RAM on an Asus P8Z77-V Premium.
OldSmoke wrote on 1/28/2014, 3:12 PM
I had an old Q6600 system with a GTX 460 and VP11 32-bit. It did well enough to edit HDV material but not AVCHD. However, just for the fun of it, and before I handed the old PC over to my daughter, I put in a GTX 570. The improvement on that system was impressive: the SCS benchmark project rendered to MC AVC in 58 sec instead of 110 sec; that's 336 vs. 480 CUDA cores. But aside from the cores, there is also a good difference in memory bandwidth and bus width. As Bob mentioned, the more data you can put through, the faster the system will be. Have a look at the Nvidia tables at Wikipedia and compare the "old" GTX 580 against a GTX 680 or even a 780: http://en.wikipedia.org/wiki/Comparison_of_Nvidia_graphics_processing_units#GeForce_600_Series A card designed for gamers, like the 600 and 700 series, "only" has to work in one direction, out to the monitor. For encoding, however, you need to send the data back to the CPU and out to the disk.

I upgraded my two GTX 570s to two GTX 580s, and there is a simple reason for that. To get the most out of two cards, both need to be in PCIe x16 slots and both must be running at x16, not x16 & x8 or, even worse, 2x x8. The GTX 570 was a huge three-slot card, but the two 580s I got off eBay were only two slots wide, so now I can fit them both in x16 slots. I am now rendering the SCS benchmark project to MC AVC in 29 sec instead of 33 sec. I strongly believe the difference comes from the cards being in the right slots rather than from the difference between the 570 and the 580.

What I still can't understand is why users are so against accepting the obvious and using what is proven to work. We can all shout as much as we want; SCS will only make a new statement when there is something new to report. The SCS GPU Acceleration website still shows a graph with a GTX 570. Even if you go out and buy a new Quadro K4000, I am certain you won't get what you expect. And yes, SCS needs to change that, but until then, the Nvidia Fermi architecture is still what works for VP11 & 12. I can't speak for AMD as I have no experience with it at all. As for cost, 2x GTX 580 off eBay and a new 1000W PSU are still cheaper than a Titan that may not even get you where you want to be. Also keep in mind that the 600 and 700 series will be obsolete soon, and who knows, maybe SCS is now optimizing VP for the 800 series.
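The x16 vs. x8 slot argument comes down to link bandwidth. As a rough sketch (assuming PCIe 2.0 moves about 500 MB/s of usable data per lane per direction after encoding overhead):

```python
# Why slot configuration matters for dual cards (illustrative numbers).
PER_LANE_MB_S = 500  # assumed usable PCIe 2.0 bandwidth per lane

for lanes in (16, 8):
    gb_s = lanes * PER_LANE_MB_S / 1000
    print(f"x{lanes}: ~{gb_s:.0f} GB/s per direction")
```

Dropping a card from x16 to x8 halves its ceiling for CPU-to-GPU frame traffic, which is consistent with OldSmoke's insistence on two slots running at a true x16.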


JasonATL wrote on 1/28/2014, 5:09 PM
Grazie,

If you are interested, I can try to shed some light on your specific questions. I built a new system a few months ago that I purposely designed to be expandable and to rely on GPUs (mainly for using Resolve and Premiere Pro). I originally bought a GTX 770 4GB OC (1536 CUDA cores). Wanting more power, I then bought a GTX 780 3GB (2304 CUDA cores), keeping the GTX 770. Both are installed in my system.

In Vegas, I can select which GPU to use. If you'll tell me what your benchmark project is (e.g., type of footage, frame rate, FX chain), I can try running some tests to compare.

I can tell you that I normally run VP 11 (when I use Vegas nowadays) because it has been much faster than VP 12 on my projects when using the GPU. VP 12, on the other hand, has been a bit more stable in certain projects when not using the GPU. Premiere Pro, for its part, loves both GPU cards, has fast previews and rendering, and is stable, but that's another story.

As others have suggested, it appears to me that the Vegas implementation of GPU support pales in comparison to others' and to its apparent potential. Vegas will almost never tax the GPU beyond, say, 20-30% in my projects, whereas Resolve and PPro will typically utilize 80% of the cards when previewing or rendering. My gut tells me that the premium for a Titan will not be worth it and that, for Vegas only, the money should be spent on CPU, not GPU, power - and not too many cores, at that (since Vegas won't use all of them!).
OldSmoke wrote on 1/28/2014, 5:24 PM
The usage of CUDA in Vegas is limited, but rendering to MC AVC does get me between 40 and 80% on the SCS benchmark project. It very much depends on the footage on the timeline and the FX used.

JasonATL, would you mind rendering the SCS benchmark project to MC AVC and XDCAM in the same way SCS did? That would be a good indication of the additional support we can get from those cards.
