OT Nvidia CEO

JJKizak wrote on 2/6/2009, 4:32 PM
I just watched an interview with the CEO of Nvidia and he said their new "cuda" architecture will speed up your computer 100 times. I just can't handle these speeds anymore. He said also that a super computer in Japan uses the new stuff instead of adding processors and they were happy as a pig in mud.
JJK

Comments

johnmeyer wrote on 2/6/2009, 4:49 PM
It would be great if it were actually true ...

Nothing like a little hype from the CEO.
jabloomf1230 wrote on 2/6/2009, 4:58 PM
But if you have used TMPGEnc Xpress 4, which is written to take advantage of CUDA, the increase in speed is nowhere near that. It really depends not only on whether the software itself uses CUDA, but also on whether the filters and codecs are written to use it. I don't have the CUDA-based h.264 media encoder BadaBoom installed anymore, but I did try the demo, and the encoding improvement was noticeable but not astounding. CUDA also does not work with multiple GPU cards or cards in SLI without disabling one GPU, which defeats the whole purpose.

The real problem with CUDA is that it is proprietary. AMD (ATI cards) has a similar SDK and both nVidia and AMD are supposedly "working" on a unified GPU language/compiler, which may displace both proprietary approaches. It's also interesting that nVidia has been rumored to be developing an x86 CPU, just to hedge their bets.
Himanshu wrote on 2/6/2009, 7:01 PM
The closest thing to a unified parallel/GPU/high-performance computing language is OpenCL, which has support from Apple and many others, so it may succeed.

The other alternative is for Microsoft to get everyone to adopt DirectX and to take on the responsibility of standardizing GPU programming via the DirectX API.
Coursedesign wrote on 2/6/2009, 7:09 PM
There is absolutely no way Microsoft would be able to get everyone to adopt the very Windows-oriented proprietary DirectX API.

I don't think they would want to do it even if they could, because it would tie their hands.

Also, perhaps the supergeeks here can explain if DirectX is expected to survive the next few years? I thought I saw somewhere that MS had plans for a more advanced interface? Or was that bagged when they decided to scrap their new Super Windows in favor of Vista 2.0 (aka Windows 7, based on a much cleaned up Vista kernel)?

Either way, MS would have to add a lot of primitives that are not in DX10.

John_Cline wrote on 2/6/2009, 7:16 PM
There are just a few filters in TMPGEnc4 that take advantage of CUDA; to my knowledge, none of the rendering is being accelerated.

Badaboom does use CUDA for rendering and is pretty quick for what it does. (There is a new version out as of yesterday.) No, it doesn't speed up rendering to h.264 by a factor of 100 but it does speed it up by 4-7 times in my experience.

There are some math intensive applications that can see a speed-up of 100x, but not rendering video.

I've said it before, implementing CUDA in Vegas would catapult it to a whole new level. I would be very surprised if Sony wasn't at least experimenting with it in some secret build of Vegas.
GlennChan wrote on 2/6/2009, 7:55 PM
Some of the six-figure high end systems use GPU acceleration in some form or other. Mistika, Resolve (CORE = CUDA optimized render engine or something like that), Flame would be examples.

They are significantly faster than Vegas. Several times faster in a lot of cases. But if you want to edit DV natively, none of those systems can do it.

"I've said it before, implementing CUDA in Vegas would catapult it to a whole new level."

It would be interesting to see, BUT you'd have to have a newer Nvidia card.

Which is kind of un-Vegas-like, because it used to be that you just install Vegas and it just works.

Background rendering might be interesting to see, and it would work for everybody. However, it would have some complications... what codec would Vegas render to, disk space, etc. It also tends to make performance sluggish, so sometimes people turn background rendering off.
John_Cline wrote on 2/6/2009, 8:13 PM
"It would be interesting to see, BUT you'd have to have a newer Nvidia card."

I DO have a new, high-end nVidia card. Heck, any of the 8000 series and higher nVidia cards with 256MB of memory support CUDA and some can be had for less than $100. Now, who wouldn't spend $100 to speed up Vegas?

There are over 100 million CUDA-enabled GPUs already in use, and CUDA can be programmed in C, with FORTRAN and C++ support coming soon. It is compatible with both the 32-bit and 64-bit flavors of Windows. If I'm not mistaken, Vegas is written in C++. Vegas v8.1 already requires somewhat specialized hardware and Vista64, so maybe CUDA could be part of the 64-bit Vegas series.
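
For anyone curious what "programmed in C" looks like in practice, here is a minimal, purely hypothetical sketch of a CUDA program (the textbook add-two-arrays example, nothing to do with Vegas or Badaboom); each array element gets its own GPU thread:

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* Kernel: each GPU thread adds one pair of elements. */
__global__ void addKernel(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;                 /* one million elements */
    const size_t bytes = n * sizeof(float);

    /* Host (CPU) buffers */
    float* ha = (float*)malloc(bytes);
    float* hb = (float*)malloc(bytes);
    float* hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    /* Device (GPU) buffers */
    float *da, *db, *dc;
    cudaMalloc((void**)&da, bytes);
    cudaMalloc((void**)&db, bytes);
    cudaMalloc((void**)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    /* One thread per element, 256 threads per block */
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    addKernel<<<blocks, threads>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("hc[0] = %f (expect 3.0)\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

The per-element function is ordinary C; the GPU just runs thousands of copies of it at once, which is why these cards shine on "do the same thing to every pixel" jobs.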
johnmeyer wrote on 2/6/2009, 9:10 PM
The few CUDA-enabled apps I've played with since I got my new computer with an nVidia 9800GT have not shown any fantastic improvements.

I don't really follow the details of CPU architecture improvement, but it has always been my understanding that many years ago (like fifteen), the Intel architecture was all wrong for doing repetitive calculations on large sets of data. So, in many devices, the vendors used proprietary Digital Signal Processors (DSPs) which did these things more efficiently.

As clock speed improvements have stalled out, Intel and AMD have added a lot of parallelism to their architecture. I think (but am not sure) that they added other elements that make these modern general-purpose CPUs (like my new i7) more efficient, per clock cycle, at doing these media tasks.

Since the GPU is a little like a DSP, I think the gap between what it can do and what the main CPU can do is probably narrowing, not widening. In addition, the incremental benefit of having this "free" extra processor (the GPU) is a lot smaller when you have a CPU that has two, four, or eight cores. In particular, the ratio between one CPU core and that core plus a GPU is big, but the ratio between eight cores and nine (an eight-core CPU compared to that CPU plus a GPU) is relatively small.
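
To put rough numbers on that (purely back-of-envelope, assuming for the sake of illustration that the GPU adds about one core's worth of media throughput): 1 core + GPU is 2/1 = 2x the throughput of the CPU alone, but 8 cores + GPU is only 9/8, or about 1.13x.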

I could be totally wrong about this, and would welcome input from someone that has actually followed this in more detail.

But, as I said, so far I haven't seen any huge benefits, which is unfortunate because I bought the nVidia specifically to get CUDA and was hoping for Nirvana.


jabloomf1230 wrote on 2/6/2009, 10:00 PM
"There are just a few filters in TMPGEnc4 that take advantage of CUDA, to my knowledge, none of the rendering is being accelerated."

TMPGEnc4's MPEG2 encoder is CUDA-accelerated, and I can attest to that. Otherwise, as you stated, nothing else besides the filters is presently CUDA-based.
John_Cline wrote on 2/6/2009, 10:55 PM
The nVidia card that I have has 256 processor cores; my CPU has 4.
GlennChan wrote on 2/7/2009, 12:10 AM
It may be that tasks like MPEG-2 encoding are more complicated and not something GPUs are necessarily good at. For MPEG-2 encoding, you have to do motion estimation, whatever secret sauce the encoder adds, and a number of other things (including lossless compression, which I don't think GPUs are good at), etc.

A lot of filters in Vegas would be things that GPUs are very good at, e.g. gaussian blur.
Nvidia GPUs also have instructions for doing things like power functions, sqrt, etc. very fast and at slightly reduced accuracy. Power functions are kind of slow on the CPU. For video editing, we don't care much about the reduced accuracy, since 32-bit float is overkill when implemented properly.
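
As a rough illustration of what those fast instructions look like from the programming side, here is a hypothetical CUDA kernel fragment that applies a gamma curve to each pixel value using the reduced-accuracy __powf intrinsic (host-side allocation and launch code omitted; the names are made up):

/* One thread per pixel value; gamma correction via the fast intrinsic. */
__global__ void gammaKernel(const float* in, float* out, int n, float gamma)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        /* __powf is the fast, reduced-precision hardware version;
           plain powf would be slower but more accurate. */
        out[i] = __powf(in[i], gamma);
    }
}

/* Launched something like: gammaKernel<<<blocks, threads>>>(d_in, d_out, n, 2.2f); */

Swapping __powf for the standard powf would buy back the accuracy at the cost of speed, which is exactly the trade-off being described.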
Himanshu wrote on 2/7/2009, 7:25 AM
GlennChan,

It's not that GPUs aren't good at any particular task. To a GPU, a computation is a computation; it doesn't matter whether you're calculating average temperatures or compressing video, right?

The architecture is important - GPUs generally succeeded where CPUs failed previously because they specialized their computational pipelines for 3D graphics (and imaging/video). This is referred to as having "fixed function pipelines."

This worked well for the graphics industry, and as a bonus, programmers started using the vendor-provided libraries that access the GPUs to do general-purpose high-performance computing on these cards.

Intel changed the game with a new microarchitecture (i.e. Core i7) where they don't focus on a fixed-function pipeline, but rather provide several general-purpose cores and allow them to be configured as necessary. This means they can be used to perform in the same way as GPUs have been used, or they can be reconfigured into pipelines that GPU cards currently don't have, and thus can outperform GPUs in those situations.

As we all know, having the most powerful hardware available is of no use unless the application we're using can take advantage of it. I've said in previous posts that I don't think it will pay off for Sony to focus only on CUDA-enabled hardware (or pick your favorite vendor-based technology, Stream, etc.) and enhance Vegas for that, while alienating customers who have an investment in hardware from other vendors. That kind of enhancement is best left to 3rd-party plug-ins, if the Vegas plug-in architecture is enhanced to allow such additions.

My best guess is that Vegas will continue to be dependent on Microsoft technologies, and hopefully incremental access to the GPU via DirectX/Direct3D will eventually improve performance, or OpenCL (or some such technology) will become a de facto standard and Sony will eventually port to it.

CourseDesign:

Not sure why you think DirectX will not survive? Yes, it is proprietary but look how far it's come because MS has control of it - it has practically overtaken OpenGL in a much shorter time span. And don't forget where OpenGL came from - Silicon Graphics' then-proprietary GL language.
jabloomf1230 wrote on 2/7/2009, 11:13 AM
"It may be that tasks like MPEG-2 encoding are more complicated and may not be a task that GPUs are necessarily good at. "

TMPGEnc XPress 4 has a CUDA-based MPEG-2 encoder and BadaBoom has a CUDA-based h.264 encoder, so I don't think it has anything to do with codec complexity. David Newman of Cineform posted something on their forum a while back about GPU encoding and explained why there was no advantage to using CUDA for Cineform encoding. I'll try to find his post.

EDIT: This isn't the one I remember, but it's similar:

http://www.reduser.net/forum/showpost.php?p=259225&postcount=113
srode wrote on 2/7/2009, 1:11 PM

"I've said it before, implementing CUDA in Vegas would catapult it to a whole new level.
It would be interesting to see, BUT you'd have to have a newer Nvidia card."

I agree that CUDA in Vegas would be a great improvement; other NLEs are already doing it. As to the age of the card, it depends on how old you are talking about: anything in the 8000 series will help, and the 200 series, with up to 480 cores, would smoke even with a single card. Other NLEs are seeing a 30% or greater reduction in rendering time with CUDA.