AMD CPU Offloads Video Processing to GPU

jrazz wrote on 9/10/2009, 9:56 AM
I don't have any real technical specs, but if I'm reading this right, I wonder if it will benefit us. A processor that is smart enough to know when to offload number crunching to another piece of hardware? Hmmm.

Here's the link.

Edit: here is a more direct link concerning video encoding.

Here is an excerpt: The most amazing and new feature of this Tigris-based notebook for me was the GPU-assisted video encoding. Quite simply, video encoding is changing the format of a video to be played on another device. One example is taking a family video on an HD camera and encoding it to play on an iPod or iPhone.

j razz

Comments

TheHappyFriar wrote on 9/10/2009, 11:17 AM
There's not much detail, but all ATI cards above a certain model include GPU encoding for certain codecs. You have to use their encoder, but it's part of the Catalyst package (and you have to use the basic/simple interface).

Tech Diver wrote on 9/10/2009, 11:23 AM
I wonder how the processor could be "smart" enough to do that without something explicitly identifying the code as a potential GPU operation. For example, if I were compositing two images where one was a mask I would be multiplying the normalized pixel value of the mask with the corresponding pixel of the target image. Then I would move to the next pixel and do the same. How would the processor know that this would be a good GPU task? To take advantage of the graphics card hardware, I would have to specifically load both images into the GPU's memory buffer, send it an appropriate command and then retrieve the resulting image.
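
To make that concrete, here's roughly what that explicit hand-off looks like in CUDA. This is only a sketch: the function names and the flat single-channel float layout are made up for illustration, and a real compositor would more likely use 2D textures and packed RGBA.

    // Sketch only: the explicit load -> command -> read-back pattern a GPU
    // requires for the mask composite described above. Names are hypothetical.
    #include <cuda_runtime.h>

    __global__ void maskComposite(const float* target, const float* mask,
                                  float* out, int numPixels)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < numPixels)
            out[i] = target[i] * mask[i];  // mask assumed normalized to 0..1
    }

    void compositeOnGpu(const float* hostTarget, const float* hostMask,
                        float* hostOut, int numPixels)
    {
        size_t bytes = numPixels * sizeof(float);
        float *dTarget, *dMask, *dOut;

        // Step 1: explicitly load both images into the GPU's memory buffer.
        cudaMalloc((void**)&dTarget, bytes);
        cudaMalloc((void**)&dMask, bytes);
        cudaMalloc((void**)&dOut, bytes);
        cudaMemcpy(dTarget, hostTarget, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dMask, hostMask, bytes, cudaMemcpyHostToDevice);

        // Step 2: send it the appropriate command (launch the kernel).
        int threads = 256;
        int blocks = (numPixels + threads - 1) / threads;
        maskComposite<<<blocks, threads>>>(dTarget, dMask, dOut, numPixels);

        // Step 3: retrieve the resulting image.
        cudaMemcpy(hostOut, dOut, bytes, cudaMemcpyDeviceToHost);

        cudaFree(dTarget);
        cudaFree(dMask);
        cudaFree(dOut);
    }

None of that happens unless the application is written to do it, which is exactly why the processor can't be "smart" about it on its own.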

Peter

Edit: I posted this before I read HappyFriar's reply. Yes, that makes sense, because the encoder is specifically written to use the GPU. Furthermore, this approach does not seem to require an AMD processor; Sony could do the same thing.

hazydave wrote on 9/14/2009, 10:53 AM
There's no magic here... this has been done for quite a while.

Most decent graphics hardware has video acceleration. There's been a DirectX Video Acceleration layer (DXVA) around since Windows 2000, and it was much improved in Vista. On my old PC, playing a Blu-ray barely worked without acceleration... with the acceleration (from the nVidia 8600 GTS graphics card), CPU use dropped to more like 25%.

Applications have to be coded for this... there's no "automatically done by the CPU" factor here. CPUs never see the job at a high enough level to make that call on their own.

I think the point of the article was twofold. One is that AMD is putting this into a mainstream laptop chipset... before, you probably had to buy something fairly high-end to get video acceleration on the GPU. Secondly, the Cyberlink Espresso application may be making more general use of this acceleration (e.g., transcoding) than simply rendering for display, which is what most applications do with it.

They also imply that AMD's native GPU-compute APIs are adding value here, over the generic Windows interfaces. That's likely... it takes a while for Microsoft to adapt to the cutting edge of things. A single API also helps in itself: when AMD and nVidia each have their own acceleration API, it's hard to get software support for either; if there's just one, support is more likely (though I'm still waiting to see video acceleration well used in Vegas... it would certainly help out with AVCHD and MPEG decode acceleration).

The big one is the native GPU computing APIs that are supposedly coming out in Windows 7. Those would theoretically let Vegas accelerate all video rendering, compositing, and other mathematics in a generic way that works with existing and future GPUs, or falls back to just the CPU(s) if there is no adequate GPU. It'll be interesting to see whether that turns out to be usable in NLEs or not.
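
To sketch what that GPU-or-CPU fallback might look like, here's a small CUDA example. CUDA is just a stand-in here for whatever generic API Windows 7 actually ships, and the brightness-scale operation is a made-up placeholder:

    // Sketch only: pick the GPU or CPU path at run time. scaleBrightness is
    // a hypothetical placeholder for real rendering/compositing math.
    #include <cuda_runtime.h>

    __global__ void scaleKernel(float* px, int n, float gain)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            px[i] *= gain;
    }

    void scaleBrightness(float* hostPixels, int n, float gain)
    {
        int count = 0;
        // cudaGetDeviceCount reports how many capable GPUs are installed.
        if (cudaGetDeviceCount(&count) == cudaSuccess && count > 0) {
            // GPU path: copy the frame up, run the kernel, copy it back.
            size_t bytes = n * sizeof(float);
            float* d;
            cudaMalloc((void**)&d, bytes);
            cudaMemcpy(d, hostPixels, bytes, cudaMemcpyHostToDevice);
            scaleKernel<<<(n + 255) / 256, 256>>>(d, n, gain);
            cudaMemcpy(hostPixels, d, bytes, cudaMemcpyDeviceToHost);
            cudaFree(d);
        } else {
            // CPU fallback: the same math as a plain loop.
            for (int i = 0; i < n; ++i)
                hostPixels[i] *= gain;
        }
    }

The appeal of a generic API is that the application writes the math once and the runtime decides where it executes, instead of every NLE maintaining separate AMD and nVidia code paths.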