Mark Dileo ("Hulk" on this forum) has taken an interesting view on CPUs in the coming year. Read his comments; I'm curious to see where the community falls with regard to his position.
Two significant things aren't addressed: GPUs are taking over many of the CPU's tasks, even for audio, and disk I/O system performance.
I have a dual-Xeon system, and even with SATA RAID off a dedicated controller, for the kind of work it mostly does the CPUs are far from stressed; they spend a good part of their time waiting for the disk system.
So for me seeing the price of SCSI arrays come down is of more interest than faster CPUs.
Bob.
Nice article - I agree largely with his assessment. I'm a bit skeptical about the DDR2, but I'm eyeing the platform advancements and the impending drop in prices on CPUs.
> GPUs are taking over many of the tasks of the CPU, even for audio, and disk I/O system performance.
I would go even further and say that special-purpose processors like the Cell, developed by Sony, IBM, and Toshiba, are the answer. We need more efficient processing. In the days when PCs just did word processing and spreadsheets, general-purpose processors made sense. When gaming graphics hit a wall, GPUs were the answer. No CPU can compete with today's GPUs for processing 3D graphics. Likewise, now that PCs are regularly processing video and other high-bandwidth multimedia, specialized processors like the Cell are clearly the answer.
IMHO, the AMD/Intel wars are uninteresting using today's technology. Silicon has gone as far as it's going to go until someone discovers a more efficient medium. Faster general-purpose processors are not the answer; more use of specialized processors is.
"specialize processors like the Cell are clearly the answer."
Mmmm... I'm not so sure - maybe "multi-core processors like the Cell are clearly the answer" might be more accurate - in any case, asserting that there is only one answer is a bit premature. I think the key to advancement in central processors is shrinking die sizes and manufacturing capacity. As was pointed out in the article, AMD can't afford another "miss" in either department if they wish to answer Intel's technical and marketing push.
That said - farss' comments about the GPU taking on a heavier load for video and audio tasks are spot-on. I just put nVidia's PureVideo codec on one of my machines, and it still has quite a bit of growing up to do. But with new "hints" at advancement in general audio convolution on the GPU from BionicFX, and long-standing proprietary solutions like UAudio, I think people are rightfully expecting to be able to leverage every processing cycle that lies fallow on their machine - the GPU on the video card being the primary slacker in most instances.
As a composer who also does sound design and film mixing, I'm particularly interested in advances with SLI and the potential to put a pair of relatively inexpensive video cards in the machine to handle tasks like rendering native-resolution HD video as well as a few dozen convolution instances for positional modeling of an acoustic space. This kind of thing would have been unthinkable five years ago.
This not only means that the hardware has to advance; the maturity of the programming languages and protocols has to step up as well. It will be interesting to see how this all dovetails into the release of Vista - and the corresponding growing pains that are expected.
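Just to put the convolution talk in concrete terms, here's a rough C sketch of the direct-form multiply-accumulate loop that GPU or DSP offload schemes like the ones above aim to take off the CPU. The function and argument names are made up for illustration.

#include <stddef.h>

/* A minimal direct-form convolution, purely illustrative: this is the
   per-sample multiply-accumulate work that GPU/DSP offload schemes aim
   to take off the CPU. Function and argument names are hypothetical. */
void convolve(const float *signal, size_t sig_len,
              const float *impulse, size_t ir_len,
              float *out /* must hold sig_len + ir_len - 1 samples */)
{
    for (size_t n = 0; n < sig_len + ir_len - 1; ++n) {
        float acc = 0.0f;
        for (size_t k = 0; k < ir_len; ++k) {
            /* only accumulate where the signal index is valid */
            if (n >= k && n - k < sig_len)
                acc += signal[n - k] * impulse[k];
        }
        out[n] = acc;
    }
}

A few dozen of these running over long impulse responses is exactly the kind of embarrassingly parallel load a GPU is good at.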
Nice article, but obviously pro-AMD. I remember reading for years that people said to stay with Intel because AMD lacked certain features that the software was written to use. I don't have the facts, but I think it was something about the instruction sets?? There was never any mention in this article about what was wrong with AMD over the years in that regard. Probably not a problem these days with the newer chips.
Until recently, we've always been an Intel house, but after being enticed by a reseller to try the AMD dual cores, we're now about half and half. The AMD dual dual-core systems kick significant tush over the dual P4 3.6 systems, and the 4400+ single-proc machines seriously beat the crap out of the Pentium D system we've got.
Whatever AMD's problems have been in the past, they seem to be just that... passed. SuperMicro, long an Intel-only manufacturer, started building AMD-based motherboards in the past year as well.
I agree, and I am sure most of us really appreciate people like you who actually try it and use it, along with the posted benchmarks. I would also be interested in how all of this plays into what Intel is doing with the new Macs. Are there different things going on with Mactels, and how will they compare to Intel PCs and AMDs for us video folks?
Good question!
At the moment some pretty basic things like doing an image rotate are running way slower on the Core Duo Macs than on the old models or on the Wintel PCs.
Does this mean Apple have lost the plot?
No. The code needs to be optimised for the platform, and at the moment the code is being cross-compiled / translated, which invariably isn't very efficient.
Still, it'd serve Apple right if they got wrongly boiled in oil over this, as they were more than happy to use such dodgy comparisons in the past themselves.
Which leads us back to an already-mentioned point: code optimisation. Nothing makes more difference than how well code is written; I'll wager there are many things that'd run faster with good coding on an 8086 than they do on any of the latest crop of PCs or Macs.
Just to give an example.
Many years ago I wrote a memory diagnostic for the 68010. I started off writing it using Oregon's Pascal compiler; to test our 8 MByte memory card it took around a day to run!
Thing is, the Pascal compiler was originally written for the PDP-11 and had been translated to run on VERSAdos, so one line of Pascal could end up as 1,000 lines of 68010 machine code, which is hardly optimal. However, the 68010 had a very small instruction pipeline, so by writing my code in assembler so that it fitted within that pipeline, I was able to get the same test to run in less than 10 seconds.
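For anyone curious, the logic of such a test is trivial; here's a rough C sketch of a walking-ones pass, purely to show the shape of the loop. The real thing was hand-coded 68010 assembler sized to fit the prefetch queue, which is where all the speed came from, and the function name here is made up.

#include <stddef.h>
#include <stdint.h>

/* Rough sketch of a walking-ones memory test. The original was 68010
   assembler kept small enough to execute out of the prefetch queue;
   nothing here attempts that optimisation. */
int walking_ones_test(volatile uint32_t *base, size_t words)
{
    for (size_t i = 0; i < words; ++i) {
        for (uint32_t bit = 1; bit != 0; bit <<= 1) {
            base[i] = bit;              /* write a single set bit     */
            if (base[i] != bit)         /* read it straight back      */
                return -1;              /* stuck or shorted data line */
        }
    }
    return 0;                           /* every cell passed          */
}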
And then there's the very basic issue of the microprocessor's instruction set. Given that the x86 line has had to maintain backwards compatibility, little has been done to make it more efficient.
I think we can safely assume that if a new OS were written to run on a newly designed CPU, combined with a toolset optimised for that CPU, we'd see huge speed improvements.
...and Google, a company that has Intel's CEO on its board, switched to AMD for their servers.
That must have hurt.
Many years ago, AMD had to rely on VIA to create their peripheral chips. This is what gave AMD processors a bad name at the time.
Today, NVIDIA makes peripheral chips for AMD processors, and everything works properly.
Note that there are still some new AMD mobos with VIA chips; those chips work better than before, but not well enough to be 100% safe. Many have reported compatibility problems.
Intel even had to swallow the indignity of having to make their recent CPUs AMD compatible.
Microsoft and other major software vendors told Intel that they would not create a separate 64-bit version for Intel's CPUs. It was AMD-compatible or nothing, so Intel had to give in. Of course they didn't call it that; they just said that they now had additional EM64T instructions.
As for the new Intel chips hopefully coming this fall, I think we have to wait for the real product before we know how they will perform in real life. With Intel being where it is right now, I think it would be foolish to bet the farm on them having something that totally works in Q3.
Intel has a long history of releasing CPUs before they were ready for primetime, and in this case I think they are feeling a lot of stress from the competition.
Not just from AMD actually beating Intel in the retail channel, but also from seeing AMD exceed 20% market share for the first time, with many signs that they haven't peaked yet.
And from seeing that AMD was able to do at 90 nm what it took Intel 65 nm to do, thanks to a superior design.
Btw, the Opteron architecture builds on the key concepts of DEC's Alpha CPU from many years ago.
Way ahead of its time, it was just too bad that DEC couldn't market its way out of a paper bag...
I was under the impression that Intel was supplying Apple with some new technology for these new Core Duo computers they are about to release in a workstation. I am curious whether they are better or faster than what the PC folks have (or will have) and the AMD camp. I guess it may be too early, as Bob says, since software needs to be written for it to gain the best from it. FCP will be released at the end of March in a universal format, and maybe they will have something new at NAB with new workstations and FCP 6. Just speculating. I guess I am wondering if these new Mactels might be the winners for running FCP and Vegas on one system (dual boot, obviously) compared to AMD or pure Intel PCs with Vegas (7??). :-)
Apple went with Intel because Apple's next frontier is the living room. The living room is what will get the attention and the dollars at Apple for the next many years.
Intel's sales guys promised built-in bulletproof content protection, which is key to Emperor Jobs' plan for world domination (i.e. record profits).
It will be interesting to see whether the content providers provide content that can only be seen on a brand-new Intel CPU system. If it happens.
> I'm not so sure - maybe "multi-core processors like the Cell are clearly the answer" might be more accurate
Sorry if I wasn't clear. Yes, the Cell is multi-core (9 cores to be exact, which blows away anything Intel or AMD has to offer by an order of magnitude), but the Cell is also a "specialized" processor. It can't run an operating system like XP, OS X, or Linux, so you won't see it running your desktop computer anytime soon. That's not what it was designed for. It is a co-processor in that regard. Hopefully you will have a desktop (AMD, Intel, or PowerPC) that also has a Cell for multimedia processing. I think that's where the future is heading.
> - in any case, asserting that there is only one answer is a bit pre-emptive.
I agree; I thought I was saying exactly the opposite: that there is NOT one answer but many "specialized" answers. Hardware that is designed to do one thing and do it well... but lots of these. The article was suggesting there was only one answer. I was not.
AMD made the first PC RISC CPU with their 386 model (it was a RISC core with a CISC converter). But it wasn't really anything special; they just were able to do it (the article says the P was the first to do it RISC-like, so technically it's right; just an interesting fact about AMD!)
The benchmarks that AnandTech ran were the normal ones that almost everyone runs. The games are the most resource-hungry and configurable ones out there (well, Doom 3 is more resource-hungry than Quake 4, but that's on purpose: Q4 was a modified D3 engine with unnecessary things stripped out to make it run faster), and WMP is always used too. Normally a few others are thrown in, but it's pretty much the same across the board for benchmarking from anyone. So I really trust those results (Intel doesn't and can't control the software tested).
I agree with him 95%, though. Ironically, AMD wasn't the only one to outperform Intel CPUs. The Cyrix 486 DX2-80 outperformed Intel's 486 DX4-100 easily (I benched both side by side!). And Intel is in a big bind with the "clock speed is king" thing they've been pushing for decades. And AMD had better have something else up its sleeve. It could; we just might not know yet (and it will probably be released after the next Intel chip. They may want to send off a final generation of the 64/X2 line before they release a new core).
The other 5%?... This may NOT be good for the consumer. AMD is still a touchy company. A big enough swoop and they could be nearly shut down (which Intel may be hoping for), and we could very well be 10 years behind in CPU technology once again (i.e., this new Intel chip could be the last one until someone else comes along).
Cell processors the future? Maybe, but not in the NEAR future. Programmers who work with PCs, Xbox 360s, and PS3s say that a modern PC easily outperforms the 360 and PS3 for similar tasks, mostly because no hardware/software is designed to actually use all the cores to their potential, and odds are it won't be any time soon (they just produce bigger stats to throw at consumers, like clock speed, that are meaningless in the real world; buying a 360 or PS3 based on those numbers is like buying a car with a 12-cylinder engine but having it locked at a max of 60 mph).
GPUs the future? If you had mentioned this two or three years ago people would have laughed at you. Now the same people say it's a new and great idea. :) This most likely won't happen because the same people who've been writing the rendering engines for a decade are set in their ways, and this goes 100% against what they've done (and could have done a decade ago!). Get some guys who don't believe it's impossible and we could see Vegas 7 rendering almost anything in RT (or near RT) in HD. Heck, those benchmarks PROVE it. Just try one of those games: in nearly any given scene you have the GPU rendering alpha channels, full 3D motion, real-time lighting, AA, masks, color correction, distortions, zooms, reflections... not to mention the CPU being stressed at 100% with AI, loading/unloading sounds, and sending surround sound to discrete channels.
Much of what we (video editors) want is already being built into the hardware of current 3D cards. All you need to do is write the code to interface with them.
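To give an idea of what "interfacing with them" looks like, here's an illustrative GLSL fragment shader held in a C string; it does the sort of per-pixel colour correction a 3D card already runs for games. The uniform names are hypothetical, and wiring it up still needs the usual OpenGL plumbing (upload the frame as a texture, compile the shader, draw a screen-sized quad).

/* Illustrative only: per-pixel lift/gain colour correction as a GLSL
   fragment shader, the kind of kernel a video card runs millions of
   times per frame for games. Uniform names are hypothetical. */
static const char *color_correct_fs =
    "uniform sampler2D frame;   /* current video frame as a texture */\n"
    "uniform vec3 lift;         /* per-channel offset               */\n"
    "uniform vec3 gain;         /* per-channel gain                 */\n"
    "void main(void)\n"
    "{\n"
    "    vec3 rgb = texture2D(frame, gl_TexCoord[0].st).rgb;\n"
    "    gl_FragColor = vec4(rgb * gain + lift, 1.0);\n"
    "}\n";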
I'm sure that AMD will throw a DDR2 controller on their X2s and we'll see their performance increase real soon. I'm not worried about chipset compatibility. I've only had one MB with an Intel chipset and have had as many problems with that as with the SiS and VIA ones I've had. I'd say it wasn't a problem with them but an intentional INcompatibility from MS and Intel. :)
GGman, in late 2004 AMD added the SSE3 instructions along with some others. That helped AMD pass Intel on the multimedia benchmarks. Then AMD advanced further with their memory architecture and superior dual-core offering. This happened at the same time Intel ran out of clock-speed headroom and stumbled trying to find a new design path.
I don't really think the article is unfairly pro-AMD. It's just that right now AMD provides the best choice for consumers across all CPU offerings, minus laptops. It is unfortunate, but nobody can say much good about Intel at this time because of its poor performance, high power consumption, and high heat output. You can expect Intel to recover by 2007, though.
Peter,
Thanx, I didn't have all my facts, as I mentioned. I agree that the article was not all pro-AMD, but some history was left out. I also do not care to see the author using huge letters in different colors claiming AMD the winner throughout the article. I would rather see the author not being biased and just presenting the facts so we, as intelligent readers, can draw our own conclusions. I felt I was being sold AMD, and I guess I am a Joe Friday guy... "just the facts, ma'am". I agree that AMD has been looking good lately from seeing Spot's reports on actually using it and the tests people have made. I am not in favor of any single chip or OS system. They all still fall short of what I want... 1080 60p, realtime everything, all the time, and shooting with no lights. :-)
AMD will need DDR2 for the future and DDR2 will increase performance in a big way.
When AMD added a 64-bit instruction set to its processors, Intel laughed, but ended up adding AMD's 64-bit instruction set to its own processors. The AMD 64-bit instruction set killed Intel's 64-bit processor that was going to phase out the x86 instruction set.
Intel processors have always been poorly designed (except the mobile "M") when it comes to instruction-set execution. The microcoding and pipeline management are just poorly done. By the late '70s pipeline modeling was understood very well. Intel engineers thought "bigger is better", a great phrase for marketing, but a killer in chip design if you don't understand heat management and the IC hardware "space" requirements for instruction execution.
The X360's 3-core PowerPC chip is clocked at 750 MHz. If you overclocked it to 1.8 or 2.0 GHz, Apple users would kill for it. Like the Cell, the 3-core PowerPC chip has better memory management. The AMD and PowerPC designs both have superior cache and memory management compared to the Intel design.
Linux is actually running on the Cell; this was publicly announced by the Sony division president when talking about the PS3 and its delay.
It would be nice to have AMD and the level-3 Cell processor on the same board (8-way memory subsystem). Add secondary GPU support and you could realtime-edit any video stream, up to 4K, if disk I/O management can handle it.
" I am not in favor for any single chip or OS system. They all still fall short of what I want... 1080 60p, realtime everything, all the time and shooting with no lights. :-)"
GGman, I could not agree more. My Opterons seem fast until I try to render 3D animation. Other sad news is I'm not impressed with the Vista betas either... :-(
Thanks for taking the time to have a look at the article. I had a good time writing it and I hope you enjoyed reading it. It's very difficult to write an AMD/Intel article, as someone will always see you favoring one side or the other. I tried to look at the history of these chips impartially. I probably should have noted that I ran benchmark sites for MediaStudio Pro for quite a few years, and also for Vegas 5 until 6 came out, so I did have some real-world experience with how these CPUs performed with video editing applications.
As far as GPUs go, beyond accelerating playback I don't see them accelerating video editing features in the near future. But I hope I am wrong! I used Liquid Edition 6 a bit, which is supposedly "GPU Enhanced" or something like that, and was unimpressed. Unless you wanted to put simple 3D animations ON your video, it was of no real help in speeding up my editing workflow.
As I see it, the problem with GPU offloading for video editing is the GPU manufacturer having to code for specific software, thus limiting the market, or trying to establish a "standard." I think a good example of this was Premiere with the "RT" series Matrox hardware.
As someone stated, who knows when, if ever, we'll see the Cell processor on the desktop.
For better or worse the near future looks to be AMD and Intel processors running some sort of MS OS. And I for one am excited about this. Conroe is up and running and looks extremely promising. AMD will have some type of answer.
I think for the type of work most people do, which is still mainly DV-based, they are looking for real-time previews with multiple effects and streams, and fast rendering/encoding of the final project, both of which are CPU-intensive. HDV streams aren't disk-limited even in Cineform format, since it's "only" about 10 MB/sec, well within the reach of most modern computers, even for two or three streams.
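A back-of-the-envelope check of that, using the ~10 MB/sec figure above; the ~60 MB/sec sustained rate for a single current SATA drive is my own rough assumption, not a measured number.

#include <stdio.h>

int main(void)
{
    const double mb_per_stream = 10.0;  /* approx. Cineform HDV rate (per the post above) */
    const double disk_mb_sec   = 60.0;  /* assumed sustained rate of one SATA drive       */

    for (int streams = 1; streams <= 3; ++streams)
        printf("%d stream(s): %4.0f MB/sec needed, ~%.0f MB/sec available\n",
               streams, streams * mb_per_stream, disk_mb_sec);
    return 0;
}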
The leap from my PIII850 to my P4 2.0 made efficient video editing a reality. With the PIII850 I had to pre-render every edit to see the result. I hated having a client behind my back waiting to see an edit while I "diverted" him with small talk during the render. With the upgrade to the P4 2.0 most previews were close enough to real time to be sufficient to make editing decisions. I think the leap from my P4 3.06 to my next system will be as significant.
> As I see it, the problem with GPU offloading for video editing is the GPU manufacturer having to code for specific software, thus limiting the market, or trying to establish a "standard." I think a good example of this was Premiere with the "RT" series Matrox hardware.
The widespread standardization you are waiting for has already happened.
It's called OpenGL (and to some extent DirectX 9), and already contains the functionality to do high quality video rendering vastly faster, as used in Magic Bullet, After Effects, Premiere Pro 2, and a rapidly increasing number of other NLEs and post programs (FCP coming soon).
It's not the future (other than for Vegas), it's the present.
Indeed it came from 3D animation, but then, in the quest for photorealism, "shaders" became so powerful that they could be used for real-life video.
Consumer volume (many millions of cards) cut manufacturing costs dramatically, which was passed on to the customer.
For me personally, I'd prioritize 10-bit video (or even 16-bit/32-bit float) over GPU rendering, but I know others feel differently.
I'm still not sure OpenGL will help with the operations that typically slow down the video editing workflow.
Color correction, resizing/cropping, PIP effects, audio/video filters, etc. It's not really any one effect/filter that is generally the problem (for me anyway); it's the stacking of them.
This opinion is only based on previous industry history, but I believe the newer, faster multicore CPUs, multithreaded software, and specific instruction sets like SSE will make a bigger dent in NLE performance over the next year or so than any other advances on the horizon. The best/fastest proprietary solutions will still be available and have hardware tuning, but they serve a small part of this market.
Then again I have always rooted for the CPU to turn my computer into an NLE! I remember in 1998 lots of people were telling me I wouldn't be able to edit MPEG-2 on my computer without hardware assist for at least 5 years. I was doing it a few months later.
Until the PIII at speeds over 300 MHz, we were told playing back DVD MPEG-2 streams was unlikely.
Just 5 or 6 years ago, real-time previews of SD streams (w/o hardware assist) with moderate-to-high effects were impossible; now it's very possible.
I wouldn't underestimate what a little competition between AMD and Intel can achieve. I believe that Conroe is going to be an eye opener for video editing performance. The AMD X2 chips are so fast right now at 2.4GHz that just releasing them at 2.8+GHz will be a significant jump in performance. Conroe at 2.67GHz looks to be a 20% improvement over that. I think Conroe at 2.67GHz will be 30%+ faster than anything available today.
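Rough arithmetic behind that guess, assuming performance scales roughly with clock on the X2 side and taking the rumoured 20% Conroe advantage at face value; both assumptions are mine, not benchmarks.

#include <stdio.h>

int main(void)
{
    const double x2_bump = 2.8 / 2.4;        /* ~1.17x from the clock bump alone */
    const double conroe  = x2_bump * 1.20;   /* rumoured +20% over the faster X2 */

    printf("Conroe 2.67GHz vs today's 2.4GHz X2: ~%.0f%% faster\n",
           (conroe - 1.0) * 100.0);
    return 0;
}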
Even though I criticized a small part of the article, I did enjoy reading it and I learned some new things. I also enjoyed the whole thread that followed.
Nice article. I'm wondering what effect Windows Vista will have on all this, since several articles I've read indicate that, for a while at least, video editing and other content-creation software will take a back seat to security with the removal of direct access to the hardware layer.
One article hoped that, at least for audio, the OpenAL interface promoted by Creative will help users get some efficiency back.
I could be misunderstanding things though.
I've always wondered just how much faster our apps would be without the MS bloat penalty.