Comments

johnmeyer wrote on 12/8/2011, 9:52 AM
Bob,

I posted that link a month ago in this thread that I started:

GPU Support Information from Sony

Like you, I thought it was an amazingly important resource and that everyone should be aware of it, given the large number of posts about GPU problems, questions about how to maximize performance with this new feature, and issues about what GPU/video card to purchase.

However, everyone in the forum now seems obsessed, not with fixing problems or helping people, but with enforcing the way in which people post, the best example being the post asking people to fill in their system specs on their profile page, something that has nothing to do with helping people understand how to better use the product or fix problems.

I have written many thousands of posts, always trying to help people, and not ONCE have I ever looked at anyone's system specs.

Sorry to go a little OT, but the thread on that subject has gone completely OT, something that was absolutely guaranteed to happen given the nature of the original post. By contrast, this post you started is designed to help people, and provides some amazingly valuable information for anyone who wants to take the time to read all the FAQs on the page you linked to.



TheHappyFriar wrote on 12/8/2011, 10:25 AM
"the post asking people to fill in their system specs on their profile page, something that has nothing to do with helping people understand how to better use the product or fix problems."

Except, of course, when someone says "my GPU support ain't workn'" and doesn't post anything else, so you look up their system specs.

Which is pretty much what's happening.

I've posted the same link Bob did several times, but it seems nobody bothers to read the knowledge base or the readme included with Vegas, let alone the manual, all of which could solve many people's problems.

EDIT: just checked and the last several threads that have a subject with "GPU" or "graphics card" in them don't have any useful info listed in the first post, except the model of GPU they're using. Only one had system specs to even see what else they could be using.

If everybody posted useful info when they made their first post instead of "it's broken, why?!?!" then the system specs section wouldn't be needed.
johnmeyer wrote on 12/8/2011, 11:10 AM
"Except, of course, when someone says "my GPU support ain't workn'" and doesn't post anything else, so you look up their system specs."

I agree.

And so, when it is required in order to answer something, you ask for the specific hardware or software information that is needed to answer the question.

However, 98% of the time "system spec" information is not required. More to the point, there is almost always some vital piece of information (nothing related to system specs) that the OP forgets to include, and that you have to ask for. So why is there this abnormal obsession with system specs?

Examples of information not related to system specs that is needed far more often than system specs:

What codec is used by the video being edited?
How many still photos are used, and what resolution?
What are the project settings and do they match the source video? Have they set the deinterlace setting to "none"?

I can go on for another hundred lines, but each of my examples is far more important to a wider range of problems than system specs are.

What's more, there is no standard for what system specs should be included. Do I include the make of my DVD burner? Do I include its model number? Should I include the firmware number? Each of these can be exceedingly important, depending on the problem. I can, as with the example above, list dozens of other examples, but the point is that the brief system specs included in most profiles -- including those posted by the people who seem to obsess over this subject -- are completely and totally insufficient to answer most problems that are actually related to hardware.

And, the system specs should probably also include software. Are you running Windows XP, Vista, or 7? Which service pack are you running? Which of over 1,000 patches have been applied? You may think I am now trying to be cute, but as an example, there are numerous people I have helped over the years who had problems capturing, and it turned out they needed a particular Windows patch that had not yet been installed. Similarly, there is a very specific patch that lets some older computers read SDHC cards (the higher-capacity, but physically identical, big brother to the original SD memory cards). If that patch is not installed, the computer can read 2 GB cards, but nothing larger.

I can go on and on, but the point is that the obsession with system specs is totally pointless and seems based more on achieving a forum pecking order and less on actually helping some poor lad or lass who is having a problem.

So, my recommendation is that if you need some additional information in order to answer a question, just ask for the specific information that you need rather than make a general request for "system specifications" which, as I've pointed out above, almost never provides all the relevant and necessary information for any particular issue.

But, I digress ...
JJKizak wrote on 12/8/2011, 12:41 PM
It's hard to search for something in the knowledge base, especially if you use too many words. One at a time is a nightmare. The search engine is very strict, and if you have one tiny letter wrong you get nothing. You can't use any phrasing; it must be one word, or two words at the most. It's basically almost worthless unless you find something in the first couple of pages. I might possibly die in the time it takes to find something there.
JJK
johnmeyer wrote on 12/8/2011, 1:28 PM
"It's hard to search for something in the knowledge base"

That is probably true, but Sony did include, in the link Bob provided in his initial post above, direct access to answers to most of the major GPU questions people have been asking.
R0cky wrote on 12/8/2011, 1:51 PM
Technical support told me to go back to the AMD 11.2 drivers to debug GPU accel problems on my new laptop.

Their own documentation on the above referenced page says that the driver must be 11.7 or newer.

AMD's documentation says I must have the 11.11 driver for my hardware which is the new dual Intel/AMD graphics that switches between graphics adaptors depending on power status or various manual settings. I'm sure that alone is part of the issue even though I have manually set it to always use the AMD graphics.

Tech support has not yet responded to this information.
farss wrote on 12/8/2011, 2:03 PM
"However, 98% of the time "system spec" information is not required. More to the point, there is almost always some vital piece of information (nothing related to system specs) that the OP forgets to include, and that you have to ask for. So why is there this abnormal obsession with system specs?"

Thank you for taking the time to type all of that. I've several times started to write a similar post and then just lost interest, as there's so much hubris and emotion attached to this topic that I doubt anyone would bother to take what was said on board.

Bob.
johnmeyer wrote on 12/8/2011, 2:55 PM
"Thank you for taking the time to type all of that. I've several times started to write a similar post and then just lost interest, as there's so much hubris and emotion attached to this topic that I doubt anyone would bother to take what was said on board."
I agree, and thanks for posting that.

I apologize, Bob, for taking your thread slightly OT with my previous two posts, but the thread in this forum that is specifically about that topic has degenerated, so I didn't want to post there.

Every forum goes through cycles and, unfortunately, since the release of Vegas 11 this one has degraded significantly: a few people deny that there are any problems with the new release (hey, there are always problems and user-education issues with any new release) and, instead of trying to help, they subtly (and sometimes not so subtly) berate or belittle the OP.

Hopefully the pros will still continue to post, and not get turned off by the unhelpful posts that seem to have proliferated since V11 made its debut.

BTW, back on the issue of GPU, I posted, many times in the past years when users were requesting GPU support for timeline playback and rendering, that Sony would do well NOT to listen to their customers. What I meant by this is that a software engineer should never deliver a feature in the exact manner that a customer asks for, but instead should try to figure out what the user is really trying to do. In this case, no one really wants or needs GPU acceleration. Instead, what they really want and need is faster performance. They don't -- and shouldn't -- care how that is delivered. In other words, the user has absolutely no idea whether using the GPU is the best possible way to improve performance.

Now that most CPUs have multiple cores, the incremental benefit of GPU acceleration will be much smaller than it might have been 4-5 years ago before multiple cores were prevalent. I am still waiting for a definitive set of benchmarks, done on the latest possible motherboard/CPU system, showing how much better timeline performance and rendering might be with a top-notch GPU installed (with the latest drivers ...). This should be done with a real-world project, and not something that consists solely of generated media. Don't get me wrong, I think the RenderTest VEG file is very useful and does provide some good, consistent information about relative performance between different computers, but I also have found that it is very misleading, especially when it comes to timeline performance, and also when it comes to rendering to various different codecs, some of which seem to be much better at handling multiple cores (including the GPU "core") than others.

One final note. The other thing that has always made me wary about using the GPU is that it involves the application (Vegas) directly with some of the most unreliable and often-changing software in the personal computer business: the video driver. I don't think I need to spend much time recounting all the posts over the past month in this forum about how different versions of ATI and nVidia drivers have either fixed or caused all sorts of problems.

Since my Vegas 11 trial expired before I could try the new drivers that would, hopefully, have worked with my card, I can't do these tests myself, but if and when I ever do upgrade to Vegas 11, my first act will be to turn off ALL traces of GPU acceleration AND to make sure that no additional plugins are installed. After I use that configuration for a few months, I'll then create an image backup, and then, one-by-one, add GPU acceleration and then plugins.

It might actually be helpful if Sony could include a "single-click" way to completely disable all GPU involvement so that users don't have to visit multiple dialogs just to make sure it is completely and totally turned off.

Steve Mann wrote on 12/8/2011, 3:12 PM
". Instead, what they really want and need is faster performance. They don't -- and shouldn't -- care how that is delivered."

About two years ago when users were severely criticizing SCS for being the only pro NLE without GPU acceleration, I stated: "Be careful what you wish for". I saw this firestorm of incompatible systems on the horizon.

Also, I resemble your remark. I have decided to simply stop helping people fix their computers.
Hulk wrote on 12/8/2011, 4:19 PM
JohnMeyer,

You are brave to bring up the point that GPU acceleration introduces many potential issues, since a large part of the preview and rendering is now done by varying hardware and software, i.e. the video card and its drivers. And let's be realistic: the microcode used in an Intel or AMD processor is much more solid and reliable than the GPU hardware/software combination.

4 months ago I was editing on a dual-core E6400 processor. I could edit SD fine. AVCHD was another matter entirely. Preview performance was horrible and rendering was always an overnight affair. I recently upgraded to an i2500k with 16GB RAM and Windows 7 64-bit. You know what? AVCHD editing is a breeze. Best/Full preview performance without effects is no problem, most of my projects can do full framerate at Best/Half, and only really tough ones like the rendertest or that VP11 test require Best/Quarter preview for good (near-realtime the whole way through) framerates.

Keep in mind that I "only" have a 2500k. The 2600k adds 4 more logical cores via hyperthreading. Ivy Bridge, which is just around the corner, should bring an additional 5-10% performance improvement, and Haswell will bring yet another jump in performance.

And I think it's safe to say 1080p will be the mainstream resolution for everything outside of editing movies for quite some time. This means that the bar will not be going up for a while, but CPU performance will continue to improve. By the time we see 2K or higher go mainstream, the good old CPU may be up to that challenge as well.

The point of my post is really to validate yours. What exactly are the benefits of GPU acceleration in real world projects? Does the GPU make setting up a stable Vegas system more difficult? Will the Sony engineers stop improving CPU only performance?

Before my CPU upgrade I would have purchased VP11 solely for the GPU acceleration. Now with the 2500k I can't convince myself it is necessary.

- Mark
johnmeyer wrote on 12/8/2011, 5:34 PM
"CPU performance will continue to improve ..."

That is an interesting concept you bring up. If I wanted to be contentious, I could claim that CPU performance hasn't improved in almost eight years. In particular, we haven't seen any significant increase in clock speeds during that entire time, nor are we ever likely to see any significant increases in the future, as long as processing is copper-based.

Instead, all the "performance" increases have come from increasing parallelism, meaning that we keep getting more and more "cores" jammed onto the CPU chip. Fortunately for us, some of the performance issues involving video, most notably those involving rendering, can benefit directly from this parallel processing model. However, it is less clear whether the performance gains on real-time computational tasks, such as timeline playback, can benefit to the same degree, or whether they will scale directly as more cores are added. Perhaps they will, or maybe they won't. I don't have enough understanding to be able to offer an opinion.

However, I don't know if we will continue to see annual performance gains in new computers that match the pace we saw back in the late 1990s. Thus, it is a very good thing that, as you say, 1080p will likely be the "top end" of prosumer video for the next few years.

Hulk wrote on 12/8/2011, 9:27 PM
@JohnMeyer,

My observation of the CPU scene is vastly different from yours. I am an enthusiast so I closely follow this market segment.

Despite the modest increase in clockspeeds, IPC (instructions per cycle) has increased dramatically over the last 5, 10, 20, or however many years you want to go back. One metric I have been using is CPUmark99; it uses only one core and is a good indication of integer processor efficiency.
Let's look at MHz/CPUmark99, or the number of MHz required to produce 1 CPUmark99 point. This takes raw clock speed out of the analysis.

2003 - Pentium 4 - 3.06GHz - 15.8MHz/CPUmark99
2011 - i2500k - 3.3GHz - 6.5MHz/CPUmark99

That's about 2.4 times as efficient per clock cycle. And this metric doesn't stress the memory subsystem (great increases in real-world performance have come from the on-board memory controller) or exercise the additional SSE instructions and other architectural improvements. For example, while a Core2Duo from 2006 posts 7.2MHz/CPUmark99, only slightly worse than the i2500k, in Vegas my 3.2GHz C2D was 5 TIMES SLOWER at rendering than my i2500k at 4GHz. Taking into account the number of cores and the increased clock of the i2500k, it should be only 2.5 times faster, not 5 times faster. The remaining factor of 2 comes from the architectural improvements I mentioned above.
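To spell out that arithmetic (round numbers, and assuming rendering scales with both core count and clock):

    expected speedup from cores and clock = (4 cores x 4.0GHz) / (2 cores x 3.2GHz) = 16.0 / 6.4 = 2.5x
    observed speedup in Vegas = 5x
    left over for architecture = 5 / 2.5 = 2x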

You need only edit on a Q6600 clocked at, say, 3.3GHz and an i2500k at the same frequency, and you will see and feel the performance improvement from the new cores.

Thermal limitations have dramatically slowed the increase in clockspeeds (and Moore's Law), but improvements in IPC and core count have more than made up for it.

Finally, if you look at the original Core2 architecture, those chips topped out around 3.8GHz for a quad, while today's quads will routinely do 1GHz more. So there have been improvements in raw clockspeed; it's just that Intel doesn't need to push in that area because there just isn't any competition from AMD at the high end. Why aggressively bin chips when you don't need to?

Many people are "stuck" on the idea that if the clockspeed doesn't go up, the CPU isn't faster. All you need to do is compare a Pentium 4 to a Sandy Bridge processor and you'll realize that is just not true. There is a lot more to processor performance than clockspeed.

There is no reason that timeline preview can't benefit from more cores the way rendering does. If you have eight cores, each frame of video can be broken into 8 horizontal strips for preview. Or each core could render one frame (or field) of video, with a one-frame pause before playback starts. The parallel processing can be as fine- or coarse-grained as the application demands for optimum performance. I'm not a programmer, but I know a bit about it from my college engineering days. In fact, besides games, most really CPU-intensive applications lend themselves to parallel processing; I'm talking about things like video editing and 3D animation.
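To make the strip idea concrete, here is a toy sketch in Python with NumPy (nothing Vegas-specific; the "filter" is a made-up stand-in for a real effect). Each worker applies the same filter to its own horizontal slice of the frame, and the slices are stitched back together in order:

    from concurrent.futures import ProcessPoolExecutor
    import numpy as np

    def apply_filter(strip):
        # Stand-in for a real video effect; here, a simple brightness lift.
        return np.clip(strip * 1.1, 0, 255).astype(np.uint8)

    def render_frame(frame, workers=8):
        # Split the frame into `workers` horizontal strips, filter each
        # strip in its own process, then stitch the results back together.
        strips = np.array_split(frame, workers, axis=0)
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return np.vstack(list(pool.map(apply_filter, strips)))

    if __name__ == "__main__":
        frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
        print(render_frame(frame).shape)  # (1080, 1920, 3)

The same split could just as easily be per-frame instead of per-strip; that's the coarse versus fine grain I mean.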

Anyway, I see the glass as being half full. The fact that my i2500k gave Vegas Pro 10 a new lease on life with AVCHD gives me lots of hope for the CPU-powered video editing future.

- Mark
megabit wrote on 12/9/2011, 1:54 AM
"Now that most CPUs have multiple cores, the incremental benefit of GPU acceleration will be much smaller than it might have been 4-5 years ago before multiple cores were prevalent. I am still waiting for a definitive set of benchmarks, done on the latest possible motherboard/CPU system, showing how much better timeline performance and rendering might be with a top-notch GPU installed (with the latest drivers ...). This should be done with a real-world project, and not something that consists solely of generated media. "

Working on multi-camera (real-life) projects, I see the following playback speeds at Full/Best with up to 8 HD tracks in multi-camera edit mode in the Preview window, PLUS the current take full-screen on the secondary display, with 2-3 FXs active:

- VP 10 or VP11 with GPU off: 5-6 fps
- VP 11 with GPU on: 25 fps

Now, if this is not a real-life benefit of GPU acceleration then I don't know what is :)

In the above scenario, the CPU load is ca. 50-60% and the GPU load typically 40-50%. Interestingly, with more than 8 tracks the GPU load drops to 2-3%, the CPU rises to 80-90%, and the fps crawls at 3-5 again. Isn't this a clear indication of the GPU actually helping (unless the threshold of 8 tracks is exceeded, that is)?

Piotr

PS. The figures are for System #1 in my specs listing (i7-2600K, GTX 580).

PPS: As to rendering, I see 2.5x faster renders using the GPU with supported formats (like MC AVCHD).


Laurence wrote on 12/9/2011, 2:32 AM
My all-in-one Lenovo B520 has a card that is on the nVidia list, and I can download the driver for the exact model number that corresponds to the card that I know is in the machine (and is identified as such by my PC), but the driver install won't recognize the hardware and quits without installing. Bummer.
Rv6tc wrote on 12/9/2011, 11:48 AM
I agree with Piotr. GPU was the only feature driving me to upgrade to v11. And the nearly 30% increase in render speeds and playback makes me giddy.

As for the frustrations with the stability of the system... they'll work it out. They always do, just sometimes faster than others...
Red Prince wrote on 12/9/2011, 6:56 PM
"Now that most CPUs have multiple cores, the incremental benefit of GPU acceleration will be much smaller than it might have been 4-5 years ago before multiple cores were prevalent."

Not necessarily. The multiple cores of a CPU and those of a GPU are designed for different things.

The CPU cores are designed for serial processing, that is, to run different code on each. Of course, you could run the same code on each, but you have to load it on each separately. Not to mention that in a multitasking OS you have many different processes running at the same time (often processes the user does not even know about), so even if an application requests multiple instances of the same code, they may all be running on the same core or they may be running on different cores, but the application does not control that. The OS does. The advantage of a CPU is its flexibility: It can run just about any code we can imagine.

The GPU has a limited set of instructions (compared to a CPU) optimized for video processing. It can have hundreds of cores precisely because it does not have the flexibility of a CPU. Everything is run strictly in parallel. That is, all cores will run the same code at the same time. On different parts of the data set, but the same code instructions simultaneously.

Thus, the multiple cores of a CPU are certainly advantageous, but their advantage is of a different type than that of a GPU. It is the combination of the two, a CPU and a GPU, that best speeds up video processing.
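If an illustration helps, here is the contrast in ordinary Python with NumPy (the function names are invented; I obviously cannot show the GPU's actual instruction stream):

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    # CPU-style task parallelism: each core may run entirely different code.
    def demux_audio(path):
        return "audio from " + path

    def draw_waveform(path):
        return "waveform for " + path

    with ThreadPoolExecutor(max_workers=2) as pool:
        a = pool.submit(demux_audio, "clip.mp4")    # one task...
        b = pool.submit(draw_waveform, "clip.mp4")  # ...a different task
        print(a.result(), "|", b.result())

    # GPU-style data parallelism: every core executes the SAME instructions,
    # each on its own part of the data set. A vectorized NumPy expression is
    # the closest CPU-side analogy to launching one kernel over all pixels.
    pixels = np.random.rand(1920 * 1080).astype(np.float32)
    graded = pixels * 1.2 + 0.05  # one operation, applied to every element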

He who knows does not speak; he who speaks does not know.
                    — Lao Tze in Tao Te Ching

Can you imagine the silence if everyone only said what he knows?
                    — Karel Čapek (The guy who gave us the word “robot” in R.U.R.)

Grazie wrote on 12/9/2011, 9:14 PM
Red Prince, that's an interesting piece of information, that the GPU and CPU handle processing differently. When it comes to CPU v GPU, there appear to be more questions than answers.

Posting system specifications, through the present User form, appears to be far too simplistic to be of value at this level.

Would an auditing piece of software, establishing whether a PC is up to the job, be of value? Maybe some form of ranking and scoring to give a User a straightforward rating of their PC.

When I first thought of upgrading, back in May this year, I got great feedback from the forum. I then took those specs to my PC builder, who in turn recognised the value of the comments, and through this process of 'scoping' I've got a PC that I believe is 'coping'. However, when it isn't coping, I'm not sure if the PC is failing or Vegas has a bug.

- g

ushere wrote on 12/9/2011, 10:06 PM
the ONLY piece of hardware / software i've EVER owned that DIDN'T have some sort of problem / bug was a casio programmable calculator (fx-602p)

that said, i never learnt how to properly program it in the first place, perhaps if i had i might have found out that it too had problems ;-(
Grazie wrote on 12/9/2011, 10:34 PM
Would be interesting to see if an auditing program, combined with a more comprehensive User spec data sheet, would move this along and lead to a quality improvement.

- g
megabit wrote on 12/10/2011, 3:05 AM
"The GPU has a limited set of instructions (compared to a CPU) optimized for video processing. It can have hundreds of cores precisely because it does not have the flexibility of a CPU. Everything is run strictly in parallel. That is, all cores will run the same code at the same time. On different parts of the data set, but the same code instructions simultaneously"

You've nailed it. Using a GPU (or a cluster of GPUs) for number crunching is the very essence of what's called "parallel processing". In my main profession I deal with a CAE application (Autodesk Moldflow) which implements it in a very efficient fashion. Time savings are most pronounced when running several analyses at the same time, e.g. during a Design of Experiments (DOE) sequence.

And I must say the GPU implementation in VP 11 gives even greater relative benefits in video decode/encode (which is also number crunching), expressed as % playback fps increase or % render time decrease.

Piotr


farss wrote on 12/10/2011, 5:11 AM
"The GPU has a limited set of instructions (compared to a CPU) optimized for video processing. It can have hundreds of cores precisely because it does not have the flexibility of a CPU. Everything is run strictly in parallel. That is, all cores will run the same code at the same time. On different parts of the data set, but the same code instructions simultaneously"

That is not strictly true.
The GPU is optimised for operations such as rasterising, anti-aliasing, shading, texture mapping, etc. Vegas uses none of this, as it uses OpenCL, NOT OpenGL. Most of that power is useless for editing video; it is very, very useful for playing video games and for CGI applications.
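For anyone curious what that looks like in practice, here is a trivial OpenCL compute kernel (sketched with Python and the pyopencl module, purely as an illustration; it is not Vegas code and assumes an OpenCL runtime is installed). There is no rasterising, shading or texturing anywhere, just arithmetic over a buffer:

    import numpy as np
    import pyopencl as cl

    # The kernel: every work-item runs these same instructions on one element.
    src = """
    __kernel void gain(__global const float *src, __global float *dst) {
        int i = get_global_id(0);
        dst[i] = src[i] * 1.2f + 0.05f;
    }
    """

    ctx = cl.create_some_context()       # pick any available OpenCL device
    queue = cl.CommandQueue(ctx)
    prog = cl.Program(ctx, src).build()

    frame = np.random.rand(1920 * 1080).astype(np.float32)
    mf = cl.mem_flags
    d_src = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=frame)
    d_dst = cl.Buffer(ctx, mf.WRITE_ONLY, frame.nbytes)

    prog.gain(queue, frame.shape, None, d_src, d_dst)  # one work-item per pixel
    out = np.empty_like(frame)
    cl.enqueue_copy(queue, out, d_dst)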

I have seen first hand several systems that do "video" through the GPU for things like color grading but that's ALL those systems do and they are highly optimised to use the very expensive nVidia Quadro cards.

Vegas makes no use of any of this, heck it doesn't even use the H.264 video decoder on the video card.

Bob.
megabit wrote on 12/10/2011, 6:49 AM
Bob,

Maybe it doesn't do all those things in Vegas Pro 11, but - as I said - it does sheer number crunching, and does it well.

It's enough to look at the CPU/GPU load distribution during playback that I posted above.

Piotr

PS: Oh, and speaking of loads: if VP 11 doesn't "understand" H.264, how come when rendering XDCAM EX (with a couple of OFX added) to XDCAM EX my GPU load is a mere 18%, while rendering to MC AVCHD it runs at 60-70%?
