Vegas 13 & Intel Haswell Octocore

Byron K wrote on 3/21/2014, 7:59 PM
If Vegas Pro 13 has anything to offer, it really should be ready to work with the new Haswell processors and Iris Pro graphics due out in a couple of months. "Iris Pro" is a brand, not a formal architecture; Intel has previously claimed that Broadwell will be a major overhaul of the graphics stack. And since Iris Pro includes a 128MB L4 cache, Intel is bringing a significant GPU engine over to the desktop.

I'm looking to upgrade my PC this year and am keeping a keen eye on this processor. My general rule for PC upgrades is to at least double my current PC's power with each upgrade.

I personally don't have any use for "cloud" collaboration, so if Vegas 13 doesn't deliver compatibility with the new octocore, I'll likely pass on 13 unless it has some outstanding, can't-live-without feature that I can't resist! (:

Comments

ushere wrote on 3/21/2014, 8:30 PM
i would be happy if they simply implemented their gpu support effectively and reliably, and supported the latest gen of video cards, eg. nvidia 6/7 series, let alone technology that hasn't yet been released ;-)
Hulk wrote on 3/21/2014, 9:38 PM
I am running a Haswell 4770K using the HD4600 (GT2) integrated GPU and it works perfectly with Vegas. While it is not as fast as my Radeon 7770, it is solid, and that's much more important.

I can preview all of my projects in real time, generally at Best/Full; sometimes I have to drop to Best/Half. So the HD4600 GPU gets the job done nicely.

Haswell GT3 has twice the EUs (40) compared to GT2, and perhaps Broadwell will have even more. AFAIK the 8-core Haswell-E won't have integrated graphics. They are talking about a socketed Haswell variant with Iris Pro.

BTW, "Iris" means the 5000/5100 series with 40 execution units; the 5100 has higher clocks than the 5000. "Iris Pro" means the 5200, which adds the 128MB eDRAM. That eDRAM also acts as an L4 cache for the CPU, which is why Intel is now using it in Xeon server chips as well.
DataMeister wrote on 3/21/2014, 10:58 PM
I would be surprised if the Iris Pro GPUs perform any better than a GeForce 750 Ti series GPU. It might be great as an all-in-one money-saving option, but probably not something worth craving.
Hulk wrote on 3/22/2014, 11:03 AM
Iris Pro renders the Sony press project to XDCAM EX in 93 seconds.
The 750 Ti does it in 64 seconds. The 750 Ti is faster, but they are in the same ballpark. And given that my GT2 does this same test in 127 seconds and has very good preview performance, I would say GT3 (Iris) should have even better preview performance.
If Broadwell can improve on the Haswell implementation of GT3, then we're talking about some pretty good Vegas GPU performance from an iGPU.

And as I said, at least in my rig the HD4600 iGPU is MUCH, MUCH more stable than my discrete AMD video card.


http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/17

http://www.anandtech.com/show/7764/the-nvidia-geforce-gtx-750-ti-and-gtx-750-review-maxwell/21
VideoFreq wrote on 3/22/2014, 12:11 PM
Not to take away from your excitement, but I finally built a stable machine using 12 with a Haswell 4770K and an old Quadro FX 4600. Having been burned by the upgrade to 11, I will stay away from odd-numbered releases. It will take something fantastic to make me want to upgrade. As stated, reliability is more important than speed. One crash eliminates 30 seconds of render-time savings. I render at night anyway, sleeping peacefully, and wake up to rendered beauty every time.

Gutsy cheers to Vegas if they actually release a number 13. AVID's Pinnacle skipped the unlucky number and went directly to 14. It didn't help: 14 was a disaster. 15, the true 14, was excellent. AVID Studio became the 15. It is so weird AVID ended up selling it to another company.

I'll wait for SVP 14.
PeterDuke wrote on 3/22/2014, 6:55 PM
"It is so weird AVID ended up selling it to another company"

Maybe they saw it as a toy and wanted to be fully grown up. Maybe they only bought Pinnacle for some technology or staff, which they still retain.
Byron K wrote on 3/24/2014, 2:29 AM
As ushere mentioned, I agree: SVPro also really needs to get the GPU working right with the higher-end GTX 750-series cards. I'd like to upgrade from my GTX 550 to a higher-end card but don't want to be limited to the 650 on my new build.

Reply by: Hulk, Date: 3/21/2014 4:38:38 PM
I am running a Haswell 4770K using the HD4600 (GT2) integrated GPU and it works perfectly with Vegas. While it is not as fast as my Radeon 7770, it is solid, and that's much more important.
If your Haswell 4770K is stable, hopefully that's good news for the new one! (:
OldSmoke wrote on 3/24/2014, 8:23 AM
I agree that GPU support has to work with the latest and upcoming GPU cards. As for Nvidia, the Kepler consumer cards are really geared more for gaming, and there are articles on the net explaining why they don't do well with rendering; it's not entirely SCS's fault that 600- and 700-series cards are currently not as good as 500-series cards. I would like to see VP12 with an Iris Pro 5200; that should do very well too.

Proud owner of Sony Vegas Pro 7, 8, 9, 10, 11, 12 & 13, and now Magix VP15 & 16.

System spec:
Motherboard: ASUS X299 Prime-A
RAM: G.Skill 4x8GB DDR4-2666 XMP
CPU: i7-9800X @ 4.6GHz (custom water-cooling system)
GPU: 1x AMD Vega Pro Frontier Edition (water-cooled)
Hard drives: System: Samsung 970 Pro NVMe; AV projects: 1TB (4x Intel P7600 512GB VROC); 4x 2.5" hot-swap bays, 1x 3.5" hot-swap bay, 1x LG Blu-ray burner
PSU: Corsair 1200W
Monitors: 2x Dell UltraSharp U2713HM (2560x1440)

Terje wrote on 3/24/2014, 8:37 AM
For me, running a 6-series card, I get only a marginal improvement in rendering and timeline preview with Vegas; in Premiere Pro I get a significant speed boost. The issue lies with Sony, not nVidia.

Perhaps Vegas uses OpenCL for the 6 series rather than CUDA. OpenCL has abysmal performance on the 6 and 7 series for political reasons: nVidia wants you to use the workstation cards for this, and they ship a decent OpenCL driver for the Quadro K (as in Kepler) cards. There is, in many cases, no difference between a 600-series card and the Quadro card except the driver.

Considering Premiere Pro (but not After Effects, mind you, which uses OpenCL) gets a good performance boost on 6- and 7-series cards, SCS should be able to do that too.
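
If you want to see what the OpenCL driver actually exposes on your card, here is a minimal sketch using the standard OpenCL C API. Nothing here is Vegas-specific; it just prints whatever your driver reports (assumes you link against your vendor's OpenCL library):

/* Minimal OpenCL GPU enumeration: prints what the installed driver exposes.
   Build example (Linux): gcc list_cl.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    if (clGetPlatformIDs(8, platforms, &nplat) != CL_SUCCESS || nplat == 0) {
        printf("No OpenCL platforms found.\n");
        return 1;
    }
    for (cl_uint p = 0; p < nplat; p++) {
        char pname[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof pname, pname, NULL);
        printf("Platform: %s\n", pname);

        cl_device_id devices[8];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8, devices, &ndev) != CL_SUCCESS)
            continue; /* this platform has no GPU devices */

        for (cl_uint d = 0; d < ndev; d++) {
            char dname[256], dver[128];
            cl_uint units = 0;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof dname, dname, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_VERSION, sizeof dver, dver, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_MAX_COMPUTE_UNITS, sizeof units, &units, NULL);
            printf("  GPU: %s | %s | %u compute units\n", dname, dver, units);
        }
    }
    return 0;
}

The point being: the API will happily report a Kepler card as a fully capable OpenCL GPU. Whether kernels then run at a sane speed is entirely down to the driver.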
OldSmoke wrote on 3/24/2014, 10:22 AM
Here is an extract from one of the many articles about the Kepler architecture found in the 600/700 series:

"6. Less focus on compute. All of Nvidia's changes have resulted in what is, overall, the fastest and the most electricity-bill-friendly single-GPU gaming video card we've yet seen. But this title hasn't come without one sacrifice: compute. Fermi GPUs were sold, at least partially, on their ability to perform mathematical calculations à la CPUs, and displayed impressive facility doing just that, but Nvidia stripped some of those abilities away in order to improve power efficiency. Using LuxMark 2.0, an application designed for testing OpenCL compute performance, we compared last generation's GeForce GTX 580 (based on an updated Fermi-style GPU) with the GTX 680, and the earlier card came out ahead in every test—and AMD's new cards, like the Radeon HD 7970, did even better. If you want a card that's every bit as good for work as play, Kepler-based GPUs may not be the way to go. But the GTX 680 is the runaway champs for playing 3D games on your PC."

There are many more articles if you do a search, but that is the reason why I said it isn't "entirely" SCS's fault.

The whole article is here: http://www.pcmag.com/article2/0,2817,2402021,00.asp


Terje wrote on 3/25/2014, 9:51 AM
As I said, it is not the hardware; it is the driver. Your quote, which includes the phrase "testing OpenCL compute performance", does not contradict that in any way. I never said that OpenCL performance on Kepler is good; in fact, I said the opposite.

HOWEVER: the 6 series are in many cases basically the exact same cards as the Quadro K series; in some places you can even find articles on how to "turn your 6-series card into a Quadro K card". For that reason one should not use OpenCL on Kepler cards. If you use CUDA, on the other hand, the reality is somewhat different: using CUDA, the 660 and above are as fast as, or faster than, the 570 series for rendering, and they outperform the Quadro cards.

Below you can see benchmarks for Premiere Pro, where timeline rendering gets a speed boost of more than 10x over a non-GPU configuration (110 seconds down to 9.4). In Vegas, using Kepler cards, you get nowhere near a ten-times increase in performance. You can also check the same site for After Effects performance: AE doesn't get anywhere near the same speed boost, because AE uses OpenCL.

In other words, it's neither the card nor the architecture; it is nVidia crippling the OpenCL drivers on purpose to make you buy the Quadro versions. Not using OpenCL would cure that particular issue.

So, considering Premiere gets a significant speed boost from cards with the Kepler architecture, and Vegas gets nothing of the kind, the problem lies with...

http://www.studio1productions.com/Articles/PremiereCS5.htm

No GPU: timeline render 110 seconds, MPEG-2 render 174 seconds
GTX 570: timeline render 9.4 seconds, MPEG-2 render 90 seconds
GT 640: timeline render 10.5 seconds, MPEG-2 render 163 seconds
GTX 660: timeline render 9.4 seconds, MPEG-2 render 88 seconds
GTX 680: timeline render 9 seconds, MPEG-2 render 84 seconds
Quadro 2000: timeline render 11.2 seconds, MPEG-2 render 160 seconds
Quadro 4000: timeline render 10 seconds, MPEG-2 render 164 seconds
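
And the CUDA side of the same check is just as simple. Here is a minimal sketch against the CUDA runtime API, which is a plain C API (assumes the CUDA toolkit is installed; no kernels are launched, so linking against the runtime library is enough):

/* Minimal CUDA device query: no kernels are launched, so plain C linked
   against the CUDA runtime is enough.
   Build example (Linux):
   gcc list_cuda.c -I/usr/local/cuda/include -L/usr/local/cuda/lib64 -lcudart */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int n = 0;
    if (cudaGetDeviceCount(&n) != cudaSuccess || n == 0) {
        printf("No CUDA devices found.\n");
        return 1;
    }
    for (int i = 0; i < n; i++) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        /* Compute capability 3.x is Kepler: a GTX 660/680 is reported here
           just fine, so the hardware itself is not the problem. */
        printf("Device %d: %s (compute capability %d.%d, %d multiprocessors)\n",
               i, prop.name, prop.major, prop.minor, prop.multiProcessorCount);
    }
    return 0;
}

A GTX 660 or 680 shows up there as compute capability 3.0, exactly as it should, which is why CUDA-based code paths like Premiere's get their full speed-up on these cards.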
monoparadox wrote on 3/25/2014, 10:59 AM
FWIW, speaking only as one who is no longer on the bleeding edge of speed, now running a five-year-old 2.66GHz i7. I had tended to stay away from Radeon, having had less-than-stellar experiences with their drivers in the past. Wanting to up performance at low cost, I replaced my GTX 460 with a 2GB Radeon 7870 for less than 200 bucks (Newegg).

Part of my willingness to change was that Sony seems to have been sending less-than-subtle messages for some time that they are working with AMD on optimizing Vegas performance. So, why not give it a go?

I am more than pleased. Timeline performance (and stability) is mucho better, and what used to totally bog me down when working with some of the complexities of BCC has gotten much better. Of course, all the Sony FX that are optimized for GPU work great, except for one color-curves rendering issue I reported on the forum. I was somewhat surprised when someone jumped into my thread and said a fix would be added to a future Vegas update; it seemed kind of odd considering all the folks having problems getting Sony to listen. Overall, stability has been very good, even though I'm only pushing the card a bit. Rendering is good where applicable. All in all, a good experience thus far.

Some of you might take a harder look at Radeon.

tom
OldSmoke wrote on 3/25/2014, 11:13 AM
Terje

Yes, you are on the right track: the drivers are the main reason it doesn't work. My GTX 580 isn't recognized as CUDA-capable in PPro 6 and later unless I use one of the 300-series drivers. The GPU acceleration in PPro was written much later than that in VP11 & 12; actually, VP10 already had some kind of GPU acceleration, but only for rendering, and only for either MC AVC or Sony AVC... I can't recall the details.

SCS should, and I hope will, rewrite their code to make it suitable for the newer series of cards. VP does use CUDA, and uses OpenCL only for timeline preview. I can fully understand their position, as rewriting the code is certainly not done overnight and has a huge cost impact too. Maybe it was the wrong time to come out with GPU acceleration, but it was what many people were waiting for.

Yes, you can make a GTX look like a Quadro, and I have done so with my GTX 580, but even the Quadro K series doesn't do well in VP, and for exactly the same reason: fundamental changes in hardware and drivers on the Nvidia side.

I think I mentioned before in another thread that if I were Sony, I would build my own hardware acceleration board, much like in the days of Canopus. I had Media Studio Pro, which worked very well with one of the Canopus cards: MPEG encoding/decoding and even certain effects were hardware-accelerated. Once you do some level of compositing, or just scrolling text, timeline performance drops considerably. We are still not at the point where we can get real-time preview at Best/Full with a couple of FX such as Starburst on 1080-60p footage, even on a good system like mine and others have; let's not even talk about 32-bit.

One more word on PPro: I had the latest CC version on my system and made a test project containing the same footage and the same kinds of transitions, things I could find that both apps provided. Rendering to the same AVC internet format was 30% faster in Vegas than in PPro, and I used GPU acceleration in both apps. (I used driver 334.89 for that test, which allowed GPU acceleration in PPro.)

Maybe SCS will stray from Nvidia and turn toward AMD for a better OpenCL implementation, and I can't fault them for doing so. Nvidia's strategy seems to be that if you want more than gaming, you have to buy a Pro card at four times the price, even though it is almost the same hardware.


hazydave wrote on 4/19/2014, 12:55 AM
I agree that GPU support has to work with the latest and upcoming GPU cards.
It actually does. The limitation in Vegas itself is only in Main Concept's AVC encoder. They are enabling only very specific nVidia GPUs (and only for CUDA mode) and AMD GPUs (and only for OpenCL, obviously). Any Kepler or GCN GPU will simply be ignored.

The other problem is that nVidia just seems to have a bad implementation of OpenCL on Kepler. This is different from the artificial slowdown of desktop vs. workstation OpenGL, or of desktop/workstation vs. "compute" OpenCL or CUDA. There are simply things that run faster on a GTX 5xx than on a GTX 6xx or GTX 7xx, when the newer cards should be faster. Since Vegas itself isn't using CUDA, this is only an OpenCL issue. But you can find this same behavior on nVidia cards independent of Vegas.

And there's nothing Sony can do about it. OpenCL hasn't changed in any significant way since Vegas 11 (OK, OpenCL 2.0 came out after Vegas 12, but that's not a current issue), so the OpenCL client code has been current. The problem is in nVidia's drivers right now (or at least it was last I checked... I found AMD faster than nVidia in Vegas at my $300-ish price point shortly after Vegas 11 shipped, so I've been on AMD since).
hazydave wrote on 4/19/2014, 1:14 AM
SCS should, and I hope will, rewrite their code to make it suitable for the newer series of cards. VP does use CUDA, and uses OpenCL only for timeline preview. I can fully understand their position, as rewriting the code is certainly not done overnight and has a huge cost impact too. Maybe it was the wrong time to come out with GPU acceleration, but it was what many people were waiting for.

That's incorrect.

Vegas proper only uses OpenCL. And OpenCL didn't change in any significant way over the years spanning Vegas 11 and Vegas 12, or the GTX 5xx, 6xx, and 7xx generations. AMD GPUs kept getting faster IN VEGAS; nVidia's didn't. nVidia has an OpenCL problem on the Kepler hardware, plain and simple. There's nothing you could do in Vegas code to fix this.

The only place CUDA is used is in external modules like the Main Concept AVC CODEC. OK, sure, that's shipped as part of Vegas, but it's an entirely separate program and not code that SCS touches. That's the part that isn't getting GPU-accelerated on any new GPU, AMD or nVidia, and that's because they hard-code the GPU types in the CODEC. That's also why even older nVidia cards, which get acceleration when you switch Main Concept to CUDA, get no acceleration in OpenCL mode, though they ought to, since they support it. Main Concept will not accelerate any Kepler or GCN GPU.
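
Nobody outside Main Concept has seen that detection code, of course, but a hard-coded whitelist of the kind I'm describing would look something like this. This is purely illustrative C; every device name and function in it is hypothetical, not actual Main Concept code:

/* Hypothetical sketch of a hard-coded GPU whitelist, the kind of check that
   leaves new hardware unaccelerated. Every name below is made up for
   illustration; this is not actual Main Concept code. */
#include <stdio.h>
#include <string.h>

/* Only models known when the encoder shipped (circa 2011) are listed, so a
   Kepler or GCN card never matches and falls through to the CPU path. */
static const char *cuda_whitelist[] = {
    "GeForce GTX 570", "GeForce GTX 580", "Quadro 4000",
};

static int gpu_encode_supported(const char *device_name) {
    for (size_t i = 0; i < sizeof cuda_whitelist / sizeof cuda_whitelist[0]; i++) {
        if (strstr(device_name, cuda_whitelist[i]) != NULL)
            return 1; /* known model: enable the GPU encode path */
    }
    return 0;         /* unknown model, even a faster one: CPU fallback */
}

int main(void) {
    printf("GTX 580 accelerated? %d\n", gpu_encode_supported("GeForce GTX 580")); /* prints 1 */
    printf("GTX 660 accelerated? %d\n", gpu_encode_supported("GeForce GTX 660")); /* prints 0 */
    return 0;
}

Which is exactly the behavior we see: a GTX 660 reports itself perfectly well, but it never matches the list, so the encoder silently falls back to the CPU, and only a new release of the CODEC can change that.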

Keep in mind that Kepler and GCN, the big redesigns of the nVidia and AMD GPU architectures respectively, were both redesigned to better support the GPGPU model, which is, of course, OpenCL and (for nVidia) CUDA. So there's no reason these shouldn't be faster. For AMD, they are... the newer AMD cards are consistently faster on GPGPU benchmarks. Kepler trips in places it shouldn't.

Some of this seems to be the old "high-end protection" thing. Desktop drivers have a couple of 64-bit operations in OpenGL that run at 1/10th the speed of the professional drivers, even though the GPU chips are identical (or damn near, depending on the specifics). That's to help protect the more expensive cards' business, and you can find plenty of stories about this, including pinpointing when nVidia started doing it. The same seems to be true in OpenCL and probably CUDA, where in just a few places the desktop and workstation GPUs weirdly lose performance, totally out of step with older nVidia cards and current AMD cards. But a Tesla, the newer "compute" category of card, will run these at full speed, even though it uses the same GPU hardware. Certainly some benchmarks slow down on consumer cards due to RAM limitations, but others seem to just hit a wall. That's the kind of thing that suggests intentional behavior. I would not expect this to be a Vegas issue, but that hasn't been well explored, as far as I know.

Anyway, the AVC rendering situation is not likely to change. Their web site still lists only those 2011-vintage GPUs. They don't seem to have made any improvements over the few years they were owned by Rovi. No idea whether the new guys (Parallax Capital Partners) will do any good here or not. But Sony really needs to find a capable CODEC partner. The Main Concept GPU code is very good, but it's not valuable if it doesn't work on new hardware. And they're going to need to support HEVC, VP9, and perhaps other CODECs going forward.