Last night I rendered a 32-second segment of a timeline consisting of Nikon HDSLR (.mov) video. I rendered using the Sony AVC codec (720p-24); here are the results:
Since I use the Sony AVC codec and Quick Sync with similar results, I've had no burning desire to add hundreds of dollars and 250 Watts to my latest system build.
If I were to drop the graphics card, my total cost for this system, including two SSDs, a 2TB media drive, a Blu-ray burner, etc., would be about $1k. That's a lot of editing power for the money.
Wow, I just moved one of my monitors from my EVGA GTX 660 to the onboard Intel HD 4000 on my i7-3770 and ran the test: Intel Quick Sync is 50% faster than 1000 CUDA cores!
I wish I had known about this before I spent $230 on a video card. Oh dear, what are you going to do? I guess I'll sell the video card. There's no need to run a video card simply for rendering when the onboard renderer is much faster. Yikes!
" Intel Quick Sync is 50% faster than 1000 Cuda cores!"
It's untrue, plain and simple. If Vegas Pro 12 makes anybody think like this, it only indicates how poor its current handling of CUDA is. The good news is that it shows how much untapped potential there still is, should SCS care to optimize the code for the Kepler architecture and current nVidia drivers.
Well I would love to see CUDA be faster since I have already made the investment in the card, but right now Quick Sync is what I will be using to render. Time will tell.
I was going to test the render speed on my system using your settings, but I can't find a template for "Sony AVC CODEC (720p-24)" in my install of VP12.
I did render out to: Mainconcept AVC/AAC - Apple TV 720p24 Video with GPU rendering turned on and 30 seconds of timeline rendered in 7 seconds. My system specs are in my profile.
I think there is room for optimization with all the CPU and video processing options available today.
In this thread you can see my results did not saturate all four (8 with Hyperthreading) Intel cores or use much of the available 16 GB of memory. I don't know whether the HD 4000 adapter was creating a bottleneck.
One of the main reasons I didn't buy a 3930 system was that I haven't seen all my available cores running consistently near 100% since I had a Core2Duo. I didn't want to pay for capacity that I would leave stranded. Further, I think that current software is leaving a lot of hardware throughput unused, whether it be CUDA cores, ATI's equivalent, or the specialized on-die rendering hardware inside an Intel chip.
"I can't find a template for "Sony AVC CODEC (720p-24)" in my install of VP12."
You can create your own template using the Customize option. 1280x720-59.94p is legal for Blu-ray. I use it all the time.
I use custom templates all the time -- I just thought the OP had used some sort of pre-defined template that I didn't see listed and I just wanted to be sure to compare apples to apples (so to speak ;-)
I purchased the bottom-rung Z77-based ASUS mobo (P8Z77-V LX) and it included the Virtu software. Once I connected my monitors to the on-board video, Quick Sync was active, and V12 now lists both the nVidia GPU and Quick Sync when rendering (QS has two choices listed, Speed and Quality).
What do you think of this statement? "Quick Sync, like other hardware accelerated video encoding technologies, gives lower quality results than with CPU only encoders. Speed is prioritized over quality." Taken from Wikipedia. Just curious.
Given video from Sony consumer AVCHD cameras and over-the-air ATSC broadcasts encoded with the Sony AVC codec at about the same bit rate as the original source, I can’t tell the difference on my playback equipment.
The quote from Wikipedia refers to a test here of applications such as MediaConverter from Arcsoft, MediaEspresso from Cyberlink and others, not the Sony AVC implementation.
If your results prove that encoding video with software running on an x86 processor gives better results, which is possible, you always have the option of checking the box to use “CPU Only”.
Virtu was supplied with my mobo. I tried it with Edius, but an updated Intel graphics driver solved matters and Virtu was no longer required. I'll try it again with Vegas.
Update: Last night I had a chance to compare my test renders from a few days ago and QuickSync (Speed) looked just as good as the others.
Please note that I viewed the clips on an HP 23" 1080p computer monitor, not a high-end broadcast monitor; there may very well be small differences that I didn't notice, but I was expecting a highly visible difference.
To compare the results, don't rely on your eyeballs.
Use Vegas. Put one encode on track one and the other on track two, and set the composite mode to Difference or Difference Squared. Check the result with the monitor and waveform scope. Anything either one shows is the difference.
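For anyone who wants a number to go with that picture, here's a minimal sketch of the same idea done outside Vegas, using Python and OpenCV rather than anything Sony ships; the two file names are placeholders for whichever pair of renders you're comparing.

```python
# Rough numerical cross-check of the "Difference composite" trick, done
# outside Vegas with OpenCV (pip install opencv-python). File names below
# are placeholders for your two renders of the same timeline.
import cv2
import numpy as np

def mean_abs_difference(path_a, path_b, max_frames=500):
    """Average per-pixel absolute difference between two encodes.
    0.0 means visually identical frames; a few counts out of 255 is the
    kind of thing the scopes show but eyes usually don't."""
    cap_a, cap_b = cv2.VideoCapture(path_a), cv2.VideoCapture(path_b)
    diffs = []
    while len(diffs) < max_frames:
        ok_a, frame_a = cap_a.read()
        ok_b, frame_b = cap_b.read()
        if not (ok_a and ok_b):
            break
        # Same idea as Vegas' Difference mode: |A - B| per pixel
        diffs.append(cv2.absdiff(frame_a, frame_b).mean())
    cap_a.release()
    cap_b.release()
    return float(np.mean(diffs)) if diffs else 0.0

print(mean_abs_difference("quicksync_render.mp4", "cpu_only_render.mp4"))
```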
Per Bob's advice, I compared Quick Sync and CPU Only renders using difference composite mode. Per the scopes and monitor, there is a quantitative difference even if the subjective difference is imperceptible. This test doesn't reveal which version might look the best in a blind test. Graphic is here.
The 8-bit renders on this machine came out as follows:
Auto: 65 sec, 102,158,542 bytes
Quick Sync Quality: 94 sec, 86,751,648 bytes
CPU Only: 70 sec, 63,787,685 bytes
On my machine there would be no render time penalty rendering this project with the CPU only.
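Worth noting that those file sizes imply fairly different bit rates, which matters for any quality comparison. Here's a quick back-of-the-envelope calculation; the 32-second duration is only an assumption carried over from the segment at the top of the thread, so plug in the actual length of the rendered timeline.

```python
# Effective bit rate implied by each file size above. The 32-second duration
# is an assumption (the test segment mentioned earlier in the thread);
# replace it with the real length of the rendered timeline.
def effective_bitrate_mbps(size_bytes, duration_sec):
    return size_bytes * 8 / duration_sec / 1_000_000

renders = [("Auto", 102_158_542),
           ("Quick Sync Quality", 86_751_648),
           ("CPU Only", 63_787_685)]

for name, size_bytes in renders:
    print(f"{name}: {effective_bitrate_mbps(size_bytes, 32):.1f} Mbps")
```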
Friends, I am more convinced than ever that I missed the boat with Intel Quick Sync. Seriously, I just uninstalled my new EVGA GTX 660 Superclocked 2GB GDDR5 video card (with 960 CUDA cores) because Intel Quick Sync was significantly faster at rendering with equal results.
My encouragement to Sony would be to support this strongly. I had issues with rendering with CUDA, but with Intel Quick Sync, Vegas has been remarkably stable.
Here's the deal: of course I was hoping CUDA would be worth the $230 investment I made in the card, and to the extent that it sped up rendering by 50% over the CPU alone, it was worth it. But the reality is that Intel Quick Sync is 100% faster than the CPU alone, and I wish I had tested that before purchasing the video card, which is now up for sale on eBay.
Which raises the question: why hasn't Sony been more supportive of Intel Quick Sync, at least in their marketing?
<< Put one encode on track one, encode two on track two, set composite mode to Difference or Difference Squared>>
Great tip, thanks!
<<I just uninstalled my new EVGA GTX 660 Superclocked 2gb DDR5 video card (with 960 CUDA cores) because Intel Quick Sync was significantly faster at rendering with equal results.>>
Vegas uses OpenCL, so yes, the CUDA cores are used, but via OpenCL; that's why Sony's acceleration works with ATI and Intel GPU solutions that support OpenCL.
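If you're curious what sits at that layer, here's a minimal sketch (using the third-party pyopencl package, not anything from Sony) that simply enumerates the OpenCL platforms and devices on a machine; an nVidia card, an AMD/ATI card and Intel HD graphics can all show up side by side as OpenCL devices.

```python
# List every OpenCL platform/device on this machine (pip install pyopencl).
# Purely an illustration of the OpenCL layer discussed above, not how Vegas
# itself enumerates devices.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name}")
    for device in platform.get_devices():
        print(f"  Device: {device.name} "
              f"({cl.device_type.to_string(device.type)})")
```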
I'm holding on to my nVidia card for now because a) I'll never get my money out of it, and b) I suspect that OpenCL optimization will improve as nVidia releases new drivers.
I should also point out that I've only rendered to one codec. Once I look at timeline performance and rendering to other GPU-accelerated formats, I may find that QS doesn't perform as well as a discrete card.
I have been comparing render results on my new 3770K (not overclocked) with my old i7-950. With a GTX 660 Ti, the 3770K looks to be about twice as fast as the 950 when debug frame serving to TMPGEnc for an SD MPEG-2 render. If you are interested, you can see the project I used for this comparison here...
I've rendered more than five forty-minute projects using "AUTO" and not noticed a loss of video-audio sync. I forced Quick Sync Quality for one project and didn't notice loss of sync either. Most of my output goes to DVD Architect as separate elementary streams (Peter Duke will be so proud of me).