The need for speed - what to do?

ingvarai wrote on 17.07.2008 at 09:59
I run XP 32-bit and have an Intel(R) Core(TM)2 CPU 6600 @ 2.40 GHz, with 2 GB of RAM.
Having entered the AVCHD world, I spend more time waiting for gauges to slowly crawl along than being creative.
Excuse me for this post - I know the topic has been aired many times, and I have read some of it, but I like to use my own words:

I am concerned about rendering/processing speed, not preview quality and speed.

Will I notice any difference if I install XP 64 bit and run Vegas here?
Will I notice any difference if I install Vista 64 bit and run Vegas here?
Will I notice any difference if I add an additional 2Gb of RAM, provided I keep XP32?
Will the benefit of adding RAM only apply if I switch to a 64 bit OS?
Will I notice any difference if I replace my CPU with an Intel Quad Extreme? (I guess yes, but is it worth it?)

Has anyone here successfully decreased rendering time using serial coupled PCs? (Network rendering)

Ingvarai

Comments

John_Cline wrote on 17.07.2008 at 10:51
Will I notice any difference if I install XP 64 bit and run Vegas here?

No.

Will I notice any difference if I install Vista 64 bit and run Vegas here?

No.

Will I notice any difference if I add an additional 2Gb of RAM, provided I keep XP32?

No. (Adding a total of 4 GB of RAM to a 32-bit version of Windows will only get you about 3.125 GB of usable memory.)

Will the benefit of adding RAM only apply if I switch to a 64 bit OS?

Yes, if you want to use the entire 4 GB of RAM.

Will I notice any difference if I replace my CPU with an Intel Quad Extreme? (I guess yes, but is it worth it?)

BINGO! The only way to really speed up Vegas is to throw a faster processor at it. A quad-core processor will render up to twice as fast as a dual-core of the same clock speed.

Has anyone here successfully decreased rendering time using serial coupled PCs? (Network rendering)

Network rendering only works with .AVI files. You can't network render MPEG2 or AVCHD files.
baysidebas wrote on 17.07.2008 at 11:31
"A Quad-core processor will render up to twice as fast as a dual core of the same processor clock speed. "

Only if your projects are complex enough to benefit from the extra horsepower, meaning lots of FX, track motion, color correction, etc. An easy way to check is to watch your CPU utilization while rendering a current, typical project. Unless your dual-core processor is running in the upper ranges, between 85 and 100%, you probably will not see much benefit from going quad. If your CPUs are often pegged at 100%, then you will probably see a marked improvement by going quad. Of course, the faster the processor, dual or quad, the better.
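This caveat, that extra cores only pay off when the work is parallel enough, is essentially Amdahl's law. A minimal sketch in Python; the 80% parallel fraction below is an illustrative assumption, not a measured property of Vegas:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
# fraction of the work that can run in parallel and n the core count.
# The 0.8 parallel fraction is assumed for illustration only.

def speedup(parallel_fraction, cores):
    """Estimated speedup over a single core."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

dual = speedup(0.8, 2)   # about 1.67x over one core
quad = speedup(0.8, 4)   # 2.5x over one core
print(f"quad vs dual: {quad / dual:.2f}x")  # well short of 2x
```

Only as the parallel fraction approaches 100% does a quad approach twice the speed of a dual at the same clock.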
ingvarai wrote on 17.07.2008 at 11:32
Hi John,
thanks a lot.

Reading your reply, I start to wonder: do we all here using Vegas mainly sit and wait for Vegas to finish processing something, and only get to be creative in between?
What do TV companies do? I assume they have specialized hardware rendering systems, way faster than the software equivalents we use. Or?

Here is a new question:
Do motherboards exist with sockets for several CPUs? If I had 4 separate quad CPUs, provided XP supports this, I might get a decent rendering speed <g>

Ingvarai
ingvarai wrote on 17.07.2008 at 11:48
baysidebas: Unless your dual processor is currently using CPU cycles in the upper ranges, between 85 and 100%

It lies in this area, yes. However, I think Vegas and FX libraries, like most software, are programmed not to grab absolutely all CPU power, so your assumption may not be absolutely right. They will also be programmed to grab, let us say, 80% of the resources when running a quad processor, which should mean a significant speed improvement. I am excited to hear what quad users here have to say about this.

Ingvarai
JohnnyRoy wrote on 17.07.2008 at 12:25
> Do we all here using Vegas mainly sit and wait for Vegas to finish processing something, and just being creative in-between?

Nope. My QuadCore handles HDV quite nicely. I can work with HDV in preview mode at full frame rates.

> What do TV companies do? I assume they have specialized hardware rendering systems, way faster than the software equivalents we use. Or?

Some use systems with hardware acceleration but they don't use AVCHD and so they don't have the problem that consumers of AVCHD are having right now.

> Do motherboards exist with sockets for several CPUs? If I had 4 separate quad CPUs, provided XP supports this, I might get a decent rendering speed

Yes. Get yourself an Intel Skulltrail motherboard and mount 2 quad cores for eight (8) cores total. There are currently no desktop systems that I know of with more than 2 sockets.

~jr
John_Cline wrote on 17.07.2008 at 12:45
baysidebas, notice that I said up to twice as fast.
farss wrote on 17.07.2008 at 12:46
"Having entered the AVCHD world,"

Well, to be blunt, that's where your woes began.

We bought two Sony AVCHD cameras because they kind of fill a gap in the market. You could take the memory stick out of them, plug it into a Sony DVD burner, and burn the video to a cheap red-laser DVD that would play out your HD footage on a PS3 or BD player. Since the demise of the VHS-C camera, nothing has filled that market segment for the consumer.

But if you want to edit the footage, now you have a problem. You have very highly compressed footage that needs a lot of calculations to decode.

Here's a hint: we sold our two AVCHD cameras and bought 4 HC7s.
HDV is still a bear to work with, but nothing like AVCHD, and you can buy an HDV camera at a big range of price points with quality to match, some of which will give a camera 20x their price a run for their money.

Bob.
ingvarai wrote on 17.07.2008 at 12:50
JohnnyRoy: Get yourself an Intel Skulltrail motherboard and mount 2 quad cores for eight (8) cores total

I found this using Google:

The question is to what extent Vegas will take advantage of it. And Cakewalk Sonar, which I also use. I notice that most configurations in this league are set up with tons of RAM. Do they run 64-bit systems? I find it hard to get 64-bit drivers for some of my hardware.

Ingvarai
ingvarai wrote on 17.07.2008 at 13:08
farss:

This may well be the case. Well, I can transfer the footage to my PC in a snap. I can then play it back using the bundled "ImageMixer" application. It looks just great, and the CPU usage goes up to 34% during playback.

I made a test: I put a pure m2ts file on the timeline and rendered it out to WMV 720x540, 1.3333. This takes about 40-50 seconds per recorded second. My 30-second footage took more than 20 minutes to render to this low-res format.

The question is what Vegas is doing for 49 of those 50 seconds. While "ImageMixer" uses 1 second (or less) to decode 1 second of m2ts footage, Vegas uses x seconds to read 1 second of my footage and the rest to output it to 1 second of WMV, all in all approximately 50 seconds.
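The arithmetic behind those figures checks out; a quick sanity check using only the numbers quoted in the post:

```python
# Sanity check of the reported render times (numbers from the post above).
clip_seconds = 30
render_seconds = 20 * 60          # "more than 20 minutes"

ratio = render_seconds / clip_seconds
print(f"{ratio:.0f}x slower than real time")
```

That is 40x real time, which matches the reported "40-50 seconds per recorded second".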

This is something I do not understand.

Ingvarai

farss wrote on 17.07.2008 at 13:31
Well of course you can transfer the footage very quickly. More compression = smaller file = less data to move = faster transfer.

If I try to shuffle some lightly compressed SD YUV footage around between disks, it takes forever because the files are HUGE. It plays back in real time no sweat and renders real quick too, off fast RAID disks.

I don't know a lot about the technical details of AVCHD, but from memory it uses a wavelet-based codec. Those have the ability to extract low-res images very quickly and are pretty easy to get the GPU to help with as well. Vegas does none of this, however. Most of that's a bit of a guess, so don't take it as fact, but it could be what "ImageMixer" is doing.

However, when Vegas is rendering to your SD it has to a) decode the AVCHD to full-raster uncompressed frames, b) downscale the frame, which requires bicubic interpolation, and c) encode to WMV. Also, if you're doing 2-pass WMV encodes, it has to do all of that for every frame twice!
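Step a) alone already implies shoveling a lot of data around. A rough back-of-the-envelope estimate; the 1440x1080 frame size, 25 fps, and 3 bytes per pixel are assumptions about the source format, not figures from the thread:

```python
# Rough uncompressed data rate for step (a): decoding to full-raster frames.
# Frame size, fps, and bytes/pixel are assumed for illustration.
width, height = 1440, 1080
fps = 25
bytes_per_pixel = 3

frame_bytes = width * height * bytes_per_pixel
rate_mb_s = frame_bytes * fps / 1e6
print(f"{rate_mb_s:.0f} MB/s of uncompressed pixels")
```

Over 100 MB/s of raw pixels must be produced, scaled, and re-encoded, before any FX, which is why highly compressed sources hurt so much.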

All of those take a fair number of calculations. As I can render MXF from my EX1 to SD at close to RT on my not overly top-shelf quad core, I'd say where you're taking the biggest hit is in decoding the AVCHD to uncompressed frames. 20x slower than RT is very slow, though. It's been a long time since I had any AVCHD to play with, and I just left it to render overnight. However, the few people back then who did rent our AVCHD cameras complained about much the same thing, so your figure may not be that exceptional. Hopefully someone else here can give more up-to-date comparison figures, but the general consensus among all NLE users of AVCHD is that it's a PIA.

Bob.
johnmeyer wrote on 17.07.2008 at 15:15
The key to this is the decompression algorithm. I am appalled at the continuing decline of quality programming across all disciplines. Current-day programmers approach problems by using gargantuan DLL libraries, bloated data structures, and terrible programming practices.

The actual decompression of something like AVCHD should be done in assembler, or at least in highly optimized C code.

The difference between what Sony has done and what can be done is simple to demonstrate. Encode some video to a simple MPEG-2 file, using one of the DVD templates. Put that on the timeline and preview at Best with a few FX. Then create a pseudo-AVI from that same file using the DGIndex/VFAPI technique I have described in other posts. Put that AVI on the timeline. The exact same footage that stutters and freezes on my computer plays flawlessly at 29.97 using Best resolution. Night-and-day difference, but the source MPEG-2 is the same.

If some freeware hack can do this, why not Sony?

I suspect the same thing can be done with AVCHD.
rmack350 wrote on 17.07.2008 at 15:28
Most of the Windows consumer products are only licensed for two physical CPUs, and the current Vegas doesn't really scale up to 8 cores (AFAIK), so a single-socket board supporting a quad-core CPU might be a better use of money. 4 CPUs would be overkill, but you could look at SuperMicro's site to see what they offer in that vein.

People have high hopes that the upcoming 64-bit Vegas will support more cores easily, but I'd keep my wallet closed until that version of Vegas actually exists.

Rob Mack
rmack350 wrote on 17.07.2008 at 15:33
32-bit OSes have enough address space for 4.0 GB of RAM. You get less in practice because all hardware needs a share of that address space, not just the sticks of RAM.

To address more RAM you need a 64-bit OS and a motherboard that supports it. Most very new consumer motherboards are specced to support 8GB using 4x2GB DIMMs.
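The 4 GB address-space limit can be put in numbers. A minimal sketch; how much is reserved for devices varies per board, and the figure below is chosen purely to illustrate the point:

```python
# Why 4 GB of RAM yields only ~3.1-3.25 GB usable on a 32-bit OS:
# the 4 GB address space is shared with device apertures (video memory,
# PCI ranges, firmware). The reserved amount below is illustrative only.
address_space_gb = 2 ** 32 / 2 ** 30      # 4.0 GB of addresses total
reserved_gb = 0.875                        # assumed device reservations
usable_gb = address_space_gb - reserved_gb
print(f"{usable_gb:.3f} GB usable")
```

With that assumed reservation the result is 3.125 GB, in line with the figure quoted earlier in the thread.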

Rob
ingvarai wrote on 17.07.2008 at 15:40
John Meyer,
I wholeheartedly agree. I am a programmer myself, and am old enough to have programmed in assembler. Where we used to write a function to do a specific operation directly, programmers today use a library that calls n functions in other libraries, which again insert "stumble points" to check for everything and report this and that, and finally you end up with mega-bloated code that keeps the CPU occupied with all kinds of things until it is finally time to do what was originally the intention.

A good example of this is code written in 1995 using Borland Delphi. On an 86 MHz CPU with 8 MB of RAM, the main application window appeared instantly, and operations were mostly carried out almost instantly, even database operations. Today's CPUs are 50 times faster, not to mention the memory available, yet ordinary applications cough and moan when they start up. Just because of bloated code and the extremely high-level programming used now (high-level = far away from opcodes; low-level = assembler, talks directly to the CPU).

> If some freeware hack can do this, why not Sony?
Right, my thoughts exactly.

> I suspect the same thing can be done with AVCHD.
Yes, this is what I thought too.
Generally, I have the feeling we should be able to render our videos almost in RT with the hardware available today, had the software been optimized enough.

Thanks for your comments on this, which deviate from the standard.

Ingvarai
rmack350 wrote on 17.07.2008 at 16:05
I suspect it's time and money. People who code in assembly language are probably rare and expensive at this point, and with project managers measuring progress with a stopwatch, I'll bet no one tells them they need to hire another programmer to rewrite some components in assembly.

Rob
baysidebas wrote on 17.07.2008 at 16:14
John: "baysidebas, notice that I said up to twice as fast."

I did notice, that's why I replied "Only if your projects have complex natures..."

It wasn't a negation of your statement, just a qualifier. I was constantly almost pegging my dual CPUs while rendering; however, I must have been using just about all the horsepower my projects needed, since after popping in a quad, all four cores laze along at barely a hair over 50% each unless I add some FX to the load. Not complaining: the quad is 10% faster in the clock department, so at least I gained that, and I also gained enormous headroom for my renders.
ingvarai wrote on 17.07.2008 at 20:37
johnmeyer:

This is more relevant than I thought, look here:
How to edit AVCHD M2TS files

I will post a new thread on this

Ingvarai
John_Cline wrote on 17.07.2008 at 22:35
I just read the Jake Ludington "MediaBlab" article you referenced above. While I was there, I read a few of his other articles. Well, he got the "blab" part of the title correct, anyway; there is some inaccurate information and just plain bad advice there. It appears that he knows enough to be dangerous.

For example, he suggests using the Microsoft Video 1 codec. The Video 1 codec has two flavors: one encodes 8-bit palettized data, where the 256-color palette is stored in the AVI file header; the second encodes 16-bit color, which will reduce your lovely 16.7-million-color 24-bit video image down to 65,535 colors. At the very least, it will produce some color banding issues, either really serious with the 8-bit version or merely serious with the 16-bit version. The Video 1 codec was a barely acceptable choice fifteen years ago and has no place in today's multimedia production.
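The banding comes from quantization: fewer levels per channel means nearby colors collapse into the same value. A minimal sketch of the idea, assuming the common 5-6-5 bit layout for 16-bit color (an assumption for illustration, not necessarily Video 1's exact format):

```python
# 24-bit color has 256 levels per channel; a 16-bit 5-6-5 layout keeps
# only 32 levels for red/blue and 64 for green, hence visible banding.
# RGB565 is assumed here to illustrate the idea, not Video 1's format.
def quantize565(r, g, b):
    """Quantize an 8-bit-per-channel pixel to 5-6-5 and expand back."""
    r5 = r >> 3  # keep top 5 bits: 32 levels
    g6 = g >> 2  # keep top 6 bits: 64 levels
    b5 = b >> 3  # keep top 5 bits: 32 levels
    return (r5 << 3, g6 << 2, b5 << 3)

print(quantize565(200, 100, 50))   # (200, 100, 48): nearby values merge
print(2 ** 5 * 2 ** 6 * 2 ** 5)    # 65536 distinct 16-bit colors
```

Every blue value from 48 to 55 maps to the same output, which in a smooth gradient shows up as a visible band.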
ingvarai wrote on 17.07.2008 at 22:58
> For example, he suggests using the Microsoft Video 1 codec

Further up, at the top of the article, he writes:
I no longer recommend this method, but it will work:
:-)

About the suggestion to use the Microsoft Video 1 codec: I immediately ignored this. Still, this article suddenly made my AVCHD camera purchase look good :-)

Ingvarai
darg wrote on 17.07.2008 at 23:19
Quote:
Generally - I have the feeling we should have been able to render our videos almost in RT with the hardware available today, had the software been optimized enough.

The fact that a camera is doing it (and without the help of Microsoft) shows that this is possible. I fully agree with you in regard to computing power today vs. power 10 years ago. We are sitting in front of a race car, but opening a document takes roughly three times longer than 10 years ago. No wonder we need quad-core or at least dual-core CPUs. If easy tasks already take longer than before, where does all this end up?

Regards

Axel
johnmeyer wrote on 18.07.2008 at 00:35
I agree with John on the inaccuracies, or at least on decisions I would not make, such as deinterlacing (fine if the final output is web or PC, bad otherwise).

However, I can see how this workflow could be adapted to Vegas in somewhat the same way as the DGIndex/VFAPI "trick" for editing VOB and MPEG-2 files.
ingvarai wrote on 18.07.2008 at 08:04
> or at least decisions I would not make, such as deinterlacing (fine if the final output is web or PC, bad otherwise).

When playing back the m2ts files using the camcorder's bundled software, the video looks great; when stopping and looking at a single frame, the interlace effect is very visible, with stripes all over.

When converting to AVI using VirtualDub, the interlace "stripes" are also visible while the video is running. That is why I used the deinterlace filters. Now I wonder how the video "with stripes" can suddenly look good on a TV, if this is what you mean. I cannot test it myself at the moment.

I admit there is a lot to learn here for me, any advice and sharing of knowledge is highly appreciated!

Ingvarai
farss wrote on 18.07.2008 at 08:46
You will see the interlaced "stripes" on many computer-based displays, as they cannot display interlaced video correctly.
The myriad issues associated with interlaced video have a simple solution: use a camera that shoots progressive. 30p is a good choice if you're in 60Hz land.
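What a deinterlace filter does can be sketched on a toy frame: a simple "blend" deinterlacer averages neighboring lines so the two fields, captured a fraction of a second apart, stop combing against each other. This is only an illustration of the idea, not the filter VirtualDub or any player actually uses:

```python
# Toy "blend" deinterlace: average adjacent lines so the two interleaved
# fields stop combing. Real deinterlacers are far more sophisticated.
def blend_deinterlace(frame):
    """frame: list of rows (lists of pixel values); returns blended frame."""
    out = []
    for y, row in enumerate(frame):
        below = frame[y + 1] if y + 1 < len(frame) else row
        out.append([(a + b) // 2 for a, b in zip(row, below)])
    return out

# Two fields of a moving vertical edge: odd lines are shifted (combing).
frame = [[0, 0, 9, 9],
         [0, 9, 9, 9],
         [0, 0, 9, 9],
         [0, 9, 9, 9]]
print(blend_deinterlace(frame)[0])  # [0, 4, 9, 9]: the comb edge is smoothed
```

The cost is some vertical softness, which is why smarter filters only blend where they detect motion.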

Bob.
ingvarai wrote on 18.07.2008 at 09:16
Hi Bob,
> You will see the interlaced "stripes" on many computer based displays as they cannot display interlaced video correctly.

Hm, I think it is the software as well. Using the bundled software, the interlaced footage looks great on my PC. Stripes appear when converting m2ts to AVI. From this I judge it is not the display itself, but the software used to play it back.

> use a camera that shoots progressive, 30p is a good choice if you're in 60Hz land.

I am in 50Hz land :-) and do not like shooting 25p for several reasons; one is camera display latency (Canon HF 10): what I see is slightly delayed. Apart from this, the footage looks like film, not video.
I will shoot interlaced and do whatever I need later. As a matter of fact, using VirtualDub with the deinterlace filter(s) I described in the thread "Happy AVCHD user", the result looks very good to me.
I need to experiment more, that is for sure.

Ingvarai