The need for speed - what to do?

Comments

farss wrote on 7/18/2008, 6:34 AM
"Hm.. I think it is the software as well. Using the bundled software, the interlaced footage looks great on my PC. Stripes appear when converting m2ts to AVI. From this I judge it is not the display itself, bu the software used to play it back."

You've got most of that wrong. Some software, when displaying interlaced footage, will attempt some form of de-interlacing; sometimes it works OK, sometimes it makes a bigger mess of things. VLC lets you select from a number of de-interlacing strategies. Other display devices make a real hash of displaying progressive footage: they mistake it for interlaced and create artifacts.
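For readers curious what a de-interlacing strategy actually does, here is a minimal sketch in Python/numpy of the simplest approach, "bob" de-interlacing (the function name and the crude line-doubling are mine for illustration, not what VLC does internally; real players use smarter interpolation such as yadif):

```python
import numpy as np

def bob_deinterlace(frame):
    """Split one interlaced frame into two progressive frames ("bob").

    Plain line-doubling is the crudest possible interpolation;
    it is used here only to show the idea.
    """
    top_field = frame[0::2]      # even scanlines (top field)
    bottom_field = frame[1::2]   # odd scanlines (bottom field)
    # Stretch each field back to full height by repeating its lines.
    return (np.repeat(top_field, 2, axis=0),
            np.repeat(bottom_field, 2, axis=0))
```

Each interlaced frame yields two output frames, which is why bob de-interlacing doubles the apparent frame rate.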


"Stripes appear when converting m2ts to AVI."

Then you're doing something wrong. If you're doing this in Vegas and converting to SD AVI, then you must specify a de-interlace method in your project settings; otherwise you will indeed get bad artifacts that look like interlace combing, though they're a bit more complex than that.

Bob.
johnmeyer wrote on 7/18/2008, 8:27 AM
Leave interlacing alone. Don't deinterlace. It is not needed, unless you have to display on a computer monitor, and then only if the computer has software that doesn't deal with this correctly. Interlacing works and displays just fine. If it didn't, we'd all have thrown out our TV sets fifty years ago.
JoeMess wrote on 7/18/2008, 10:21 AM
Guys,

Remember a few weeks back there was the posting for a free plug-in called Cartoonr (www.newbluefx.com). (Very cool, by the way!) They have a product called AVCHD UpShift that appears to do all of these steps in one low-effort swoop. It looks like it would be really worthwhile to try out, especially if you are trying to get a workflow going around AVCHD content. I bought one of the Aiptek 1080i cameras at WalMart, and I will be giving UpShift a try to see if it makes things more tolerable on the notebook I am doing most of my Vegas work on. It looks like it is a Vaast and NewBlue joint effort, so my gut says it is going to be well worthwhile.

Joe
ingvarai wrote on 7/31/2008, 2:31 PM
John Cline:

I have now replaced the dual core with a quad core 2.6 GHz (yes, one of the cheapest ones). To make a long story short (the long story includes hassles with a BIOS update and lots of blue-screen-of-death shutdowns), the result is as expected.

Vegas seems to utilize all cores just fine. A test project that rendered in 48 minutes on the dual core now takes 25 minutes on the quad core. So my tea consumption is also reduced to half of what it used to be <g> (I drink tea while watching gauges crawl along).
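The 48-to-25-minute numbers actually let you estimate how much of the render parallelizes. Treating render time as serial plus parallel work divided by core count (Amdahl's law), a quick back-of-the-envelope calculation (my arithmetic applied to the post's numbers, not a measurement):

```python
t2, t4 = 48.0, 25.0          # render minutes on 2 and 4 cores (from the post)
speedup = t2 / t4            # ~1.92x from doubling the core count

# Model: T(n) = serial + parallel / n
# 48 = s + p/2 and 25 = s + p/4, so p/4 = 48 - 25 - p/4, giving:
parallel = 4 * (t2 - t4)     # 92 "core-minutes" of parallelizable work
serial = t2 - parallel / 2   # ~2 minutes that never speeds up

# Predicted time on 8 cores: the serial part caps further gains.
t8 = serial + parallel / 8   # ~13.5 minutes, not the ideal 12.5
```

So scaling is close to ideal here, but each doubling of cores buys a little less.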

I now have the fastest PC I ever had. Still I think video editing does take a loooooong time, at least when it comes to rendering. It will be interesting to see what effect this CPU upgrade will have on my daily work with Vegas and other resource-hungry applications.
Terje wrote on 7/31/2008, 7:15 PM
The actual decompression of something like AVCHD should be done in assembler, or at least in highly optimized C code.

I disagree with the assembler part and agree with the C code part. In most cases (and believe me, when I was working with embedded software we tested this over and over), in any moderately sized project the C compiler will generate better code than hand-coding will, if you stick to one or the other. The best middle ground was usually lots of C code, well documented, and tiny amounts of assembler here and there. We had a rather rigorous process you had to go through to be allowed to use assembler, precisely to discourage it as much as possible.

Assembly is extremely hard to maintain and the cost of owning it becomes extremely high. Even more so in the '90s when nobody could hold on to an engineer for more than 9 months. It is therefore far less expensive to mandate a faster CPU than to allow assembly.

Encoding like this is relatively "easy". In other words, the amount of code required for the encoding, relative to the entire project, is tiny. Since it is owned by a separate entity (Mainconcept), it is probably fairly well optimized. That is not the problem.

To work with video Vegas has to create some sort of in-memory format for that video. All video has to be in that format, no matter what the originating format was. This means that decoding AVC isn't like playing it on a monitor, it is far more complicated.

using the DGIndex/VFAPI technique I have described in other posts. Put that AVI on the timeline. ... stutters ... flawless

Well, the footage is the same, but the bits read by Vegas are not. The converted AVI is easier to transcode to the format Vegas uses internally than MPEG-2 is. That's why it is faster. Remember, as someone else pointed out, for each frame Vegas has to do the following:
1/ Convert the source frame to Vegas's internal video format
2/ Process the new footage
3/ Downsize the footage to fit the timeline size, using bicubic interpolation etc.

If step 1 takes a long time, the footage will stutter; if it is fast, it will not.
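The three steps above can be sketched as a per-frame pipeline (a toy illustration; every function name here is hypothetical, not a Vegas internal). The point is simply that the cost of step 1 varies wildly with the source codec, while steps 2 and 3 cost the same regardless:

```python
def decode_to_internal(source_frame):
    # Step 1: convert the source frame to the editor's internal format.
    # For AVC/MPEG-2 this is the expensive step; for a simple AVI it is cheap.
    return {"pixels": source_frame, "format": "internal"}

def apply_effects(frame):
    # Step 2: process the new footage (effects, compositing, color).
    frame["processed"] = True
    return frame

def resize_for_timeline(frame, scale=0.5):
    # Step 3: downsize to the preview/timeline resolution.
    frame["scale"] = scale
    return frame

def render_frame(source_frame):
    # The whole chain runs once per frame; a slow step 1 stalls everything.
    return resize_for_timeline(apply_effects(decode_to_internal(source_frame)))
```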
Illusioneer wrote on 8/2/2008, 1:27 PM
<Vegas seems to utilize all cores just fine. A test project that rendered in 48 minutes on the dual core now takes 25 minutes on the quad core. So my tea consumption is also reduced to half of what it used to be <g> (I drink tea while watching gauges crawl along).>

Also make sure that you have set up Vegas correctly, i.e. to allow 4 threads.
ingvarai wrote on 8/4/2008, 6:56 AM
Also make sure that you have set up vegas correctly, i.e. to allow 4 threads.

It is set to 4, but not by me, I am sure. Does this mean 4 is the default value?
And "threads" can mean several things, IMO. Applications can have several threads going simultaneously, even on an old CPU. So the question is: should I set it to, for example, 8?

Ingvarai
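On the thread-count question: a thread is just a unit of scheduled work, and an application can run far more threads than there are cores, but for CPU-bound work like rendering, extra threads mostly add scheduling overhead. The usual starting point is the logical core count, which you can query like this (a sketch of the general rule of thumb; whether Vegas specifically gains anything from a value above 4 on a quad core is not something this snippet can answer):

```python
import os

# Logical cores the OS reports: the sensible default for a CPU-bound
# render-thread setting. More threads than cores rarely helps.
render_threads = os.cpu_count() or 1
```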