Aliasing, artifacts and moire?

Comments

NickHope wrote on 4/13/2011, 1:38 PM
John, that was Jerry's (amendegw) torture test, not mine, and I didn't really participate at the time. Anyway, it's interesting that the QTGMC script did a great job with one clip but a poor job with another. Thanks for pursuing that test and saving me the effort.

Jerry, regarding the problem with the file in AviSynth, I got this back from Didée (a man who knows) on the doom9 forum:

Can't tell what's up with that file, but something is not quite right with it. Slightly older versions of ffmpegsource crash right away with it. The most recent version can decode it, but the field/frame order is somehow not correct: everything is choppy, with some forward/backward/jump issues, ....

I'm just wondering if those issues might have affected some people's tests when they were doing your challenge. Probably not but it's feasible.

I was going to do a CCE encode of the challenge to try and illustrate how great the encoder is, but now I'm rather flummoxed which method to use to downscale for it. LOTS of choices and opinions!
johnmeyer wrote on 4/13/2011, 2:15 PM
"I was going to do a CCE encode of the challenge to try and illustrate how great the encoder is, but now I'm rather flummoxed which method to use to downscale for it. LOTS of choices and opinions!"

Try the simple (relative to the QTGMC approach) script I posted above (using the IResize function). As I already posted, I got great results. Also, I just responded over at doom9.org to the resize thread where you are asking about the AviSynth error codes. I don't have any idea what is causing that, although a quick Google search turns up problems relating to installation.

As always, when dealing with AviSynth, once you are having problems, start with "hello world" and work up from there. In this case, use my script, but open a simple file using AVISource. There are no external calls or functions in that script. Then, if you can successfully open and process a simple DV AVI file (or some other garden-variety video file), try opening the .mts torture file using DirectShowSource. If that fails, you've nailed the problem. You can then either serve out of another instance of Vegas, or use one of the DGIndex derivatives which handles AVC and read that into AviSynth. Many people report more consistent, stable results using this workflow.
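To put that debugging ladder in concrete terms, here's a minimal sketch (the file names are placeholders, not anyone's actual paths):

# Step 1: "hello world" - a garden-variety DV AVI through AVISource alone
AVISource("C:\test\simple_dv.avi")

# Step 2: if that works, swap in the suspect AVCHD clip via DirectShowSource
# (needs a DirectShow H.264 decoder, e.g. ffdshow, installed and enabled)
# DirectShowSource("C:\test\torture.mts")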
amendegw wrote on 4/13/2011, 2:32 PM
My gosh, as I mentioned, that clip came straight out of my Canon HG21. Could the problem be with the way Canon encodes its AVCHD? I don't think it would be my specific camcorder, would it? After all, it's all electronics from the photons to the SD card.

fwiw, the Peacock & Poppies were captured with the same Canon HG21 at the same FXP setting!

I made a posting on the doom9 forum asking if there was anything I could do to resolve the issue.

How can we test for the "funkiness" (i.e. field order issues)? I'd like to start a separate thread for owners of the same vintage Canons to post a clip shot in the FXP mode - and see if the problem is universal. Whadya think?

...Jerry

amendegw wrote on 4/13/2011, 4:21 PM
A couple more things.

First, I reshot the "Hula Dancer" with my Panny TM700 @ 1920x1080 60i. Some quick tests indicate that it is no easier to process than the Canon HG21 footage. For anyone who might be interested, the clip is here: TM700-60i.zip

Next, and I understand this has nothing to do with field order funkiness (a technical term), I ran MediaInfo against both the Canon HG21 and the Panasonic TM700 clips. Here are the results:

[MediaInfo screenshots not preserved]

...Jerry

farss wrote on 4/13/2011, 5:30 PM
The best video cameras that money can buy can record video with embedded moire problems. The codec used makes very little difference; you'll see the problem quite easily even on the uncompressed signal. Cheaper video cameras are generally more likely to have such issues, but the real point I want to get at is that if you set out to break the video system, it is rather easy.
Back in the days when people cared about how TV images looked, if you turned up at the studio wearing an offending piece of apparel you were sent to wardrobe to get the problem fixed.
Put simply, if you want to make stunningly good-looking video, the first place to start is with what's in front of the camera. The camera itself is a factor, as is your post workflow, but the single biggest, easiest and cheapest factor to wrangle is what is in front of the lens.
Of course you might not have a lighting or wardrobe department out in the wilds shooting a doco. You can however put additional filters in front of the camera, be they grad NDs or polarisers to wrangle extreme contrast, or diffusion to wrangle aliasing and moire.
Almost every extreme issue that arises from using a camera is somewhere between impossible and utterly impossible to fix once it is shot. Yes, Hollywood spends a lot on the camera dept, but they also spend a lot on lighting, costume design and DPs, way more than they spend on the camera dept. There's a good lesson to take from that no matter what your budget.


Aside from all that and as someone who worked around process control for two decades a simple mantra that I picked up from those days seems highly applicable here. "If you want to control a process you need to understand it". We controlled everything from power stations to statewide electricity grids, gas pipelines and effluent treatment plants. So yes, we had "steam" experts and "turd" experts on the payroll, I think I was a bit of the Renaissance Man type of guy, also known as the "steaming turd" expert :)

What I find more than a little frustrating from reading all of these threads is the amount of effort being put into them that I genuinely feel could be better spent using more empirical methods. Taking an unknown quantity, putting it through a complex, multistage process and evaluating the results subjectively using a device that itself is part of the process just seems a task almost certainly doomed to failure. It is a brute-force approach, which is fine if no other means is available and you don't mind a zillion iterations of the tests before you get meaningful results. Thing is, there are well-established ways of approaching these studies; they are part of the arsenal used by professionals to do exactly these tests and reach meaningful conclusions.

The most basic and freely available is a resolution chart. I cannot find free copies of zone plate test charts, but heck, given the number of man-hours expended on this, the cost of buying one from DSC Labs is looking pretty low to me. For anyone who wants to do a bit of reading on this, here's a link: http://www.broadpres.com/ar.pdf. To quote a small section: "Zone plates are very useful for identifying the effects of filters, Bayer-pattern decoding, sub-sampling, down-conversion, detail enhancement, lens performance, and a variety of other parameters."

Bob.
johnmeyer wrote on 4/13/2011, 7:48 PM
Bob,

I understand, in the abstract, what you are saying, but unless I am in control of every step of the process, from pre-script meetings to delivery, at some point I'm going to have to deal with something outside of my control. In my case, this is especially true because I restore old stuff (just got an interesting transfer from a 1969 broadcast Quadruplex tape, but that's a story for another time ...).

So, when someone gives me an HD video of a doll in front of a cross-hatch shirt, and I have to deliver it on DVD, I'm going to have to come up with something. Granted, Mr. Grant, this was just a test, but my daily reality is often not too far from it. (Sometime I'll send you some video I had to restore that had made three NTSC->PAL->NTSC transformations before I got it, not to mention some deinterlacing that ruined everything; nothing but dropped and duplicated frames everywhere.)

The few projects I've done where I actually have been able to do everything myself from beginning to end have been a real joy: I just shoot, edit, and deliver. Wow, is that ever easy! But, the rest of the time, I have to deal with reality as it gets delivered to my doorstep.

John
farss wrote on 4/13/2011, 9:18 PM
" Granted, Mr. Grant, this was just a test"

Exactly John. If it was just a 'please help me fix this piece of footage' thing then all that I had to say is irrelevant.
My understanding though is that these tests and challenges are to find a best practice workflow that is generic, regardless of what camera was used or what was in front of it.

Bob.

NickHope wrote on 4/14/2011, 1:57 AM
I'm an idiot. I had H.264 decoding disabled in ffdshow. I've enabled it now, and DirectShowSource will open both .MTS files fine. Incidentally, the TM700 file had been giving exactly the same error as the HG21 file before I enabled the H.264 decoding.
amendegw wrote on 4/14/2011, 2:01 AM
"My understanding though is that these tests and challenges are to find a best practice workflow that is generic, regardless of what camera was used or what was in front of it."That was exactly my intent when I started the HD to SD Challenge thread.

In the last day or so, I've had a somewhat sinking feeling that the issue might be specific to my original footage. However, in reshooting with a different camera, I don't see major differences: TM700-60i.zip

...Jerry

amendegw wrote on 4/14/2011, 2:06 AM
"I'm an idiot. I had H264 decoding disabled in ffshow. I've enabled it now and DirectShowSource will open both .MTS fine. Incidentally the TM700 file had been giving exactly the same error as the HG21 file before I enabled the H264 decoding."Ha! we're posting at the time!

I think you're making me feel better!

...Jerry

farss wrote on 4/14/2011, 3:21 AM
"In the last day or so, I've had a somewhat sinking feeling, the issue might be specific to my original footage, however, in reshooting with a different camera, I don't see major differences: TM700-60i.zip"

The problem is not in the original footage; the problem is in what is in front of the camera. You've tried two different cameras and got almost the same result. Looking at your TM700 footage, there are already moire problems in the camera original. Look at the thumbnails on the T/L, with preview at Best/Full, and switch Scale to Fit on/off.

With Scale to Fit off I see problems, and these problems are difficult to impossible to avoid in the design of video cameras. Please take a moment to read and look at the samples here: http://en.wikipedia.org/wiki/Moir%C3%A9_pattern

Bob.
amendegw wrote on 4/14/2011, 3:52 AM
"The problem is not in the original footage, the problem is in what is in front of the camera. You've tried two different cameras and got almost the same result. Looking at your TM700 footage there is already moire problems in the camera original. Look at the thumbnails on the T/L, with preview at Best/Full switch Scale to Fit on/ff.Yes, I see some moire issues in the original footage, however if I merely click "Reduce Interlace Flicker" on the video event, they pretty much go away (without any visual loss of resolution - to my old eyes). Actually, the original Canon HG21 seems to have fewer moire issues than the TM700 (counter-intuitive to me) - maybe it's because the zoom is different & there's more movement in the TM700 footage.

Now, as I've said before, it was my objective to shoot as difficult source as I could contrive. If we could solve that very difficult issue, then everyone else's HD to SD issues would be a piece of cake.

If the problem is insolveable, then we should acknowledge that and move on to other subjects. However, I will say that using the "Lagarith" method posted above, I was able to get a pretty darn good result here: Proj-B-LagsV1322.zip

...Jerry

btw: my "sinking feeling" was based on the observation that the original clip from the camcorder appeared to have field order issues.

NickHope wrote on 4/14/2011, 3:59 AM
That was exactly my intent when I started the HD to SD Challenge thread.

The problem seems to be that the horrific, moire-inducing background is dominating the challenge to the exclusion of other issues, so a method that suits that particular clip well might not be the best generic all-rounder, as John Meyer found with his QTGMC test (great for his dancer but poor for your doll).

OK, here's a question that probably proves how little I really know about this stuff... Let's assume I'm starting with TFF AVCHD or HDV footage... Does it matter whether the final SD encode is TFF or BFF, or does this depend on the resizing method? Do DVD players support both TFF and BFF at 480i? The reason I'm asking is that other people seem to be submitting BFF clips, but I have old CCE templates that produce TFF clips that I'm pretty sure I put on a lot of DVDs. In addition the IResize script that John posted above is giving me TFF.
farss wrote on 4/14/2011, 4:13 AM
" Does it matter whether the final SD encode is TFF or BFF"

No, so long as it is correctly flagged.

"Do DVD players support both TFF and BFF at 480i?"

Technically no; in practice it seems not to matter at all. Apparently the MPEG-2 encoder combines the fields into frames to encode, and the player separates them back into fields.


-------------------------------------------------------------------------------------------

I guess if the source field order was really screwed up, such that the odd and even lines were transposed, then something in some of the processing you've been trying could get itself messed up, but that's quite an unusual problem.

Bob.

NickHope wrote on 4/14/2011, 4:21 AM
Thanks Bob.

I can see that if we're going HD 60i > 60p > SD 60i (e.g. deinterlace with QTGMC then reinterlace), then it wouldn't matter at all if the HD was TFF and the SD was BFF. But if we're resizing in Vegas, wouldn't it screw things up badly if the field order gets changed?
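That 60i > 60p > SD 60i path, sketched as an AviSynth script (a sketch only, assuming QTGMC and its plugin dependencies are installed; the source line and file name are placeholders):

# HD 60i -> 60p -> SD 60i (sketch)
DirectShowSource("00000.MTS")                  # placeholder source; any decoder that opens the AVCHD
AssumeTFF()                                    # match the HD source field order
QTGMC(Preset="Slower")                         # deinterlace to double-rate progressive (59.94p)
Spline36Resize(720, 480)                       # downscale while progressive
AssumeTFF()                                    # parity for re-interlacing; AssumeBFF() here would give BFF output
SeparateFields().SelectEvery(4, 0, 3).Weave()  # re-interlace back to 29.97i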
farss wrote on 4/14/2011, 5:36 AM
"if we're resizing in Vegas, wouldn't it screw things up badly if the field order gets changed?"

I don't think so; again, so long as it's flagged correctly, Vegas will handle it just fine.


On a completely different note, but one of some relevance to this thread and the whole aliasing/moire/jitter issue:
My first paying job with Vegas was way back in the V4 days, and I had to make a short video from about 10 very high-resolution photos. Against all the advice at the time, I just popped them onto the T/L, did a few things, synced some music to it, and printed it to BetaSP, and it became the backdrop for a hairstyling catwalk parade. No jitter, no moire, nothing. We checked it on various CRT monitors just to be sure to be sure.

A few years later I got a lower-profile job of the same nature, and I had ongoing nightmares with aliasing and line twitter. What had changed was that the previous job's photos were shot on a 110mm roll film camera and the negs scanned, whereas the later job was all shot on a DSC. I've processed and made into slide shows over 6,000 slides and negs; again, not a single problem, and I scan at 4K and from that make SD 50i DV.

Bob.
craftech wrote on 4/14/2011, 7:08 AM
John,

I am excited about trying out the script you posted on Act 1 of a musical I shot with my EX1.

It is the script that starts with the code block posted above.

I was wondering if you could let me know if I am rendering this properly, as it has just started and is estimated at 14 hours and rising. It is an hour-and-20-minute video.

1920 x 1080 square pixel [render best, deinterlace method interpolate] Vegas 8 (.mxf edited files) timeline with color and levels correction but no sharpening is being frameserved as YUY2 to VirtualDub (1.9.11 - newest version) with default settings and being saved as an AVI. Processing thread priority: Higher.

Is that right ?

Thanks as always,

John

NickHope wrote on 4/14/2011, 8:57 AM
craftech, when I use that script to render a 13-second clip to 2-pass CBR MPEG-2 in CCE Basic, it takes about 4 times as long as the length of the clip itself (i.e. just under a minute). So it's pretty slow, but your ratio does sound excessive.

While we're waiting for John Meyer to reply, you might get a moderate speed increase without ill effects by frameserving in RGB24 and converting to YV12 in the script.

Your script would then look like this:

# frameserve from Vegas as RGB24; convert to YV12 and set the field order here
source = AviSource("e:\frameserver.avi")
source = source.ConvertToYV12(interlaced=true, matrix="PC.709").AssumeTFF()
IResize(source, 720, 480)

# field-aware resize with sub-pixel shift compensation
function IResize(clip Clip, int NewWidth, int NewHeight) {
Clip
SeparateFields()
Shift = (GetParity() ? -0.25 : 0.25) * (Height()/Float(NewHeight/2)-1.0)
E  = SelectEven().Spline36Resize(NewWidth, NewHeight/2, 0, Shift)
O  = SelectOdd( ).Spline36Resize(NewWidth, NewHeight/2, 0, -Shift)
Ec = SelectEven().Spline36Resize(NewWidth, NewHeight/2, 0, 2*Shift)
Oc = SelectOdd( ).Spline36Resize(NewWidth, NewHeight/2, 0, -2*Shift)
Interleave(E, O)
IsYV12() ? MergeChroma(Interleave(Ec, Oc)) : Last
Weave()
}


I would test that on just a few seconds of video to see if you're getting any benefit. Also check the colours/levels carefully afterwards to see which method gives a better match to your original, as there are potential problems with either method.
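One easy way to test on just a few seconds (my suggestion, not part of the script above; Trim() is a built-in, and the frame count is just an example):

# render only the first ~10 seconds (frames 0-299 at 29.97 fps)
Trim(0, 299)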

Also, assuming you're using your quad-core machine, you should be able to get a bigger speed increase by using a multi-threaded version of AviSynth such as the one here. Then you would need to add 2 more lines to your script:

SetMTmode(5,4)   # tune the "4" per the guidelines below
source = AviSource("e:\frameserver.avi")
source = source.ConvertToYV12(interlaced=true, matrix="PC.709").AssumeTFF()
SetMTMode(2)     # mode 2 for the rest of the script
IResize(source, 720, 480)

function IResize(clip Clip, int NewWidth, int NewHeight) {
Clip
SeparateFields()
Shift = (GetParity() ? -0.25 : 0.25) * (Height()/Float(NewHeight/2)-1.0)
E  = SelectEven().Spline36Resize(NewWidth, NewHeight/2, 0, Shift)
O  = SelectOdd( ).Spline36Resize(NewWidth, NewHeight/2, 0, -Shift)
Ec = SelectEven().Spline36Resize(NewWidth, NewHeight/2, 0, 2*Shift)
Oc = SelectOdd( ).Spline36Resize(NewWidth, NewHeight/2, 0, -2*Shift)
Interleave(E, O)
IsYV12() ? MergeChroma(Interleave(Ec, Oc)) : Last
Weave()
}


The "4" in SetMTmode(5,4) should be tweaked according to your machine. I think the guideline is something like the following, but I've never done it myself. John Meyer could confirm this:

- Start at the number of cores in your machine.
- If it crashes, decrease 1 at a time.
- Otherwise increase 1 at a time until CPU usage is very, very close to 100%, don't go too far or it will slow down.

John Meyer, when you take a look at this, could you also please tell me (if you know) how I would change that script to correctly get BFF output? I'm doing loads of HD>SD tests today and this is the only one that is TFF, so it's hard to compare by muting tracks on the timeline. And anyway I kind of want to fit in with standard DVD templates which are BFF. I'll ask on doom9 as well. Thanks!
johnmeyer wrote on 4/14/2011, 10:00 AM
"I was wondering if you could let me know if I am rendering this properly, as it has just started and is estimated at 14 hours and rising. It is an hour-and-20-minute video."

First of all, I always recommend rendering a short, 10-30 second section of video and taking it all the way to the final output stage. I do several projects a day, and I always do this. If final delivery is on DVD, I render 15 seconds of video, author a DVD, put it on an 8x DVD+RW, and then view it on an interlaced CRT. If I am delivering for the web, I upload the clip to YouTube and view the result. This saves a HUGE amount of time, and much pulling of hair, when I find out (as I'm sure everyone reading this has found at one time or another) that I just killed twenty hours waiting for a render that is going to be totally useless (and I'm sure twenty hours is short compared to some stories).

Just as a side note, and as a way of answering Nick's questions about field order, here's my answer: field order doesn't matter, unless you get it wrong. Yes, I know that sounds very unhelpful, but what I mean is that you can render TFF or BFF to a DVD and either will look just fine. If you think about it, there actually isn't really an order, because the process is continuous: field01, field02, field03, field04... What you see is a series of fields, and you don't really care whether the top one came first, because the next one is a bottom field and now IT is first, but then here comes another top field.

The whole "top-bottom" thing is what screws people up. What matters is when one field comes from an earlier moment in time than the previous field, whether it is top or bottom. So, all the TFF and BFF flags are just a way of making sure you start by showing the first field in time, whether it is top or bottom. As I just stated in the last paragraph, when doing anything with AVISynth, I always encode a few seconds of video, and then read that back into AVISynth with a script that has just a "separatefields()" line. I look at about four fields of video and immediately know if I have a field problem. Usually all I have to do is put an assumetff() or assumebff() line at the end of the script. However, if you manage to get the fields in the wrong order in the middle of the script things can get a little more screwed up. Usually this can be avoided by putting one of the two "assume" lines at the beginning of the script as well, and make sure it matches the source. If you are unsure, you can read your source into an AVISynth script that has one line:

assumetff().separatefields()

Open that in VirtualDub, and if everything looks OK, your source is TFF; otherwise it is BFF.
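As a complete example of that check (a sketch; the file name is a placeholder, and the right source filter depends on what your test encode is):

# field-order check on a short test encode
DirectShowSource("C:\test\encode_test.mpg")
AssumeTFF()
SeparateFields()
# step through ~4 fields in VirtualDub: smooth forward motion means the
# source really is TFF; back-and-forth jumping means it is BFF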

Back to John. I don't know which script you are referring to. I remember doing your test clip, but I don't remember what I was doing with it. Nick just posted one of the resize/de-interlace/re-interlace scripts I've been working with over the past few days. The one with QTGMC is snail-slow. The main way to get around that slowness is to use one of the multi-threaded versions of AviSynth, either 2.58 MT or 2.6 MT. I got more stability with the 2.6 MT build when doing QTGMC stuff, but it has some serious color shift issues with YV12 material, so try to keep things in YUY2 if you can. Once you are using the MT version, put SetMTMode(5,x) before your AVISource command and SetMTMode(2) immediately after that command. "x" should be half the number of cores in your computer, at least to start with. Run a few seconds of video and look at the CPU utilization. If it seems to be averaging close to 100%, you might want to make x smaller. Your goal is to get CPU utilization above 85%, but not pegged at 100% all the time. This "breathing room" seems to make all the plugins used by QTGMC "happy" so they don't crash. On the other hand, if you are using less than 80% of all the cores, you aren't getting all you paid for, and you can increase "x".

BTW, I had enough color issues with 2.6MT that I've gone back to 2.58MT, something I've used for almost two years, for all work not involving QTGMC.

The script that Nick posted is something developed by others over at doom9, back in 2008. It was developed primarily to handle a subtle problem when resizing YV12-encoded video, because the chroma tends to shift by a different amount than the luma. That part is subtle; however, the other part, where fields are shifted vertically by sub-pixel amounts depending on the scaling factor, is something that really does make a difference in the quality of the result. This script should run quite fast. The QTGMC script provided better results with my ballet clip, but this script did better than what Vegas can do by itself. With the torture clip (the doll in front of the herringbone cloth), QTGMC failed for me, but one of the gurus over at doom9.org took up the challenge yesterday and produced a rather remarkable result using QTGMC. It had a very small amount of interference patterns, but the detail he retained while scaling down to 720x480 was quite remarkable.

Finally, you can use different scaling algorithms. The scripts Nick posted use Spline36resize. AVISynth provides a huge number of built-in resizers:

BicubicResize
BilinearResize
BlackmanResize
GaussResize
LanczosResize
Lanczos4Resize
PointResize
Spline16Resize
Spline36Resize
Spline64Resize

and you can download plugins and scripts that do resizing in different ways. I don't have a clue as to which one is "best" for a given situation, but I do know that they operate at different speeds. Perhaps changing the resizer would make some difference in encoding speed.
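For example, swapping the kernel inside IResize is a like-for-like change, since the built-in resizers share the same argument list (a sketch; whether it is faster or better is exactly the open question):

# e.g. Lanczos instead of Spline36 inside IResize - same arguments, different kernel
E  = SelectEven().LanczosResize(NewWidth, NewHeight/2, 0, Shift)
O  = SelectOdd( ).LanczosResize(NewWidth, NewHeight/2, 0, -Shift)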

Too many choices, too many variables ...

NickHope wrote on 4/14/2011, 10:14 AM
Thanks for the clarifications, John M.

I tried LanczosResize (which does a Lanczos3 resize) in all 4 places in that script earlier this evening and found no increase in speed over Spline36Resize, so I'll probably stick with Spline36Resize.
craftech wrote on 4/14/2011, 11:25 AM
I was wondering if you could let me know if I am rendering this properly, as it has just started and is estimated at 14 hours and rising. It is an hour-and-20-minute video.
==================================
First of all, I always recommend rendering a short, 10-30 second section of video and taking it all the way to the final output stage
==================================
I've been in and out all day so I wasn't in a rush. It ended up that the estimated time went back down to 8 hours. About 4 1/2 hours are done so far. I'll be back tonight.
====================================
Back to John. I don't know which script you are referring to. I remember doing your test clip but don't remember what I was doing with it.
====================================
I don't think I sent you a test clip recently, but here is the entire script I am trying out that you posted above:

[the script posted above]

1920 x 1080 square pixel [render best, deinterlace method interpolate] Vegas 8 (.mxf edited files) timeline with color and levels correction but no sharpening is being frameserved as YUY2 to VirtualDub (1.9.11 - newest version) with default settings and being saved as an AVI. Processing thread priority: Higher.

I was wondering if that sounded like the right settings.......

I will try the multithreaded version on Act 2 if Act 1 turns out well.

Thanks for the help,

John
johnmeyer wrote on 4/14/2011, 1:12 PM
John,

Your main question in your last two posts is whether the performance you are seeing is correct, or whether you have set something that might be slowing things down.

I see nothing wrong with any of your settings.

To provide a basis for comparison, I took some 1920x1080 SR-11 clips I have kept handy for testing. I frameserved YUY2 out of Vegas 8.0c through the exact script you posted above (which is the same as the one I posted, but I copied from your listing, just to make sure we were, literally, on the same page). I opened the result in VirtualDub and used "Run Video Analysis Pass" from the File menu. This workflow eliminates any slowdowns from Vegas plugins (you are doing color correction and some other things) and also eliminates the additional slowdown from the codec you use in VirtualDub.

The "video rendering rate" shown in the VirtualDub dialog was just over 18 fps. I have a 3.2 GHz Intel i7 8-core computer running Windows XP Pro 32-bit.

I then added a SetMTMode(5,0) command before the AVISource line and a SetMTMode(2) after that line (using "0" in the first command makes it utilize ALL cores). I re-did the test and this time got 19 fps, a trivial increase. I had expected this because this particular AviSynth script consists only of trivial commands. Only the resize command uses any real CPU power, and apparently it is not designed in a way that can be segmented across multiple cores (or at least the MT version of AviSynth can't do anything with it). So, unlike scripts based on MVTools2, where I often see a 4X improvement in speed, multi-threading will not help this script.

Hope this helps!

craftech wrote on 4/14/2011, 4:19 PM
John,
=====================================
It is actually the first time I am trying anything like this. Haven't really used scripts.

So I rendered the AVI and brought it into Vegas. It ended up 4:3 for some reason. If I change the properties to DV Widescreen, it is distorted. It is in fact 4:3.

Moreover, when I render a small loop, or even when I play it from the timeline, the motion of the actors or anyone moving looks like strobing with ghosting trails. If I look at a still shot, it looks good (except that it is 4:3). I am viewing it on my hi-res external CRT monitor. It looks 4:3 in the preview window as well.

So I think something was set incorrectly in my workflow. Of course now that I am home I'll process only short segments to test.

This is the part that is actually in the script, not the title.
I cut and pasted it here:

source=AVISource("e:\frameserver.avi").assumetff()

It was saved with an .avs extension and opened in VirtualDub under Open Video File. Then I chose Save as AVI. Processing thread priority: Higher.

Maybe the default setting for saving an AVI in VirtualDub is 4:3? And could the strobing and ghost trails come from using Interpolate instead of Blend in the project properties of the master project? Or maybe it is an issue with frameserving YUY2 instead of RGB32 when the master project has square-pixel video.

Any ideas?

Thanks,

John
NickHope wrote on 4/15/2011, 12:38 AM
John (craftech),

That script as it stands should output a 720x480 stream with square pixels, which is 3:2 AR, not 4:3. That's what VirtualDub will then render in an AVI file.

What do you plan to do with the avi file?

If you want 16:9 with square pixels you would have to change 720x480 in the script to 853x480 (or 854x480).
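In script terms that would be a one-argument change to the IResize call (sketched against the script above; 854 rather than 853 because YV12 needs an even width):

# square-pixel 16:9 output instead of anamorphic 720x480 (sketch)
IResize(source, 854, 480)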

If you want anamorphic 16:9 for widescreen DVD then you'll want to leave it at 720x480 and set a widescreen aspect ratio flag when you encode to MPEG-2.

If you want to preview a 720x480 file in Vegas at 16:9 you need to set the PAR in both the project and the media properties to a widescreen value. Vegas' "NTSC Widescreen" PAR value is 1.2121 (but actually true 16:9 is 1.1852).