Interlaced HD to DVD AGAIN - some test renders

NickHope wrote on 4/29/2011, 12:04 PM
Apologies for yet another thread on this subject. I could have put this here, here, or here, but eventually decided it was better to start a new one. However those threads provide important background to this one.

I set up a minute's project of various 60i HD footage and converted it to SD NTSC MPEG-2 in a variety of ways. I have put 4 of my renders into a ready-to-burn DVD project which you can download here (193 MB). Please burn the 2 folders to a DVD and let me know any reactions/preferences you have to it, and please tell us what gear you watched it on (CRT, LCD, DVD software etc.).

The project contains a Belle Nuit test card, some of my own HDV footage and parts of amendegw's hula dancer doll clip, stringer's driving clip, John Meyer's ballet dancer clip and craftech's stage show clip. I started with Musicvid's web video project and it morphed into this. Thanks to everyone who allowed me to use and upload footage.

Settings:
Sony Vegas Pro 10.0c.
Full-resolution rendering quality: "Best".
Deinterlace method: "Interpolate".
Color Curves and Levels applied to video events.
Events conformed to 16-235 (broadcast-legal) levels.
"Reduce interlace flicker" switch: OFF.
No blur or sharpen applied in Vegas Pro.
Debugmode Frameserver 2.10
AviSynth 2.5.8 single-threaded used in tests 2 and 3.
All encoded in CCE Basic at 8000 kbps 1-pass CBR, upper field first.

Test 1 - Rendering time 309 seconds (100%)
Demonstrates Vegas Pro resizing without blurring or sharpening.
HD clips in NTSC widescreen project, properties modified to upper field first and a PAR of 1.1852.
Frameserved in RGB24.

Test 2 - Rendering time 431 seconds (139%)
Demonstrates AviSynth "IResize" function developed by Gavino, IanB and others on the doom9 forum.
Script applies a low-pass filter "automatically" during resizing to reduce twitter/shimmer/aliasing etc.
HD clips in 1080-60i project.
Frameserved in YUY2 (slightly faster than RGB24).
AviSynth Script:
source=AviSource("d:\fs.avi").ColorYUV(levels="TV->PC").AssumeTFF  #Expands levels if frameserved in YUY2
IResize(source,720,480)
function IResize(clip Clip, int NewWidth, int NewHeight) {
Clip
SeparateFields()
Shift=(GetParity() ? -0.25 : 0.25) * (Height()/Float(NewHeight/2)-1.0)
E = SelectEven().Spline36resize(NewWidth, NewHeight/2, 0, Shift)
O = SelectOdd( ).Spline36resize(NewWidth, NewHeight/2, 0, -Shift)
Ec = SelectEven().Spline36Resize(NewWidth, NewHeight/2, 0, 2*Shift)
Oc = SelectOdd( ).Spline36Resize(NewWidth, NewHeight/2, 0, -2*Shift)
Interleave(E, O)
IsYV12() ? MergeChroma(Interleave(Ec, Oc)) : Last
Weave()
}
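As a numeric sketch (mine, not from the thread) of what that Shift formula works out to: for a 1920x1080 -> 720x480 interlaced resize, SeparateFields() leaves 540-line fields to be resized to 240 lines, and the even and odd fields get equal and opposite sub-pixel offsets.

```python
# Illustration of the IResize Shift formula for a 1080i -> 480i resize.
# After SeparateFields(), Height() is the field height (540 for 1080i)
# and each field is resized to NewHeight/2 lines (240 for 480i).
def iresize_shift(src_height, new_height, parity_top=True):
    field_height = src_height // 2          # 540
    target_field_height = new_height // 2   # 240
    sign = -0.25 if parity_top else 0.25
    return sign * (field_height / float(target_field_height) - 1.0)

top = iresize_shift(1080, 480, parity_top=True)
bottom = iresize_shift(1080, 480, parity_top=False)
print(top, bottom)  # -0.3125 0.3125
```

The opposite signs keep the top and bottom fields at their correct spatial positions after resizing, which is the whole point of resizing the fields separately with a compensating shift.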


Test 3 - Rendering time 1504 secs (486% - multi-threaded would be much faster)
Demonstrates more advanced method adapted from a suggestion by Didée on the doom9 forum.
HD clips in 1080-60i project.
Frameserved in YUY2 (RGB not supported by script).
High quality Bob using TDeint.
Resizing by sequential bicubic passes.
Smoothing with QTGMC.
Sequential blur and sharpen filters.
AviSynth Script:
AviSource("d:\fs.avi")
ColorYUV(levels="TV->PC") #Expands levels if frameserved in YUY2
AssumeTFF
TDeint(mode=1)
bicubicresize(1440,960)
bicubicresize(720,960,-.8,.6)
p1 = bicubicresize(720,480,-.8,.6)
p2 = p1.QTGMC(TR0=1,TR1=1,TR2=2,InputType=1)
p2.blur(0,1).sharpen(0,.51).blur(0,1).sharpen(0,.85)
i1 = assumetff().separatefields().selectevery(4,0,3).weave()
# p1 # straight to 60p
# p2 # plus more calmed
i1 # re-interlaced p2
return(last)
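A quick check of the per-pass scale factors in that resize chain shows why it is staged (my sketch; the script itself doesn't state the source size, so 1920x1080 bobbed frames are assumed): the second pass halves the width exactly and the third halves the height exactly, leaving only the first pass with a non-trivial ratio.

```python
# Per-pass scale factors of the Test 3 staged resize,
# assuming 1920x1080 frames out of the bobber.
passes = [(1920, 1080), (1440, 960), (720, 960), (720, 480)]

for (w0, h0), (w1, h1) in zip(passes, passes[1:]):
    print(f"{w0}x{h0} -> {w1}x{h1}: width x{w1 / w0:g}, height x{h1 / h0:g}")
```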


Test 4 - Total Rendering time 1822 secs (589%)
Demonstrates down-converting of HDV to DV in my Sony Z1P camera.
HD clips in 1080-60i project.
Render to HDV (quality 31)
Down-convert to DV in Sony Z1P camera.

Comments

craftech wrote on 4/30/2011, 4:29 AM
Thanks for all the work Nick.

I just downloaded it.

John
farss wrote on 4/30/2011, 4:48 AM
Same here but it'll have to wait until tomorrow.

Bob.
NickHope wrote on 4/30/2011, 9:05 PM
Looking forward to any opinions you might have as my viewing options here are rather limited and don't include a broadcast monitor.
john_dennis wrote on 4/30/2011, 10:04 PM
I burned the project using a Plextor PX-716A burner on to Verbatim DVD-R single layer media. The playback devices are:

Blu-ray player: Sony BDP-S550, output HDMI @720p

Plasma Panel: Pioneer PDP-4361HD, native resolution of the panel is 1024x768. Picture adjustment set to “Standard”. Screen size ~43 inches, viewing distance ~9-1/2 feet.

1) Belle Nuit Chart _Vertical lines in section 3 and 4 are jagged as if broken in half and top half shifted right. 16-235 test smoothest of all tests.

2) iResize Belle Nuit Chart _section 1 (red) flickered rapidly. 16-235 test non-uniform, showed vertical bars.

3) TDeint: 16-235 test non-uniform, showed vertical bars. In this test the bars appeared darker gray, whereas in the iResize test the bars appeared lighter.

4) Sony Z1P: Belle Nuit Chart _ Red section 1 flickering rapidly. 16-235 test faint gray vertical bars.

To me, the difference in the videos was unremarkable. The limitations may be 1) my equipment, 2) my tired eyes, 3) my skill or 4) my knowledge.

Edit: I changed the output of the Blu-ray player to 480p and I got slightly better results on the amendegw test. I was seeing slight color separation on the gray wavy background on all tests. It appears to be "just gray" with 480p output. From this I posit that, given my equipment, it may be pointless to upscale DVDs. Most of the material that I watch is 720p so that's where I leave the Blu-ray output all the time.
NickHope wrote on 5/1/2011, 1:26 AM
Thanks very much John.

Vertical lines in section 3 and 4 are jagged as if broken in half and top half shifted right

The Belle Nuit test chart was 1920x1080 and the sections labelled 4,3,2 and 1 refer to the width of the black and white stripes in pixels. The right hand side of each section is shifted to the right (or down) by 1 pixel, so the lines should appear broken. I haven't really got my head around what one ought to see in these areas in an ideal world after downsizing to SD, especially as it's different on CRT and LCD displays.

16-235 test smoothest of all tests

It's funny, I hadn't even really studied that gradient yet as I just chucked a couple of seconds of that on the end as a luminance check. But yes, each of the tests that use AviSynth exhibit some banding, which is confirmed on the scopes. Perhaps this could be improved by using SmoothLevels instead of ColorYUV (perhaps after the resize). Incidentally, the smoothest of all was a render I did using the MainConcept MPEG-2 codec in Vegas, although it took 15% longer to render than CCE.

In this test the bars appeared darker gray where in the iResize test the bars appeared lighter

There really shouldn't be any difference at all. You can check this by putting the VOB files onto the Vegas timeline and checking on the scopes. I can do this in 10 by dragging them from an external Windows Explorer window. For some reason VOB files don't show up in Vegas' own Explorer window. Stacking the VOB files on the timeline and muting higher tracks is a good way to compare renders.
farss wrote on 5/1/2011, 2:56 AM
" I haven't really got my head around what one ought to see in these areas in an ideal world after downsizing to SD, especially as it's different on CRT and LCD displays."

Ideally what you should see is either the lines or grey. No blinking bits or anything that you wouldn't see using your own eyes to look at the same thing.

Of all of the methods, 3 is arguably the best as it handled the test chart very well, whereas 1, 2 and 4 showed some of the swatches as blinking patches. Given that 3 adds a low-pass filter this is to be expected. On the other hand a decent camera should avoid getting such high frequency detail into the image anyway.

Unfortunately I was only viewing this on a 4:3 14" Sony CRT monitor and the DVD player is letterboxing the 16:9, so the image size is pretty small. As a result I was not really able to discern any difference in resolution in any of the camera footage regardless of the method.

The hula doll is a bust, period, all methods showed unacceptable results and I'm even more convinced that the fundamental problem lies with the camera. The recorded footage is damaged, pretty much beyond repair.

To my untrained eye the other shots all looked fine regardless of the method. Nothing leapt out of the screen as being repulsive. The underwater footage was probably the most pleasing and that's because of the lighting underwater. The ballet dancers looked a bit underexposed to me but apart from that very good. Craftech's stage scene seemed to have the blacks a smidge crushed by looking at the black jackets but that's a tough scene what with the pale jackets and the harsh lighting.
What looked really great to me was part of the island dancers, the part just to left of centre. They looked just like my Indian performers, all color, low key lighting.

Bob.
NickHope wrote on 5/1/2011, 3:21 AM
Thanks Bob.

In case they wonder, I didn't put any correction on either John Meyer's or Craftech's clip. The others have been tweaked with color curves/levels. Incidentally the Fijian singers were shot through my Bolex Aspheron lens, which explains the slight softness in the corners.

So, feedback so far doesn't seem particularly against just doing a simple Vegas downsize. Would be interesting to hear what someone who has something like a big old Trinitron might have to say.
johnmeyer wrote on 5/1/2011, 11:06 AM
"Would be interesting to hear what someone who has something like a big old Trinitron might have to say."

I'm out of town, but I do have a Sony WEGA CRT at home. The DVD player is an older (not progressive and not capable of up-res'ng) model connected via component. Between John Dennis and me, we should be able to provide some feedback (in my case, not until Wednesday).
amendegw wrote on 5/1/2011, 5:37 PM
"I'm out of town, but I do have a Sony WEGA CRT at home. The DVD player is an older (not progressive and not capable of up-res'ng) model connected via component."

Ditto, ditto & ditto - I'm out of town, have a Sony Trinitron & a non-progressive DVD player. Maybe by Wednesday I'll be able to test.

...Jerry

PS: Also have a 52" Samsung LCD with BDP1600 Blu-Ray player. I'll also test on that.

System Model: Alienware Area-51m R2
System: Windows 11 Home
Processor: Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz, 3792 Mhz, 8 Core(s), 16 Logical Processor(s)
Installed Memory: 64.0 GB
Display Adapter: NVIDIA GeForce RTX 2070 Super (8GB), Nvidia Studio Driver 527.56 Dec 2022)
Overclock Off

Display: 1920x1080 144 hertz
Storage (12TB Total):
OS Drive: PM981a NVMe SAMSUNG 2048GB
Data Drive1: Samsung SSD 970 EVO Plus 2TB
Data Drive2: Samsung SSD 870 QVO 8TB

USB: Thunderbolt 3 (USB Type-C) port Supports USB 3.2 Gen 2, DisplayPort 1.2, Thunderbolt 3

Cameras:
Canon R5
Canon R3
Sony A9

NickHope wrote on 5/1/2011, 9:13 PM
Thanks guys, looking forward to your feedback.

I'm also happy to take requests for other methods using the same project. I had wanted to include a simple Vegas MainConcept render, and also a "Reduce Interlace Flicker" render, but decided to leave it at just 4 renders to keep the download file size more reasonable. So if anyone thinks "Why hasn't he done it such and such a way?", then ask me and I'll do it in version 2!
GS1966 wrote on 5/3/2011, 7:25 AM
My two cents below.

craftech wrote on 5/3/2011, 7:55 AM
I tested the DVD on two different setups.

1. Ten year old Panasonic standard DVD player connected via RCA composite to a 10 year old 19 inch Toshiba CRT television.

2. 12 year old Panasonic standard DVD player connected via S-Video to a two year old 42 inch Panasonic Plasma television. No enhancement on TV. Set to factory default Panasonic calls "standard".
-------------------------------------------------------------------------
Results:

On the menu the black box with white writing on the left was almost impossible to read on the CRT, but easy to read on the Plasma.
-------------------------------------------------------------------------------------
The Belle Nuit test card exhibited twitter on the CRT as follows:

Test 1 - Boxes 2 & 3: a lot of twitter. Box 4: some twitter.
Test 2 - Box 1: twitter. Box 3: some twitter.
Test 3 - Mostly no twitter. A little on Box 2.
Test 4 - Twitter on Boxes 1 and 3. Slight twitter on Box 4.

On the Plasma there was no twitter at all so I rated the apparent resolution:

Test 1 - Box 1: Poor. Right half of Box 2: Poor.
Test 2 - Box 1: Poor. Box 3: Lossy.
Test 3 - Box 2: Lossy.
Test 4 - Boxes 1 & 3: Lossy. Box 4: slightly lossy.
--------------------------------------------------------------------------------------
For the video tests it was difficult to get a really good qualitative rating for each, but here goes:

A. Island Dancers B. Hula Doll C. Underwater Scene 1 D. Underwater Scene 2
E. Highway Scene F. Ballet G. Musical

CRT Tests:

1A: Good 2A: Good 3A: Better 4A: Good
1B: OK 2B: Good 3B: Better 4B: Good
1C: Good 2C: Good 3C: Very Good 4C: Good
1D: Very Good 2D: Very good 3D: Very good 4D: Very Good
1E: Good 2E: Better 3E: Very Good 4E: Good
1F: Good 2F: Good 3F: Very Good 4F: Good
1G: Good 2G: Better 3G: Very Good 4G: Good

Plasma Tests

1A: OK 2A: OK 3A: Better 4A: Good
1B: OK 2B: Better 3B: Better 4B: Good
1C: OK 2C: OK 3C: Very Good 4C: Good
1D: Good 2D: Good 3D: Very Good 4D: Very Good
1E: OK 2E: Better 3E: Very Good 4E: Good
1F: Good 2F: Better 3F: Very Good 4F: Very Good
1G: OK 2G: Better 3G: Very Good 4G: Good

In summary, the only one that appears to work really well for both CRT and Plasma viewing is Method 3. It struck the right balance for both.

Just a thought. Nick:

Going back and forth isn't that easy the way the menu is set up. If you do it again it might be better to put them in the order I rated them above.

Test 1: A, B, C, D, E, F, G etc

Thanks again for doing this Nick. Really appreciate the work.

John
amendegw wrote on 5/3/2011, 7:56 AM
Okay, here's what I see from my testing. First, the bottom line - my vote is for Test 3 as the best quality.

To my eyes, the real-life clips looked very good in all four versions. The Hula Dancer clip had particular problems with flicker/moire in test 1, but I was quite surprised with the quality in tests 2 & 3. I did see some jaggies in the highway stripes in the stringer video in all versions - worst in tests 1, 2 & 4; best in test 3. Also, I noticed some jaggies in the "wand" held by the diver in Test 2.

These conclusions resulted from testing on 3 platforms:
1) My Laptop LCD,
2) A Samsung 52" LCD - LN52A650 Blu-Ray BDP1600 upconverted to 1080p (HDMI), and
3) Trinitron Interlaced CRT KV2729R w/ S-video connection. The Trinitron had the poorest quality display as well as the most flicker (see below).

The biggest difference between tests appeared in the Belle Nuit charts. Here are screenprints from my Laptop. The Laptop did not exhibit any flicker (nor did the Samsung LCD), but the Interlaced Trinitron displayed dramatic flicker, reflected in the comments below.


Test 1 (above) Lots of flicker in panels 2 & 3 on the Trinitron Interlaced CRT


Test 2 (above) Lots of flicker in panels 1 & 3 on the Trinitron Interlaced CRT


Test 3 (above) No flicker on panels 1-4 on the Trinitron Interlaced CRT


Test 4 (above) Lots of flicker in panels 1 & 3 on the Trinitron Interlaced CRT

...Jerry

PS: Nick, thanks so much for putting in the effort to prepare these tests!


john_dennis wrote on 5/3/2011, 3:15 PM
Using the disc that I burned previously, I tried this on a CRT device from a DVD player that was built before upscaling was cool.

The playback devices are:

DVD player: Sony DVP-S500D, output Component Video

CRT: JVC 32 inch CRT Picture adjustment set to 16x9 so the video is letter-boxed, viewing distance ~9 feet.

1) Belle Nuit Chart _ Left Side of section 2 exhibited significant flicker. 16-235 test extremely slight banding on the darker shades. Slight shimmering on the amendegw video.

2) iResize: Belle Nuit Chart _ sections 1 & 3 grayscale flickering; less flickering in section 1 (red). 16-235 test more pronounced banding than test #1 in the darker shades.

3) TDeint _ Belle Nuit Chart was rock steady. 16-235 test slight vertical banding.

4) Sony Z1P: Belle Nuit Chart sections 1 & 3 exhibited significant flickering.

The difference in the videos was still unremarkable to me. I agree that, from the Belle Nuit chart results, the TDeint method appears to produce the best measured result. Based on the time to render and my inability to see the difference, it probably wouldn't be worth the effort to me.
johnmeyer wrote on 5/3/2011, 9:28 PM
#3 looked best on my Sony WEGA 36" CRT, from an old DVD (non up-res'ng) player, using RGB component connection. Most of this was the absence of any twitter, and excellent handling of the doll torture test.

I'll look at it again tomorrow when I'm not jet-lagged.

I think that most of the improvement comes from doing the multiple passes at changing the resolution. Doing passes that are precisely 1/2 resolution gets rid of ALL of the approximations that end up causing the problems. The remaining re-scaling requires approximating a very small pixel shift, and this apparently results in fewer artifacts. I'm not sure of the math behind this, but it certainly seems to produce a great result. Obviously the numbers in this script will have to be changed for each and every situation in order to ensure that the first re-size is exactly one half, but still more than the final target resolution.

I'm not sure what QTGMC is doing in the context of this script, because TDeint has already created the intermediate fields. Also, the absence of line twitter may be entirely caused by the "low-pass filtering" line:

p2.blur(0,1).sharpen(0,.51).blur(0,1).sharpen(0,.85)

It would be interesting to apply this near the end of some of the other scripts, or to remove it from the #3 script and see what the results look like.

FWIW, the only clips that I think are really going to stress things are the doll clip (of course), the ballet dancer, and the driving clip. I'm not sure the stage production will demonstrate much difference in detail. The "clapping" clip may help find motion estimation artifacts, and will certainly help detect color shifts. The underwater clips, while pretty, don't stress things very much.
NickHope wrote on 5/3/2011, 11:22 PM
Thanks to everyone who has given feedback. Nice to see some consensus over the preferred render (test 3). I'll concentrate efforts on developing this as the "perfectionist's" method for quality-critical jobs. I want to use SmoothLevels to get rid of the banding, but it only supports YV12, not YUY2 yet, so I'll have to find a way around that or an alternative to it. We can't currently use YV12 in the script as there is a chroma shift problem during re-interlacing.

I'm not sure what QTGMC is doing in the context of this script, because TDeint has already created the intermediate fields.

It's apparently to reduce horizontal shimmer. From the QTGMC help file:

"Can remove horizontal shimmering effects from progressive sources. Experiment with InputType=1, 2 or 3 for best results. FPS will not be doubled. This script is designed for deinterlacing and so by default expects an interlaced clip. However, much of its operation concerns the reduction of horizontal shimmering. It is possible to use the script to remove similar shimmer from a progressive clip by using the InputType setting. InputType=1 is used for general progressive material that contains less severe problems."

Also, the absence of line twitter may be entirely caused by the "low-pass filtering" line:

Not sure how Didée arrived at those numbers (I'll ask him) but they certainly do the trick. I also tried some scripts containing the slightly simpler code below, which does 2 stages instead of 4. It left a little moire on the doll background but may be less lossy and therefore more appropriate for many less demanding jobs:

# input bobbed and resized source (by whatever method)...

# reduce line twitter on interlaced displays:
blur(0.0,1.0) # vertical blur
sharpen(0.0,0.5) # vertical sharpen

# re-interlace:
assumetff()
separatefields()
SelectEvery(4,0,3)
Weave()
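To make the re-interlace idiom concrete (my illustration, using hypothetical frame labels): after SeparateFields() on double-rate progressive frames, the fields alternate top/bottom per frame, and SelectEvery(4,0,3) keeps the top field of even frames and the bottom field of odd frames, so Weave() reassembles half-rate interlaced frames.

```python
# Sketch of the SelectEvery(4,0,3) re-interlace idiom: after
# SeparateFields() on 60p frames, fields arrive as
# [frame0-top, frame0-bottom, frame1-top, frame1-bottom, ...].
# SelectEvery(4, 0, 3) keeps indices 0 and 3 from each group of 4.
def select_every(items, n, *offsets):
    out = []
    for start in range(0, len(items) - n + 1, n):
        out.extend(items[start + off] for off in offsets)
    return out

fields = [f"frame{i // 2}-{'top' if i % 2 == 0 else 'bottom'}"
          for i in range(8)]
kept = select_every(fields, 4, 0, 3)
print(kept)  # top fields of even frames, bottom fields of odd frames
```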
johnmeyer wrote on 5/4/2011, 10:08 AM
I looked again (same setup I reported above), but this time when I was fresh.

I have slightly different results to report.

I have decided that #3 may not be that great after all. As I suspected, I was being influenced by the lack of twitter on the test chart and doll background. Both of those are due in large part to the nature of interlaced video, rather than the specific problems of downsizing.

On this second viewing, I instead concentrated on the slash through the zero on the test chart; on the horizontal lines in the cloth behind the doll; and on the bright sparkly dots on the horizontal skirt of the ballerina's tutu. What I found is that #1 and #4 did a better job on these things. The horizontal lines in the cloth behind the doll are especially interesting. The moiré patterns are so overwhelming that when a particular method reduces them, there is a tendency to acknowledge that method as superior. However, if you can no longer distinguish lines that were there in the original (look especially in the upper left quadrant of the frame), then some detail has been lost. Since the moiré is the result of using a somewhat pathological case (meaning that B&W cross-hatched clothing is something that seldom shows up in most video), perhaps it shouldn't be taken as an indication of success or failure of a particular technique. As has been pointed out in other posts, NTSC video has certain limitations (as does any other video technology) and as a result, we have long asked talent not to wear pure white shirts, and also to avoid clothing with cross-hatch or herring-bone patterns.

It would be useful to be able to look at each individual clip one after another. I think this is doable with a playlist. I just tried, and I was able (in DVD Architect) to make a copy of each MPEG-2 file, assign that copy a different name, edit that copy on the DVDA timeline to just include a portion of the video, and then take all these copied, edited MPEG-2 files and put them into a playlist. This way, you can look at all the doll clips, one after another, for instance. You have to count (1,2,3,4 ...) to know which method you are looking at, but I think it would help to not have to remember everything about the previous clip after seeing so much other stuff.

Finally, I'd sure like to have a fifth method, which would be a reference encoding using nothing but the native Vegas tools. I understand wanting to reduce as many variables as possible, and that this decision leads to using the CCE encoder for everything, but since the original objective was to improve on what can be done in Vegas, it seems like that benchmark needs to be included.

I once again applaud all the work Nick has put into this, and even if he doesn't want to do any more, I think we are all better off having these alternative workflows to choose from.

amendegw wrote on 5/4/2011, 10:50 AM
I also had a second look. I alluded to jaggies in my comments above and I'd like to change my testimony (is that allowed?). Worst in Test 1. Tests 2, 3 & 4 all seem about the same (although Test 2 might be the best).





...Jerry


craftech wrote on 5/4/2011, 12:18 PM
This test is pointing out what I had hoped for: an insight into which criteria each individual evaluating it finds important. That is what is so difficult with this.

For example, in my subjective analysis above, I looked at each one many times on both the CRT and the Plasma. I tried to pretend I was the target audience viewing it.
A: Island Dancers: their faces and their dress
B: Hula Doll: the doll. The moire patterns would probably be there even if I watched something like that on a broadcast so I was tolerant of it unless it was totally distracting.
C: Underwater Scene 1: The most general because of the abundance of these on Discovery Channel, etc. Pretty with great colors.
D: Underwater Scene 2: Same
E: Highway Scene: The location and the overall clarity. Since the aliasing was there for every render I accepted it and looked at the rest of the scene.
F: Ballet Scene: The dancer's face and her performance. I want to see the kids or performers in the show, but mostly I want to be able to recognize their faces despite the lighting especially if I am a parent.
G: Stage Show: Same as the ballet.

What I counted the least, but commented on anyway was the chart. It's a still shot.

I also know the DVDs will be viewed on both CRT and LCD or Plasma so I wanted the best balance between the two. When I looked over my subjective evaluations and averaged them, number 3 seemed the best overall within my criteria for both types of TVs.

John
farss wrote on 5/4/2011, 3:53 PM
Depending on what you viewed the outcome on, and how, you should be seeing "jaggies".
Later versions of Vegas have a problem that nearly caught me out in some of my tests: when you capture a frame as a still, it only captures one field.

Bob.
craftech wrote on 5/4/2011, 6:05 PM
Nick,

Is there more detail regarding how you did Method 3 in terms of settings for the plugins, etc. posted somewhere on the net? Also, any recommendations for how to use the multi-threaded version of AviSynth? I'd like to try that method on an hour-and-20-minute video.

Thanks,

John
johnmeyer wrote on 5/4/2011, 8:47 PM
I'll let Nick answer about the plugins and settings.

As far as using the multi-threaded version of AVISynth, you have to download the MT version and replace the avisynth.dll in your WINDOWS\SYSTEM32 folder with this new version (rename the old one so you can switch back). I've never been sure whether you need to re-boot after you do this, but I usually re-boot, just in case.

Then, in your script, you make it multi-threaded as follows:

SetMTMode(5,0)
AVISource = ...
SetMTMode(2)

In the first line, the "0" tells AVISynth MT to use ALL your cores. If you only want to use, say, four cores, then you would use:

SetMTMode(5,4)

The second SetMTMode statement tells AVISynth which level of multi-threading to use. I haven't read the documentation for a long time, so I don't remember the differences between the various modes, but Mode 2 is one of the fastest modes, and I've seldom had problems with it.

Not all AVISynth plugins will work with multithreading. In my 2+ years of using the MT version I have run into two problems, although neither has happened often. The first is instability (crashes). I did have this problem with the complex QTGMC script and it was this problem which forced me to change from the 2.5.8 MT version I was using to a 2.6 MT version. The 2.6 version seems both better and faster, but it will not read RGB24 from the Vegas frameserver, so sometimes I switch back to the older 2.5.8 version.

The other problem I have had a few times is that the multi-threading fails to split frames accurately between cores. This leads to skipped frames, or frames that aren't processed. You will see this pretty quickly if you simply walk through a few dozen frames on the VirtualDub timeline (I usually preview the results in VirtualDub even if I serve into something like MeGUI or the standalone MainConcept MPEG-2 encoder for my final render).

There is also a second way to use multi-threading which involves using an MT statement within the script, and making a function call using this statement. It always seemed like an inferior way to go, although apparently it will work in situations (usually older plugin DLLs) where the SetMTMode approach fails.

Hope that helps!
NickHope wrote on 5/4/2011, 9:09 PM
It would be useful to be able to look at each individual clip one after another.
I agree. I'll do it that way on version 2. I'll probably join the differently-rendered clips back together as they should smart render.

Finally, I'd sure like to have a fifth method, which would be a reference encoding using nothing but the native Vegas tools.
I agree, and indeed a Vegas-only result was in there right until the last minute, when I kicked it out to keep the file size down. For a Vegas-only render, please let me have your votes for including one or more of: a) no sharpening b) sharpness 0, or c) unsharp mask "light". My own preference when viewing the results was for no sharpening but I know that adding a little sharpening is popular with some of you.

I'll drop the in-camera conversion for future tests. I just wanted to throw that in to address the opinion that occasionally comes up that cameras do the best job of down-scaling. It apparently doesn't do a noticeably better job, at least in the case of my Z1, and it is very time-consuming.

I'll also drop or change one or both of the underwater clips, as I can see that they're not so useful in making distinctions since, as John M points out, the water itself is acting as a low-pass filter. I really just wanted to make sure I got to see how some of my own typical footage was going to look.

I had some conversation with Didée about the blurring/sharpening. He added the extra blur/sharpen specifically to deal with that horrible background. A single blur>sharpen should suffice for most jobs, with 0.75 a good starting point for the vertical sharpening:

blur(0.0,1.0) # vertical blur
sharpen(0.0,0.75) # vertical sharpen
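A toy sketch of why the vertical blur tames line twitter (the 3-tap kernel here is an illustrative low-pass of my own, not AviSynth's exact Blur()/Sharpen() implementation): detail that alternates every single scanline, which flickers badly on an interlaced display, is flattened almost completely.

```python
# Illustration: a vertical low-pass flattens one-line alternating detail,
# the worst case for interlaced twitter. Kernel is illustrative only.
def convolve_vertical(column, kernel):
    # Simple 1-D convolution with edge clamping.
    k = len(kernel) // 2
    out = []
    for i in range(len(column)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - k, 0), len(column) - 1)
            acc += w * column[idx]
        out.append(acc)
    return out

blur = [0.25, 0.5, 0.25]   # illustrative 3-tap low-pass
twitter = [0, 255] * 4     # pattern alternating every scanline
print(convolve_vertical(twitter, blur))  # interior values collapse to 127.5
```

The follow-up sharpen() then restores some of the broader vertical detail that survived the blur, while the one-line flicker stays suppressed.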


Is there more detail regarding how you did Method 3 in terms of settings for the plugins, etc. posted somewhere on the net. Also recommendations for how to use the multithread version of AviSynth.

Download TDeint from here. There is an html guide in the download. "mode=1" simply puts it into "bob" mode (= separate frames / double height / double framerate). Other bobs that I tried were Bob(), Yadif(mode=1) and QTGMC("Faster",Sharpness=0). I thought TDeint(mode=1) was probably the sweet spot in terms of quality/speed.

There is guidance about AviSynth/QTGMC etc. in my web video guide. The extra "InputType=1" here makes it expect progressive frames, which is what we have after bobbing.

There is a section about multi-threading here. I haven't really done it myself, since my quad-core is out of action, so it's based on what John M and others have found. Be careful about which versions of things you install, and keep good records of which combinations of filters give you stability on your particular system, and of their speed.

There is plenty of QTGMC stuff in the doom9 QTGMC thread. The specific discussion regarding HD>SD starts with this post. Beware, it's a very multi-threaded thread, and quite difficult to follow.
NickHope wrote on 5/5/2011, 12:21 AM
Here's a commented version of method 3, but by default with a little less anti-twitter filtering, and with the multi-threading code added. You need to replace values M, X and Y, depending on your machine. I've rewritten it as sequential stages to make it easier to comment and follow.

# High quality HD to SD conversion script. It is slow.
# Retains detail but reduces artefacts such as line twitter on CRT displays.

# Set maximum memory.
# Setting M:
# - First try without the SetMemoryMax line
# - However, using the SetMemoryMax line and a good value for
# M might allow more threads and so give more speed.
# Particularly important for slower QTGMC settings
# - Try values 400,600,800,1000 etc.
# SetMemoryMax(M)

# Set multi-threaded mode.
# Setting X:
# - Start at the number of cores in your machine.
# - If it crashes, decrease 1 at a time.
# - Otherwise increase 1 at a time until CPU usage is very,
# very close to 100%, don't go too far or it will slow down.
SetMTMode(5, X)

# Open frameserved source. Frameserve in YUY2.
# Change path and file name as appropriate.
AviSource("d:\fs.avi")

# Expand levels from [16,235] to [0,255]
# Otherwise DVD will have insufficient contrast.
# Counteracts the levels squeeze when the Frameserver converts from RGB to YUY2.
# Omit if you conformed levels to full range in your NLE.
ColorYUV(levels="TV->PC")

# Assume footage is top field first.
# If it's bottom field first then use AssumeBFF.
AssumeTFF

# Set multi-threaded mode.
SetMTMode(2)

# Bob with a bi-directional, motion-adaptive, sharp deinterlacer.
# Produces double-rate, progressive, full height output.
TDeint(mode=1)

# -----
# Here one could save a lossless intermediate
# (e.g. YUY2 Lagarith avi in VirtualDub)
# Then do the remainder in a separate script.
# -----

# Resize to double NTSC
bicubicresize(1440,960)

# Reduce width with sharpening
bicubicresize(720,960,-0.8,0.6)

# Reduce height with sharpening
bicubicresize(720,480,-0.8,0.6)

# Temporally smooth / remove horizontal shimmering
# Setting Y:
# - Start at about half number of cores and tweak upwards
# or downwards for best speed. Y=1 often works well.
# - Balance this setting with X (i.e. if you increase X,
# you might need to decrease Y and vice versa).
QTGMC(TR0=1,TR1=1,TR2=2,InputType=1,EdiThreads=Y)

# Reduce line twitter on interlaced displays
blur(0.0,1.0) # vertical blur
sharpen(0.0,0.75) # vertical sharpen

# Replace the above 2 lines with the following 4 lines
# for more low-pass filtering
# blur(0.0,1.0) # vertical blur
# sharpen(0.0,0.51) # vertical sharpen
# blur(0.0,1.0) # vertical blur
# sharpen(0.0,0.85) # vertical sharpen

# Re-interlace:
assumetff()
separatefields()
SelectEvery(4,0,3)
Weave()

# This line may or may not be necessary.
# Try removing it and see if you get more speed.
Distributor()