AVCHD to MPEG-2 (DVD) - final output too soft

Christian de Godzinsky wrote on 12/7/2009, 11:54 AM
Hi,

First, my sincerest apologies. This issue might have been discussed a zillion times before, and under different headings...

Simply put - pristine AVCHD source material (1920x1080 50i PAL) looks softer than good-quality SD source material (720x576 50i PAL) when rendered out to DV (720x576 50i PAL) in Vegas 9.0c.

Believe me, I have tried all settings and their combinations - with no satisfactory results. SD rendered out to SD always looks sharper, even if the source HD material is better than the SD source material. I am fully aware of the fact that deinterlacing must be selected when downsizing... as well as Best quality and yada yada...

Can some pro user here be so kind as to publish his/her settings???

How do you get professional-looking results rendering from HD to SD in Vegas???

The new tick mark "prefer quality over speed" seems not to have any impact on the quality (sharpness) of the final result.

I seriously think that there is something seriously wrong with the MainConcept encoder inside Vegas. It is not up to par when it comes to quality. Or is it Vegas that runs havoc with the downsampling before the encoding?

This drives me nuts. Is it so that I am forced to use 3rd party SW to get professional results from a professional software (Vegas Pro)??? And what would that application be?

Any constructive feedback is more than welcome :)

Christian



WIN10 Pro 64-bit | Version 1903 | OS build 18362.535 | Studio 16.1.2 | Vegas Pro 17 b387
CPU i9-7940C 14-core @4.4GHz | 64GB DDR4@XMP3600 | ASUS X299M1
GPU 2 x GTX1080Ti (2x11G GBDDR) | 442.19 nVidia driver | Intensity Pro 4K (BlackMagic)
4x Spyder calibrated monitors (1x4K, 1xUHD, 2xHD)
SSD 500GB system | 2x1TB HD | Internal 4x1TB HD's @RAID10 | Raid1 HDD array via 1Gb ethernet
Steinberg UR2 USB audio Interface (24bit/192kHz)
ShuttlePro2 controller

Comments

musicvid10 wrote on 12/7/2009, 12:14 PM
Your project settings are Best, Blend, correct?


I've experienced the same behavior, and I don't think it is Vegas' encoders in particular, because I've tried it in other converters with no better results. AVC to MPEG-2 transcodes just seem soft to me.

One thing I have been experimenting with is rendering a CQ progressive with Decomb in the new version of Handbrake, and then rendering as MPEG-2 progressive in Vegas. Some of it looks promising, but I haven't been able to do any real comparisons, yet.
Christian de Godzinsky wrote on 12/7/2009, 12:38 PM
Yes, my project settings ARE Best & Blend. I have also tried selecting the fixed and highest allowable bitrate for the video output - no difference in sharpness. It seems the problem is either in the downscaling or the encoding; the lack of sharpness is not due to lack of bit bandwidth. Again, why is the SD source giving such sharp output through the same process???

Christian


johnmeyer wrote on 12/7/2009, 12:44 PM
This is what I do for HDV that I am rendering to SD DVDs. I assume it will do the same job for AVCHD.

Put the Sony Sharpen fX onto every AVCHD event, and set it to ZERO. Even when set to zero, it still provides a touch of sharpening (at least this was true in 7.0d and 8.0c -- I don't use 9.x).

You can also assign this fX to the media in the media pool (which may be faster than assigning it to dozens of events). Or, you can assign it to the track or to the output bus (see the fX button on the Preview window), but only if you don't have any other media (photos, SD footage, generated media, etc.). You really only want to sharpen the AVCHD, and nothing else.

I have found that this completely solves the "soft" problem. Make sure to render using Best when doing this. The only downside is that the Sharpen fX is slow, so it does add quite a bit of time to the render.
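For context on why John's zero-setting trick is so surprising: a textbook unsharp mask with its amount set to zero is a mathematical no-op. Whatever Sharpen fX does internally at 0.000 is undocumented; this pure-Python 1-D sketch (a hypothetical stand-in, not the actual fX code) shows the textbook operation, which should leave the image untouched at zero:

```python
# 1-D unsharp mask sketch: sharpened = original + amount * (original - blurred).
# This is NOT the internal algorithm of Sony Sharpen fX (that is undocumented);
# it is the textbook operation, shown to illustrate that a true "zero" amount
# should be an identity.

def box_blur(signal, radius=1):
    """Simple box blur with edge clamping."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - radius)
        hi = min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def unsharp_mask(signal, amount):
    blurred = box_blur(signal)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

edge = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]    # a soft luma edge
assert unsharp_mask(edge, 0.0) == edge    # amount 0 changes nothing
boosted = unsharp_mask(edge, 1.0)         # over/undershoot appears around the edge
```

Since the fX visibly sharpens at 0.000, it is evidently doing something beyond this textbook formula, which is exactly the mystery discussed below in the thread.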
farss wrote on 12/7/2009, 1:50 PM
Generally a good idea to add the sharpening after downscaling, or you can add further to aliasing problems.

Bob.
Christian de Godzinsky wrote on 12/7/2009, 2:01 PM
John,

Thank you a zillion for pointing out this trick!!!

Should I hug you now or immediately???

Just amazing how simply adding the Sony Sharpen FX (with zero as the setting) to the clip makes a huge difference!!! With the filter in place the output looks as sharp as I always expected it to be!!!

Now comes the tricky questions:

1. Why on earth must we add a sharpening filter to keep the quality of HD source material as good as source SD material?

2. Why on earth does this filter do what it does - even if the setting is ZERO? What is going on?

Well, if the result is fine - why should I bother to know why it works? Still, this leaves me with an uncertain feeling about what is going on inside the Vegas rendering engine. Do the guys at SCS even know how it works?

Highly appreciated. This trick (of yours?) should be a sticky!!!

BTW - does it also work if the sharpen FX is assigned to the track or media level containing the HD material???

It would be really interesting to get a comment from SCS about this. So many people have complained about the lousy downconverting results (hd to sd). Should be in their interest to guarantee best output quality in a Pro product... This trick really works!!!

Cheers,

Christian


farss wrote on 12/7/2009, 2:21 PM
" Why on earth must we add a sharpening filter to keep the quality of HD source material as good as source SD material?"

Because SD material has a lot of sharpening added to it by the camera. That's one attribute of the "video" look.

When you downscale HD to SD you bypass the sharpening process that happens in an SD camera. This is not something unique to Vegas.

Bob.
Christian de Godzinsky wrote on 12/7/2009, 11:13 PM
Hi Bob,

Ok, I accept the fact that there is some internal "peaking" added to all SD material already during the recording phase. This is, btw, true also for earlier analog recordings... And HD material has such a high spatial sampling frequency that additional peaking is probably not required during recording to emphasize sharpness...

However, how do you explain that SD material looks fine on the timeline, and HD material even better (in Best quality preview)? AFTER the render it is just the opposite: the HD material looks softer than the SD material, which still looks the same after the mpeg2 encoding???

Shouldn't I see this difference on the timeline in the preview? Does not ring my bells, this one...

And still - why is Sony Sharpen fx sharpening even if the setting is 0.00??

My wild guess is that the sharpen filter somehow upscales whatever material you throw at it - just before the sharpening process. Due to the upscaling (even if the setting is 0) the following downscaling to SD works better? However, this is just a guess...

Still I think that something is not as it should be in the downscaling of HD to SD. Probably I'm not alone with this feeling... Chime in if you agree...

Christian


farss wrote on 12/8/2009, 2:09 AM
The answer to most of your questions would have me typing until the new year.

I deal with many people and visit many fora; many, many people seem to have exactly your problem regardless of which NLE they use. I think we can take the NLE out of the picture.

"However, how do you explain that SD material looks fine on the timeline, and HD material even better (in best quality preview)? AFTER the render it is just the opposite, HD material looks softer than the SD material - that still looks the same after the mpeg2 decoding???"

In general because the SD material is sharper. Resolution and sharpness are not the same thing, and there are many other factors that determine how we react to images. I've seen NHK's 8K and to be honest it really wasn't that impressive. I'd rather watch a 35mm print, and one is 8,000 lines and the other 700 lines of resolution.

"And still - why is Sony Sharpen fx sharpening even if the setting is 0,00?? "

Now that is a very good question that I do not have an answer to. I'll try to delve into that but I'm kind of snowed under at the moment. On the other hand, if I shoot 1080p or even 720p I have to do the exact opposite if I'm delivering 50i on an SD DVD. I have to add Gaussian Blur to reduce the resolution of the HD source, otherwise I get problems with aliasing and/or line twitter. After the downscale I may add some Unsharpen Mask to get some harder edges into the image.

Are you editing on a HD timeline and from that downscale to SD and rendering directly to mpeg-2?

Bob.
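Bob's blur-before-downscale advice is standard anti-alias filtering. This pure-Python 1-D sketch (a toy stand-in for the 2-D Gaussian Blur / downscale chain he describes, not anyone's actual pipeline) shows why it matters: detail at the source's Nyquist limit turns into a false pattern if you decimate without filtering first.

```python
# Why pre-blurring before a downscale matters: a 1-D decimation sketch.
# A one-pixel on/off pattern sits at the source Nyquist limit. Taking every
# other sample without filtering keeps only the "off" samples (aliasing),
# while averaging neighboring samples first represents the unresolvable
# detail as mid-gray, which is the correct SD rendition.

def decimate(signal):
    """Naive 2:1 downscale: keep every other sample, no filtering."""
    return signal[::2]

def blur_then_decimate(signal):
    """Average each pair of samples before dropping to half rate."""
    return [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]

fine_detail = [0.0, 1.0] * 4            # alternating black/white pixels

print(decimate(fine_detail))            # detail aliases to flat black
print(blur_then_decimate(fine_detail))  # detail becomes uniform gray, no alias
```

The same logic is why Bob adds Unsharpen Mask only after the downscale: sharpening first would boost exactly the frequencies the anti-alias blur needs to remove.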



Porpoise1954 wrote on 12/8/2009, 6:02 AM
There's a whole wealth of info on DVInfo.net about this issue but this is the one that has produced the best results for me. DVINFO

Bear in mind, though, that all my footage is first converted to Cineform intermediates before I even start editing. Then the finished edit is rendered to Cineform .avi 1920x1080x25p (PAL) as the master. DVDs are then created by resizing in VDub and bringing that into DVDA.

This produces the best quality DVDs I've managed so far - even over 2 hrs duration on DL discs - I've certainly not seen any commercially produced DVDs with much better quality results (and I've seen a lot worse).

OK, it's a much longer workflow, but if ultimate quality is the criterion, then it's worth it - to me at least.
vtxrocketeer wrote on 12/8/2009, 10:06 AM
Porpoise, I've adopted *exactly* the same workflow as you, thanks to that DVInfo thread you linked to. I tried in one variation by adding Sony Sharpen (at 0.000) to my Cineform SD master avi *after* the downsize in VDub, and just before the compression to MPEG-2 for DVDA. Why? Because of advice here.

Yikes! My finished film looked WAY too video-like; harsh, almost. Not doing that again. Maybe it had something to do with my source material (HDV 24p from Canon XH-A1), other effects, or phase of the moon. I just didn't like the overly crisp output (think Spanish-language soap operas or Sunday morning talk shows on steroids).

My SD films on DVD using that workflow -- minus sharpening -- look absolutely stunning on my 52" LCD (uprezzing Blu-ray/DVD player). A friend even asked if my plain old DVD was a Blu-ray. I shook my head; he was surprised.

$0.02,
Steve
ECB wrote on 12/8/2009, 10:31 AM
If you are going the Cineform route it is not necessary to use VDub for the resizing. You can resize using the CineForm importer's own Lanczos 3 scaler and it will give you exactly the same results. Read about it here.

-ed
vtxrocketeer wrote on 12/8/2009, 10:58 AM
Ed, I hadn't thought of that before. VDub was touted because of its excellent rescaler. So the same or better quality resizing can be had by rendering an HD avi master to an SD avi master from Vegas just by selecting the CF codec? That seems to be the point of the blog you linked to, i.e., skip VDub and stay in your own NLE because VDub's Lanczos 3 scaler is the same as CF's Lanczos 3 scaler that the NLE can access. I'll have to give it a try!

Steve

EDIT: NeoScene itself does not resize, but NeoHD does, according to specs at Cineform. Question: does simply choosing a CF codec for rendering in Vegas for the down-rez automatically trigger/invoke/use the Lanczos 3 scaler, no matter what flavor of Cineform product is installed? (I have NeoScene.)
farss wrote on 12/8/2009, 12:07 PM
Vegas at Best uses the bicubic algorithm. They don't get any better than that. Much of what you read on photography fora relates to upscaling, so be careful; one can waste a lot of time only to realise the guys in Madison did do their homework. It does pay, though, to invest some time in trying to understand video science rather than searching for some holy grail.

Bob.
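Since the thread keeps contrasting bicubic with Lanczos 3: the Lanczos kernel is just a sinc windowed by a wider sinc. This pure-Python sketch of the kernel (illustrative only, not CineForm's or VDub's actual implementation) shows its defining properties: unity at the center, zero at every other integer tap, compact support, and negative lobes, which are what give Lanczos its characteristically crisp downscales.

```python
import math

# Lanczos-3 kernel: sinc(x) * sinc(x/3) for |x| < 3, else 0.
# A resampler weights each output pixel by this kernel evaluated at the
# distances to nearby input pixels.

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def lanczos3(x):
    if abs(x) >= 3:
        return 0.0
    return sinc(x) * sinc(x / 3)

assert lanczos3(0) == 1.0          # unity at the sample point
assert abs(lanczos3(1)) < 1e-12    # zero at every other integer tap
assert lanczos3(3.5) == 0.0        # compact support: nothing past +/-3
assert lanczos3(1.5) < 0.0         # negative lobes produce the "snappy" edges
```

Bicubic kernels have the same general shape but weaker negative lobes, which is one plain-math reason Lanczos downscales can look subjectively sharper.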
Christian de Godzinsky wrote on 12/8/2009, 1:40 PM
Bob,

Please, don't wear out your keyboard for my sake; that was not my intention. However, many thanks for your good comments. Video science is complicated, but not totally incomprehensible. My goal has always been not to overload my brain with unnecessary data: just concentrate on the essentials, and try to understand how those things you depend on work. This issue is still a mystery for me, or alternatively I'm totally dumb!

The Vegas bicubic algorithm seems to be so-so. I am not looking for the holy grail, I'm simply trying to get professional results from a Pro software. Without special knowledge (or 3rd party SW) it seems to be (in this particular case) impossible. Unfortunately, I have no other video editing SW so I cannot compare. I believe you when you claim that some other SW has similar problems. All the more reason that this HD to SD downscaling problem must be sorted out.

The question still remains: why does the much sharper-looking HD material (compared to the SD material on the same timeline) blur more after a render to mpeg2??? The original SD still looks the same, but the HD looks very soft. This is still a total mystery for me, and it seems to be for many other people. You say this is because the SD material is originally sharper, even if the resolution is lower. How come, then, that the HD material looks sharper on my timeline compared to the SD material???

I have set my timeline (or project properties, that is) to match the source HD material. In a mixed project I usually set the project properties to exactly match the highest-definition video on the timeline. To my understanding, this is the recommendation. At least timeline playback is then very smooth. Then I render out to whatever format I need, usually SD or HD.

I have experimented with every setting within Vegas Pro 9.0c. It is still unclear what the correct way is to achieve properly sharp SD video out of HD (other than adding the Sony Sharpen FX). Don't get me wrong, I am grateful for the fx trick; it keeps me going for now. Still I think that this is just curing the symptoms of something that limits the quality under the hood... That's at least how I now see it. Please correct me if I'm wrong.

Thanks for all comments so far. This seems to be a hotter topic than I thought...

Christian


musicvid10 wrote on 12/8/2009, 2:26 PM
Although I am by no means an expert, I believe part of the answer to the mystery will be found by investigating dithering as is used in video downsampling. It is noise that is purposely added to create some "fuzziness" to the regressions that occur when four pixels are combined into one, for instance. Noise-shaping (or "smart dithering") uses the output of one regression to influence the input of the adjacent ones, reducing the randomness of the noise further. Without dithering, we might end up with say, a bright red or black pixel occurring on a thin gray line in the reduced image, which if happened in enough frames, would be quite annoying.

Unfortunately, adding dithering also adds some softening, as any still photographer or printer knows. It is unavoidable. Also, in many front-end video applications, we have little or no control over the type and amount that is applied. The addition of a slight amount of sharpening, as in John's technique, might draw the errors in just a bit tighter to an "ideal" fit (whatever that means), noticeably improving the edges and perhaps some detail.

I know this explanation is oversimplified, so please don't jump me on my math analogy. I also know this is not all that is going on, because there is added softening that happens when going from AVC to MPEG-2 at the same resolution and quality, that is not as readily apparent in other conversions, so it cannot be fully explained by pixel loss or dithering alone. I have run tests with different converters and codecs, and I too, haven't been able to get past it. It is probably something to do with the compression and GOP of AVC codecs, that maybe Nick or John will be able to expand on.
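musicvid10's dithering description can be illustrated with a minimal quantization sketch in pure Python. This is illustrative only; whether Vegas's pipeline dithers at all is disputed in this very thread, so treat it as a demonstration of the general trade-off (banding versus grain), not of what any particular encoder does.

```python
import random

# Quantizing a smooth ramp to a few levels produces visible banding; adding
# a little random noise before quantizing (dithering) trades the hard band
# edges for grain. Purely illustrative - not a claim about Vegas internals.

LEVELS = 4  # quantize the 0.0-1.0 range into 4 output levels

def quantize(x):
    return round(x * (LEVELS - 1)) / (LEVELS - 1)

def dithered_quantize(x, rng):
    noise = (rng.random() - 0.5) / (LEVELS - 1)   # +/- half a quantization step
    return quantize(min(1.0, max(0.0, x + noise)))

ramp = [i / 99 for i in range(100)]               # smooth gradient
banded = [quantize(x) for x in ramp]              # 4 flat bands with hard edges
rng = random.Random(0)                            # seeded for reproducibility
dithered = [dithered_quantize(x, rng) for x in ramp]

# Both outputs use the same 4 levels, but the dithered one averages closer
# to the true ramp over any neighborhood, hiding band edges as noise - which
# is also where the softening musicvid10 mentions comes from.
assert len(set(banded)) == LEVELS
```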
farss wrote on 12/8/2009, 3:24 PM
Dithering has its place in the video world to wrangle problems such as banding; I don't believe it has any relevance to spatial downscaling though. Vegas certainly doesn't dither video; however, most cameras are noisy enough for it not to affect most of us. CGI is a different matter.

Bob.
farss wrote on 12/8/2009, 4:57 PM
I've just finished cutting a stage show that was shot with a mixture of AVCHD and DV cameras, or at least the latter recorded in DV.

Indeed the footage from the DV cameras looks better once on a 16:9 SD DVD. No real surprises: the AVCHD is full of noise and horrid compression artifacts. Things always go downhill with compression, and the curve is very much exponential.

I'd suggest you try a simple test. Take a good HiRes still image or two and try encoding that for DVD using the same workflow as you've been trying with your AVCHD. This reduces the number of unknown quantities. If the outcome looks very good then you really need to get back to the camera.

We've got an el cheapo HC5 HDV camera and I have to say that despite stressing it with horrid stage lighting it has never produced footage as bad as what I've seen from these AVCHD cameras; there's something about the codec that means when it gets stressed it falls apart very badly. The HC5 footage under extremely bad lighting doesn't look too good, the saturation goes south and the noise levels can get a bit nasty, but if I crush the blacks just a little after setting up the camera properly it compresses very cleanly, so well that sometimes I really wonder why I spent $10K on an EX1. Only joking, I do love my EX1, but it shows how much you've got to spend to get a really good HD camera; I'd say it's still not as good as the old PD150 under duress.

Bob.
johnmeyer wrote on 12/8/2009, 6:34 PM
I decided to spend a little time on this and did some tests.

I used some AVCHD NTSC 29.97 interlaced video taken outdoors on a bright day, looking towards my garage. It has all sorts of diagonal issues on the roof tiles; detail in the pine tree; letters on packaging you can read; shadows inside the garage, etc. Here's a snap from the original HDV:



I rendered this test clip to MPEG-2 using the standard SD Widescreen DVD Architect template, and bitrate set to CBR 8,000,000. That was my benchmark.

I then rendered using the Sharpen fX on the output bus (as I recommended in my initial post in this thread), with the sharpening set to zero.

I next removed the fX and then frameserved the video through various AVISynth scripts into another instance of Vegas, where I rendered using the same MPEG-2 settings. The purpose of these AVISynth scripts was to re-size from the HDV resolution of 1440x1080 PAR 1.3333 to the 720x480 SD resolution with a widescreen PAR of 1.2121. Thus, the second instance of Vegas was simply encoding SD widescreen video to MPEG-2.

I then burned a bunch of test DVDs (using rewriteables) and viewed the result on a Sony 33" CRT interlaced monitor. I also lined up all the MPEG-2 files on timelines below the original HDV video and did A/B/C views on the Vegas preview screen, with the preview set to Best/Full.

My conclusions (and these are obviously specific to this particular test clip) are these:

1. The straight render from Vegas does indeed, to my eyes, look a little soft when viewed on an interlaced CRT monitor.

2. The render from Vegas after using the sharpen fX provides the most dramatic difference. It definitely has a "snappier" feel. However, the criticism noted by others in this thread that the result looks a little too much like "video" is absolutely correct. For those who use the "enhancement" circuits on their TVs or monitors, this will have a very familiar look and feel. It definitely did make the hard diagonal lines in the roof tiles twitter a bit. Despite those downsides, there was definitely an increase in not only apparent sharpness, but I felt I could see more detail.

3. When I started creating AVISynth scripts, things got pretty complicated. Therefore, I won't provide all the details of the almost ten different AVISynth scripts I used. However, one script definitely provided a real improvement over the straight render from Vegas (the one without the sharpening). It is enough of an improvement that it is probably worth doing, and has the advantage that it doesn't degrade the rendering time (unlike the sharpening fX which, on my pleasantly fast i7 computer took almost three times longer to render compared to my initial, baseline render).

Here's the simple AVISynth script I used:

# Script to downsize HDV to SD prior to encoding
# December 8, 2009

AVISource("e:\frameserver.avi")
assumetff().Bob(height=480)
LanczosResize(720,480)
assumetff().separateFields()
SelectEvery(4,1,2)
#SelectEvery(4,0,3) #TFF source - use (4,1,2) for BFF
Weave()


To use this, I "rendered" my project from Vegas using Debugmode's frameserver (using YUV output). I then opened my AVISynth script in VFAPIConv and created an AVI file (this takes less than half a second to create). I then imported this AVI into a second instance of Vegas, set the Vegas project properties for SD Widescreen, made sure the imported AVI had the correct PAR and field order (this is the one tricky thing in this workflow and you have to encode a test DVD to make sure you get it right). I then encoded using the MPEG-2 encoder in the second instance of Vegas.

So, as others have pointed out, the scaling in Vegas apparently is not being done as well as it could be. This seems to be the same thing that others have discovered and posted about at all sorts of other sites across the Internet. It definitely is something that the Vegas engineers should improve because it is clear that they are causing their customers to create less than optimal DVDs, and despite Blu-Ray, it is pretty clear that a lot of people are still creating DVDs, and are doing so from various types of HD sources.
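The SeparateFields/SelectEvery/Weave dance in John's AVISynth script exists because an interlaced frame is really two temporally distinct half-height fields; resizing it as a single frame smears motion between the two field instants. This pure-Python sketch (lists of rows standing in for frames, a toy model rather than AVISynth's actual internals) shows the separate/weave pair the script relies on:

```python
# Interlaced frames are two temporally distinct half-height fields.
# AVISynth's SeparateFields()/Weave() split and recombine them; here is a
# toy list-of-rows version showing the lossless round trip (top-field-first).

def separate_fields(frame):
    """Split a frame into (top_field, bottom_field) by row parity."""
    return frame[0::2], frame[1::2]

def weave(top, bottom):
    """Interleave two fields back into a full frame."""
    frame = []
    for t, b in zip(top, bottom):
        frame.extend([t, b])
    return frame

frame = ["row0", "row1", "row2", "row3"]
top, bottom = separate_fields(frame)
assert top == ["row0", "row2"] and bottom == ["row1", "row3"]
assert weave(top, bottom) == frame    # lossless round trip
```

In the script, Bob() first turns each field into a full-height frame so the Lanczos resize never mixes the two field instants, and SelectEvery then discards the redundant frames before Weave() rebuilds interlaced output - which is also why getting the field order (TFF/BFF) right is the one tricky step John warns about.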
farss wrote on 12/8/2009, 8:00 PM
I just tried a synthetic test.
I took the ISO 12233 res chart, converted and cropped it to a 1920x1080 BMP, dropped that into a 1920x1080 25p project, and rendered that to 16:9 PAL SD mpeg-2 at CBR 8M.

Results were what I have seen before: max res is around 550 lines, i.e. the trumpets touch somewhere between 500 and 600 lines. That's more than enough for SD PAL, literally.
The biggest problem is that the rendered file shows output at even 1,000 lines, obviously very seriously aliased, and there's major jitter on an interlaced TV.
This is a well known problem with all video downscalers; implementing an effective sinc function is impossible. However, I'm a bit surprised that it is still letting so much HF through.
What I did just notice, though, which is truly odd, is that the vertical res is higher than the horizontal, and this makes little sense at all. I'm hard pressed to award the image more than 350 lines of H res!

So another test: this time I rendered from 1080p to uncompressed 16:9 SD square pixels, which I assume is what Vegas passes to the MC encoder. Now we're cooking! The res chart pegs out so close to the resolution limit of the pixels there's nothing in it, except maybe I need better glasses or some such; I had to upscale the image on my 24" monitor to get a good look at it.

All I can conclude from my tests is that if anything deserves to be questioned about Vegas it's the MC mpeg-2 encoder. I'd love for others to try these tests though, the ISO chart is readily available for download from here:
http://www.graphics.cornell.edu/~westin/misc/ISO_12233-reschart.pdf

Bob.
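Bob's observation that the chart "shows output at even 1,000 lines" is classic fold-back aliasing: any frequency above the output Nyquist limit that survives an imperfect downscale filter reappears as a false lower frequency, which is exactly what reads as twitter on an interlaced TV. A pure-Python sketch of the fold-back arithmetic:

```python
import math

# Fold-back aliasing in one dimension: decimating 2:1 without a perfect
# low-pass filter maps a frequency f above the new Nyquist limit onto the
# false frequency FS/2 - f, where FS is the original sample rate.

FS = 100          # original sample rate (think: samples per picture width)
F_HIGH = 40.0     # detail frequency above the post-decimation Nyquist (25)

original = [math.cos(2 * math.pi * F_HIGH * n / FS) for n in range(FS)]
decimated = original[::2]     # naive 2:1 downscale, no filtering

# The decimated samples are indistinguishable from a genuine low-frequency
# signal at f_alias = FS/2 - F_HIGH = 10 cycles, sampled at the new rate.
alias = [math.cos(2 * math.pi * (FS / 2 - F_HIGH) * m / (FS / 2))
         for m in range(FS // 2)]
assert all(abs(a - b) < 1e-9 for a, b in zip(decimated, alias))
```

This is why a res chart can appear to "resolve" detail far beyond the real limit: the high-frequency trumpets are folding back into visible, but false, coarser patterns.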
UlfLaursen wrote on 12/8/2009, 9:33 PM
Thanks John and Bob - this is great learning stuff for me too.

Btw. Nice neighborhood John :-)

/Ulf
Christian de Godzinsky wrote on 12/10/2009, 1:34 AM
John, Bob,

Uhhh... you really did some extreme testing on this. Great! I planned to do something similar, but you saved me some effort. I would probably not have been able to perform such extreme testing myself, since I don't have all the tools or skills you guys have...

You have proved the already known and much-discussed fact that the MC mpeg2 encoder (or the downscaling before the encoding) is not up to par. It is not producing acceptable results when using HD material as source for SD output. Or let's say, the results are too soft as they are. You are forced to perform some additional acrobatics, like the Sharpen fx or frameserving to another encoder, to get Pro results.

Taking these things into consideration, my opinion is that in this respect Vegas Pro does not deserve the "Pro" label. Again, I cannot compare with other video editing SW myself. But even just using Vegas, this is quite evident.

Thanks for the ISO card! I will run some additional tests next week.

I really hope that SCS has something up their sleeve to fix this issue. I would gladly even pay for a better quality rendering engine, even if such a beast should already be embedded in a "Pro" software... Or do I assume too much?

Christian


ECB wrote on 12/15/2009, 3:36 PM
Bob,

I ran your tests using the ISO 12233 res chart rendered to 16x9 NTSC SD mpeg2 at CBR 8M. I tried the Premiere 4.2 MC encoder (max render quality) and TMPGEnc and could not see any difference from the Vegas MC mpeg2 encoder. The mpeg2 output in all cases looked very bad. The downconverted NTSC DV files looked great.

- ed
farss wrote on 12/15/2009, 5:13 PM
That's what I saw as well. As I think I mentioned before, the unknown (to me) is the impact of the chroma subsampling.

All of that aside, my mpeg-2 downconverts look fine, as good as anything else, but I'm working mostly with EX footage.
A few days ago I had a conversation with someone else about AVCHD and how well it downconverts, and he too has noticed it does not look good at all. I recall our b3t complaining about Vegas and the quality he was getting from AVCHD; however, a detailed look at the source footage showed it was pretty bad to start with and it just went downhill from there...

Bob.
johnmeyer wrote on 12/15/2009, 6:22 PM
The tests using the resolution chart were both done going from progressive to interlaced. I think there might be issues, which this same group of people has discussed before, about what happens in Vegas when rendering interlaced material. Thus, the only way to tell whether Vegas is really doing a good job of maintaining sharpness is to somehow start with a test case that is interlaced, and which contains movement.