Downscaling HD to SD, oddball idea.

farss wrote on 1/30/2009, 8:25 PM
I've read hundreds of posts about this, considered all manner of convoluted workflows, and still I was not happy with the results, and I know I'm not alone. A few days ago, though, another very simple thought entered my head: maybe there's nothing wrong other than the content itself, or more to the point, how I've shot it.
This idea came from something I read ages ago, along the lines of "when we shoot with a low-res medium like video, we shoot tight". Going back over my content I realised all the tight shots look just fine when converted to SD; it's only the really wide shots that suck. They look fine in HD, of course.
I suspect that in HD I tend to shoot wider, and stay wide for longer, because I somehow know I can, while ignoring how it'll look in SD. Looking at my SD work from a while ago, I used to shoot tighter, and it looks fine... technically.

The other thing that supports what I'm thinking: if I crop my wide shots before they're downscaled to SD, they look better, well, mostly, at times. Theory says they should look worse.
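For anyone who wants to try the same comparison outside of Vegas, here's a minimal sketch using Pillow. The file names and the crop window are hypothetical, and Vegas's internal scaler will behave differently; this only illustrates the two paths.

```python
from PIL import Image

HD = (1920, 1080)
SD = (1024, 576)  # square-pixel equivalent of 16:9 PAL SD

frame = Image.open("wide_shot_hd.png")  # hypothetical exported HD frame

# Path 1: downscale the whole wide frame to SD.
direct = frame.resize(SD, Image.LANCZOS)
direct.save("wide_direct_sd.png")

# Path 2: crop to a tighter framing first, then downscale.
# A centred crop to 2/3 of the frame roughly mimics a tighter shot.
w, h = HD
tight = frame.crop((w // 6, h // 6, w - w // 6, h - h // 6)).resize(SD, Image.LANCZOS)
tight.save("wide_cropped_sd.png")
```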

Anyone else think I'm nuts or not?

Bob.

Comments

John_Cline wrote on 1/30/2009, 9:35 PM
I think those are two different questions: whether you're nuts vs. whether you're correct about tights shots. As far as I'm concerned, the answer to both questions is yes.
Coursedesign wrote on 1/30/2009, 9:50 PM
Those "tights shots," were they for Japan?

:O)

For the other question, just think how great commercial movies can look when downrezzed to SD for DVD.

But there's a lot of work that went into getting those wider scenes to look good in SD.

You need to stop thinking program conversion and start thinking scene conversion, one at a time...

Real work to be sure.

Christian de Godzinsky wrote on 2/3/2009, 8:44 AM
Hi,

Don't tight(s) shots always look good, be it HD or SD ;) ???

I have been doing some experimenting with downconverting HD to SD (in Vegas, to PAL 50i). I have never been very satisfied with the results; or let's say, I have higher expectations now that the source material has four times the resolution of SD. My expectations for SD aren't too high, as I know how SD looks and what its limits are (albeit PAL SD is much better than NTSC SD). Still, downconverted HD(V) looks terrible. The "crispiness" is gone.

I then captured one STILL (in the Vegas preview window, at full resolution, from HD material). Rendering this still to SD as a photo looks crisper than the same live HD video (non-moving target and cam). Everything looks fine and crisp on the timeline; it is just the rendered MPEG-2 LIVE HD(V) video that is not up to expectations.

There is something spooky going on that I do not understand...

Christian

WIN10 Pro 64-bit | Version 1903 | OS build 18362.535 | Studio 16.1.2 | Vegas Pro 17 b387
CPU i9-7940X 14-core @4.4GHz | 64GB DDR4@XMP3600 | ASUS X299M1
GPU 2 x GTX1080Ti (2x11GB GDDR5X) | 442.19 nVidia driver | Intensity Pro 4K (BlackMagic)
4x Spyder calibrated monitors (1x4K, 1xUHD, 2xHD)
SSD 500GB system | 2x1TB HD | Internal 4x1TB HD's @RAID10 | Raid1 HDD array via 1Gb ethernet
Steinberg UR2 USB audio Interface (24bit/192kHz)
ShuttlePro2 controller

Lou van Wijhe wrote on 2/3/2009, 9:58 AM
I get very crispy looking SD from HDV shot with a Canon HV20.

I shoot in PAL 25p and I do not de-interlace when downscaling (I tried de-interlace blend mode but that only makes the images fuzzier). For downsampling I just use the DVD PAL Widescreen template; when I have enough space on the DVD I use CBR at 8000 kbps.
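For reference, roughly the same recipe can be approximated outside of Vegas with ffmpeg, driven from Python here. This is only an approximation of the DVD PAL Widescreen template, not what Vegas does internally, and the file names are hypothetical:

```python
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "hv20_25p_source.m2t",   # hypothetical 1440x1080 25p HDV capture
    "-vf", "scale=720:576",        # PAL SD frame size (anamorphic)
    "-aspect", "16:9",             # widescreen display aspect ratio
    "-target", "pal-dvd",          # DVD-compliant MPEG-2 output
    "-b:v", "8000k",               # CBR-ish: pin min/avg/max together
    "-minrate", "8000k",
    "-maxrate", "8000k",
    "-bufsize", "1835008",         # standard DVD VBV buffer size
    "out_pal_dvd.mpg",
], check=True)
```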

The only problem I have is that the SD image looks a bit darker and has more contrast on my Sony Bravia LCD TV than the HD original. It's not clear to me if that is something in the Vegas ITU conversion or some setting in my LCD TV.

Edit Feb. 4, 2009: I couldn't sleep, and then it dawned on me why the downsampled image had more contrast: when you go up ladders of equal length and one has only 720 rungs instead of 1920, you'll have to take larger steps. When you use de-interlace blend or interpolate mode before downsampling (which I don't, because of the progressive source), you already automatically lower the contrast, as you drop every second scan line and then average to fill in the missing one.

When I lower the contrast on the output (Video Output FX button) I get a contrast that is virtually equal to that of the HDV source. A setting of -0.05 to -0.1 is enough.
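For what it's worth, a small negative contrast setting like that maps to something like the following, sketched in numpy. Note this assumes a simple pivot-around-mid-grey formula; I don't know that Vegas's Video Output FX uses exactly this math.

```python
import numpy as np

def adjust_contrast(frame: np.ndarray, contrast: float) -> np.ndarray:
    """frame: float RGB in 0..1; contrast: e.g. -0.05 to -0.1 per Lou.
    ASSUMED formula: scale values around mid-grey, then clip."""
    out = (frame - 0.5) * (1.0 + contrast) + 0.5
    return np.clip(out, 0.0, 1.0)

frame = np.random.rand(576, 1024, 3)   # stand-in for a rendered SD frame
softened = adjust_contrast(frame, -0.05)
```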

Lou
Coursedesign wrote on 2/3/2009, 10:52 AM
HD and SD have different color spaces; you need to convert.
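To unpack that a little: HD Y'CbCr is normally encoded with Rec.709 luma coefficients and SD with Rec.601, so pixel values have to be re-matrixed, not just rescaled. A numpy sketch of the idea (differences in primaries and transfer curves are ignored here):

```python
import numpy as np

# Y' = Kr*R' + Kg*G' + Kb*B'
KR709, KB709 = 0.2126, 0.0722   # Rec.709 coefficients
KR601, KB601 = 0.2990, 0.1140   # Rec.601 coefficients

def ycbcr_matrix(kr: float, kb: float) -> np.ndarray:
    """RGB -> YCbCr matrix for the given luma coefficients."""
    kg = 1.0 - kr - kb
    return np.array([
        [kr, kg, kb],
        [-kr / (2 * (1 - kb)), -kg / (2 * (1 - kb)), 0.5],
        [0.5, -kg / (2 * (1 - kr)), -kb / (2 * (1 - kr))],
    ])

def rematrix_709_to_601(ycbcr: np.ndarray) -> np.ndarray:
    """ycbcr: (..., 3) floats, Y' in 0..1, Cb/Cr in -0.5..0.5."""
    rgb = ycbcr @ np.linalg.inv(ycbcr_matrix(KR709, KB709)).T
    return rgb @ ycbcr_matrix(KR601, KB601).T
```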

Lou van Wijhe wrote on 2/3/2009, 12:12 PM
At the moment I have a test encode running where, on the Advanced Video tab, I set the color primaries to Rec-709 and leave the rest at Rec-624-4. Is that how I should convert?

Lou
farss wrote on 2/3/2009, 12:52 PM
"(I tried de-interlace blend mode but that only makes the images fuzzier). "

That will happen on fast motion, although typically motion blur will mask it. You could try Interpolate as an alternative.
Specifying a de-interlace method does not produce de-interlaced SD; your output will still be interlaced unless you specify your output as progressive, which I would not recommend.
If you do not specify one of the two de-interlace methods, you will get truly horrid-looking interlace artifacts on motion.
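For anyone unsure what the two methods actually do, here's the concept sketched in numpy. This mirrors the general technique, not Vegas's exact implementation:

```python
import numpy as np

def deinterlace_blend(frame: np.ndarray) -> np.ndarray:
    """Blend: average the two line-doubled fields (ghosts on motion)."""
    top = np.repeat(frame[0::2], 2, axis=0).astype(np.float32)
    bottom = np.repeat(frame[1::2], 2, axis=0).astype(np.float32)
    return (top + bottom) / 2

def deinterlace_interpolate(frame: np.ndarray) -> np.ndarray:
    """Interpolate: keep one field, rebuild the other from its neighbours."""
    top = frame[0::2].astype(np.float32)
    out = np.empty(frame.shape, dtype=np.float32)
    out[0::2] = top
    out[1::2][:-1] = (top[:-1] + top[1:]) / 2  # average line above and below
    out[-1] = top[-1]                          # bottom edge: repeat last line
    return out
```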

Bob.
Lou van Wijhe wrote on 2/3/2009, 12:57 PM
Bob,

Re: If you do not specify one of the two de-interlace methods you will get truly horrid looking interlace artifacts on motion.

I don't get artifacts. Could that be because I shoot in progressive mode?

Lou
farss wrote on 2/3/2009, 1:07 PM
"You need to stop thinking program conversion and start thinking scene conversion, one at a time..."

Please explain this further. I'm not talking about compression; I'm talking about downscaling uncompressed to uncompressed. I note Photoshop offers a couple of different scaling algorithms; the differences are very small.

I do know that one generally needs to add some edge enhancement after downscaling to get good-looking SD, and that may need to be done on a scene-by-scene basis.
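As an illustration of that last point, an unsharp mask applied after the downscale, sketched with Pillow. The radius/amount values are just starting points to tune per scene, not anything Vegas uses, and the file names are hypothetical:

```python
from PIL import Image, ImageFilter

sd = Image.open("scene_downscaled_sd.png")   # hypothetical SD frame
sharpened = sd.filter(ImageFilter.UnsharpMask(radius=1.0, percent=60, threshold=2))
sharpened.save("scene_sd_sharpened.png")
```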

Bob.
farss wrote on 2/3/2009, 1:10 PM
"I don't get artifacts. Could that be because I shoot in progressive mode?"

YES. What I was talking about is only applicable to downscaling interlaced video. If you shot progressive, then most likely your video is 25PsF (progressive frames carried in a 50i stream). So long as your source is flagged as progressive, Vegas will simply merge the two fields into a single frame and downscale that.
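A sketch of what that field merge looks like in numpy, for a PAL frame. Since both fields of a PsF frame come from the same instant, weaving them back together loses nothing:

```python
import numpy as np

def weave(top_field: np.ndarray, bottom_field: np.ndarray) -> np.ndarray:
    """Interleave two (288, W) PAL fields into one (576, W) frame."""
    frame = np.empty((top_field.shape[0] * 2,) + top_field.shape[1:],
                     dtype=top_field.dtype)
    frame[0::2] = top_field
    frame[1::2] = bottom_field
    return frame
```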

Bob.
srode wrote on 2/3/2009, 5:45 PM
I've found rendering and original 1920x1080i 60i source in Vegas to AVCHD 1440x1080 and then letting DVDA recompress to a DVD format gives the decent results for display on a SD TV - it doesn't match the quality of a studio/rental DVD disc when played on an HD TV but it's substantially better than rendering to a DVD format in Vegas. Curious what other folks are using for their prefered approach.