HDV to SD workflow

vectorskink wrote on 12/7/2008, 2:40 PM
Hi guys

I have shot a Christmas concert for the local preschool in HDV for myself (My daughter was in it). After being inundated with requests for a copy of the footage, I have edited it and am ready to render to SD for DVD.
Before I do, is there any "best practice" procedures I should follow to get the crispiest SD downconvert? Or is it simply render with the "Best" setting?

Thanks guys

PS. I have done a search!

Comments

D. Collins wrote on 12/7/2008, 3:07 PM
Some time back, the guys told me that for louder audio, when rendering, under Custom, select Dialog Normalization and change it to -31.

Go to preprocessing, change Line Mode to None
and RF Mode to None.

It works for me.

Dave
DavidMcKnight wrote on 12/7/2008, 3:13 PM
Assuming you're using DVDA to author the DVD and that you're in NTSC land, here's what I'd do:

Render As - MPEG2 - DVDA NTSC Widescreen
Click Custom; Video Rendering = Best; click the Video tab; use a constant bit rate of about 8,000,000 to 8,500,000 if your material is an hour or less. If it's longer than that, search for postings on VBR (variable bit rate) and two-pass rendering.
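
If you want to sanity-check the running time yourself, here is a rough back-of-the-envelope calculation (plain Python, nothing Vegas-specific; the disc capacity, audio bit rate and overhead figures below are just assumptions for illustration):

```python
# Roughly how long 8 Mbps CBR video fits on a single-layer DVD.
# Assumptions (not from this thread): 4.7 GB disc, AC-3 audio at
# 192 kbps, about 5% muxing/menu overhead.

DISC_BYTES = 4.7e9        # single-layer DVD-5, decimal gigabytes
VIDEO_BPS = 8_000_000     # constant video bit rate, bits per second
AUDIO_BPS = 192_000       # assumed AC-3 stereo audio
OVERHEAD = 0.05           # assumed muxing/menu overhead

usable_bits = DISC_BYTES * 8 * (1 - OVERHEAD)
max_seconds = usable_bits / (VIDEO_BPS + AUDIO_BPS)
print(f"roughly {max_seconds / 60:.0f} minutes at 8 Mbps CBR")
# about 73 minutes, which is why anything much over an hour usually
# calls for VBR / two-pass rendering at a lower average rate
```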
vectorskink wrote on 12/7/2008, 3:17 PM
I'm in Aus (PAL).
OK. No tricks. Easy!

Thanks for your help!

cheers!
Tim
Christian de Godzinsky wrote on 12/7/2008, 11:35 PM
Hi,

Just also remember to DEINTERLACE !!!

When shrinking HDV video to SD (PAL or NTSC), deinterlacing gives you better results, much better results.

Don't ask me why, but it just works out that way...

Christian

johnmeyer wrote on 12/8/2008, 7:53 AM
Just also remember to DEINTERLACE !!!

I assume you are talking about making sure you have the Project interlace property set to something other than "none."

I don't know if this is relevant, but this weekend I tried rendering from 16:9 to 4:3. The source was SD, not HD, but I initially tried the render with deinterlace set to "none." I got horrible line twitter which looked like field reversal. I played around with render settings and finally gave up and set the interlace setting to blend (it seemed to work better than interpolate on my particular project). The resulting video looked OK, but not great.

This sure seems like a bug to me. I haven't had time to investigate, but I sure suspect that setting a deinterlacing parameter does nothing more than mask the bug, and that the result is substandard video.

However, I haven't yet done any tests to try to prove this.
johnmeyer wrote on 12/8/2008, 8:09 AM
OK, this is a postscript to my last post.

I just did a test, using SD MPEG-2 4:3 footage. This footage is similar to HDV footage, because it is top field first. I rendered this footage using the standard DVD Architect Widescreen template (which is bottom field first). When I did this, I used Project Properties which had the deinterlace method set to NONE.

I burned the results to a DVD+RW. IT LOOKED TERRIBLE.

I then did the same thing, but this time set deinterlace to blend. I rendered using the same template and burned the results to DVD+RW. It looked pretty good.

Finally, I set the deinterlace method back to NONE, but this time started with the DVD Architect Widescreen template, and just before rendering, I went to Custom in the Render As dialog and changed the field order to TOP field first (which matches the original MPEG-2 camcorder source footage).

This time, even with no de-interlace, the footage looked great.

So, the moral of the story is that I think this may be a case where using deinterlacing is actually masking the real problem, namely field reversal, and that by using it, you are actually degrading your footage significantly (because, as discussed in my posts over the past two days in another thread, you are throwing out half of all the temporal information -- something that degrades your video almost as much as if you threw out half your pixels).

Thus, pay attention to field order, and you may not need to deinterlace.
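
If it helps to picture why field reversal looks so bad, here is a purely illustrative sketch (numpy, nothing taken from Vegas itself) of what happens when a top-field-first frame gets played back bottom-field-first, and what a blend deinterlace then throws away:

```python
import numpy as np

# Illustrative only: a single interlaced frame stores two moments in
# time, the top field on the even lines and the bottom field on the
# odd lines.

def weave(top_field, bottom_field):
    """Interleave two fields (each H/2 x W) into one H x W frame."""
    h, w = top_field.shape
    frame = np.empty((h * 2, w), dtype=top_field.dtype)
    frame[0::2] = top_field      # even lines
    frame[1::2] = bottom_field   # odd lines
    return frame

# Two consecutive moments in time, t0 then t1 (tiny 2x4 toy "fields").
t0 = np.full((2, 4), 0)          # captured first
t1 = np.full((2, 4), 1)          # captured second

# Source is top-field-first: t0 on the even lines, t1 on the odd lines.
frame = weave(t0, t1)

# A decoder that is told the stream is bottom-field-first shows the odd
# lines (t1) before the even lines (t0), so time runs 1, 0, 1, 0...
# instead of 0, 1, 0, 1..., which is the back-and-forth twitter
# described above.
correct = [frame[0::2], frame[1::2]]     # t0 then t1
swapped = [frame[1::2], frame[0::2]]     # t1 then t0
print([f.flat[0] for f in correct])      # [0, 1]
print([f.flat[0] for f in swapped])      # [1, 0]

# A blend deinterlace averages the two fields into one image: the
# jitter disappears, but so does half of the temporal information.
blended = (frame[0::2] + frame[1::2]) / 2
```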
Christian de Godzinsky wrote on 12/8/2008, 9:37 AM
Hi,

What you say is 100% true for SD to SD.

However, going from HD(V) to SD and deinterlacing has always produced better looking results for me.

Actually, you lose practically no information at all, since you are going down from 1080 lines to 576 in PAL, and definitely not with NTSC's 480 native lines...

The problem is that 1080 is not an even multiple of 576 or 480. Not deinterlacing produces strange-looking video.
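
A quick sanity check of those ratios (plain Python, just arithmetic):

```python
# 1080 source lines don't divide evenly into either SD line count, so a
# straight interlaced resize has to mix lines from both fields of the
# HD frame into each SD line.
for sd_lines in (576, 480):            # PAL, NTSC
    print(sd_lines, 1080 / sd_lines, 540 / (sd_lines / 2))
# 576 -> 1.875 source lines per SD line (and per field line)
# 480 -> 2.25  source lines per SD line (and per field line)
```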

Please correct me if I'm wrong, but this is just my experience, and I have played around with the settings a lot, too :)

Christian

EDIT: Sad to hear that there might be some kind of bug in reading the settings for the render. That would, however, explain some of the inconsistent results I have had in the past...

johnmeyer wrote on 12/8/2008, 10:51 AM
However, going from HD(V) to SD and deinterlacing has always produced better looking results for me.

I had a few more minutes to spare, so I re-did the test with some HDV footage I captured from my FX1. I just waved the camera back and forth as I captured.

I am going to post the results in another thread, because I don't understand at all what happened.
goodtimej wrote on 12/8/2008, 9:38 PM
John, does it really make that much of a difference to choose upper field first?
Laurence wrote on 12/8/2008, 9:54 PM
Thus, pay attention to field order, and you may not need to deinterlace.

Wow, that isn't what I would have expected at all.

Here is one thing I noticed when I experimented some time back that may be relevant:

You get different results on a new progressive scan TV than you will with an older CRT TV that actually displays an interlaced image.

Here's how I tested with an older CRT TV: I resized the video from interlaced HDV to interlaced SD then paused the video in the middle of some fast action. When I chose a deinterlace method what I got was an image that jumped back and forth between the two fields and kept each image field intact. That is what I wanted. When I played the same DVD back in a newer LCD TV (that deinterlaced on the fly and displayed a deinterlaced progressive scan image) this effect was lost and the field order became irrelevant. The same thing happened when I viewed this on a PC using DVD player software. This was driving me nuts because for a while I was making SD DVDs that looked fine on my computer and some TVs but horrible on others (most notably the now-vintage Toshiba 1080i CRT TV I happen to have in my living room).

A good way to test this is to use a FireWire converter like a Canopus 110 or a MiniDV camcorder as a monitoring device with a 16:9 CRT TV for display. Set the project properties for SD at the same aspect ratio as your TV and set the preview quality to Best (Full). With this setup and a deinterlace method selected, pause the video in the middle of some fast motion. What you'll see is a single frame that flickers back and forth between intact even and odd fields, just as if it had been shot in an SD interlaced mode. Now go back to your project properties, uncheck the "select deinterlace method" box, and go back to your preview. The image is now screwed up and you should be able to see exactly what is going on.

Here's another monkey wrench that further screws things up:

When you make an SD DVD out of HD footage, the SD DVD is usually 16:9. On the other hand, most CRT TVs that actually display video in an interlaced fashion are 4:3. When you display 16:9 interlaced footage on a 4:3 interlaced TV, what actually happens is that your DVD player deinterlaces the footage then drops every fourth line. Thus you don't see some of the interlace problems that may exist on 16:9 SD interlaced DVDs on quite a wide variety of playback gear. In order to see 16:9 interlaced SD DVDs properly (and thus see problems with the field order) what you need is a 16:9 CRT TV, and there really aren't that many of them. That or a 4:3 TV with a 16:9 "squeeze mode".
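
That "drops every fourth line" falls straight out of the letterbox geometry; a quick check with PAL numbers (purely illustrative):

```python
# Letterboxing 16:9 into a 4:3 raster keeps the full width, so the
# picture height shrinks by (4/3) / (16/9) = 3/4, i.e. one line in
# four is gone.
full_lines = 576                          # PAL active lines (480 for NTSC)
kept = full_lines * (4 / 3) / (16 / 9)
print(kept, full_lines - kept)            # 432.0 144.0
```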

As an early HD adopter I have an early Toshiba 4:3 CRT HD TV with a 16:9 squeeze mode that displays both HD and SD interlaced material with the proper interlace in both 4:3 and 16:9 modes. This is the TV in my living room and the one I would use to preview and proudly show off my video creations. A few years ago, my projects started to look really horrible on this TV. This is probably why I noticed all this so early and why so many people have missed it. You really don't see this type of problem with most viewing configurations. This is likely why the Sony development team missed it as well.