HDV Camera Vs Vegas Downconvert

fldave wrote on 4/17/2006, 5:35 PM
I did some testing and I have to agree that Vegas does a better job of converting HDV footage to DV.

My details can be found Here (includes pics and downloads)

The main finding in my tests is that the project setting "Deinterlace Method" must be set to "None" to get the results I desired. Interesting, though, is that Laurence contends that he gets better results with anything other than "None".

This was raw, unedited footage, auto everything except focus, and I didn't even have a fluid head, so no gripes from the pros!

Lots of downloads, and if the server gets over 100GB, I'm going to have to pull the mpgs.

Thanks to winrock for prodding me along to get this done!

Dave


Comments

Spot|DSE wrote on 4/17/2006, 6:06 PM
Very cool, Dave. Thanks for taking the time out. This is a better way of showing it than I show on the HDV tour, wish I'd thought of doing it this way!
Now maybe folks'll believe me. ;-)
fldave wrote on 4/17/2006, 6:14 PM
Feel free to share away. The peacock footage is mine, without the lettering, though!

Edited:
Just a verification, though, Spot...What are your deinterlace settings when you downconvert? Is that the key?
Laurence wrote on 4/17/2006, 6:38 PM
That is weird. I absolutely can only make the downconvert work acceptably with a deinterlace method set (it doesn't matter what it is). Otherwise Vegas resizes the interlace lines and I get wavy vertical lines on even the slightest horizontal motion. I'm certainly not offended that you would get exactly the opposite results, though it does puzzle me completely!
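
[To illustrate what Laurence is describing, here is a minimal sketch (Python with numpy/Pillow; not Vegas's actual code) of the difference between resizing an interlaced frame whole and resizing it field by field. If the 1080 lines are scaled to 480 as one image, lines from the two fields (captured 1/60 s apart) get blended together, which is exactly what produces wavy edges on horizontal motion:

```python
import numpy as np
from PIL import Image

def resize_naive(frame, out_w, out_h):
    """Resize the whole frame at once: scan lines from the two temporally
    offset fields get mixed, causing combing/wavy edges on motion."""
    return np.array(Image.fromarray(frame).resize((out_w, out_h), Image.BILINEAR))

def resize_field_aware(frame, out_w, out_h):
    """Split the frame into its two fields, resize each separately,
    then re-interleave, so the fields never contaminate each other."""
    upper, lower = frame[0::2], frame[1::2]   # separate the two fields
    fh = out_h // 2
    up = np.array(Image.fromarray(upper).resize((out_w, fh), Image.BILINEAR))
    lo = np.array(Image.fromarray(lower).resize((out_w, fh), Image.BILINEAR))
    out = np.empty((out_h, out_w) + frame.shape[2:], dtype=frame.dtype)
    out[0::2], out[1::2] = up, lo             # re-interleave the resized fields
    return out

# Hypothetical usage with a 1440x1080 frame grab:
# frame = np.array(Image.open("hdv_frame.png"))
# dv = resize_field_aware(frame, 720, 480)
```

Which of these paths Vegas takes, and how the Deinterlace Method setting selects between them, is exactly what this thread is trying to pin down.]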
fldave wrote on 4/17/2006, 7:01 PM
Laurence,

The only big variable that I found with strange results was the Project settings. I had to put the m2t on the timeline and make sure that the Project Settings matched the m2t, in my case HDV 1440x1080i. If I had the project settings set to the target (NTSC widescreen interlaced), then I got black bars down the sides and a slightly horizontally squeezed aspect ratio. I knew that was wrong, so I didn't look at the zoomed DV in detail.

Dave
Laurence wrote on 4/17/2006, 7:23 PM
I do it the same way, with the project properties set to HDV. Have you downconverted anything with a quick horizontal pan or any kind of fast side to side motion? That's the stuff I need the deinterlace tab set for.
johnmeyer wrote on 4/17/2006, 7:24 PM
I think I need to do this test myself at some point. I am very confused -- not because of your explanation, which is quite clear -- but because I am not sure I agree with the conclusions.

The deinterlace=none footage sure looks smoother in the single still frame, but it also looks like detail is being lost.

Also, did you see any motion artifacts when playing back the various samples at full speed on a big screen? I've done a lot of work with noise reduction, and I can show you some amazing single-frame grabs that look great, but the resulting video when projected at full speed on a big screen is pretty awful. I'm not saying that this is the case with the deinterlace=none, since I haven't seen it myself, but it sure doesn't seem like the right thing to be doing.

Laurence wrote on 4/17/2006, 7:35 PM
I'd love it if other people tried this. I ran into the same thing a while back with 4:3 to 16:9 conversions (I need the deinterlace tab to be set there too) and I seemed to be alone in experiencing this.
fldave wrote on 4/17/2006, 7:36 PM
johnmeyer:

"I'm not saying that this is the case with the deinterlace=none, since I haven't seen it myself, but it sure doesn't seem like the right thing to be doing."

Agreed. Deinterlace shouldn't even come into play here. Interlace to Interlace.

Regarding the "detail lost": I watched this peacock violently shake those feathers. I would have to say that the "lost" detail of the smoother looking "None" footage was more accurate than the staircased camera downconvert or the "Blend" Best render. I watched this footage over and over for about 45 minutes on my 65" tv.

Other eyes are welcome to rebut. Feel free to take the mpgs and burn a dvd. I only want to know what to set Vegas to for the best footage.

What is telling is that many editors have to pan/crop some footage. While I would think that it is more desirable to pan/crop on the original HDV footage, if you have to pan/crop on the camera downconvert, you can see from my pics what you will end up with.
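
[As a rough illustration of the headroom involved, here is some back-of-the-envelope Python (scan-line counts only, ignoring pixel aspect ratio):

```python
hdv_lines, dv_lines = 1080, 480

# Tightest pan/crop of the HDV master that still fills a 480-line DV frame
# without inventing scan lines:
max_lossless_zoom = hdv_lines / dv_lines   # 2.25x punch-in with no upscaling

# The same 2x pan/crop on the camera downconvert leaves only 240 real lines,
# which must then be stretched back up to 480:
real_lines_after_2x_crop = dv_lines / 2    # 240.0

print(max_lossless_zoom, real_lines_after_2x_crop)
```

In other words, a crop that is lossless from the HDV master throws away half the vertical resolution if done on the downconvert.]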
Laurence wrote on 4/17/2006, 7:49 PM
If you guys do test this stuff out, be sure to test it on old-fashioned 4:3 CRT sets as well, so that you can really tell whether the interlacing is screwed up. An old 4:3 TV with a DVD player set to 60i rather than one of the newer progressive settings really shows off interlace problems.
winrockpost wrote on 4/18/2006, 6:39 AM
Wow, what a huge difference. Can't wait to get to the office and try this again, this time following your procedure, and see what I get. Thanks for all the obvious time and effort you put into this, and the step-by-step methods used in testing.
johnmeyer wrote on 4/18/2006, 8:23 AM
> [fldave wrote] I would have to say that the "lost" detail of the smoother looking "None" footage was more accurate than the staircased camera downconvert or the "Blend" Best render. I watched this footage over and over for about 45 minutes on my 65" tv.

That's useful additional information. Thanks!
JohnnyRoy wrote on 4/18/2006, 9:48 AM
I don’t get this comparison at all. It’s quite obvious that the downconvert you did in Vegas in these examples is deinterlaced. That’s not a fair comparison. You are looking at interlaced footage compared to deinterlaced footage on a non-interlaced monitor (i.e., a PC). On an interlaced-to-interlaced comparison there is no difference, as you originally saw.

The fact that a parameter in the Vegas project properties that has to do with deinterlacing is affecting an interlaced render is also suspect! How can this be? It shouldn’t matter what that is set to unless Vegas was asked to deinterlace the footage, which it was not! I am interested in what the field order of your Vegas downconverted file is. From the images it looks like none (progressive).

I did similar tests myself on my own footage using a metronome so there was plenty of motion. I used the NTSC DV template and there was no difference. In fact, I thought the in-camera downconvert was a little cleaner when magnified. If you want to start tweaking templates in Vegas then all bets are off! You are no longer doing an apples-to-apples comparison. If you are asking whether Vegas can do a better job than the camera by tweaking parameters in Vegas, then of course you can get all sorts of results. But you can probably get the same results by applying them to the in-camera downconverted footage as well.

To make sure the test is fair, take your downconverted footage from the camera and deinterlace it in Vegas and see if you can tell the difference between that and the HDV downconverted footage in Vegas. I bet they look the same at that point. Or view both on an interlaced monitor (TV) and see if you can tell the difference. I would bet the FX1 downconvert would actually look better because it looks like your Vegas downconvert got a little soft in the deinterlacing.
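
[For anyone without Vegas handy, a blend-fields deinterlace is simple enough to approximate. This is only a sketch of the general technique (an assumption on my part; Sony has not published what Vegas's Blend option actually does), but it shows where the softness JR suspects would come from:

```python
import numpy as np

def blend_deinterlace(frame):
    """Merge the two temporally offset fields by averaging each scan line
    with the one below it; motion detail is traded for smoothness."""
    f = frame.astype(np.float32)
    out = f.copy()
    out[:-1] = (f[:-1] + f[1:]) / 2.0   # line i blended with line i+1
    return out.astype(frame.dtype)
```

Running this over the camera downconvert and then comparing against the Vegas output would test JR's prediction that the two end up looking the same.]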

Since people are obviously interested in this, we should be scientific about it. Scientifically, you need to provide a step-by-step approach that everyone can follow. Then at least 3 or 4 other people need to get the same results to corroborate the findings. Otherwise, this is still speculation on everyone’s part, including mine.

I appreciate you trying to solve this, Dave, but all you may have done is find a bug in Vegas that obviously affects the way interlaced footage is rendered when the deinterlace setting is set to none (which should have NO effect on interlacing at all).

I am currently away from both my editing computer and Z1 so I cannot share the results with you right now, but I can tell you what I did that led me to believe that there is no difference:

1. Set Z1 in front of a metronome that is in motion
2. Record metronome in motion for 1 minute
3. Rewind the tape and capture the footage once using in-camera downconvert (Squeeze method)
4. Rewind the tape again and capture same footage again as HDV m2t file
5. Open a new Vegas project and set for HDV 1080-60i (no tweaks!)
6. Place captured m2t file in an HDV 1080-60i project and render using the NTSC DV Widescreen template (no tweaks!)
7. Open a new Vegas project set to NTSC DV Widescreen
8. Place in-camera downconvert footage on the timeline
9. Place Vegas downconverted footage above the in-camera footage on the timeline and sync so that each starts at the same frame.
10. Use Pan/Crop to zoom in any amount you want.
11. Copy and paste the keyframe to the other track event so they have equal zoom
12. Use Track Mute to turn the Vegas downconvert footage (top track) on and off, comparing the exact same frame (optionally, I guess you can do what Dave did and do a split screen)
13. Preview full screen on a secondary monitor or capture as full PNG files (no deinterlacing!!!) (see the comparison sketch after this list)
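
[For the frame comparison in step 13, a small helper like this (my own sketch, not part of JR's procedure; the file names are hypothetical) gives a number instead of an eyeball judgment:

```python
import numpy as np
from PIL import Image

def psnr(path_a, path_b):
    """Peak signal-to-noise ratio between two same-sized frame grabs:
    higher means more alike; identical frames return infinity."""
    a = np.array(Image.open(path_a), dtype=np.float64)
    b = np.array(Image.open(path_b), dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

# print(psnr("incam_frame0100.png", "vegas_frame0100.png"))
```
]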

You will see no difference regardless of zoom. In fact, the in-camera footage looked a bit sharper to me. I could read the markings on the metronome clearer.

Variables I’m not sure of were the camera settings. Was I zoomed all the way in? Does that make a difference? How about shutter speed? I don’t remember if it was locked at 60 (I think it was), but I do remember the metronome was set to 60 BPM. What if the metronome was set differently than the shutter speed? This is still not as scientific as I would like, but it’s what I did a few weeks ago to see for myself.

OK, now we need some volunteers to put on their lab coats, try this procedure, and see what they get.

~jr
winrockpost wrote on 4/18/2006, 10:09 AM
Been messing with this for a couple of hours (overall, a zillion hrs). On the cam downconvert footage, hit "reduce interlace flicker" and check it out.
Still confused, but still testing, and would love the Sony boys and girls to chime in on this subject (if they have, I missed it).
johnmeyer wrote on 4/18/2006, 10:09 AM
JohnnyRoy,

What you said gets to the exact thing that was bugging me, namely that it sure looked like the Vegas-converted footage looked deinterlaced. This will definitely produce much better looking still photos, and will produce different-looking motion. On a progressive scan display, this deinterlaced footage might actually look better. However, I WANT interlaced output from my 1080i input (or at least that's what I am usually trying to get). I understand the difference between progressive and interlaced, and also understand that progressive is not "better" than interlaced, although some people enjoy arguing that point.

Don't get me wrong, I am not trying to gang up on fldave. No way. His tests are excellent, and very much appreciated. I am just trying to make sure I understand what they really mean so I don't waste time or create bad quality doing something the wrong way.
apit34356 wrote on 4/18/2006, 12:19 PM
JohnnyRoy, is the primary testing you are talking about between the Z1 and Vegas, or any software as well? Just a heads up: the camera has limited processing power and a fixed time period to downconvert the video signal between frames. This seriously limits the camera's ability to compete with computer apps (well-written ones, at least). Now, I will agree that the camera firmware is well written, better designed than most computer apps, but the time limit between frames seriously limits what can be done. Vegas should be able to win this one. Now, if Vegas is analyzing the pixels just outside the frame field for luma adjustments to the pixels that define the frame border, this may cause the appearance of softness around the edges where motion may exist.
fldave wrote on 4/18/2006, 12:36 PM
JohnnyRoy,

I'll post more later (I'm going to need a page 2 for the web page!). The method I used was exactly everything you outlined, except I have an FX1, not a Z1, and I'm not sure what the Squeeze method is. Is that an option different from the HDV/DV output?

I said in the first paragraphs of my page that Camera vs. Best/Blend were nearly identical. Since a peacock shaking his feathers has much more violent motion than a metronome, I will post the one frame (out of my 7 compare points) that shows the difference.

After I did every step you outlined is when I tried the "None" setting.

I verified that all of my DV shows lower field first. Whether that means that the footage is truly interlaced, I'm not sure. But I was very careful to use the lower-field-first settings on everything.

I rendered the DV camera downconvert to progressive and used it on the timeline instead of the raw camera DV downconvert. Either the "Best/None" option for the HDV is a crappy progressive generator or a great interlaced downconverter. The cam progressive is clearly progressive, while the Best/None categorization will have to be made by the experts who stare at this stuff all day long. I will post pics of this, as well. To me, it looks interlaced compared to the progressive.

So if we can determine whether this HDV Best/None footage is useable, and how to use it, all the best.

I will post more pics on my site later tonight.
Laurence wrote on 4/18/2006, 12:45 PM
If you have a FireWire monitoring setup, here is an easy way to tell if your footage is properly interlaced: load some of the downconverted footage onto an SD timeline, find a motion section, and pause the video in the middle of the motion. If it is properly downrezzed and reinterlaced, you will see the image flickering back and forth between two positions, each a sixtieth of a second apart. Played at regular speed, this produces the same smooth motion that shooting in SD to begin with would create. This is what gets messed up if I don't check the tab to select a method of deinterlace.
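
[Laurence's paused-frame test can also be done numerically on a single frame grabbed during motion. A rough heuristic sketch (the 2x threshold is arbitrary, my assumption): properly interlaced footage shows far more difference between its two fields than within either field:

```python
import numpy as np
from PIL import Image

def looks_interlaced(path, ratio=2.0):
    """If the upper field differs from the lower field much more than
    neighboring lines within one field differ, the frame still carries
    two distinct moments in time, i.e. it is interlaced."""
    f = np.array(Image.open(path), dtype=np.float64)
    upper, lower = f[0::2], f[1::2]               # split the two fields
    n = min(len(upper), len(lower))
    between_fields = np.mean(np.abs(upper[:n] - lower[:n]))
    within_field = np.mean(np.abs(upper[1:n] - upper[:n - 1]))
    return between_fields > ratio * within_field

# print(looks_interlaced("motion_frame.png"))  # hypothetical grab during a pan
```

This could settle whether the "Best/None" render is progressive or still carries two fields.]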
dsaelwuero wrote on 4/18/2006, 12:46 PM
I just purchased an FX1 and have not had time yet to capture any footage on my editing machine. I have simply viewed the video by connecting the FX1 to my LCD (which looks terrific).

I read through the V6 manual and did online searches and didn't find my answers.
Anyway, I am still learning a lot and want to understand the method to get the video into the proper format to use in Vegas. Can anyone be so kind as to spell out the basic steps for downconverting in-camera and in Vegas 6?

For example, in Vegas I assume:
1. I just connect the FX1 with an iLink cable to my editing machine.
2. Open Vegas and select Capture (Is this where I will find the settings that are recommended?)

Or do I just capture the footage in Vegas, place it on the timeline as usual, and once I have done all of my editing, etc., select Render As? Is this where the video is downconverted and where I use the recommended settings?

To downconvert in-camera, I see there are certain settings I need to make inside the FX1's menu. Once I set the downconvert settings in-camera, do I simply connect to the editing machine, open Vegas, and capture as I would with a standard SD DV camera?

Are there any settings I need to be aware of?

I apologize in advance for these basic questions; most of you will say they show I am messing with things obviously way too advanced if I am stuck at stage one. Maybe I am, but I still want to learn and advance with this. I have very limited time to work with these things because of my day job, but I am really excited to learn and improve.
JohnnyRoy wrote on 4/18/2006, 4:17 PM
> [apit34356 wrote] JohnnyRoy, the primary testing you are talking about is between the Z1 and vegas? or any software as well?

Just the Z1/FX1 and Vegas. No other software.

> [apit34356 wrote] Just a heads up, the camera has limited processing power and a fixed time period to downconvert the video signal between frames. This set of facts seriously limits the camera to compete with computer apps( well written ones, at lease).

Actually, the camera is quite capable of taking uncompressed HD from 3 CCDs and converting it to an MPEG-2 transport stream in real time. That is its primary function. I think you underestimate what the camera hardware/firmware is capable of. Firmware will always be faster than software, and hardware will always be faster than firmware. The camera does an outstanding job of downconverting in-camera. I have not seen any serious limitations in doing so. I’m not saying they don’t exist... just that I haven’t seen them.

> [fldave wrote] not sure what the Squeeze method is. Is that an option different than the HDV/DV output?

The Z1 has several downconvert options: Squeeze, Edge Crop, or Letterbox. The FX1 might not have these, so Squeeze might be your default mode.

Dave, your findings are very interesting. I will have to test them for myself later this week.

~jr
fldave wrote on 4/18/2006, 4:22 PM
JohnnyRoy, yes, the results are interesting. I'm beginning to think I wasted my time and everyone else's if using "None" turns out to be progressive. I'm going to try a difference-mask compare to see what I can come up with.

I will also do the pause on the output monitor to compare the interlacing.
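
[For the difference-mask compare, something along these lines should work: export matched PNG grabs of the same frame from each version (an assumed workflow; the file names below are hypothetical), then amplify the per-pixel difference so subtle changes become visible:

```python
import numpy as np
from PIL import Image

def difference_mask(path_a, path_b, gain=8, out_path="diff.png"):
    """Absolute per-pixel difference between two matched frame grabs,
    multiplied by `gain` and saved as an image; identical areas stay black."""
    a = np.array(Image.open(path_a), dtype=np.int16)
    b = np.array(Image.open(path_b), dtype=np.int16)
    diff = np.clip(np.abs(a - b) * gain, 0, 255).astype(np.uint8)
    Image.fromarray(diff).save(out_path)

# difference_mask("cam_downconvert.png", "vegas_none.png")
```
]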
Spot|DSE wrote on 4/18/2006, 4:27 PM
Although everything I'm doing points to better downconversion in Vegas, that point still seems to be debatable to some. But there is still one very important point that seems to be getting lost here...
if you downconvert in-camera...you're done. You don't have a hi-rez master to go back to. You also don't have a hi-rez master that you can pan/crop/zoom on. You don't have a hi-rez master to key on. You don't have as much color information to push/correct.

You can't frame-accurately re-capture the HD media to simply drop into a project that was already edited. If that's not important...then convert away.
farss wrote on 4/18/2006, 4:37 PM
There's another path one can go down and so far it's looking very good.
Use the HD Connect LE to convert to SD SDI, gives you 10 bit 4:2:2.
Only tried it going into a monitor via SDI and onto DB tape but it looks pretty sweet. Seems to be frame accurate.
You can also get HD SDI out of the box, but I can't capture that. It looks good going into a monitor, albeit only a small one, so it's hard to judge HD on a 14" screen.
Warning though, don't try doing this with a live feed, the delay is quite high.
apit34356 wrote on 4/19/2006, 2:57 AM
JohnnyRoy, for many years I designed CPUs, then microprocessors, then high-end camera DSPs, as well as the microcoding/firmware. Today, retired, I am still called in to consult with Sony, TI, and a few others on future designs. I get to see tomorrow's products a lot; some are exciting even for a 54-year-old. I really do understand the Z1 hardware, but NDAs control my comments about it. There are, however, many published overviews of the processing chain of events in the Z1/FX1. Careful reading of the documents will give you the answer you're looking for. Tape to FireWire is handled differently (in the hardware path and in which DSPs are used) than CCDs to tape. Sorry about the lack of "details".
JohnnyRoy wrote on 4/19/2006, 7:26 AM
Apit34356, Understood, I know you can’t reveal anything that’s under NDA. I was just responding to your general comment about software and firmware (I’ve been a software architect for 22+ years). I realize that given all the time in the world that a computer has to process the data, it should do a better job than the real-time environment of the camera. I won’t argue with that as a general statement. It’s just that the in-camera downconvert does a darn good job.

I was commenting that the default render to NTSC DV Widescreen from Vegas looks the same as the in-camera downconvert in my testing, even under extreme magnification. What Dave is doing is finding what tweaks are required to get better output, and right now this deinterlace setting is very interesting, given that it should have no effect at all.

What DSE points out is equally important. What do you lose by downconverting in-camera? You lose a lot. I recently recorded a play where two actors were at opposite ends of the stage and I only had one camera. What I did was record a wide shot for that scene; then in post I captured as HDV and cropped each side of the stage to 720x480, and was able to cut medium shots between the two actors as if I had shot with two cameras. You can’t do tricks like that once you downconvert.

I am not debating whether downconvert should or shouldn’t be used. I am trying to debunk a myth that affects people deciding to even get into HD in the first place. I’ve seen it happen in various forums and it goes something like this:

A DV shooter makes the statement, “I need a new camera but I’m not ready to jump to HDV because [...insert excuse here: my computer is too slow, my boss wants DV, whatever...] so can I buy an FX1/Z1 and just downconvert in-camera to DV for now?”. Then the answers come back: “downconvert sucks, you need to capture HDV and downconvert with Vegas to get any quality,” etc. Absolute BUNK! Dave proved it when he saw no difference in his original tests. What happens next in this story is the DV shooter decides he/she is not going to wait hours for Vegas to downconvert all their footage before they can even get started editing, and so they buy an XL2, PD170, etc., because HDV is too much work. This is what I’m trying to avoid: people giving misinformation that cannot be scientifically proven.

All the DV shooter needs to know is that the in-camera downconvert of the FX1/Z1 will yield results that are substantially better than any GL2/XL2/VX2100/PD170 etc. can produce with their SD acquired footage. It will also look the same as if you had used Vegas with the NTSC DV Widescreen setting to downconvert. I got the same results, Dave got the same results, I think Laurence got the same results too. So several people have proved that the in-camera downconvert is just as good as Vegas with the NTSC DV Widescreen template without any tweaks.

What we are on to now is, HOW can you tweak Vegas to make the downconvert look even better? No one has explained that yet. Dave is exploring this deinterlace thing, and we will see what the final results are.

~jr