OT: HDV to HDTV via analog?

BrianStanding wrote on 10/25/2004, 9:57 AM
I posted this question in an earlier thread but got no takers, so I'm reposting.

With all the interest in high-definition video, there's been a lot of talk about the ability to up-convert HDV video (such as that shot by the new Sony HDR-FX1 camera) into true high-definition formats. I've been hearing a lot of concern about 4:2:2 and 4:2:0 color spaces and other potential conversion problems.

Now I confess that a lot of this talk goes over my head. But I know that in the world of audio sampling, you can avoid all kinds of problems by simply doing an analog, rather than digital, transfer. If both recording media and the sound cards are of good quality, the hit you take in generation loss is low enough that it's worth it to avoid all the problems inherent in a digital transfer from one format to another.

Would a similar approach work for HDV? I would think the analog converters of any HDV or HDTV device would be pretty darn good, and obviously an HDTV medium would be capable of handling any HDV-generated imagery. Say you hook up an HDV deck to an HDTV deck (whatever format that is) and send the video signal over a component analog connection. How much quality would you lose in a single generation?

Is this a reasonable approach, or is my logic faulty?

Comments

klaatu wrote on 10/25/2004, 2:21 PM
Hi Brian,

Yes, you are correct, and this is a topic I've tried many times to get across to people!!! After being a broadcast engineer for several years, not to mention trying REAL-world hardware and software to do something very similar to what you describe, here's what I've discovered from trial and error:

1. For a 1-2 hour HDTV production you will need around 500-600 gig in a SATA RAID 10 (striping/mirroring) to hold the data and your renders; see the back-of-the-envelope sketch after this list. A HighPoint 1820a with Western Digital WD2500JB drives, or SCSI drives, is the ONLY way to go (due to their capture speed).

2. As far as the best way to capture video (standard or HDTV), there are MANY factors to consider. The first is that the video is quite perfect UNTIL YOU RECORD IT. What that means is that the TAPE FORMAT you record onto will degrade your picture right off the bat; here is where you must carefully consider your options. I see many people talk about the miniDV format being better than analog Hi-8, and this is not necessarily the case. Here's why, along with some basic terms.

In a standard NTSC (American system) picture you have 525 lines in a single "frame" of video (HDTV has 720 or 1080 lines). The "color space" is a measure of how many "pixels" (colored dots) get sampled from the original video coming in, the best being 4:4:4 (no loss). "Interlacing" is the old method of drawing one "even" field and then one "odd" field, which is why some people talk about seeing "lines" in the screen. "Progressive" video does away with the lines completely by drawing the entire screen very rapidly, which is a huge improvement over interlacing.

While the miniDV format can record the full 525 lines of a standard NTSC picture, it has to compress the video at a ratio of 5:1 and also cuts your color space down to either 4:1:1 or 4:2:0. First off, the loss of color space means that just by using the miniDV format (no matter what camera/device it is) you've lost half or more of your color and resolution (sharpness). To make matters worse, it then compresses the video 5:1 in order to fit all that video information onto the miniDV tape. This results in major blockiness: not so much during a still shot, but you'll definitely see the effect under bright light and when quick movement occurs.

The way I like to capture is with analog Hi-8 (Canon L2 Pro) through S-Video, and then up-convert to 720p or 1080i/p (line triple or quadruple using a Faroudja line tripler/quadrupler; these are expensive, so you must go to a professional video company to have this done, or try the new Windows Media Player add-in to up-convert to 720p). While this is analog, it does not suffer any compression artifacts, because the Hi-8 format doesn't need to use compression. The only downsides are that you shouldn't make more than a one-generation analog dub, and it only captures 400 lines of the NTSC picture.

And while we're at it, many people mistakenly use the "DV" term to mean various things. What it really refers to is the 25 Mbit/s digital transfer rate in and out of the camera; for a miniDV camera this is sufficient, but it has NOTHING TO DO WITH THE PICTURE QUALITY!!! If you're doing an S-Video capture into Vegas 5.0 using either an ATI video card or a BLACKMAGIC capture card, then even for standard or widescreen video you'll need 20 MB/s sustained hard-drive capture speeds, or else you'll start dropping frames (a very bad thing). By using the BLACKMAGIC card (about 600 dollars) you'll be able to do about the best capture you're going to get, in a 4:2:2 color space; this is what DVDs use (MPEG-2 compression at various bit rates from 3.5 to 9.8 Mbit/s in a 4:2:2 color space). While DVDs use 4:2:2, you really won't notice any visual difference unless you have true 4:4:4 RGB/component titles/graphics in your production. Please see http://www.onerivermedia.com/html/index.htm for an explanation (just wait a while for the intense Macromedia Flash graphics to load!!!).
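If you want to sanity-check the storage and capture-speed numbers above, here's a quick back-of-the-envelope sketch in Python (the frame sizes and rates are nominal assumptions, not vendor specs):

```python
# Back-of-the-envelope capture-rate and storage figures
# (nominal values, not from any vendor spec sheet).

def mb_per_sec(width, height, bytes_per_pixel, fps):
    """Sustained throughput in MB/s for uncompressed capture."""
    return width * height * bytes_per_pixel * fps / 1e6

# Uncompressed 8-bit 4:2:2 NTSC (720x486 active, ~29.97 fps):
# 2 bytes/pixel on average (8 bits luma + 8 bits shared chroma).
sd_422 = mb_per_sec(720, 486, 2, 29.97)   # ~21 MB/s, near the 20 MB/s figure

# DV25 is a fixed 25 Mbit/s video stream regardless of content.
dv25 = 25e6 / 8 / 1e6                     # ~3.1 MB/s

for name, rate in [("uncompressed SD 4:2:2", sd_422), ("DV25", dv25)]:
    print(f"{name:>22}: {rate:5.1f} MB/s, {rate * 3600 / 1000:6.1f} GB/hour")
```

Two hours of uncompressed 4:2:2 capture alone comes to roughly 150 GB, so once you add renders and intermediate files, the 500-600 gig figure is easy to hit.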

I hope this explanation helps you; if not, post here again.

----- BRIAN -----
farss wrote on 10/25/2004, 3:38 PM
I think some of your assumptions are faulty. Yes, in the analogue domain there is no such thing as compression or color sampling; however, there is still the critical issue of resolution. The whole point of compression is to INCREASE the resolution and chroma bandwidth you can fit onto a given recording medium. The resolution of Hi8 is very low, and the color bandwidth is way lower than even lowly DV25. Even BetacamSP is only about on par with DV25.
Try watching anything with fine detail on Hi8 and notice how it cannot resolve the color difference on things like a field of grass: what you are seeing is a B&W image with a very low-res color wash!
Most of these things are hard to see on low-res analogue monitors; it's only when you start using proper broadcast gear that you'll see the difference.
I'm not for a minute saying that DV25 is a perfect format, and some may find its limitations more distracting than high-quality analogue, but to suggest analogue is a better road to go down is simply wrong. If you find the chroma edginess of DV25 distracting, Vegas does have a Chroma Smoothing FX.
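Vegas's actual filter is proprietary, but conceptually chroma smoothing interpolates the subsampled chroma back up instead of block-repeating it. A rough numpy sketch of the idea (illustrative only, not the real FX):

```python
import numpy as np

def upsample_chroma_411(chroma_row, smooth=True):
    """Expand one row of 4:1:1-subsampled chroma back to full width.

    4:1:1 stores one chroma sample per 4 luma pixels. Nearest-neighbour
    expansion gives the blocky "chroma edginess"; linear interpolation
    is (roughly) what a chroma-smoothing filter does instead.
    """
    n = len(chroma_row)
    if not smooth:
        return np.repeat(chroma_row, 4)          # blocky steps
    x_sub = np.arange(n) * 4                     # positions of real samples
    x_full = np.arange(n * 4)                    # full-width pixel positions
    return np.interp(x_full, x_sub, chroma_row)  # smoothed ramp

row = np.array([100., 100., 200., 200.])         # a hard chroma edge
print(upsample_chroma_411(row, smooth=False))    # 100 100 ... 200 200 steps
print(upsample_chroma_411(row, smooth=True))     # ramps across the edge
```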
The same goes for audio: it wasn't until the development of hi-res digital audio, and its demands for very accurate studio monitors, that we started to notice all the limitations of low-res digital audio. But go back and listen to any analogue recording and you'll immediately hear the lack of clarity, the poor rise time and the noise.

Bob.
rs170a wrote on 10/25/2004, 9:14 PM
...a 4:2:2 color space, this is what DVD's use ...

Brian;
I have to disagree with you on this point. According to my own reading as well as Section 3.4 of the DVD FAQ, "Pictures are subsampled from 4:2:2 ITU-R BT.601 down to 4:2:0 before encoding, allocating an average of 12 bits/pixel in Y'CbCr format."
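For anyone wondering where that 12 bits/pixel figure comes from, it's just the averaged sample budget; a quick sketch assuming 8-bit samples:

```python
# Average bits per pixel for 8-bit 4:2:0: every pixel gets a luma sample,
# but each Cb and each Cr sample is shared by a 2x2 block of 4 pixels.
bits = 8          # bits per sample (8-bit video assumed)
y  = bits         # one Y sample per pixel
cb = bits / 4     # one Cb sample per 4 pixels
cr = bits / 4     # one Cr sample per 4 pixels
print(y + cb + cr)   # 12.0 bits/pixel, matching the DVD FAQ figure
```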

Mike
BrianStanding wrote on 10/26/2004, 1:14 PM
O.K., I'm an idiot. I did a Google search on "1920 x 1080" "vtr" and started looking at the specs of the Sony SRW-5000 HDCAM deck.

After looking at the incomprehensible descriptions of the video outputs, I'm beginning to guess that there is NO SUCH THING as an "analog HD signal." It seems like anything that is "true" HD has to be carried by some kind of digital, encoded output. Do I understand this correctly?

I guess that wouldn't surprise me, since there's so much more information to be carried over the cable. In which case, my assumptions in my original post were all wet.

In which case, as Roseanna-Hosanna-Danna used to say, "....never mind!"

(arrrgh. So much to learn, so little time)
farss wrote on 10/26/2004, 3:38 PM
No, you can output HD over a component feed; that's the main way it goes into consumer displays (sadly!).
My biggest gripe with what you are saying is the suggestion that analogue cameras such as Hi8 are somehow superior to DV25!
Bob.
BrianStanding wrote on 10/28/2004, 8:04 AM
Not me, Bob. Different Brian -- the Brian you're thinking of goes by the handle KLAATU (see above). I swore off Hi-8 after six brand-new tapes showed dropouts after one or two plays. I'm strictly a DV/FireWire guy now.

So analog HD DOES exist? Does it stay at HD resolution, or is it down-converted into standard definition?

Are there many (or any) HDCAM decks that have component analog inputs? Didn't see that option on the specs for the Sony HDCAM unit.
klaatu wrote on 10/28/2004, 2:03 PM
To Brian Standing: yes, you can have both a digital and an analog component out in HDTV. Analog typically uses separate R-G-B BNC or RCA connectors; these are usually found on the back of DVD players. Digital component (or DVI, as it's called) can carry either analog or digital (encrypted) HDTV signals. The DVI connector comes in one of three different pin configurations, owing to its ability to pass two channels of video information at the same time.
---------------------------------------------------------------------------------------------

To RS170a (cool handle, by the way): you are correct, I should have elaborated there. Let's begin with the most popular studio signal, known as D-1 or CCIR-601 digital video. This signal is coded at 270 Mbit/s, which we derive as follows:

luminance: 858 samples/line * 525 lines/frame * 30 frames/sec * 10 bits/sample ~= 135 Mbit/s
R-Y:       429 samples/line * 525 lines/frame * 30 frames/sec * 10 bits/sample ~= 68 Mbit/s
B-Y:       429 samples/line * 525 lines/frame * 30 frames/sec * 10 bits/sample ~= 68 Mbit/s
total:     27 Msamples/sec * 10 bits/sample = 270 Mbit/s, or 33.75 MB/s

The studio standard CCIR-601 represents the chroma signals with half as many horizontal samples as the luminance signal, while keeping full vertical resolution. This ratio of subsampled components is designated 4:2:2. MPEG-1 and MPEG-2 both define the use of 4:2:0 for consumer applications; in that case both chrominance (color) signals have half the resolution of the luminance (brightness) signal, horizontally and vertically.
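Those figures are easy to verify in a few lines (a quick sketch of the same arithmetic):

```python
# CCIR-601 (4:2:2) bit-rate check, using the sample counts above.
lines, fps, bits = 525, 30, 10

luma   = 858 * lines * fps * bits   # ~135 Mbit/s
chroma = 429 * lines * fps * bits   # ~68 Mbit/s each for R-Y and B-Y

total = luma + 2 * chroma
print(total / 1e6, "Mbit/s")    # 270.27 -> the nominal 270 Mbit/s
print(total / 8 / 1e6, "MB/s")  # ~33.8 MB/s (33.75 uses the rounded 270)

# 4:2:0 also halves the chroma vertically, so the two chroma
# streams together cost what one of them does in 4:2:2:
print((luma + chroma) / 1e6, "Mbit/s equivalent for 4:2:0")
```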

thanks for the correction.
--------------------------------------------------------------------------------------------
To farss: while it's true that you get better color and resolution thanks to the compression, what difference does it make how much color or resolution you have if the picture blocks up every time there's movement, due to the compression artifacts? You are far better off adjusting the color through a time-base corrector and then line tripling/quadrupling; there you get as much color as you want and as much sharpness as you want, provided you shot your video with a pro Hi-8 camera or better and not a "consumer toy". And as I said before, DV25 simply refers to the SPEED of the data transfer, NOT PICTURE QUALITY. DVCPRO 25 is still compressed 5:1 with a 4:1:1 color space, the same as miniDV. So yes, I'm saying that a good analog S-Video or analog component capture can look as good as miniDV or DVCPRO 25. The only time that I use a Canon GL-2 (miniDV) is when I'm shooting an absolutely still shot or a computer monitor, as the GL-2 can keep the monitor flicker down without all the hassles of genlocking.
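To give a feel for what a line multiplier does (minus the motion-adaptive processing a real Faroudja box adds), here's a bare-bones sketch that just interpolates new scan lines between the originals; purely illustrative:

```python
import numpy as np

def line_multiply(frame, factor=3):
    """Naive line tripler: interpolate new scan lines between the originals.

    Real line doublers/triplers (e.g. Faroudja) add motion-adaptive
    deinterlacing and edge processing; this sketch shows only the
    vertical interpolation step.
    """
    h, w = frame.shape
    y_src = np.arange(h)                      # original line positions
    y_dst = np.linspace(0, h - 1, h * factor) # target line positions
    out = np.empty((h * factor, w), dtype=float)
    for col in range(w):
        out[:, col] = np.interp(y_dst, y_src, frame[:, col])
    return out

field = np.random.rand(240, 4)     # one 240-line field, 4 columns for demo
print(line_multiply(field).shape)  # (720, 4): 240 lines tripled to 720
```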

----- BRIAN -----