When DV is inside a PC…

Family_Voices wrote on 7/11/2003, 10:09 PM
When DV is inside a PC what is its file format at its most basic and least compressed form? For example, for a DV video clip inside a PC, what is the file extension? This relates to asking, is DV a file format or just a specification for a type of video file. Perhaps it will help if I list what I think I do know concerning DV (and if any of this is wrong, obviously I need straightening out on my “facts” too):

A list of my DV “knowledge” / “facts”:

DV is a consumer format for digital video, invented by Sony for consumer products, that grew into something so popular (because it worked so well and was so cost-effective) that many professionals started using it.

DV is characterized, so far as its sampling goes, as 4:1:1. This notation indicates that color sampling is low (in a sense only ¼) but that full luminance sampling (that’s the 4) is employed (consistent with the NTSC-derived roots of DV, which limit it to a vertical depth of 480 pixels and not much more than 640 pixels horizontally—I think the full 16:9 wider aspect ratio may be supported by the format, but if used as 4:3, perhaps the horizontal limit is 640).

DV is encoded from a video stream such that the data rate is maintained at 25 Mb/s (and does not exceed this). This means that the minimum compression consistent with not exceeding the bandwidth limit is achieved, but that is the minimum compression after the ¼ color sampling limitation and the fixed pixel count of the format. (I don’t know, but I think this 25 Mb/s rate includes an allowance for accompanying sound as may be present or could be present. I am as interested in sound recording as I am in DV, but I am starting my questions with the DV. A rough arithmetic sketch of these rates follows at the end of this list.)

DV works hand in glove with the IEEE-1394 (a.k.a. FireWire) bus in the Windows PC world to move into and out of PCs, PC peripherals such as external hard drives, and DV-dedicated devices such as DV camcorders and DV decks.

DV can be written to DVD (as data) without further compression, or be wrapped into special envelopes with sound (after being compressed by MPEG-2) to become movie DVDs (a different critter than data DVDs, but one that often can be “burned” onto discs by the same hardware). I know that the flavors of DVD and DVD media include DVD-RAM, DVD-ROM, DVD-RW, DVD-R, DVD+RW, DVD+R, and DVDA (DVD audio), and that some burners can do most of these things (depending on the instructions and media fed into them), but that DVD-RAM is often somewhat a world of its own. I know that there is a consortium developing Blu-ray, DVD-like devices that in the same form factor (disc size) can store some six times more information. The first Blu-ray products are already being sold in Japan.

On DV media, the DV video is written to the widest track and audio is written to separate track(s?). This means that if pure audio is written to DV tape, the utilization of the format is very poor (because most of the tape is unused). Mostly, pure sound data seems to be either kept on hard drives (as are some video files) or copied to DVD media as data; more rarely it can be converted to DVDA, or to CD (as CD-R or CD-RW). Audio CDs are also a popular final destination of audio files where video files are not involved.

DVCAM is a special (above high-end consumer level) media and tape-writing specification and camcorder/deck format. The information written to and read from DVCAM is exactly the same as with consumer DV media. The DVCAM media is less stressed, though, for the tape reading/writing speed is higher and the tracks are wider, and the magnetic coating is more robust. (I mention this because I have a Sony DSR-11 DVCAM deck on its way to me. I decided to get one for the reasons cited.)

Both DVCAM and consumer DV media come in a larger and a smaller cassette size. The practical difference is that the larger cassette size can hold more tape. It was an innovation that two sizes of tapes (and in the case of the DSR-11 and other DVCAM decks, even two different thicknesses of cassettes) can fit into the same deck.

Here ends the list of what I think I know about DV.

Stating my question again: when a digital data stream is read from DV or DVCAM media into a PC, in what form is it being captured? What kind of file(s)? I am personally interested in using the most basic, most universal, and least compressed form of DV that is possible.

On a different forum I was hearing that inside a PC the DV exists in some kind of .avi file (avi is the extension). I don’t know that that is the most basic form though. It is my understanding that there are many kinds of avi and that some are compressed. Again, I am wanting to know what the most basic form that DV can exist in for use in NLE and for copying from tape to hard drive (via the mediation of Vegas 4) and for copying from hard drive to tape. (I have learned that when going to tape the terminology is “print to tape”.) Once I get this information for the DV video files I will be back asking about the sound files.

This is the first of several very basic questions that looking and thinking about DV, chasing links, and asking quite a few questions (but not here, the mother of all DV forums for Vegas 4 users so far as I know) have not resolved.

I thank you for your assistance in getting my foundation of these matters straightened out.

Best regards,
Ralph
July 11, 2003

Comments

filmy wrote on 7/11/2003, 11:02 PM
Just sort of a little sort of, kind of, anal thing - At its core "DV" = Digital Video, and what you are asking and talking about is mainly the mini-DV format. I pointed out on a thread, I think it was about Avid but not sure, that people just sort of say something like "I am making a film" or "I am shooting film" and don't say what type of film stock they are shooting with...so now "DV" has sort of become the new "film" and people just pick up a mini-DV camera and say "I'm shooting DV" and they are, but so are people who are shooting with DVCAM and HD Varicams too. Anyone who creates a movie such as Monsters, Inc or Final Fantasy is working with "DV" as well, but chances are you won't hear those referred to as being "made with DV." And when you do hear about "DV" features they are not always talking about 'mini-dv'. About 1993/1994 we shot a feature and when the negative was cut we had it telecined straight to a D-2 master. We went to the film markets and part of the selling point was we had a "DV" master of the film. Now if we went to the film market and said we had a film on "DV" it just wouldn't have the same effect; more than likely it would have a negative effect because people would feel it was shot on 'mini-DV'. Anyhow...anal mode off now. :)

Short answer to your main question about "inside a PC the DV exists in some kind of .avi file" is that the mini-dv digital information is brought into the PC via 1394/firewire/i-link and put inside a 'wrapper' so the computer can read it.

A bit more info - For a Windows-based computer that is most commonly an AVI-based wrapper. If you use QT you can save it with a ".dv" extension but it is still a wrapper. Some "wrappers" are better than others; I personally have used the Main Concept DV codec/wrapper for some time now. Some of the hardware-based solutions use wrappers that will only play back with their own hardware. In its most simple form the wrapper is telling the computer to decode and decompress that digital video information so you can view it. When you put it back out to tape it is recompressing and re-encoding it, so to speak. If no changes have been made to that video stream then it doesn't have to render; it just plays all the little 1's and 0's back out to the tape. If you change something in the stream then it will have to, more or less, decode - decompress - render - recompress - encode and then go back out to the tape.

As for what DV formats you can have on your hard drive - pretty much any kind. Again - with mini-DV it is wrapped up in a wrapper and an ".avi" extension. With HD it can be a few things - the specs are still sort of kind of being made, but with some utilities you end up with a ".ts" extension, with others you end up with an ".mpeg" or ".mpg" extension, and with others you end up with a ".m2v" extension, and I think there are other extensions as well. And again - some capture utilities and NLEs use their own extensions/codecs.

As for firewire and 1394 - you can go to the 1394 Trade Association site and get more than enough info. http://www.1394ta.org/

And I am not being very techie with all of this. Just trying to say it in easy-to-understand terms. But I wanted to make sure that when you talk about DV coming in via firewire you understood that a lot of "DV" cameras and decks now have 1394 but not all are mini-DV streams. An example is the JVC HD-1 camera and the JVC D-VHS decks. The HD-1 uses mini-DV tapes but the output via firewire is HD, based on a 'new' HDV spec that is not finalized yet. (http://www.sony.net/SonyInfo/News/Press/200307/03-0704E/) The D-VHS decks can input a mini-DV stream via firewire but their output is not a DV stream but an HD stream. This HD DV (or HDV in this case) stream doesn't end up on a PC the same way a mini-DV stream does even though they both come into the computer via firewire.
John_Cline wrote on 7/11/2003, 11:06 PM
Go to Adam Wilt's DV Website

John
john-beale wrote on 7/11/2003, 11:18 PM
Some people use DV to mean simply "Digital Video", but it also refers to a specific logical and physical video format, originally described in a document titled "Specifications of Consumer-Use Digital VCRs using 6.3mm magnetic tape"; HD Digital VCR Conference, December 1994. The current DV standards document is IEC 61834 published by the IEC http://www.iec.ch/

"Mini-DV" is a mostly consumer tape format, runtime 60 or 80 minutes in SP mode. There is a physically larger, mostly-"industrial" tape format called "DV", "Standard-DV" and even sometimes "Large-DV" with runtime up to 4 hours, which uses the same codec and data specs. Sony's DVCAM is another physical format variant, but again using the same codec. This specific codec and logical data format used on each of these tape formats is commonly known as "DV".

For an exhaustive compendium of detail regarding all things DV, consult http://www.adamwilt.com/DV-tech.html
Family_Voices wrote on 7/12/2003, 11:02 AM
Taken together, this group of responses (3 in the immediate tree above, from filmy, jbeale, and John_Cline) is helpful and goes a long way. It surprises me how little information you get about wrappers for DV on most sites that seem to have a lot of DV information. The *.avi file formats are Microsoft de facto standard wrappers, I take it. Is it okay that Microsoft has an iron in this fire? Are these the most basic and useful choices? My first consideration in making a choice would be to avoid doing anything to reduce the performance of the DV or to harm the archival usefulness of retained DV files / DV tapes, but I need to know more before I can apply such a consideration.

I saw references (following a link in jbeale's post to a page on a site maintained by Adam J. Wilt) to a discussion of two types of *.avi files: those that contain DV type 1 and DV type 2. This might warrant a new thread start but I'll try it here:

First of all, is it the *.avi itself that is different, or just what it is wrapping? (It is not clear to me whether the type 1 and type 2 refer to the *.avi wrapper containing these or just to what’s in there when the wrapper is opened.)

I notice that DV type 1 means the audio and video files are entwined in the wrapper. Perhaps this has some advantages such as possibly (?) better maintenance of sync(?).

Type 2 has the audio and DV files within the wrapper untangled and separated. This may facilitate removing or adding audio tracks or copying audio tracks for other purposes. Obviously, when DV is in its tape media environment it is separated as to its audio and video components.

In the above two paragraphs the "descriptions" of DV type 1 and DV type 2 are my own inferences, written to induce comment, and are more likely than not wrong (a disclosure lest anyone misunderstand).

Within the context of Vegas 4, are both types supported? How does one select one or the other? What determines which type one gets, the codec, whether in software or hardware? Is this something that is already achieved in the IEEE-1394 / firewire data stream as the DV comes out of the deck or camcorder? Or are conversions made within the PC, so that untangled and unambiguous DV is all that travels over the firewire bus? (That could not be completely correct. The wrapper must be present when using a data stream going and coming from an external HDrv*, for example.)

Perhaps I have gotten very far afield here. Help is requested and appreciated.

I did intend to use DV in its technical meaning (along the lines of the comment of jbeale) and in the meaning of how it is used in Vegas 4. DV as an acronym* would likely usually mean "digital video" and obviously that encompasses a huge universe as filmy noted. Apparently less confusion is engendered if one uses the term in an expanded form DV25 (except that we don’t see it written that way very often).

Best regards,
Ralph
July 11, 2003

A bonus section is down here for those who want to read a bit more background (and to spare everyone else). At the end of this section are the * and ** footnotes.

Using DV in its technical meaning to refer to a specific format a.k.a. DV25:

It is my understanding that the way that codecs work to encode and decode DV from analog video or to analog or display signal streams is not part of the DV standard. I have heard and read that some codecs work very well and achieve great utilization of the available bandwidth (limited to 25Mb/s) while others are not so effective.

I notice (again, having been reminded by a responder) that DV is by definition a format 480 pixels high by 720 wide. I notice that in square pixels this is neither 4:3 nor 16:9. Accordingly the pixels are not square but horizontally crunched quite a bit (unless a true 16:9 image is encoded within all the pixels, in which case on display the pixels must be stretched horizontally a little). It seems that the display codecs, when these are viewed, adjust the output video streams so that the sets show the correct aspect ratio (usually 4:3).

If letterbox (in the usual sense) is employed, a significant amount of the vertical pixels are blanked (or used for purposes other than displaying the video).

True to its NTSC roots, DV does not offer very good vertical resolution.** I understand that some use the PAL codec for DV thus getting an improvement in vertical pixels to 576 but even that falls far short of the lesser of the two HDTV specs.

*I have problems with acronyms. For example, try searching for articles (posts and responses) on DV within this forum. I can find no way to avoid getting articles on DVD as well, and a variety of other hits that have nothing to do with DV. Trying a search for " DV " defaulted to DV before the search. Likewise, a search for AVI turned up Avid and some other non-AVI articles. The best I could do was to set the search for 100 hits per page and then use my browser to search for "match for exact words" plus “match case” within the hits.

D in some combinations of acronyms has been variously taken to mean "definition" or "digital".

If not for HD's association with "high definition" or "High Definition" it would have been a useful acronym for "hard drive". It is unfortunate that the industry does not have a shorter single word that is a known synonym for "hard drive" exclusively. Maybe one could write HrdDrv or HDrv.

Many know by now that DVD is said to mean nothing except DVD, although it almost certainly originally meant "digital video disc". (The term was never trademarked, or the trademark was lost due to generic use.) An industry effort to declare by fiat that DVD means Digital Versatile Disc (and perhaps that would have been trademarked) was just generally ignored. Just as AARP now means just AARP (this is the organization that used to be the American Association of Retired Persons), DVD now means just DVD.

** Most TVs toss 3 or more percent of the picture by both width and height (meaning perhaps 10% or more overall). The TV set parameter is called over-scan, just to assure that no one sees a dreaded picture edge.

I recall that some rear-projection television sets (the field I was working in with Philips) were set for as much as 8% over-scan (causing a loss of nearly 16% of the image). I think that was to conform to direct-view TV practice. I expect / hope that the industry has gradually reduced over-scan to account for improvements in design and tolerances, but I don’t know that. My 3% over-scan figure was an optimistic guess. This is all regrettable and a little sad for a TV standard that started with around 525 (albeit interlaced-field) lines. Sad because so much of what seems to have been possible as concerns TV resolution was never realized.

I was reading a discussion on the history of color TV (it could be one person's view, so it may not reflect reality) saying that a proposal / initiative to improve the resolution of television, at the time that color was going to be introduced, was not pursued in the interest of maintaining backwards compatibility. Concerning the choice that was made, the intricate encoding of the color signal (hidden within the broadcast signal specified for B&W TV) required electronics that was virtually (or perhaps simply was) beyond the capability of contemporary consumer electronics. This set back color TV's introduction (according to the same person's view) for many years.

For those choosing to have telecine transfers of family movies, over-scan considerations lead to a dilemma. The two most obvious choices to deal with the over-scan issue each seem unsatisfying:
(1) Don’t fill the video with the image of the film frame;
(2) Don’t see all the video when the transfers are watched.
John_Cline wrote on 7/12/2003, 11:51 AM
Ralph,

The video gets compressed to DVC format within the camera and stored on the tape as a compressed digital bitstream. DV as a compression scheme and DV as a tape format are two completely different things. Sony's Digital8 format uses the exact same compression as DV, but writes it to an 8mm cassette.

When you do a Firewire transfer, the bitstream gets copied unmodified to the hard drive and placed inside an .AVI wrapper. (And, yes, it is absolutely OK that Microsoft had a hand in this, without it, there would be no standard. Personally, I don't ever view Microsoft as an "evil empire." Of course, I'm biased, I should mention that I knew Bill Gates quite well back in the mid-70's when a bunch of us were working on developing software for the Altair 8800 and he was, and most likely still is, an extremely nice guy whose altruistic mission was to bring computers to the masses. The fact that he has gotten so wealthy as a result doesn't bother me in the least. )

Anyway, you can edit it all you want to and, as long as the video is unmodified (i.e. no titles, fades or filters), that data is identical to what came from the camera. If you write it back out via Firewire, it will be bit-for-bit identical to the original footage. The same holds true for the audio. If you do modify the video in any way within Vegas, the video will have to be decompressed, modified and recompressed. However, Vegas has a VERY good DV codec and you will be hard pressed to tell the difference between that and the original footage. The audio from the camcorder is uncompressed 48k, 16-bit stereo and, like the video, as long as it isn't modified (i.e. level changes, EQ, dynamic range compression, etc.), it will also remain identical to the original audio bitstream throughout the editing process.

Regarding Type 1 and Type 2 files, it is generally best to use Type 2 because not all software will deal with Type 1 files. There are no audio sync issues between the two formats. Vegas generates Type 2 files.

In answer to your original question, if you copy the video from the camcorder to the hard drive as a DV .AVI file, do a "cuts-only" edit on it and copy it back to the camcorder, there will be absolutely no audio or video quality loss whatsoever.

Regarding overscan, that's just the way it works, keep the area of interest inside the safe action or safe title areas and everything will be fine.

John
jbeale1 wrote on 7/12/2003, 12:25 PM
I'm no PC expert but there are a number of different ways to store the same basic DV25 stream as a file on a PC. Part of this is apparently whether the stream is tagged as belonging to a "DirectShow" codec or a "Windows Driver Model" or a "Video for Windows" (VFW) type codec (?)

I like to use the VirtualDub program http://www.virtualdub.org because it has filter functionality not yet available in Vegas. However, it accepts VFW codecs only and cannot read DV exported from Vegas 4; it fails with the error message "Couldn't locate decompressor for format dvsd (unknown)". I was able to use VirtualDub after I used the Canopus DV File Converter to change it into a "Canopus Edit compatible AVI". Whatever that is. http://www.canopus.com/US/products/dv_file_converter/pm_dv_file_converter.asp
mikkie wrote on 7/12/2003, 1:39 PM
A video file on or in a PC is basically a whole lot of data taken from the pictures in the original video. How you sample that data varies, how the data is recorded varies, and how you compress it is another issue. [http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnwmt/html/YUVFormats.asp] Once you have all this data it has to be read and understood by software to be useful. An avi file is basically just one of these data files that also sticks to certain standards, so that other software knows where to look for what data within the file. It is said to be a container because the avi file spec concerns itself with how the data is presented, not with the actual data itself. It's convenient, but by no means the only *container* type available or widely used.

You'll see stuff about avi being a *wrapper*, & I'm as guilty as the next... Jargon that means it does some translation so that other software knows what to do with the data, and where the data is located - loosely the same thing as a container, though I think used more often when there is more translation going on. If you do away with containers like avi, then you get into issues as with image files and all their varieties, what can open what and so on.

Rather than worrying about type 1 or 2, following the link from the page where Adam discusses this (http://www.microsoft.com/whdc/hwdev/tech/stream/vidcap/dvavi.mspx), the more important part is the increased file length possible using the newer spec (in my opinion anyway). You can dig up some interesting commentary on video/audio interleaving at virtualdub.org, and through its forums...

In any case, my understanding is that interleaving is mainly a way of presenting the data within the file; it doesn't really change the content, & where not having interleaving can be a problem is when the audio has problems to begin with. If the audio doesn't have problems, then storing it as a separate stream is cool - think of all the DVDs out there. Given the level of advanced audio handling in Vegas, it's less of a worry normally than one might expect, in my opinion, but if you wanted to check your files - whether they have audio interleaving, before/after etc. - there are several utilities out there that provide detailed info on avi files of most any variety and if nothing else might prove useful in learning. Vegas itself offers options when it comes to the type of avi file created, and generally you won't have problems playing with the tracks whatever the type (1 or 2) if muxing/demuxing...

RE: various codecs, pretty much all will decode well enough, but it's in the encoding that differences abound. True whether you're talking mpg2 or DV or mjpeg, which is why it matters less (if at all) that the DV codec in Vegas is not used say for decoding by wmplayer.

When it comes to pixel aspect ratios, in my opinion one of the few areas where less knowledge is more... You can have anamorphic widescreen, popular with DVD & SVCD, which happens because the spec is 720 x 480 for NTSC. With DV, think of it as a way to similarly fit more data into a smaller footprint, basically the same thing as using a condensed font when printing. Generally a fair dose of common sense, i.e. if it looks wrong it probably is, will get you through the muddle that comes from reading too much on pixel aspects. Focus on what works.

Be careful talking about blanked, as this has entirely different meanings in TV. [There is a blanking interval or signal that carries all sorts of info, from web content years ago to CC content etc., & is something you'll not see or deal with unless involved directly in broadcast.] Letterboxing is just preserving the aspect ratio of the video, so if it doesn't fit the display screen, you still see the frame as intended; there are just areas with nothing to show, usually above and below the frame. Some cameras manipulate their basically 4:3 data from the CCD(s) to fudge a more 16:9 ratio, perhaps more as marketing, while some offer real benefits from the widescreen mode - still others may shoot a true widescreen picture, so you'll see a lot of discussions that might make learning more confusing.

"True to its NTSC roots, DV does not offer very good vertical resolution.** "
OUCH! HDTV is kind of irrelevant to what I think you're trying hard to learn, and far from a universal spec at this date, so my advice would be to just leave it alone for now. That said, it is much, much better than either NTSC or PAL. Which of those two is best is a matter of opinion, and will land one in discussions similar to the mac/pc wars - hence the ouch earlier. Both have advantages, both disadvantages. When it comes to DV, if there is a loss of quality due to the format itself, it comes from the way that the data is initially scanned or sampled, its not being 4:4:4 or 4:2:2. Adam Wilt has a bunch of info on this, as do other sources on the web. Generally it doesn't matter that much unless you're titling or going through several generations, rendering/converting files multiple times before you're done. A final note of sorts: don't get confused with digital images on your PC, where higher resolution generally means more quality - 576 lines are taller than 480, but the fact that you have a taller picture alone has nothing to do with how that picture looks in terms of quality. I'm NOT saying a still capture from a PAL source might not look better than NTSC, but the height really has nothing to do with it.

RE: Overscan, something not likely to change soon, simply because there's no great economic benefit to reducing the amount in my opinion - it's questionable how many would pay extra to see more of the picture when the programming has already been edited so that nothing important is ever shown there, in the area that would be exposed.

What the public will buy, and how, has been and will always be more of a black art than a science, and in my opinion at least, has had more to do over the years with what's offered and what technology is pursued. Broadcast, cable and satellite companies are all too aware of this, and hesitant to spend on equipment that will definitely become outdated, and likely as not will not improve the bottom line.

Way back, Mission Impossible was one of a handful of shows, possibly the most well known, that stuck with B & W and actually advanced the art of it, believing that aesthetically it was more pleasing to the eye. You had the fear in Hollywood, the one that originally led to widescreen in theaters (35 mm film is 4:3), that TVs would replace movie theaters. And you had enormous costs to the broadcast stations, updating all their equipment.

Telecining itself has a different meaning, and doesn't really apply to transferring VHS footage - nor does overscan in a technical sense. If your footage was on Super8, 16mm, or 35mm, you would have to have it transferred to some sort of video format that you could access electronically. If you did this by transferring the picture, one video frame for every frame of film, you might then have the result telecined to up the fps to 29.97i for NTSC broadcast. [PC output-to-tape hardware could do the conversion at home, & more recently, it would be left as it was and encoded to mpg2 for DVD]

If old movies are (s)vhs, they're simply recordings of the original TV signal - CC data and all are normally recorded, though I won't swear that in all cases the original vertical resolution has been captured to tape. Unless the capture device is incapable of really capturing 720 pixels width, you should get the whole thing, as it was shown on the TV by the vcr, overscan and all.

Aesthetically a concern perhaps, there will be some of the original picture cut off, parts of the picture that never did show on the TV, but that is something shooters have dealt with since there were cameras & TVs... kind of like a golfer hating rain, or a skier hating summer, not something one can reasonably do anything about.
mikkie wrote on 7/12/2003, 1:51 PM
"... a "DirectShow" codec or a "Windows Driver Model" or a "Video for Windows" (VFW) type codec (?)"

FWIW, http://www.microsoft.com/whdc/hwdev/tech/stream/vidcap/dvavi.mspx

"it accepts VFW codecs only and cannot read DV exported from Vegas 4, fails with error message "Couldn't locate decompressor for format dvsd (unknown)""

The Vegas DV codec is used inside Vegas, but Windows will use whatever you install for decoding. Installing a VFW-friendly DV codec (including the Matrox ones discussed here in the forum) will allow V/Dub to open and handle DV files just fine. One caveat is that you then have to save your edited files as something, & I'd recommend mjpeg along the lines of PICVideo for import back into Vegas, where you can re-encode DV.

Yeah, you might lose some data, but not that much according to the arguments at adamwilt.com, which I agree with (most of the tossable data has already been tossed). Yeah, you'll lose some quality re-encoding everything, but you'd re-encode anyway if you're applying V/Dub filters, & mjpeg at the highest quality levels might preserve more with its 4:2:2 pattern.

"I should say that I knew Bill Gates quite well back in the mid-70's when we were working on developing software for the Altair 8800 "

Hey John - Cool

Look kids, someone as old as me!!!
GaryKleiner wrote on 7/12/2003, 2:31 PM
>... mean nothing except DVD although it almost certainly originally meant "digital video disc".<

Actually, it stands for Digital Versatile Disc.

Gary
Family_Voices wrote on 7/12/2003, 10:37 PM
This begins as a response to the interesting letter (response) from mikkie. Read it now if you haven’t.

The old VHS and S-VHS movies that I was referring to are family movies, family videos that were shot with camcorders. I use the term movies because functionally they replaced the 8mm and S-8 film movies (and also of course were talkies, had sound). I can see that that was confusing. I have little interest in anything that was recorded off of a TV. The home movies that were taken on VHS video (some are on S-VHS) though are unique and priceless. They also are not going to last forever.

Without going back to my previous post I am fairly confident that when I mentioned telecine that I was talking about transfers from film. I usually am careful on that point. I don't think the previous response was confused on that but to be sure I mention this to clarify.

I am sure that I made the point earlier that I am keeping our original films for they are someday going to be able to be converted to HDTV.

PAL TVs are 4:3 just as ours are. They squeeze more horizontal lines into the same space. Such material cannot be properly watched on NTSC; it is going to just zoom up and look taller (except that NTSC cannot deal with the extra horizontal lines). So what good are they here in the US, the mother of NTSC? They are good if one is viewing the "movies" on PC monitors (of higher resolution than NTSC) or projecting them using LC projectors, which have higher resolution than NTSC.

Most of us, when we watch projection using LC projectors on front screens, or when we are watching a video / movie on a PC screen, do not allow the image to over-scan/overscan. We watch "the whole thing” by having a border, often black, around the whole image. Because of this I did not have the telecine operator shrink the film's size on the video screen (so that all of the frames could be seen on a regular TV with borders). I wanted to have all the available pixels with information on them. My mode for watching these home movies will be using an LC projector.

I stay way away from MPEG and lossy compression. I just don't want to deal with it right now. I am trying to stay as close to the available content as possible with DV. For access I will favor using mini-DV cassettes played through decks into a PC to get the interpolation to higher resolution. My LC projector has XGA height but has the width of 16:9. For the family movies I won't be able to use the extra width, but it doesn't cost me anything but some brightness. I like to watch the movies with the lights off so there is plenty of light for me (in home projection situations).

I am sticking to my guns about the shortcoming in vertical resolution of any NTSC-derived video format, whether analog or digital. Progressive scan makes the available vertical resolution much better than interlaced video, no question about that, but print out an 8 x 10 picture (8 tall) with 480 vertical lines and ask someone what they think of the quality of the picture. My very old laser printer does 600 "true" lines per inch. Suppose I took a B&W printout on my laser printer that was 3/4 inch tall (that would be approximately 480 vertical pixels I think) and then blew that up to be an 8 inch tall picture? What's it going to look like? Everyone now is running around talking about megapixels. What's the megapixel content of DV? I just worked it out; it’s a bit less than 0.35. That is without over-scan. With 10% over-scan (each direction) this drops to a bit less than 0.29. That’s less than 10% of the 3.3 or so megapixel cameras that many are running around with taking snapshots.

I have read analysis that 4:2:2 50 Mb/s upconverts to HDTV much more satisfactorily than 4:1:1 25 Mb/s. Personally I would rather be getting HDTV-native scans (I am waiting for that with my grandfather's films) and not be doing any upconverting at all. I have to suspect, though, that we think DV looks good because (1) it is viewed progressively (or can be), (2) we are not getting generational losses and post-production losses that are excessive, so long as everything is done uncompressed (within the starting sample and codec compression limitations) through to the last step anyway, (3) we still compare everything video to broadcast NTSC and VHS video, and (4) we are either watching on small screens or we are watching on modest size screens but sit across the room from the set.

I am not knocking DV. I am glad we have it. It is the only way I could do some of the projects I am doing now for access to the old family films, and it is the only way I could practically save the VHS home movies into the digital realm. I used to be really excited about VHS video, however, and I am not anymore. If we are doing DV to provide programs for today's NTSC television systems, even when used without the degradation of broadcast signals, good call! DV is a good match for the purpose. If we are doing DV production because we are creating the works for the enjoyment of humanity in the year 2200 (or possibly 2020), those folks might be asking us (if they could), "What were you thinking?"

I frequently ask, when I talk to someone about these things, what do you see when you watch a baseball game or a football game on television? Can you really see what's happening? "Oh yeah, it's great, I can even see the breath of the quarterback as he calls the signals." Then I could ask, "When you watch the quarterback throw the ball can you see who he’s throwing it to?"

Today's video production of a football game has a myriad of cameras and everything is a sequence of tight isolation and mid shots rapidly cutting around. Our minds are frantically trying to correlate the pieces of the puzzle and merge everything into a sense of the big picture. So what's wrong with this picture? I look forward to the day when the cameras can just stand still and we can see the play unfold. We can direct our own attention here and there on the screen. Today we don’t have to move our eyes much—just keep watching the center of the screen. Everything will be there. (Hyperbole, you have to look around a little.) We won't have to have someone (the camera director) we cannot control directing our attention.

This is the way it works at the real movies. We look here or we look there. We have to decide where we are going to look. It can be more like this in our video rooms at home. When that day arrives we will have to have more than 480 horizontal lines (or 480 vertical pixels in the image) and no, the picture won't look tall; it’s going to be fine. When my grandfather's movies are shown in that same setup they won't look that good, but I want them to be as good as they can be. For that possibility I have to wait a while longer until I can get them scanned to HDTV. Meanwhile, I did what I could do today; I had them professionally scanned to DV with a commercial-quality telecine (old but updated, and with the added wet process it does what was required while being safe on the film).

Without going into the specifics, whenever images are being translated from one standard to another and eventually come back to where they started, even if no lossy compression has been involved something will be lost because of accumulating sampling errors/artifacts. I just don't see the need to do it. If it can be done, I am staying close to the DV all the way.

I appreciate the introduction to the term container. It does sound a little different than a wrapper, doesn't it? You put something in a container and it is just protected and such. A "shipping container" does not stretch, tug, and distort. But a wrapper: we can think of shrink wrap. Wrap conveys an idea that something is being done to what's being enveloped. It’s the difference between a well protected painting packed for shipping and "meet the mummy". I will be watching for how these terms are used, but I have to say that I have not really been seeing much of either one of them yet. (I had been calling the things an envelope but I don't think that is a term used at all.) Because of that, being introduced to the term in a response was a breath of fresh air.

Like aspect ratio and over-scan (or overscan) there is very little mention of what I view as some rather important points in the discussions and articles about DV.

Opening this up now to comment on other responses to the thread. I appreciate all the responses that have come in. My thanks to everyone.

About the *.avi wrapper / container / envelope (whatever we call it): which came first, the *.avi formats or DV? Why was my first experience with AVI files these dinky little tiny movies (still having low resolution) that were a mini-window in the middle of my PC screen? When I first started hearing of AVI files being used in serious work it was quite a while before I could believe it or understand it. (I am still working on understanding it.) It was my first experience with *.avi (I think they were dinky little movies) that caused me to ask "is that okay" as much as their being formats defined by Microsoft.

I appreciate the comments about "type 1" and "type 2". How does one select those though? Is it the codec, options in the Vegas 4 menu, none of the above? I don't know yet.

Best regards,
Ralph
July 12, 2003
kentwolf wrote on 7/12/2003, 11:51 PM
>>Actually, it stands for Digital Versatile Disc.

That's right!

...there. My post for the evening.

Thank you.
riredale wrote on 7/13/2003, 1:41 AM
When compared to a digital still camera, 480x720 sounds horrible, but we're not comparing it to a digital still camera. The human mind is able to overlook some pretty obvious limitations in forming an image of motion video. Studies done many years ago determined that the "optimum" viewing distance for video was the point where individual scan lines could just barely be distinguished by the human eye (which typically can resolve about 1', one minute of arc). That point occurs at about 10 picture-heights from the screen. In other words, if you sit about 10 picture-heights from your TV, you are getting all the resolution your eyes can see anyway.

HDTV employs more scanning lines precisely because NHK determined that the optimum viewing experience was from a distance of about 4 picture-heights, hence the need for more scan lines. I have to smile when people buy itty-bitty HDTV sets, and then sit back at conventional viewing distances. Kind of defeats the whole idea of HDTV.

The overscan issue has always been a pet peeve of mine. I relish the day when some company offers a set that is adjustable and stable, just like the monitor most of you are using now. You can set the edges on your monitor to just touch the bezel, and once set, the picture will stay that way for a long time without drifting much. So why can't our TVs use a similar design? I think it would be a great marketing point to say (honestly) that "Our TVs offer 10% increased sharpness!" simply due to the fact that overscan has been eliminated. Certainly when I watch TV on my PC (I use a WinTV PCI card), I see the entire NTSC frame. It's amazing how much more image there is than what appears on a regular TV.

Finally, one of the amazing bits of trivia I picked up this past year was that DV was NOT 4:3, but a bit wider than that. That's why you see narrow black bars on the left and right edges of the Preview window when you import a 4:3 still photo into Vegas.
mikkie wrote on 7/13/2003, 12:09 PM
FWIW and all that... Much more will make sense the more you play with this stuff.

"I am sure that I made the point earlier that I am keeping our original films for they are someday going to be able to be converted to HDTV."
Some facilities do the conversion now, though the problem as I see it is that you're limited to what data is currently stored on your existing tapes. DV or high-bitrate mpg2 will preserve what's there, but upsampling might be a bit trickier, & even pristine SVHS would have to be upsampled to reach HDTV spec... What you might want to explore is having the tapes upsampled now, before more deterioration of the tapes occurs, and from the tapes themselves - the conversion to DV or mpg2 etc. will lose some of the original picture data, affecting the quality of any HD results. DV is indeed lossy.

For archiving & repeated use purposes, DVD or CD discs last much longer than any tape. Might want to consider going the DVD route, having the DV masters if you like stored somewhere strictly climate controlled.

4:3, 16:9, such are thrown around a bit, but very often can't be taken literally. That said, PAL is interesting from more aspects than pixel height. However, in my opinion it's a moot point dealing with NTSC VHS or SVHS as the extra pixels aren't there to begin with. A partial exception, if I recall correctly - years back I think non-studio hardware was able to capture the 540 height, though how much of that was usable I don't recall. Anyway, I don't know of a way to convert 480 height to whatever without increasing width as well & maintaining aspect.

If the object was/is to present the most data possible on a screen, pad it. Sample or resample the picture so the actual picture width is 640 or perhaps 704 (cropped fullscreen), and add the needed black pixels to arrive at the 720 spec. Another method might be to stick with PC formats such as winmedia or real or odd-sized mpg2. Visually you might not be able to tell the difference if done right, and by playing back analog from a PC to a TV or projector, you generally control the overscan, so that you'd see a full 720 width, perhaps with a bit of letterboxing top and bottom.

Regarding avi files and such... The ideal is to capture/store uncompressed video, but the problem is it's only recently begun to become a viable option without a really HUGE outlay of cash. Video compression in or for a PC has been progressing rapidly since before CD-ROM was a factor. Back then, postage stamp video was quite an accomplishment. Technology has come a long way, as any video then was really pushing the envelope. As I understand it, remember it anyway, Microsoft stepped up with the avi format as video was just starting to appear on the PC, in an attempt to make things easier for developers to include video content in their products... It used to be included in the program stream, actually built into the code rather than a separate file you could access.

Some folks will dislike Microsoft no matter what... least until someone hands them a big, fat check anyway... If one dislikes the fact that they invented avi, go to Matroska.

Before you had DV you had Hi-8, which was it for quality until you took a BIG step up the ladder. I always thought it neat that the occasional effect shot was done in Hi-8 for Hercules and Xena. SVHS was just a tad below Hi-8 I think. DV was the result of trying to do better cheaper, never as a replacement for the then-current high-end equipment. Today they're working on/with a host of formats superior to the standard mini-DV or DV25, whatever one wants to call it, so there are plenty of options to choose from depending on one's wallet. At least one HDTV facility I think is archiving in mpg2.
Family_Voices wrote on 7/14/2003, 10:11 PM
In responding to something that I wrote, mikkie in turn responded (in part):

What you might want to explore is having the tapes upsampled now, before more deterioration of the tapes occurs, and from the tapes themselves - the conversion to DV or mpg2 etc. will lose some of the original picture data, affecting the quality of any HD results. DV is indeed lossy.

Ralph comments:

I appreciate the interest that several have taken in my views and appreciate the responses. I am working (or soon will be) with old VHS/S-VHS tapes that I call "family movies" to try to remind people that they are not old soap opera recordings or similar. However, I am also working with the old family films. When I say movie I mean either video or film, but as often as not it is video--analog video, the VHS/S-VHS media. "Films," as in the phrase “movie films,” means (the way I use it) movies that happen to be on film. I cannot get HDTV transfers done yet (cost prohibitive), but the films should endure another 50 years so I will come back to them.

I have written in this forum in the past few days that I am working towards a capability to capture our old family movie tapes to DV. Once I do that, I don't expect that there will ever be another conversion that will yield as much information as I will get on the next transfer. These videos have not been played in fifteen to twenty-five years and longer. If I were set up for it, I would capture to D-9 or similar (4:2:2 50 Mb/s digital video), but I am not. I think the gains would be minimal, yet I know that someday, when these DV tapes (being 4:1:1 25 Mb/s) are upconverted to become HDTV, that will introduce artifacts that will make them look worse than they would have otherwise. I think these issues are more important in the case of movie films, for there is more detail and quality in the originals to capture than there is in the old VHS analog video tapes, and also more than I expect is present in the S-VHS tapes.

mikkie seems to have been tripped up by the subtle semantic conventions I was using to indicate old films on the one hand and old (analog) video tapes on the other. I agree with mikkie about doing what we can and are going to do with the analog video tapes without significant delay. Time is not going to wait for perfection in transferring family analog video tapes. Time can be our ally with the old movie films if we store them safely. Eventually there will be good procedures to go from small-gauge movie film to digital video at a higher sampling and data rate than is the case for today's NTSC-based standard DV (in its formal meaning).

This is written in case some others might have been confused on these matters. This letter does not break any new ground over what I wrote before.

Best regards,



Family_Voices wrote on 7/15/2003, 11:50 AM
Some thoughts and analysis based on the posted response to this thread by riredale. None of this in any way disagrees with his response. I just wanted to take the numbers that he provided and work through some implications. This is a five-page post. There are a variety of results; you can't just read the last page and get them all.

Viewing television at 10 times the picture height (if that is the correct number; I took it as a given) was arrived at by someone trying to figure out where human visual resolution is just satisfied by the capability of NTSC. For a modern 30 inch TV that would be 15 ft (eye of observer to the face of the set). With today's sets, most of us cannot get 10 times the picture height away from our main televisions!

A minute of a degree is 1/60 of a degree, which in radians is (1/60)*(pi/180) ~= 2.9x10^-4, or about 0.0003. The angle alpha between two lines seen from a distance would be the separation of the centers of the lines, d, divided by the viewing distance, s

alpha=d/s.

For a 30 inch TV screen of 4:3 aspect ratio, the diagonal (the way sets are measured) is in the proportion 5 to the 4 and 3 (from Pythagoras' theorem), so our screen height h is the diagonal times the fraction 3/5

h=30*3/5=18 (inches)

s=10*h; s=180" (inches) or 15' (feet)

The presumed separation of the lines is the screen height divided by the number of lines, approximately 480 (if this is true, some 45 lines of the NTSC 525 raster are not used to make an image, but I think that may be correct):

d=h/480; d=0.0375" (for the assumed 30 inch set, " means inches)

This gives us what we need to solve for alpha, the separation angle at the presumed optimal viewing distance

alpha=d/s; alpha~=0.0375"/(12 in per ft * 15'); alpha~=2.08x10^-4
alpha~=0.000208, and it is measured in units of radians.

alpha=0.000208rad

If one carefully looks at the units in that calculation, the inches in the numerator are canceled by the inches in the denominator, and the feet are canceled by the per-foot factor also in the denominator ("per" means divide), so the outcome of the units for alpha is dimensionless. Alpha is an angle and has an angular meaning, but this alpha is measured in radians, and radians have no dimension in the conventional sense.

We didn’t have to use a specific television height to get this. Angles don’t have dimensions in the usual sense (they are ratios) and always scale with the systems they describe. Had we used algebra the whole way, we would have had, when solving for alpha (exactly),

alpha=d/s=(h/480)/(10*h)=1/(480*10)=(1/4800)rad

This number should compute to be the same as before (aside from rounding in the preceding calculation if present)

Alpha as shown is measured in radians, the natural dimensionless measure of angle. Following a custom that goes back to the Babylonians or somebody, we can put this angle into minutes of a degree. The conversion from radians to degrees follows in any number of ways; the most apparent may be to recall that 360 degrees is 2*(pi) radians. Why? The circumference of a circle implies a value for pi. Most people know that the circumference of a circle is 2*(pi)*r, r being its radius, where pi is whatever number makes this work out. Pi is an irrational number (it cannot be written down exactly, not even as a ratio of whole numbers); it relates to circles in this simple way and it is the same in all systems of units (it is a pure number, it has no dimensions). The first few digits of pi are 3.14. Because 2*(pi) radians = 360 deg and pi is approximately 3.14, a radian angle is converted to degrees by multiplying the angle by 360/(2*pi), or 180/(pi). The first few digits of this number are 57.3.

What is the alpha in degrees?

alpha~=(1/4800)rad * 57.3 deg/rad ~= 0.0119 deg

Notice how the units have again canceled out. How large is this angle in arc-minutes (minutes of a degree)? Multiplying alpha by 60'/deg (' means arc-minutes in this context):

alpha~=0.716'

Hey, that’s not 1 arc-minute, it’s a little less. Why? First notice that for it to have been an arc-minute one would have to sit closer to the set, about 7 times the picture height, not 10 times it. Why do we have to sit farther away than the resolving power of the eye requires? This seems to be the Kell factor at work. I recall that the Kell factor was a number larger than 0.7, and the number we just found may be somewhat precisely the factor that made Kell famous.

Why is there a Kell factor? The Kell factor seems to be the apparent loss of vertical resolution of an NTSC interlaced TV display (consisting of two 1/60 sec fields making a 1/30 sec frame) compared with the resolution that a progressive frame would offer. Experimentally, the adjustment factor (the failure of two interlaced fields to provide exactly twice the vertical resolution that a single field alone would have) is the Kell factor.

TV has horizontal resolution as well. For these to match, the horizontal resolution (this is measured properly in broadcast television engineering as television lines per picture height) should be 480 times the Kell factor, i.e. the effective vertical resolution of the set. The effective vertical resolution of NTSC (believing in the Kell factor, not converted to progressive scan) is about 344 TV lines per picture height. (I have usually been confused about the origin of the Kell factor, thinking of it as relating to the problem of trying to reproduce horizontal lines via a horizontal raster system, but I believe that the two are different.) The Nyquist limit of resolution of NTSC television would be even lower, 480/2: the famous analysis of Nyquist is that the limit is half the sampling frequency of the system, so by extension the Nyquist limit of vertical resolution of television (not to get aliasing/moire) would be only 240. I could still be wrong in this area. If someone needs to know about these things with less doubt I would have to do some research/reading.

So does broadcast television reach a horizontal resolution of some 340 TV lines per picture height? That’s another small research/reading assignment, but it is in that ballpark. One has to be careful where you find a number like this. In the digital realm, horizontal is dealt with independently of the vertical, and so the correction to picture height is not made. Not all TV manufacturers/distributors may report horizontal resolution correctly. In any case they are reporting the set's capability to resolve broadband video, not broadcast TV signals. I recall seeing televisions with horizontal resolution for video as high as 800 back in about 1990. That is probably more than twice what is possible with broadcast NTSC TV signals.

What’s different with DV25 from NTSC besides progressive scan? Horizontally we are 720 pixels wide rather than the 640 that a 4:3, square-pixel, 480-pixel-high display would have. Converting the resolution to vertical terms, to put it into television engineering terms rather than digital display terms, depends on the pixel ratios when the DV25 is clocked out to be viewed on an NTSC TV set. I had supposed that one adjusts the 720 to fit the display width, so by the ratio of a usual 4:3 display one gets 3/4*720=540 “pixels per screen height”. If one does have square pixels, because there are a few extra pixels horizontally in the ratio of 720/640, unless one can squeeze the pixel width by the reciprocal of this ratio, 0.888.. (repeating decimal), one just doesn’t see as many pixels in the horizontal within the frame of the 4:3 set (they are beyond the boundaries of the display), and the “extra” pixels change nothing. I expect that for TV viewing of DV25 the pixels are squashed, all are seen (more about “seen” in a moment), and so the horizontal resolution has been boosted, boosted by the ratio 720/640 which is 1.125. The “squash” of the horizontal pixel width would be the reciprocal of 1.125, that 0.888.. repeating decimal we saw before (about an 11% narrowing).

Why does DV25 have only 480 vertical lines? Because our television sets do not have a variable raster format, unlike our (CRT-based) computer monitors; they are frozen to a fixed number of scan lines. DV25 is married to NTSC, and it was not possible to make a compatible system for NTSC countries with more than 480 vertical lines. (Strictly, we should be writing DV25-480 or DV25-NTSC, since PAL DV has more vertical lines, as we know.)

Vertically, DV25 is the same as NTSC TV without the Kell factor. The optimal viewing distance to get that famous 1 arc-minute of resolution is the same as we calculated before, about 7.165 times the picture height, because we still have a 480-line system:

s(DV25) ≈ 7.165 × h

This is how it is if there is no over-scan in the TV.

If there were 10% over-scan, only 480 * 0.9 = 432 vertical lines would actually be displayed (some sets may approach that much over-scan), and the optimal viewing distance moves back by the over-scan factor. For NTSC TV the “optimal” would then be about 11.1 times the screen height. That original 10-times rule was either measured in a laboratory setting where a monitor with no over-scan was employed, or the eye resolution of the observers involved in the study (maybe one guy and his lab assistant) was a bit poorer than 1 arc-minute (which sounds like a very round number; so does the 10).
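The same little calculation, adjusted for over-scan (a sketch, assuming exactly 10% over-scan):

    import math

    ARC_MINUTE = math.radians(1 / 60)
    visible_lines = 480 * 0.9                       # 432 lines actually displayed
    d = 1 / (visible_lines * math.tan(ARC_MINUTE))
    print(d)                                        # about 7.96 picture heights for a 432-line picture
    print(10 / 0.9)                                 # about 11.1: the old 10x rule scaled the same way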

Back in the days when I was beginning to notice such things, and for quite a long time afterward, the normal large television had a 21-inch diagonal. Those corners were pretty rounded, so extended to square geometry (so that the 3:4:5 rule applies) perhaps those sets were really “23-inch” sets. Working with that number…

I come up with about 13 feet from the eye to the face of the set. With a set protruding 18 inches into the room and a sofa pushing the viewer’s eyes another 18 inches in from the wall, this requires a viewing-room dimension of 13 + 3, or 16 feet, minimum. My grandparents had a big house by the standards of the time, but their living room was not 16 feet wide; it was possibly 15 feet long, and I remember where we sat to watch the TV. It was at most seven or eight feet from the set face, and that was if we were on the couch, which I think we were. I don’t recall that in those days we thought there was anything to be gained by crawling closer to the set. In my view Americans have almost never watched TV from the recommended viewing distance. As the sets have grown larger we have not backed up. Now, as the sets get “sharper” (more resolution), we probably won’t have to move any closer, but the resolution will better match (younger) eyes.
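The calculation, for whoever wants to check it (a sketch, assuming a true 4:3 tube and the 11.1x figure above):

    diagonal_in = 23.0
    height_in = diagonal_in * 3 / 5      # 3:4:5 rule: height is 3/5 of the diagonal, about 13.8 inches
    distance_ft = 11.1 * height_in / 12
    print(distance_ft)                   # about 12.8 feet, close to the 13 feet quoted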

Other than not being able to get far enough away from them (they could have kept them smaller), why do we prefer to be closer to a television than resolution alone indicates? One arc-minute resolution is the capability of the fovea, a very small part of our total visual field. We take care of that by dancing our eyes around whatever we are looking at; we direct our gaze where we direct our mental focus. If the set is angularly too small, our eyes have no large space to dance around in. It is something like watching life through a tube or a keyhole, but with the tube anchored so we cannot swing it around and look where we want to. It feels better when we are less consciously aware of the angular limitation of the space we watch on TV. In other words, I think that just backing up until the detail is matched to the display does not correct for all the deficiencies of traditional television. It is not just the distance in terms of screen height; it is also how close we would need to sit to satisfactorily fill our visual systems. (Remember, to be really filled we have to add the angular range of our dancing eyes; we don’t watch the world with our eyes fixed.)

I have read analyses of how large cinema screens have to be to provide the best viewing for an appreciable number of the seats in the auditorium. Minimum standards and recommended-practice standards actually exist for such things. The required screen size is large enough that some small-screen multiplexes do not measure up. Our TVs are going to have to get a lot bigger to offer anything comparable. However, with an LC projector and a large white wall, some families have figured out how to get eye-filling screens. Although the screens are smaller than the multiplex screens, these LC-projector home-theater families can compensate by sitting closer. With those large viewing screens, though, those same families need a lot better resolution than they are going to get from DV25 to have a movie-theater experience.

Ralph
July 15, 2003
rmack350 wrote on 7/15/2003, 12:53 PM
Ralph,

Others are addressing other points very well so I'll stick to just a few points.

Analog video has no pixels. It is lines of signal or rasters. When the signal is digitized it is broken up into discrete samples (aka pixels). Just as you can sample audio at a variety of rates you can also sample a video signal at a variety of rates. You could sample a video signal to be 2x2 pixels or 2000x4000 pixels or anything you want. DV25 samples the signal such that you get 720x480 pixels for NTSC or 720x576 for PAL.
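Purely as an illustration of that point (a toy example of my own, not anything from the DV spec): until you sample it, a scan line is just a continuous signal, and the number of samples is your choice.

    import math

    def scan_line(x):
        # A pretend analog signal along one raster line, for 0 <= x <= 1.
        return 0.5 + 0.5 * math.sin(12 * math.pi * x)

    def digitize(n):
        # Break the line into n discrete samples ("pixels").
        return [scan_line(i / (n - 1)) for i in range(n)]

    print(len(digitize(720)))    # the DV25 choice for a line: 720 samples
    print(len(digitize(2000)))   # nothing stops you from sampling finer; you just can't
                                 # recover detail the analog signal never carried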

These numbers were derived for practical reasons that we don't really need to go into.

The other point to make is about that 720-pixel width. Even if you do the math to simulate square pixels, 720x480 never translates to a 4:3 image; rather, 704x480 translates to a 4:3 image. (Approximately; numbers get rounded here.)
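If it helps, here is that arithmetic as I understand it, using the commonly quoted 10/11 pixel aspect ratio for NTSC-rate digital video (that exact figure is my own addition, not something stated above):

    active_width = 704
    pixel_aspect = 10 / 11                        # commonly quoted NTSC pixel aspect ratio
    square_equiv = active_width * pixel_aspect    # 640 equivalent square pixels
    print(square_equiv, square_equiv / 480)       # 640.0 and 1.333..., i.e. 4:3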

Why? It simply has to do with the performance of analog devices over the last 50 years. When a TV comes to the end of a raster line it must clamp the signal down before returning to the start of the next line and bringing the signal back up. It takes a little time to bring the signal up and down at each end of a line, and that time is built into the signal. That extra time makes the digitized signal 720 pixels wide rather than just 704.

Some analog devices do this ramping up and down very quickly; others do not. When the specs were first devised they were for tube-based electronics, and those were certainly not fast.

You will find that if you digitize the video from a VHS deck you will see soft edges at the left and right of the frame. This is a good example of the analog signal ramping up and down at the ends of lines.

So, on a TV, you are only meant to see the center portion of the image, not the fading edges. For projection you would want to crop or mask your video to hide the soft edges, if they exist. When digitized, that center portion is represented by a 704 pixel-wide area.

Rob Mack
Family_Voices wrote on 7/23/2003, 10:12 AM
Comments prompted by Rob Mack's (rmack350) preceding post:

There are about 480 useful horizontal lines of video in an NTSC video frame. In DV(25) those 480 useful analog lines become an array of pixels 480 high. Although the horizontal lines in NTSC, as in all raster-scan analog video, contain analog information that can be sampled into whatever digital slices one chooses, as Rob mentions, there is already a form of digitization in the raster itself: there is nothing arbitrary about the count of the scan lines; they are what they are. One can add lines when converting to pixels by interpolative resampling, but I would not expect enough to be gained for vertical resampling to be appropriate. The more appropriate action, in my view, would be to transfer using a digital video format with higher resolution in the first place. My reference to transfer here looks ahead to my own application, small-gauge movie film transfers. My comments concerning resampling do not disagree with anything Rob wrote, but I wanted to call attention to the non-arbitrariness of the DV(25) image array being 480 pixels high.

NTSC television “fell” from its original scheme of 525 raster lines per video frame to really having only about 480 useful lines per frame, before further losses due to overscan. It is to the credit of the creators of DV(25) that they got rid of the unnecessary 45 lines in the digital expression of the same format. I hope they also got rid of whatever portion of the horizontal scan interval is similarly not helpful. If the translation boxes that allow DV(25) to be viewed on our NTSC television sets can keep the sets from worrying about those 45 missing vertical lines, they should also be able to keep the sets happy about any missing portion of the horizontal scan interval.

NTSC has two fields of video; each field contains about 240 useful lines in the raster scan, and the two are temporally interleaved to form a video frame. In the preceding discussion it can be taken that the fields have been converted to a progressive scan of 480 lines, although the term “video frame” in the interlaced case includes both fields.

NTSC video equipment can crank the horizontal bandwidth far beyond what is possible through the allocated NTSC broadcast channel widths. As a consequence there can be more horizontal resolution (per picture height) in NTSC video than there is vertical resolution (the latter being constrained by the fixed number of scan lines). No corresponding means of increasing the vertical resolution exists. Following progressive-scan conversion the effective vertical resolution increases by about 40%, as I understand things, but that only gets us to the 480 vertical lines mentioned above. Because DVD and DV support progressive viewing, these formats do look sharp and detailed compared to NTSC video without progressive conversion.
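That 40% figure, by the way, is roughly what undoing the Kell-like factor backed out earlier in this thread would predict (a one-line sanity check, using my earlier estimate of about 0.72):

    kell_like_factor = 0.72          # the ratio backed out from the 10x viewing-distance rule earlier
    print(1 / kell_like_factor)      # about 1.39, i.e. roughly a 40% gain in effective vertical resolution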

Our regular NTSC televisions toss up to about 10% of the image both vertically and horizontally (one hopes the actual toss is 8% or less, but it is still significant) through overscan. (I had been spelling “overscan” as “over-scan”, but I have decided I prefer the simpler form, which is probably how it is usually spelled in the television technical field anyway.) This means that up to 10% of the useful vertical lines (all 480 of them are useful) are tossed. When we similarly toss up to 10% of each horizontal line, we maintain aspect ratio for a fixed pixel aspect ratio. Tossing still more of the horizontal just to achieve a 4:3 aspect ratio with the convenience of square pixels sounds wasteful to me. In any case, if only the outer “soft” pixels are tossed, the remaining picture would be stretched to fill the viewing screen. This maintains the 4:3 aspect ratio (and square pixels), but aren’t the objects in the display then stretched horizontally (don’t people look a little fatter) if this is done?

If the video originates as DV, as from a DV camcorder, there should be no soft left and right edges to be concerned about. If it originates from an NTSC-type analog system, I would hope that only the “useful” portion of each line is converted to DV (or other digital format) pixels by the translation circuitry/software as the line is time-sampled, so that the full resolution afforded by the DV format (let’s specialize to that), which is also supported by the DVD (NTSC) standard, is captured. Further, let’s hope that when these digital image arrays are converted back to a clocked-out analog signal for presentation on a CRT display, the timing can be adjusted by the output circuitry/software, making the pixels non-square as necessary to preserve the aspect ratio of the objects displayed while also filling the viewing screen.

What I have just written is consistent with the analysis in my previous response to this thread. There are more details there, including several calculations.

I worked for Philips Electronics for 22 years. For the last 10 of those years I was involved in the development of liquid crystal projection. Liquid crystal projection panels, the way everyone does it now, are actively driven pixel arrays. When used to project analog video, the video is clocked into A/D converter circuits that produce appropriately sampled video for the displays; in many cases this conversion is done with chips on the display panels themselves. Depending on the design there may be one panel (with sub-pixels providing the color) or, almost always today, three panels, one for each primary color. The converged color image is formed by a light path that recombines the light from the three panels and sends it out through a single projection lens. Although LC displays are pixelated, in most cases they project video that has been converted to analog or that originated as analog. For example, my LC projector takes its input signal from the “VGA” port of my laptop. I write “VGA” in quotes because my display is actually wide-XGA format, but the port got its name in older days. Although within the computer the information probably exists as digital video (my laptop has a digital display), the laptop converts the signal to analog, raster-scanned video before sending it out the “VGA” port. One might hope that one day digital displays will accept digital inputs directly (some devices already provide a digital output channel, and my LC projector has a digital input channel), but as of this time there is commonly a lot of D/A and A/D translation going on.

Though I do not claim to be an expert (I have learned what I know about DV(25) digital video since leaving Philips), I am familiar with the similarities and differences between raster-scan analog displays and pixel displays. DV(25) has its roots in NTSC video. DV(25) can offer possibly modest improvements in viewing resolution over NTSC video, aside from the big boost in perceived vertical resolution achieved by progressive scan. As a digital replacement for NTSC video, DV(25) is an excellent format, well deserving of its popularity. It will not be competitive with HDTV.

DV(25) and digital formats:

I have read that D-9, a JVC-originated, 50 Mb/s, 4:2:2-sampled digital format, upconverts to digital HD television more satisfactorily than DV(25) with its 4:1:1 sampling, even though both formats are subject to the same NTSC-inspired 480-pixel-high array. DV(25)’s color sampling rate is adequate compared to NTSC, but the much better color sampling of D-9 provides a better future upgrade path. In at least this respect D-9 (and similar or better formats) seems superior to DV(25), but the benefit comes at greater expense. D-9 decks are also relatively rare, whereas everybody with a DV camcorder can play DV(25) video. The D-9 media, because it has the VHS form factor, also has a storage-bulk disadvantage. DVD (in its movie-format form) can provide access to DV media without too much loss of resolution compared to the DV format, and a DVD data disc holding native DV(25), with no MPEG-2 compression, has quality equivalent to playing a DV tape from a deck, in my understanding.
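To put the 4:1:1 versus 4:2:2 difference in concrete terms, here is a tiny sketch of per-line sample counts (my own illustration, assuming the usual 720-sample line):

    luma_samples = 720

    def chroma_per_line(y, cb, cr):
        # y:cb:cr is the relative sampling notation, e.g. 4:1:1 or 4:2:2.
        return luma_samples * cb // y, luma_samples * cr // y

    print(chroma_per_line(4, 1, 1))   # (180, 180) chroma samples per line for DV(25)
    print(chroma_per_line(4, 2, 2))   # (360, 360) for D-9: twice the horizontal color detail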

DVC and DVCAM, what’s different? Less than some think:

The DV(25) digital stream is often captured to DVCAM rather than to a DV cassette (DVC) or mini-DV cassette (those two are the same except for the cassette dimensions and the length of tape available; the tape width is the same). The way a DVCAM cassette holds DV(25) is a little different from a DV cassette: the tape moves faster and the magnetic tracks are wider, yet the data stream going into DVCAM and the data stream coming off a DVCAM tape are exactly the same as with the DVC and mini-DV media. The advantage of DVCAM is to provide more overhead for a successful recording and playback of the digital stream, with the tape operating farther below its maximum capacity. This implies that DVCAM recordings are more robust, making the format more suitable for archiving and for commercial use. Standard DVCAM cassettes are physically larger than the mini-DV cassettes most camcorders use, the media costs more, and the decks cost more. New Sony DSR-11 decks are available for about $1800; that deck plays DVC media as well as DVCAM media, so it is versatile. Many seem to believe that DVCAM is a somewhat different digital format than DV(25). I have checked into this and I am confident that is not the case; only the media and the way the stream is laid onto the tape are different.
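For anyone curious about the mechanics, the commonly quoted transport figures look roughly like this (these numbers are from my own memory of published specs, not from this thread, so treat them as approximate):

    # Commonly quoted DV vs DVCAM transport figures (approximate, for illustration only).
    dv_track_pitch_um, dvcam_track_pitch_um = 10, 15        # DVCAM tracks are wider
    dv_tape_speed_mm_s, dvcam_tape_speed_mm_s = 18.8, 28.2  # DVCAM tape runs roughly 50% faster

    # The 25 Mb/s data stream is identical either way, so a given length of tape
    # simply records for less time in DVCAM mode:
    print(dv_tape_speed_mm_s / dvcam_tape_speed_mm_s)       # about 0.67, e.g. a "60 minute" mini-DV
                                                            # cassette holds about 40 minutes of DVCAM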

Home Movie Film Transfers to Digital Media:

I use DV(25) to provide access to archived home-movie films. The master digital transfers will be kept on DVCAM. Digital access gives a growing number of geographically dispersed family members individual access to the unique, original family films, while those original films remain in a single archive. Before the end of the films’ life these movies should either be replicated to a higher-resolution digital format (this requires a more sophisticated transfer process, most likely to an HD format) or replicated to a new film “original”. For now the former is not practical and the latter is too expensive for the benefit. I expect digital replication of small-gauge films to be practical within 10 years, and the films should last longer, perhaps 25 to 50 years, so things are comfortable.

Even though film transfers to digital form are for “access” and not for replication (at this time), it is important to have the transfers done by a safe process, one that will not damage the film. Close examination of home-movie films reveals them to be generally quite scratched up; presumably they get this way from being projected. If the transfer process uses something akin to the film transport of a projector, more damage can accrue. I recommend the use of commercial telecine equipment by a professional post-production house that is familiar with color reversal film, has the means to correct color, provides (ideally) for supervised transfer (you are there when the films are transferred) with one-on-one attention from the timer/colorist/telecine operator and just one transfer running at a time, charges by the telecine time consumed rather than by the foot of film or by the running time of the resulting production, and is experienced with handling archival film. This can be costly, perhaps more than $300 per telecine hour, but these are not just rare films, they are unique films.

A wet process can eliminate nearly all visibility of scratches. I do not recommend putting any permanent or semi-permanent chemical product on the film to condition it, to mask scratches, or for any other reason. The wet-process chemical should be approved for film, come from a reputable, film-preservation-aware company, and be essentially evaporated by the time the film reaches the take-up reel. I recommend use of the Kodak particle transfer roller system for “precleaning” the film; remaining grease, grime, cigarette residue, etc. will probably be removed in the course of the wet process. I chose to have the film transfers I supervised run in somewhat “slow motion” to lengthen brief scenes and reduce the jitter in the image. This increased the telecine time and so increased the cost of the transfer, but the transfers provided a good viewing experience with no editing or subsequent processing at all. The ultimate wet process is the wet gate, but there are very few wet gates available for small-gauge film transfer work.

Many transfer houses do not use commercial telecine gates; rather, they use a modified projector film transport to provide the film plane for the transfer. Many of these systems project the light from the modified projector directly into a video camera. Such a scheme is very likely to cause a pronounced amount of speckle. Speckle is a phenomenon that can be mistaken for film grain, but it is caused by self-coherence of the light source when the light rays travel similar paths from the lamp to the imaging array or sensor. Because speckle is not grain yet always appears to be in sharp focus, do not make the mistake of viewing the presence of speckle as proof of an optically sharp transfer process.

In addition to color correction, elimination of visible scratches, avoidance of new scratches and film damage, and avoidance of speckle, a quality transfer should also use image-sensing arrays with good pixel (bit) depth. Lack of depth results in a washed-out image (poor contrast) or muddled detail in the image.

Remember, the original film is its own archive copy. At this time, in most cases, getting a safe, quality transfer to provide access to that archival original is the reason for doing the digital transfer. Keep films in the dark, in cool ambient conditions, in unsealed containers, in relatively dust-free areas at normal household humidity.

Ralph
July 23, 2003
SonyDennis wrote on 7/31/2003, 5:46 PM
>>Actually, it stands for Digital Versatile Disc.

It used to. Now it officially doesn't stand for anything.

///d@
winrockpost wrote on 7/31/2003, 7:19 PM
Wow!!!!,
Saw the thread
Got interested
read it
now have a headache
need a nap

seriously though, interesting stuff