Do a search or three, and you'll find a bit. Also visit sites like adamwilt.com
Something that can be kind of hard to get one's head around; try to focus on what happens (the results) rather than all the stuff behind it... Kind of like driving a car: one doesn't need to know how the engine and transmission work. In fact, knowing can make driving a bit more complicated ;-)
That said, perhaps this will help a bit...
Anamorphic video, as on a widescreen DVD, is encoded and stored at a much narrower frame size, and the player expands it back out to its full-width glory. What's good about it is this saves a bit of room, which is your goal when you compress video.
I imagine the same sort of thing was considered when they came up with the DV format - how to get the most in the least space. DV footage records video that fills the screen, stored at 720 x 480 if NTSC. But that frame really represents a narrower picture - it's not truly 720 x 480 in square-pixel terms. In a (failed?) effort at simplifying this concept, the DV standard calls for a roughly 0.9 pixel aspect ratio - instead of thinking of the whole video frame being squished and expanded, think of individual pixels that are squished and expanded.
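To put numbers on that "squished pixels" idea, here's a tiny sketch. The pixel aspect ratios below (10/11 for 4:3 NTSC, 40/33 for anamorphic 16:9 NTSC) are the commonly quoted Rec.601-derived figures, not something from this post, so treat them as assumptions:

```python
# Pixel aspect ratio (PAR) in a nutshell: the frame is stored at one
# width, and the display width is the stored width times the PAR.
# PAR values below are the commonly quoted NTSC figures (assumed).

def display_width(stored_width, par):
    """Width the picture should appear at on a square-pixel display."""
    return stored_width * par

NTSC_43_PAR = 10 / 11    # ~0.9091 - standard 4:3 DV, pixels narrower than tall
NTSC_169_PAR = 40 / 33   # ~1.2121 - anamorphic 16:9, pixels wider than tall

print(round(display_width(720, NTSC_43_PAR), 1))   # ~654.5: 4:3 DV is really narrower
print(round(display_width(720, NTSC_169_PAR), 1))  # ~872.7: anamorphic expands out wider
```

Same 720-wide stored frame both times; only the pixel shape differs.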
Most important thing (in my opinion anyway) is to look at the results and select the options that get you what you want - if something's set wrong, you'll see it, rest assured, so go back and play with the settings. If you have video at one aspect, generally you want to keep it that way, and you'll likely never have to worry.
First, understand that an analog video signal has no pixels. It's just lines (rasters) of analog info at varying frequencies and voltages. (The voltage range corresponds to brightness).
When a signal is digitized it is, of course, sampled. You could sample a line of video:
- 10 times per line
- 640 times per line
- 654.5 times per line
- or whatever you took a fancy to
It just so happens that DV sampling yields 720 brightness samples per line of video. There are mathematical reasons this was chosen but the basic reason was so that DV would have the same number of luma samples/line in both PAL and NTSC video.
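The arithmetic behind that choice can be sketched in a couple of lines. The figures here (a 13.5 MHz luma sampling clock and an active line of about 53.33 microseconds, the same in both 525- and 625-line systems) are the published Rec.601 numbers quoted from memory, not from this post:

```python
# Why 720 samples per line in both PAL and NTSC: Rec.601 fixed one
# luma sampling clock (13.5 MHz) for both systems, and the active part
# of a line is about 53.33 microseconds in each.

SAMPLE_RATE_HZ = 13_500_000   # 13.5 MHz luma sampling (assumed Rec.601 figure)
ACTIVE_LINE_SEC = 53.333e-6   # active portion of one scan line (assumed)

samples_per_line = SAMPLE_RATE_HZ * ACTIVE_LINE_SEC
print(round(samples_per_line))  # 720 - the same count for PAL and NTSC
```

One clock, one sample count, two TV systems - that's the convenience the spec was after.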
Where pixel aspect ratio comes in is that computer screens essentially have a different sampling rate. We say they have square pixels while DV video has non-square pixels.
The upshot of it is that a DV image will look wider than it should on a computer monitor unless we use software to correct the display of it. If you keep the vertical dimension constant and just correct the horizontal dimension you would squeeze the image down to 654.5x480.
That should lead you to a number of questions:
1-Do I really want to convert the dv footage to 654.5x480?
2-What do I do with the extra half a pixel?
3-Why isn't it 640x480?
1-For DV editing you only want to simulate square pixels to get an idea of what your footage will look like on a TV. If, however, your final output is for a computer monitor or data projector, then you may want to render the movie with square pixels (1.0 PAR) - assuming the media player can't correct for the pixel aspect ratio itself. This is usually what you want to do.
2-Don't worry about the extra half a pixel. If you're just simulating square pixels then it doesn't much matter. If you are rendering a movie in a square pixel format don't fight the rendering template. It'll pick a nice whole number of pixels.
Another way to get whole numbers is to scale up to 720x528. This can be useful with stills, but Vegas is built to work with 654x480 stills. To confuse matters, other NLEs use different numbers (in part because they did their math wrong, solving for the answer they wanted). Just follow the directions of your specific software.
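Here's where 720x528 comes from, assuming the 10/11 NTSC PAR used above: instead of squeezing the width down to the awkward 654.5, you stretch the height up by the inverse amount, and both dimensions come out whole:

```python
# The 720x528 trick: keep the stored width of 720 and scale the HEIGHT
# up by 11/10 (the inverse of the assumed 10/11 NTSC PAR) so the
# proportions are corrected using whole numbers only.

width = 720
height = 480 * 11 / 10   # stretch height instead of squeezing width

print(width, height)     # 720 528.0 - no half-pixels anywhere
```

Same geometry as 654.5x480, just reached from the other direction.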
3-NTSC 720x480 doesn't correct down to 640x480 square pixels (but 704x480 does). The truth about TV and 4:3 aspect ratios is that 4:3 refers to the visible area of the picture and not the entire signal from the start of each raster to the end of each raster.
Think about it. TV signal specs were set in the days of tube electronics. There was a lot of extra slop and tolerance built into the spec, including extra line width to give the signal voltage time to drop to zero before moving from the end of one line to the beginning of the next. If you digitize the output of just about any VHS deck you'll see soft edges at the right and left of the frame. This is allowable because the edges aren't supposed to be visible.
The part of the DV frame that makes up a 4:3 image is the center area of 704x480.
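You can check that 704 claim with the same PAR arithmetic (again assuming the 10/11 figure): 704 non-square pixels squeeze to exactly 640 square ones, and 640x480 is exactly 4:3, while the full 720 width lands on the awkward 654.5:

```python
# Why 704, not 720, is the 4:3 picture area (assuming a 10/11 PAR):

print(704 * 10 / 11)   # 640.0 - clean whole number
print(720 * 10 / 11)   # ~654.5 - the awkward number again
print(640 / 480)       # ~1.333 - i.e. exactly 4:3
```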
When you make a 640x480 square pixel movie from your DV footage one of two things will happen:
-The software will convert the whole thing to 640x480 and the image will be just slightly narrowed. You may not even notice this.
-The software will crop the frame to 704x480 and then render a 640x480 movie.
In the first case you should have cropped your DV project to 704x480 before rendering.
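A quick sketch of how far off that first path is, using the frame geometry described above (full 720 width vs. the 704 picture area):

```python
# Path 1: squeeze the whole 720-wide frame into 640. The 704-wide
# picture area, which SHOULD map to 640, lands a bit narrower:

full_width = 720
picture_width = 704
target = 640

picture_after_squeeze = picture_width * target / full_width
error_pct = (1 - picture_after_squeeze / target) * 100
print(round(picture_after_squeeze, 1), f"{error_pct:.1f}% too narrow")
# 625.8, about 2.2% too narrow

# Path 2: crop to 704 first, then scale 704 -> 640: no distortion.
```

A 2.2% squeeze is the "slightly narrowed, you may not even notice" case described above.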
Now that I've confused things, let me say that I don't know what Vegas does about this. It may be that the different codecs handle things in different ways. Some may crop and resize while others simply resize.