Lossy/lossless has two levels:
1. Going from analog to digital is always lossy, because a truly lossless capture would produce impractically large files. A digital file of a size a computer can handle is called lossless when it:
a. can be cut and rejoined,
b. can be transferred to another platform, or
c. can be edited
and still retains its picture information, e.g. DV AVI.
2. Any digital format that loses picture information after process a, b or c is lossy.
DV AVI, even after editing, let's say a color correction, retains its picture information; you have only changed how it looks.
MPEG-2 can nowadays be cut and rejoined without losing picture information under certain conditions (see the sketch below), but it cannot be edited that way, thus it's lossy.
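
To make those "certain conditions" concrete, here is a toy Python sketch (the frame pattern and GOP length are illustrative assumptions, not from any spec) of why long-GOP MPEG-2 can only be cut losslessly at GOP boundaries:

    # B- and P-frames reference other frames, so a cut that lands
    # mid-GOP strands frames whose reference frames were thrown away.
    GOP = ["I", "B", "B", "P", "B", "B", "P", "B", "B", "P", "B", "B"]

    def lossless_cut_points(total_frames, gop_len=len(GOP)):
        """Frame indices where a cut needs no re-encoding: only the
        first frame of each GOP (an I-frame) is self-contained."""
        return [f for f in range(total_frames) if f % gop_len == 0]

    print(lossless_cut_points(48))  # [0, 12, 24, 36]

A cut at, say, frame 30 (mid-GOP) forces the editor to re-encode at least the surrounding GOP, which is exactly where the lossless guarantee ends.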
Lossy or lossless is not a matter of preference; we must choose according to our purpose.
The reason for lossy codecs is that they generally produce significantly smaller files than lossless codecs. About the best a lossless codec can usually do is cut the file size in half. At that rate a standard dual-layer DVD could only hold about 8.5 minutes of video. Who would want that?
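
A quick back-of-the-envelope check of that claim, assuming 720x480 at 29.97 fps, 24 bits per pixel (uncompressed RGB) and a 2:1 lossless codec (all assumptions, not measured numbers):

    dl_dvd_bytes = 8.5e9                 # dual-layer DVD capacity
    frame_bytes  = 720 * 480 * 3         # 3 bytes per pixel
    rate         = frame_bytes * 29.97   # uncompressed bytes per second
    lossless     = rate / 2              # 2:1 lossless compression

    print(dl_dvd_bytes / lossless / 60)  # ~9.1 minutes of video

So roughly nine minutes, close to the figure above.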
Lossy codecs allow much smaller file sizes, generally starting at about 1/8 the size of uncompressed video at the highest quality and going down to tiny fractions of that, depending on how much quality you're willing to lose. They can achieve much smaller files because they're allowed to discard some of the data in exchange for better compression.
I'll use uncompressed files (lossless) any time I want to move a file from one program to another for additional work. If I used mpeg, for example, then every time I render that file to mpeg and move it to another program to be opened, worked on, and rendered again, I lose a bit of quality (a rough simulation of this follows below). Whereas if I work with uncompressed avi (lossless) between programs, I lose nothing in quality.
You need big drives to work this way, though. An hour of HD mpeg, which is about 13 gigs, will be about 650 to 750 gigs (depending on resolution) as uncompressed avi.
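
Here is that rough simulation of generational loss, using repeated JPEG saves as a stand-in for repeated MPEG renders (both are lossy DCT codecs; the test image, quality setting, and use of Pillow/NumPy are all assumptions for illustration):

    import io
    import numpy as np
    from PIL import Image

    rng = np.random.default_rng(0)
    img = Image.fromarray(rng.integers(0, 256, (480, 720, 3), dtype=np.uint8))
    original = np.asarray(img, dtype=np.float64)

    for generation in range(1, 6):
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=75)  # one lossy "render"
        buf.seek(0)
        img = Image.open(buf).convert("RGB")
        err = np.abs(np.asarray(img, dtype=np.float64) - original).mean()
        print(f"generation {generation}: mean error {err:.2f}")

The error against the original typically creeps up a little with each pass; a lossless intermediate would report 0.00 every time.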
If you multiply the number of pixels (720x480, for instance) by the number of bits for each pixel (which determines its exact shade of color and level of intensity between pure dark and pure bright), you end up with the actual storage required for one frame of video. Multiply that by the frame rate and you know how much space is required for each second of that video. It works out to about 60 GBytes for an hour of 720x480 NTSC video, and I think the HD calculations someone presented in that earlier post are probably correct, although I don't know those off the top of my head.
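
The same arithmetic as a small function (the bit depths are assumptions: 12 bits/pixel models DV-style 4:1:1 NTSC sampling, which lands near the ~60 GB/hour figure, and 24 bits/pixel models uncompressed RGB HD):

    def gigabytes_per_hour(width, height, bits_per_pixel, fps):
        bytes_per_frame = width * height * bits_per_pixel / 8
        return bytes_per_frame * fps * 3600 / 1e9

    print(gigabytes_per_hour(720, 480, 12, 29.97))    # ~56 GB: NTSC SD
    print(gigabytes_per_hour(1920, 1080, 24, 29.97))  # ~671 GB: full HD

The HD figure also matches the 650 to 750 gig range quoted earlier.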
Even with today's processors, RAM and hard drives, that's way too much "stuff" to push around, so some compression is applied to reduce the total number of bits for each frame of video. Truly lossless compression is rare, but it does exist. The nature of images, whether still or moving, makes it tough to reduce the size of the file without losing any information, because there is generally no order or pattern in nature: everything is random. Most lossless compression works by recognizing redundant patterns and storing those as "run lengths" rather than storing the original 1's and 0's. Thus, to get a significant (large integer multiple, as we engineers like to say) reduction in file size, you have to start "throwing away" some information during compression. This leads to lossy compression.
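
A minimal run-length encoder of the kind just described (a toy sketch, not any real codec): runs of identical samples are stored as (value, count) pairs instead of repeating the raw samples, so it only pays off when the data actually contains runs:

    def rle_encode(data):
        runs, i = [], 0
        while i < len(data):
            j = i
            while j < len(data) and data[j] == data[i]:
                j += 1                  # extend the current run
            runs.append((data[i], j - i))
            i = j
        return runs

    print(rle_encode([0, 0, 0, 0, 255, 255, 7]))  # [(0, 4), (255, 2), (7, 1)]

On random, pattern-free data the (value, count) pairs can come out larger than the input, which is the point made above.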
Some compression schemes lose more information than others. DV compresses each frame individually and loses a lot of color information, but is pretty good about preserving the spatial information. MPEG-2 and its cousin, HDV, compress a whole series of frames (a Group Of Pictures, or GOP) and achieve even more compression, but far more information is lost.
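
"Loses a lot of color information" can be made concrete with chroma subsampling arithmetic (8 bits per sample assumed; the simple one-row grouping below is an illustration, not the full subsampling spec):

    def bits_per_pixel(luma, chroma_pairs, pixels_per_group, bits=8):
        samples = luma + 2 * chroma_pairs   # Y plus Cb/Cr samples
        return samples * bits / pixels_per_group

    print(bits_per_pixel(4, 4, 4))  # 4:4:4 -> 24.0 bits/pixel
    print(bits_per_pixel(4, 2, 4))  # 4:2:2 -> 16.0 bits/pixel
    print(bits_per_pixel(4, 1, 4))  # 4:1:1 (NTSC DV) -> 12.0 bits/pixel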
When working on feature films or other critical work, almost all compression is avoided, so you need fancy hard drives and big workstations.
For slightly less critical work, some form of lossless or very-low-loss compression is used. The cheap version of this is the HuffYUV codec, but I think there are professional codecs that offer better performance and more "knobs" to twiddle.
When you are finally ready to deliver, you must use one of the various delivery formats, such as DVD or BD, and these use the very high compression levels already mentioned. You only want to use these lossy codecs once, as the very last step in your workflow. You do NOT want to re-import that video and then compress it back to MPEG-2 or AVCHD a second time, because you will almost certainly be able to see a significant number of artifacts (flaws) in the resulting video.
BTW, audio has a similar lossless, low-loss, and lossy (delivery) format structure, just like video.
This isn't to say that working with lossy compression formats is always bad.
For instance, let's say you shoot on an HDV camera that uses lossy mpeg2 compression. Let's say you edit without color correction in Vegas, smart-render the video into the same mpeg2 format, and author it onto a Blu-ray disc without re-rendering. You will actually have one generation less of loss than someone who converted the original video to uncompressed, edited, and then recompressed the video back into mpeg2 for Blu-ray authoring.
The problem with passing some material through untouched / "smart" rendering (which isn't really that smart) is that you can get subtle issues where there are jumps between first-generation and second-generation material. For DCT-based compression you don't really see this, but the chroma can get screwed up.
In Vegas's case, box reconstruction is used, so you will get boxy chroma when going to a format that doesn't use chroma subsampling. 4:4:4 sources also won't get converted to 4:2:2 properly.
An easy and reasonable way to avoid these issues is to recompress everything into a second generation (while working with 4:4:4 intermediates) and to apply proper chroma resampling (obeying chroma siting standards); the sketch below shows the difference.
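
A one-dimensional sketch of "box" versus smoother chroma reconstruction (NumPy here is an assumption for illustration; real resamplers also have to respect chroma siting offsets, which this ignores):

    import numpy as np

    chroma = np.array([16.0, 64.0, 128.0, 240.0])  # subsampled chroma row

    box    = np.repeat(chroma, 2)                  # box reconstruction
    smooth = np.interp(np.arange(8) / 2.0,         # linear reconstruction
                       np.arange(4), chroma)

    print(box)     # [ 16.  16.  64.  64. 128. 128. 240. 240.]
    print(smooth)  # [ 16.  40.  64.  96. 128. 184. 240. 240.]

The repeated samples are the "boxy chroma" complaint: values are duplicated instead of interpolated, so chroma edges turn into visible blocks.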
I thought that smart rendering was a direct copy of the media data. That wouldn't be likely to happen if you are using 8-bit media in 32-bit mode, at least not without Vegas bypassing all things 32-bit for that moment.
Yeah, and that's why the contrast is so great where the 32-bit render changes the colorspace, but only at the transitions and other parts that are actually re-rendered during a smart render. The sudden jumps in color are what we (or at least I) are talking about.