I feel like an idiot...

Independence Films wrote on 8/1/2009, 12:58 PM
Before I get into why, I first want to say Kudos to SONY.

-The upgrade to Vegas 9a was the most painless transition I've ever made with any software- Download off the site, install, the program registers itself online... cool! I was up and running in minutes. I wish everyone used this model- thanks Sony!

Now, why I feel like an idiot (videot ?) ...sorry...

What the hell does YUV actually stand for? I'm assuming it's something to do with uncompressed video. Why or when would I want to render out to this spec?

Also, "interleaving"? what is that? ...and why do I care how frequenly it happens?

-I've been using Vegas quite successfully for years now, but I'm now starting to explore a little more under the hood. Is there a "redbook" somewhere that outlines what all these various parameters actually mean? The online manuals are kind of poor in this regard...

Thanks guys,
-Ross

Comments

farss wrote on 8/1/2009, 2:15 PM
"What the hell does YUV actually stand for?"

http://en.wikipedia.org/wiki/YUV

In this context, calling it "YUV" is almost certainly wrong, as it's most likely Y'CbCr, but that would be quite a mouthful. I guess your question comes from finding the "SonyYUV" codec. If that's the case, it's a codec that uses 4:2:2 chroma sampling, so it is not strictly uncompressed, although it does not use temporal compression the way HDV does. It's fairly close to lossless; it is the same as the Blackmagic Design codec, and the AVI file is interchangeable across every platform I've tried it on.
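
If you want to see what 4:2:2 actually does to the pixels, here is a back-of-an-envelope Python sketch. This is my own illustration, not the codec's actual code; the conversion uses the standard BT.601 coefficients, and the pixel values are made up:

def rgb_to_ycbcr(r, g, b):
    """R', G', B' in 0.0-1.0 -> (Y', Cb, Cr), BT.601 coefficients."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma
    cb = 0.564 * (b - y)  # blue-difference chroma, scaled to -0.5..0.5
    cr = 0.713 * (r - y)  # red-difference chroma, scaled to -0.5..0.5
    return y, cb, cr

def subsample_422(row):
    """4:2:2 - every pixel keeps its own Y', each horizontal pair shares one Cb/Cr."""
    ycbcr = [rgb_to_ycbcr(r, g, b) for (r, g, b) in row]
    luma = [y for (y, _, _) in ycbcr]                  # full resolution
    chroma = [(cb, cr) for (_, cb, cr) in ycbcr][::2]  # half resolution
    return luma, chroma

# Two pixels of pure red: two luma samples, but only one Cb/Cr pair.
luma, chroma = subsample_422([(1.0, 0.0, 0.0), (1.0, 0.0, 0.0)])
print(luma)    # [0.299, 0.299]
print(chroma)  # [(-0.1686..., 0.4998...)]

Luma keeps full resolution and only the chroma is halved horizontally, which is why the eye barely notices, and also why it isn't strictly lossless.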

Bob.
gpsmikey wrote on 8/1/2009, 3:26 PM
Interleaving is a process developed originally to help smooth out motion - each "frame" is made up of two "fields" - every other scan line comes from the first field, then it goes back and fills in the alternate scan lines with the second field - so in the NTSC world, you end up with 30 frames per second but 60 fields per second. It smooths out the motion, but it can give some strange "sparkling" on still images with lots of detail. I'll let someone else provide more detail than I know as to when each is best, but those are the basics. Here are a couple of links with some additional information, plus a quick sketch below:

interleave info
all about fields
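
And to make the field/frame relationship concrete, here is a little Python sketch of my own (just an illustration, not from any manual) that weaves two half-height fields into one full frame:

def weave(field1, field2):
    """Weave two half-height fields into one full-height frame."""
    frame = []
    for upper, lower in zip(field1, field2):
        frame.append(upper)  # even scan line, from field 1
        frame.append(lower)  # odd scan line, from field 2
    return frame

# In NTSC the two fields are captured 1/60 s apart, which is where the
# smoother motion (and the "sparkle" on fine detail) comes from.
field1 = ["scan line 0", "scan line 2", "scan line 4"]  # time t
field2 = ["scan line 1", "scan line 3", "scan line 5"]  # time t + 1/60 s
for line in weave(field1, field2):
    print(line)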

mikey
John_Cline wrote on 8/1/2009, 3:36 PM
Mikey, your description is basically accurate, but the term is "interlacing." Interleaving is a different process: it refers to how much audio is interleaved with the video in a video file, and how often. Vegas defaults to a chunk of audio every 250 milliseconds.
gpsmikey wrote on 8/1/2009, 3:51 PM
Oooops -- brain fade -- it's too hot here today. You are correct, John - I read "interleaving" but saw "interlacing." Well, in case anyone wanted to know about interlacing :-)

mikey
Independence Films wrote on 8/1/2009, 7:41 PM
Thanks guys,

yeah, "interlacing" I've figured out. The question arises from choosing a render format and the "Interleaving" options appear.

...so interleaving is time-based, independent of frame counts, etc.?
John_Cline wrote on 8/1/2009, 8:42 PM
AVI is a container format, and the term "AVI" stands for "Audio Video Interleave," which means that the audio and video streams are interleaved together in the file.

Audio and video data are broken into blocks and mixed together; each video frame has its own chunk, but audio data is organized in small packets. This is the "interleaved" part of the file. The reason for it is so that a player can read the AVI file sequentially and pick up the audio and video it needs without seeking all over the place. This isn't too important for playback from a hard disk, and modern players are very good at handling non-interleaved or badly interleaved files, but correct interleaving is critical for proper playback on embedded devices and from CD-ROM. Note that despite common belief, the interleaving of audio and video chunks has nothing to do with the timing of the streams, and thus has no effect on sync.

The Vegas default interleave of 250 milliseconds works just fine for most purposes.
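
To picture what that 250 ms setting does to the file layout, here is a toy Python sketch. It is only my illustration of the chunk ordering, not real AVI muxing code; the 8-frames-per-audio-chunk figure just falls out of 250 ms at 30 fps:

FPS = 30                 # NTSC, near enough for illustration
INTERLEAVE_MS = 250      # the Vegas default
frames_per_chunk = round(FPS * INTERLEAVE_MS / 1000)  # 7.5 rounds to 8

def chunk_order(total_frames):
    """Yield the on-disk chunk sequence for a simply interleaved AVI."""
    for start in range(0, total_frames, frames_per_chunk):
        yield f"audio chunk (next {INTERLEAVE_MS} ms of samples)"
        for n in range(start, min(start + frames_per_chunk, total_frames)):
            yield f"video frame {n}"

for chunk in chunk_order(16):
    print(chunk)

The player just reads straight through and always has the audio it needs without seeking; the layout is purely about read efficiency, not timing or sync.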
musicvid10 wrote on 8/1/2009, 9:07 PM
Anything from 250ms to 500ms works just fine, even on older computers.

Best advice is, don't sweat the small stuff. The defaults are not there by accident, but for a reason.
Independence Films wrote on 8/1/2009, 10:37 PM
Thank you everyone!

-This forum is outstanding; much obliged!
Ross
Grazie wrote on 8/1/2009, 11:14 PM
Using Bob's, John's and Mikey's descriptions of what IS happening, this thread should be made a sticky - well done, chaps!

And Ross, to feel like a real idiot you would have had to NOT ask the question - yeah? You did the best thing. And Ross, it helped me too!

Grazie
ingvarai wrote on 8/2/2009, 1:55 AM
This forum is outstanding; much obliged!

Agreed! One improvement on your side: a more descriptive subject line would not hurt.

ingvarai
ECB wrote on 8/2/2009, 6:51 AM
One comment on interlacing from an ole duffer. Back in the days when TV was first developed, the technology relied on these devices called tubes, which had very limited bandwidth. The development of a wideband tube amplifier, called the video amp, made it all possible, but it was still a tight fit. Without the video amp you would have received a TV picture about once per minute, and you might lose interest. :) About all they could fit was 30 frames per second, and the flicker in the bright areas would drive you crazy. If memory serves, persistence of vision needs about 50 Hz minimum.

Increasing the persistence of the phosphor in the CRT would cause ghosting, so they introduced interlacing, creating the picture in two parts (fields), giving you an effective 60 pictures per second, and the flicker is gone. Similar to the pull-down in a projector. Next time you gripe about the shortcomings of NTSC, remember what they had to work with. RCA did some phenomenal engineering to fit color into the existing bandwidth (6 MHz vestigial sideband) and not affect any of the current B&W receivers. :)

Ed