Possibly useful observation... Frame boundaries...

Comments

farss wrote on 5/4/2012, 6:24 PM
All of which sounds fine until you consider what happens when your project's audio sample rate is set to 44.1 kHz, the default. 48 kHz was chosen as the audio sample rate for vision because it yields an integer number of samples per frame (or per short frame cycle) for the standard frame rates.
You're also assuming that frame rates are nice integer values; they're not. The NTSC rates are fractions such as 30000/1001 (29.97 fps).
Combine those two factors and you find that an NLE driven by the audio sample rate faces a problem.
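Bob's arithmetic can be checked directly. The sketch below (Python, mine, not from the thread) computes the exact samples-per-frame ratio for each standard rate; any result with a denominator other than 1 means frame boundaries fall between audio samples.

```python
from fractions import Fraction

# Standard video frame rates; NTSC rates are 1000/1001 fractions
FRAME_RATES = {
    "24 (film)": Fraction(24),
    "25 (PAL)": Fraction(25),
    "29.97 (NTSC)": Fraction(30000, 1001),
    "30": Fraction(30),
}

def samples_per_frame(sample_rate, frame_rate):
    """Exact number of audio samples in one video frame, as a fraction."""
    return Fraction(sample_rate) / frame_rate

for name, fps in FRAME_RATES.items():
    for sr in (48000, 44100):
        spf = samples_per_frame(sr, fps)
        note = "integer" if spf.denominator == 1 else f"repeats every {spf.denominator} frames"
        print(f"{sr} Hz @ {name} fps: {spf} samples/frame ({note})")
```

For example, 48 kHz at 25 fps gives exactly 1920 samples per frame, while 44.1 kHz at 24 fps gives 1837.5, so every other frame boundary falls halfway between two samples. Even 48 kHz is only integer over a 5-frame cycle at 29.97 fps (8008 samples per 5 frames).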

"You're assuming frames mean something, they don't, they're just a measurement of time."

That's completely wrong. A frame is an indivisible unit. At what point in time it gets played out can be varied, as can the speed at which the frames are played out. The frame ticks as displayed on the timeline, though, are, as you say, just another way of viewing time.

Vegas has been built on a simple but flawed premise: "Sound and vision are just continuous streams." That works for sound: you could splice analog audio tape anywhere. You could not splice film just anywhere, and you could not splice video tape just anywhere. There simply has never been, and probably never will be, a vision system where vision is a continuous stream.
Of course, in the digital realm we have interpolation: two frames of vision or two samples of audio can be interpolated to yield an estimated value at an intermediate point in time as required. That point in time, though, must fall on a frame tick of the target frame rate. The problem can be resolved, but oh my, what a coding nightmare. For a vision editing system it is far simpler, and less prone to problems creeping in, to start with the frame as the unit that drives everything and to enforce an integer number of audio samples per frame.
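The interpolation-at-frame-ticks point can be sketched like this (Python; the function names are mine for illustration, not anything from Vegas). A frame tick that does not land exactly on an audio sample forces an interpolated read:

```python
def tick_position(frame_index, fps, sample_rate):
    """Audio-sample position of a frame tick; a non-integer result
    means the tick falls between two samples."""
    return frame_index * sample_rate / fps

def sample_at(buffer, pos):
    """Linearly interpolate the audio value at a (possibly
    fractional) sample position."""
    i = int(pos)
    frac = pos - i
    if frac == 0:
        return buffer[i]          # tick lands exactly on a sample
    return buffer[i] * (1 - frac) + buffer[i + 1] * frac

# 48 kHz @ 25 fps: every tick lands on a sample (1920 per frame)
print(tick_position(1, 25, 48000))   # 1920.0
# 44.1 kHz @ 24 fps: odd-numbered ticks fall between two samples
print(tick_position(1, 24, 44100))   # 1837.5
```

With an enforced integer samples-per-frame, `frac` is always zero and the interpolation branch never runs, which is exactly the simplification Bob is arguing for.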

Bob.

cybercom wrote on 5/4/2012, 9:00 PM
@WillemT:

Thanks for the info, I'll leave it checked!

@farss:

I totally agree!

< ")%%%><|