Another (basic) look at t/l playback

megabit wrote on 7/20/2010, 9:56 AM
This is going to be a very basic - almost dumb - question, but here goes:

The efficiency of doing it (or should I say: lack thereof) aside, what is it actually that Vegas is doing while playing back an unedited clip from its timeline, and (apparently) isn't doing when playing the same clip from the Trimmer?

Well, I said it was going to be dumb - so please bear with me.

I have no problems understanding there is a lot of processing involved in playing back an edited file (with effects, CC, size/position/resolution changes, etc.) - but why labor so hard with just a raw clip?

My system is actually quite good at playing my XDCAM EX HQ (1080p/25) files at 35 Mbps, but when it comes to playing back my nanoFlash clips, anything above 100 Mbps is just unable to play back at the full 25 fps (of course, I'm talking Best/Full quality)...

...while from the Trimmer (or external players like VLC or Sony MXF Player), everything plays at full speed & quality - up to and including the 180 Mbps L-GoP and 220 Mbps I-Frame material! This, I guess, excludes an HDD I/O bottleneck from the equation.

What is the basic difference between the NLE timeline playback, and a player (including Vegas Trimmer, or Edius Player Monitor)?

BTW, since I mentioned Edius: it's not true that it can play its timeline so much more efficiently, as some people say. With a 220 Mbps nanofile on the timeline, I have noticed that while maintaining the full fps, Edius considerably reduces the Recorder monitor quality (just like Vegas does when "Adjust size and quality for optimal playback" is checked).

Please enlighten me, good people :)

Piotr

AMD TR 2990WX CPU | MSI X399 CARBON AC | 64GB RAM@XMP2933  | 2x RTX 2080Ti GPU | 4x 3TB WD Black RAID0 media drive | 3x 1TB NVMe RAID0 cache drive | SSD SATA system drive | AX1600i PSU | Decklink 12G Extreme | Samsung UHD reference monitor (calibrated)

Comments

jabloomf1230 wrote on 7/20/2010, 10:24 AM
You'll probably need someone from SCS to explain it correctly.

A media player just reads the media file, uses the audio and video codecs to decode each frame, and pushes each frame to the screen/audio card via system drivers. My total conjecture is that timeline playback has a lot of output-processing options built into it. Even if you could switch them all off, there would still be a lot of overhead in the code that has to be processed for each frame.

To preview the timeline, Vegas has to do everything a player does, plus check which options are selected for previewing. You can see the impact by changing the preview quality and resolution, and those are only two of the factors affecting previews.
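The contrast can be sketched in a few lines. This is purely illustrative - the option names and transforms are invented for the sketch, not anything from Vegas's actual internals:

```python
# Illustrative sketch only; option names and helpers are hypothetical.

def player_loop(frames, decode, display):
    # A bare player: decode each frame, show it, nothing else.
    for frame in frames:
        display(decode(frame))

def timeline_loop(frames, decode, display, options):
    # An NLE timeline: every frame still passes through the option
    # checks, even when every single answer is "no, nothing to do".
    checks = 0
    for frame in frames:
        img = decode(frame)
        for name, transform in options.items():
            checks += 1                  # overhead paid on every frame
            if transform is not None:    # usually None on a raw, matching clip
                img = transform(img)
        display(img)
    return checks

shown = []
options = {"resample": None, "effects": None, "preview_scale": None}
checks = timeline_loop(range(25), lambda f: f, shown.append, options)
print(checks)  # 75 checks for one second of 25 fps video, all answering "no"
```

Even with every option off, the timeline loop does strictly more work per frame than the player loop.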
rmack350 wrote on 7/20/2010, 11:21 AM
Putting it in a kind of puppet-theater sort of way, I'd imagine that when Vegas has a clip on the timeline, then at every time point (at every sample, NOT just every frame) Vegas has to ask itself a bunch of questions. If you've just got one clip on the timeline, Vegas probably answers No to most of those questions, but it still has to ask and answer them. The Trimmer just doesn't require this.

Open your Vegas Pro Online Help and search for Signal Flow Diagram. This gives you a picture of what SCS says is happening between the "Goesinto" and the "Goesoutof".

I'm not saying that Vegas couldn't do this better. In fact, I think if you ask the customer for a list of product requirements then clean playback from the timeline would be high on the list.

One other NLE I know of gets around this requirement by marking the timeline red where it can't guarantee playback. (That product also provides better playback, but that level of feedback really helps quell the complaints.)

Rob

Chienworks wrote on 7/20/2010, 12:05 PM
Playback on the timeline also requires that the clip be conformed to the project properties on the fly, as it's playing. Even if the project properties match the clip, each frame and audio sample still has to be checked for conformance. This is a time-consuming process even when the check results in nothing being done.
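A toy version of that per-frame conformance check (the property names here are invented for illustration, not Vegas's real fields):

```python
# Hedged sketch: comparing clip properties against project properties.
# Property names are hypothetical, chosen only for the example.

PROJECT = {"width": 1920, "height": 1080, "fps": 25.0, "pixel_aspect": 1.0}

def conform_work(frame_props, project=PROJECT):
    # Even when everything matches, the comparison itself is work
    # that a standalone player never performs.
    return [k for k in project if frame_props.get(k) != project[k]]

result = conform_work(
    {"width": 1920, "height": 1080, "fps": 25.0, "pixel_aspect": 1.0}
)
print(result)  # [] -- nothing to conform, but the check still ran
```

An empty result means no conforming is needed, yet the comparisons were still made for that frame.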
megabit wrote on 7/20/2010, 12:23 PM
Good point - but when I said "unedited", I also meant "100% conformant with the project settings". I should have stated that explicitly.

Former user wrote on 7/20/2010, 12:29 PM
But that is the advantage of Vegas: you can put several different formats (audio, video, stills) on the timeline and it will attempt to play them in real time. Other NLEs will tell you that you have to render to preview.

Dave T2
megabit wrote on 7/20/2010, 1:25 PM
I'm not after another "why is Vegas playback so poor" rant - just asking what it's doing when there's nothing to do but play. Technical aspects, if you know what I mean.

farss wrote on 7/20/2010, 2:09 PM
My understanding is that in a video-based application the code is effectively driven by the video "clock": at every frame time of the sequence timeline, decode a frame and send it to the display system.
Vegas is not a video-based system; it is an audio-based system: at every sample time, check whether a frame of video needs to be decoded and sent to the display system. In fact, if you look at a project file you'll see the granularity is finer than even the audio clock frequency.
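The difference in decision rates is easy to put in rough numbers. These figures (25 fps video, 48 kHz audio) are typical values, not anything read out of Vegas:

```python
# Back-of-envelope comparison of the two clocking models.

fps = 25             # typical PAL video frame rate
sample_rate = 48_000 # typical audio sample rate

# Frame-clocked player: one "is there work to do?" decision per frame.
frame_clock_decisions = fps           # 25 per second

# Sample-clocked engine: a decision opportunity at every audio sample,
# only one in ~1920 of which actually needs a new video frame.
sample_clock_decisions = sample_rate  # 48000 per second

ticks_per_frame = sample_rate // fps
print(ticks_per_frame)  # 1920 sample ticks for every frame of video
```

So a sample-driven engine asks the question nearly two thousand times for every frame it actually has to deliver - Bob's "thousands, not tens of times per second".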

As others have said, the upside to the Vegas approach is that nothing has to 'conform'. The downside is a huge code overhead, plus the potential for things to go wrong when you're not counting frames.

Bob.
rmack350 wrote on 7/20/2010, 3:19 PM
I think Piotr's point is that if the clip matches the project then there's nothing to check so why should there be overhead?

The counterpoint is that Vegas still needs to check to make sure there's nothing to check. Perhaps it could do this more efficiently than it does (and perhaps it does it more efficiently than I imagine).

Rob
farss wrote on 7/20/2010, 3:24 PM
"I think Piotr's point is that if the clip matches the project then there's nothing to check so why should there be overhead?"

Vegas has no way of knowing that, though. It still has to jump through all those hoops thousands, not tens, of times per second.

Bob.


rs170a wrote on 7/20/2010, 3:35 PM
Spot's explanation in the "Trimmer preview better than Editing Preview" thread:

Because the Trimmer window, like the Explorer window, is unbuffered, and doesn't rely on the settings of the Preview window.

In the same thread, Rob Mack said:

The trimmer doesn't have nearly as much to do. No interpolation to speak of so I think it's pretty much just like setting the preview window at "Preview" quality. And if you go find the signal flow diagram in the Vegas online help you'll see that a lot goes on when video is played from the timeline. Very little has to happen when playing from the trimmer.

Mike
rmack350 wrote on 7/20/2010, 5:32 PM
Proving that I repeat myself a lot even when I don't remember it. :-)

Rob
megabit wrote on 7/21/2010, 2:14 PM
I must say none of the answers convinced me.

Why can't I take advantage of my nanoFlash I-frame files at 220 Mbps? They should behave in Vegas like good intermediates are supposed to - yet they play back at some 50% speed at best...

rmack350 wrote on 7/21/2010, 11:17 PM
I think we were answering the question you put in bold:

What is the basic difference between the NLE timeline playback, and a player (including Vegas Trimmer, or Edius Player Monitor)?

Evidently what you really wanted answered was "why won't my 1920x1080 50i, 220 Mbit/sec, 4:2:2 MXF file created by my Convergent Design nanoFlash recorder play at full framerate on the Vegas timeline?"

I don't know.

Rob

<Edit>Vegas Pro 9 offers MXF renders up to 50 Mbps but no higher. Maybe there's a clue in that? I don't really know, but my guesses might run along the lines of: maybe Vegas's codec doesn't handle 220 Mbps very well... or maybe Convergent Design is writing something the Sony codec doesn't handle well. MXF is yet another container, after all, so the contents can vary.