Here is my situation: I sharpen a track by duplicating it to a second track, adding a convolution filter to the first track, blending via "hard light", and adjusting the opacity of the first track to suit. I also need to add other fx to the result, e.g., color curve, panning, and so forth.
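For context, here's roughly what that two-track setup computes per frame. This is a toy sketch in Python/NumPy, single-channel for brevity, with the kernel and opacity standing in for whatever the convolution fx and track opacity are actually set to:

    import numpy as np
    from scipy.ndimage import convolve

    def hard_light(top, bottom):
        # Per-pixel hard-light blend; values assumed in [0, 1].
        return np.where(top < 0.5,
                        2.0 * top * bottom,
                        1.0 - 2.0 * (1.0 - top) * (1.0 - bottom))

    def sharpen(frame, kernel, opacity):
        # frame: single-channel float image in [0, 1] (grayscale for brevity).
        filtered = np.clip(convolve(frame, kernel), 0.0, 1.0)  # track 1: convolution fx
        blended = hard_light(filtered, frame)                  # hard light onto the duplicate
        return opacity * blended + (1.0 - opacity) * frame     # track 1 opacity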
So I have two tracks which are blended into a "third" (conceptually speaking) track, and then I need to apply fx to that third track.
I know of two ways to handle this:
1) Apply the same fx with exactly the same parameters to both tracks, i.e., apply the fx upstream of the blend.
2) Blend the two tracks to a third track via "render to a new track", then operate on the third track.
Both approaches are poor. With (1), it's very difficult (not to mention a pain) to ensure that both tracks get exactly the same fx with the same envelopes. This has to be *perfect*, or the convolution filter and blend will be screwed up. With zoom/pan envelopes, I'm especially stuck. With (2), I have to wait for the render (it's slow), and I may have to repeat it because of interactions between the amount of blending and the downstream fx.
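To illustrate how touchy the ordering is: with a nonlinear blend like hard light, applying a color fx to both tracks upstream generally isn't even the same as applying it once downstream. A toy sketch (made-up grayscale values in [0, 1]; the gamma curve stands in for any nonlinear color fx):

    import numpy as np

    def hard_light(top, bottom):
        return np.where(top < 0.5,
                        2 * top * bottom,
                        1 - 2 * (1 - top) * (1 - bottom))

    def curve(x):
        # Stand-in for any nonlinear color fx, e.g. a gamma curve.
        return x ** 0.45

    rng = np.random.default_rng(1)
    a, b = rng.random(5), rng.random(5)  # made-up frames from the two tracks

    upstream = hard_light(curve(a), curve(b))     # approach (1): fx on both tracks
    downstream = curve(hard_light(a, b))          # fx once, after the blend
    print(np.max(np.abs(upstream - downstream)))  # nonzero: the two orders differ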
It seems to me the "right" answer is one of the following:
1) Have a "Render to new virtual track" option, which combines the individual tracks into a new track and lets me edit the new track. However, the new track is calculated on the fly, rather than being pre-rendered to the hard drive. This would be very slick.
2) Have a "video bus", where video tracks are assigned to busses, fx could be applied to the busses, then the busses could be combined.
How these video busses could be combined while keeping the editing capabilities already present beats me, so this seems like the inferior option. Bad idea.
Note that "Render to new virtual track" is the most efficient option, as far as my time is concerned. Its also best option quality-wise. Never mind whether or not the computer could keep up in real-time, that's my problem/decision.
Have I missed another solution? Does anyone else see the usefulness of my suggestion?