I just taped a concert with three V1Us, an HV20 (way in the back), and a Sony D50 field recorder. The footage came out great, as did the audio from the D50. Over the last few years I've learned how to sync the audio pretty well by lining up spikes in the audio waveforms.
But here's the issue I run into with multiple cameras: if they sit at different distances from the performer/speaker, you can sync the audio exactly by lining up the waveforms, but sound takes longer to reach a camera that's 20-30' further back from the stage. So the waveforms can be perfectly aligned on the timeline while the matching video is actually a few frames off. Is there a way to remedy this without ungrouping the video from the audio and nudging it frame by frame until the picture seems to line up with the audio?
That's hard to do in a multicam edit, since preview drops a bit of frame rate with several tracks of HDV on the timeline, and trying to match a moving mouth to the audio is tough with a less-than-perfect preview.
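For a rough sense of the scale, here's a quick back-of-the-envelope sketch I've been using to estimate the offset for a given distance (assuming sound at roughly 1,125 ft/s at room temperature and 29.97 fps footage; adjust the frame rate for your actual project settings):

    # Estimate how far behind the picture the audio sits for a camera
    # placed a given distance from the stage.
    # Assumptions: speed of sound ~1125 ft/s, 29.97 fps NTSC-style footage.
    SPEED_OF_SOUND_FT_PER_S = 1125.0
    FPS = 29.97

    def audio_delay_frames(distance_ft):
        delay_s = distance_ft / SPEED_OF_SOUND_FT_PER_S  # acoustic delay in seconds
        return delay_s * FPS                             # same delay expressed in frames

    for d in (10, 20, 30, 50, 100):
        print(f"{d} ft from stage -> {audio_delay_frames(d):.2f} frames of offset")

At 30 ft the delay works out to roughly 0.8 of a frame at 29.97 fps, so for a bigger room or a balcony camera it can easily creep past a full frame.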