Multi-cam sync, flash, rolling shutter

smhontz wrote on 1/23/2013, 8:41 PM
The situation: I have a two-camera shoot. Canon XF 105 & 305, running 1920x1080 30p.

Camera A: Has medium shot of speaker, receiving direct feed from house mixer
Camera B: Has wide shot of speaker, just using camera mics

I sync'ed both up by matching up the audio waveforms as much as possible and "nudging" Camera B back and forth until the audio echo disappeared. We then used only the Camera A audio feed from the mixer for the sound.

We then proceeded to use multi-camera to cut back and forth between the cameras - some 70 cuts.

Looking at the rendered output, my partner thought the audio seemed out of sync. Since the cuts that use Camera A also use the mixer feed that went into Camera A, those seemed in sync, but Camera B was slightly off.

Fortunately, someone was taking pictures during the event, so I was able to see a camera flash on both cameras. But it appears in an unusual way:

Timecode:
1:14:28 both pictures normal
1:14:29 Camera A picture slightly brighter, Camera B normal
1:15:00 Camera A normal, Camera B shows full frame flash
1:15:01 Camera A shows only the top 1/3 frame with flash, Camera B normal
1:15:02 both pictures normal

The fact that Camera A showed only 1/3 frame of the flash leads me to believe I'm seeing a rolling shutter effect. So, I'm not sure exactly how I should correct this. It appears that Camera B is one frame ahead, so I should move it back one frame. Does that make sense?

Also, is there any way to move all the events related to just one camera in a multi-camera track? There are so many cuts, and we used pan/crop to reframe shots. I'd hate to have to start over from scratch.

Comments

richard-amirault wrote on 1/23/2013, 8:47 PM
I'm no expert .. but I would think that a one-frame difference would not be noticeable as "out of sync" audio (for 99.9% of situations).

One possibility is that the cameras do not stay in sync during the entire shoot. Just because you sync them up at the beginning does not mean that they will stay that way.

That they are two different models increases that likelihood (but having both the same does not guarantee anything).
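The drift concern can be quantified with a rough back-of-envelope sketch. The ±10 ppm clock tolerance below is a hypothetical figure for consumer gear, not a measured spec for these cameras:

```python
# Back-of-envelope estimate of how far two free-running (non-genlocked)
# camera clocks can drift apart. The +/-10 ppm accuracy is a hypothetical
# assumption, not a measured spec.

def worst_case_drift_ms(show_minutes, clock_ppm=10):
    """Worst case: both clocks off by clock_ppm in opposite directions."""
    seconds = show_minutes * 60
    return seconds * (2 * clock_ppm / 1_000_000) * 1000.0

# Over a 30-minute take, two +/-10 ppm clocks can diverge by ~36 ms,
# i.e. more than one full frame at 30p (33.3 ms).
print(f"{worst_case_drift_ms(30):.0f} ms")
```

Under those assumed tolerances the drift exceeds a frame in about half an hour, which is consistent with the "they won't stay that way" warning.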
musicvid10 wrote on 1/23/2013, 8:55 PM
You are only able to achieve one-frame granularity with video sync; Quantize to Frames and Snapping on. Sliding video between frame boundaries does no good, and is asking for a peck of trouble.

Acoustic latency between audio sources will be different. Ignore that idea completely.

Go for the closest visual sync between cams (snapped to frames), and use the best audio. If you need to slip the audio a bit, ungroup and temporarily turn off QTF. Then turn it back on. But I would totally wait until I've watched a render to make that decision.

That's the absolute best you can do.
smhontz wrote on 1/23/2013, 9:06 PM
Interestingly enough, people can notice out-of-sync if the audio leads the video by as little as 15ms, or by 45ms if the audio lags the video. (See http://www.lipfix.com/technical_details.html). And, one frame at 30p = 33ms, so it is possible that 1 frame can make a difference. Unfortunately for me...
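The arithmetic above can be sanity-checked with a short sketch (thresholds taken from the lipfix figures cited; everything else is just frame math):

```python
# Comparing one frame at 30p against the lip-sync thresholds cited above
# (~15 ms noticeable lead, ~45 ms noticeable lag).

FPS = 30
frame_ms = 1000 / FPS  # 33.3 ms per frame

print(f"One frame at {FPS}p = {frame_ms:.1f} ms")
print("One-frame audio lead noticeable:", frame_ms > 15)  # True
print("One-frame audio lag noticeable: ", frame_ms > 45)  # False
```

So a one-frame error is noticeable only in the lead direction, which matches the asymmetry in the cited thresholds.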
musicvid10 wrote on 1/23/2013, 9:22 PM
To reiterate:
1. Your video must remain at frame boundaries
2. Your audio can be slipped down to the tick level by temporarily turning QTF off.
So there is no issue syncing your audio to the aligned video.
3. I can notice a 5 ms audio lead because it is unnatural. People usually will not notice a 15 ms audio lag because we are used to it (an equivalent distance of about 15 ft.).
Laurence wrote on 1/23/2013, 9:25 PM
Another thing that might have something to do with this is that from the acoustic perspective of the wide B camera, the audio would be delayed a bit due to the time it would take for the sound to travel that distance. Perfectly synced audio of course wouldn't have this delay, and might seem early even though it was actually in sync. Maybe delaying the audio on the wide shots, or better yet moving the video a little earlier on the wide shots, might make it seem better. I don't know for sure though.
smhontz wrote on 1/23/2013, 9:35 PM
Musicvid, I'm not sure I understand.

I've got 70 different video events on my multi-camera track, and one continuous audio track (the feed from the mixer into camera A). The Camera A video is in perfect sync with the audio track, it's only the Camera B video events that are not in sync. Are you suggesting I slice up the audio track per event, and then manually slip each Camera B audio event?
musicvid10 wrote on 1/23/2013, 10:10 PM
Nothing so complicated as that.

Your video tracks should be visually synced as closely as possible, exactly at frame boundaries before creating multicam track.

The cams will now be synced to each other within <= 1/2 frame (or < 16.7 ms). Then sync the audio such that a viewed render is natural (meaning the audio does not lead either cam).
Then we're done.

You may be suffering from hypervigilance (aka "directoritis") now, but in six weeks you'll be amazed at how good it actually is.
Trust me, I've been there.

I've tried different audio delays for different cameras 25 ft. apart, but it never worked. Actually, it was terrible.
One "should" consider reducing the delay if one is shooting with very long lenses though.

The better situation is adding a discrete rear audio track in a 5.1 project. I normally sync using Pluraleyes, then introduce a 12-15 ms delay to the rear track (about half the actual distance from proscenium to back of room). Then I "may" add a 5 ms delay to the front center channel, which really can add a three-dimensional audio illusion. In a surround project, sync differences between camera-to-subject and camera-to-camera all but disappear.

Of course, the ideal situation is genlocked cameras . . .
;?)
smhontz wrote on 1/24/2013, 6:42 AM
OK, I think I got it now.

What you're saying is, first of all, that audio sync alone could probably never work because of the distance of the subject to each camera. For example, Camera A, which is tied to the house mixer, is essentially receiving the sound at the same moment as the video (assuming no delay in the mixer), but Camera B, which is some 30 feet from the stage and is hearing the sound from the speakers behind the stage, is already out of sync by 28 ms or so (audio lagging behind video).
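That delay figure checks out with a quick speed-of-sound calculation (a sketch; ~1125 ft/s is an assumed room-temperature value):

```python
# Quick check of the acoustic delay at Camera B's position.
# The 30 ft distance is from the post above; ~1125 ft/s is an
# assumed speed of sound at room temperature.

SPEED_OF_SOUND_FT_S = 1125.0

def acoustic_delay_ms(distance_ft):
    return distance_ft / SPEED_OF_SOUND_FT_S * 1000.0

# ~26.7 ms for 30 ft, close to the "28 ms or so" figure above,
# and nearly a full frame at 30p (33.3 ms).
print(f"{acoustic_delay_ms(30):.1f} ms")
```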

So, having a visible syncing element (such as a flash) allows me to get the visual sync. Then slip audio as necessary (never have it lead video).

Your genlocked comment is ironic. I specifically paid the extra cost on these two cameras to have the genlock feature in them, but I totally forgot I had that. I will definitely use that in the future, even if it means the extra hassle of laying the cable between the two cameras.
farss wrote on 1/24/2013, 7:11 AM
I'd highly recommend genlocking cameras if it's at all possible. If cable becomes too much of a hassle then you can get RF links to handle it, not cheap from memory.

I've never had the pleasure of genlocked cameras, and I can tell you that visually the lack of it can cause issues. As Musicvid has pointed out, you can have a half-frame offset, which doesn't sound like much until you cut between two cameras on anyone moving quickly and their motion jumps backward on a cut. So you slide one camera one frame and then they still jump, but not as noticeably. I'd rate that as more distracting than audio sync issues, especially if you ensure sound lags vision, because we're subject to that in the real world anyway.

Bob.
Laurence wrote on 1/24/2013, 8:28 AM
You might want to give Plural Eyes a look. I don't know how it does it but it seems to fix drifting sync automatically. Running it in post will take less time than laying the Genlock cables and the results will be the same.
musicvid10 wrote on 1/24/2013, 9:05 AM
@smhontz,
Correct on all three counts.

@laurence,
PE didn't actually correct for drift, unless something is new vis-à-vis Red Giant.
However, this is rarely much of a problem anymore with same-make cameras or the H4n.

For long programs, we ran the same audio (special subs from the board) to both cameras, so we could cover card (tape) changes. PE is used to sync an outboard rear ambient track (then we slip it upstream by a mathematically determined amount).

Or, on short programs, the rear mics go to stationary camera A. The real advantage of genlock is to have the video frames start at the same point in time (in reality within a couple of ms); the audio sync is a nice collateral, but still not as accurate as Pluraleyes (IOW, we use that too "if" both cameras' audio will be used in the mix).



smhontz wrote on 1/24/2013, 9:47 AM
Perhaps somebody can explain to me how PluralEyes would help. I had no problem syncing the audio between the two cameras; the issue was that Camera B had audio that was delayed from Camera A because it was hearing the sound later than Camera A. So, when I got the audio in sync, the video was off by one frame, which I couldn't tell initially because I was trying to visually tell if things like hand motions, eye blinks, etc were the same between two different camera angles. It wasn't until I spotted a flash from a still camera that was visible on both video cameras that I realized that they were off by a frame.

So how would PluralEyes help with this? I'm not familiar with how it works.
riredale wrote on 1/24/2013, 10:07 AM
Why would you want to sync the audio between the two cameras? As the first step in getting the VIDEO in sync, fine, but what you want is video sync. After that, forget the distant audio unless you're doing surround-sound, or if you want to inject a bit for room tone and applause.

I shoot all the time with two Sony camcorders and the drift between them is maybe one frame in many minutes, easily fixable by just splitting and sliding the secondary camera track during edit. I've also discovered that having a mic on or very near the talent makes an amazing difference in sound quality compared to a mic even ten feet away.

EDIT: If the video is off by a single frame, that would probably be hard to notice in a cuts-only situation, since the cut itself is jarring enough that the eye is busy following the action. For slow dissolves, maybe, but probably not. Not to these eyes, anyway.
musicvid10 wrote on 1/24/2013, 10:25 AM
If one is going to use audio from different cameras and sources in the mix, or be syncing multiple video takes using a single audio source (e.g., music video), Pluraleyes is a practical necessity. With my shows, it has reduced the audio mixing from a couple of weeks down to a single afternoon in some cases.

If one is going to use a single audio source, and will be syncing video by eyeballing or timecode, Pluraleyes is not necessary.

On shows using sixty mics, several stereo subs, and outboard recorders, we couldn't get by without it. Neither could music video producers, guerrilla ENG, flash mobs, athletic events, concerts, school events, etc.
musicvid10 wrote on 1/24/2013, 12:34 PM
Here's how a project mixed the way I described above sounds.
https://dl.dropbox.com/u/20519276/Forumsurmix-1.mp4

The audio is a stereo mixdown of the 5.1 surround project. This sample has not been cleaned up for extraneous clicks and mic contact. My orchestra is mostly HS and weekend warriors. The show used 30+ mics, 4 submixes, 2 XHA1s (genlocked), and an H4 in the back row of the room.
farss wrote on 1/24/2013, 2:21 PM
One way to get audio in sync with vision is to simply unlock it from the vision and then slip and slide it. Unfortunately, with Vegas it's then difficult to prevent it accidentally sliding around as you edit.
The other approach that I've used is to use the Delay FX.

If you want to actually mix audio from different recorders then ideally they need to be very well synced and the way that is done is to use a master clock that generates both the genlock signal with timecode and word clock for the audio recorders. Then everything is locked at the sample level i.e. all the analog to digital converters sample at the same time.

Bob.
musicvid10 wrote on 1/24/2013, 2:36 PM
A master clock (usually cam A) generates a freerun (continuous) timecode that is daisychained to other cameras and outboard recorders. Whenever any of those devices starts recording (including cam A), it picks up the genlock signal and chases it, keeping the devices pretty close together. The running timecode is impressed on all the recordings for syncing in post. That's the simplified explanation.

Works pretty well, when it works. Note that devices tend to lead or lag each other by up to "maybe" 4 ms even with good equipment. I suppose Word Clock gets it closer than SMPTE (often 120 ticks).
farss wrote on 1/24/2013, 2:59 PM
I know how it works and if that's all you can do it's certainly better than having cameras running free.
However just daisychaining vital signals such as TC is really asking for things to go wrong big time.

Apart from the obvious risks of broken cables, it took me a while to figure out why the big boys don't do this.
Sending TC into an audio device does not sync it; sure, it'll try its best to chase it, but that is fraught with issues. The correct approach is to have a master clock generator that provides TC to all the cameras and word clock to all the audio devices.
This way not only is everything locked sample-accurate, but the clock generator is the only single point of failure for the system.

If all that's way over the budget, an SPG isn't all that expensive these days. Then a single cable problem cannot cause havoc.

Bob.
ddm wrote on 1/24/2013, 3:24 PM
A slight note of clarity regarding genlock which will only confuse matters more, I'm afraid... Genlock and timecode are two different beasts. You can have 4 cameras genlocked and all cameras (recorders, really) can have different timecodes; they are separate inputs on cameras. Genlocked cameras assure that all frames start at the same time, so when you are doing a live switch you will not cut to a camera that's out of sync (again, nothing to do with timecode), causing the switcher to hiccup and do a vertical roll. Obviously, it is beneficial, when doing multicamera, for all cameras to be genlocked, and also beneficial if all cameras are getting timecode from the same source, but they are separate entities.
OldJack wrote on 1/24/2013, 5:58 PM
I normally sync 3-camera videos using Pluraleyes software before creating the multicam track. It's a no-brainer.
smhontz wrote on 1/24/2013, 9:37 PM
But again, if I understand Pluraleyes properly, it does nothing for my particular situation (i.e., where I am using the audio feed into a single camera as the entire audio track, not combining multiple audio feeds).

It seems to me the important thing is VISUAL sync, because the audio of three different cameras is going to be inherently different because of the time delay getting to three different locations. So, if you just sync audio, which is what I had done, you end up with the video out of sync between the camera shots.

I only see two choices for audio: first, get video sync on all cameras, and then pick either a) a single audio track as your master or b) if using multiple audio tracks, you need to slip them to agree with each other. And choice b) is what Pluraleyes is for, right?
smhontz wrote on 1/24/2013, 9:46 PM
ddm, my cameras have both genlock and timecode terminals. According to the manual, I can generate timecode from one camera and have the second camera synchronize to it. According to the XF 305 manual: "When an external time code signal is received, the camcorder's own time code will be synchronized to it and the synchronization will be maintained even if you disconnect the cable from the TIME CODE terminal."

So, since I'm not going to do live switching, it sounds like I don't need Genlock and I just need to temporarily hook the TIME CODE terminals together, sync 'em up, and I should be good to go. Does that sound right?
musicvid10 wrote on 1/24/2013, 10:16 PM
I was trying to give a general (= oversimplified) explanation to bring a few of the uninitiated up to speed wrt Bob's previous thoughts. If I failed to convey that, the post failed its only purpose. I think most people think "genlock" is something that happens mostly to 19-23 year olds.

And thanks for the added detail, Bob. I haven't had the advantage yet of a master generator with multiple ports, however I'm certain it would be more reliable than how we did it for the show I mentioned above.
;?)
ddm wrote on 1/24/2013, 11:52 PM
smhontz, technically speaking... your timecode will be synchronized but your start frames for your video won't be. Not usually a big concern, but apparently, for you, it has been. Take your flash example: had your cameras been genlocked, that flash would have been the same on all cameras, but because your frames were starting at slightly different times (1/30th or 1/24th of a second off, or less), the flash, which I think on most modern flashes lasts for about 1/200th of a second, appeared differently on different cameras.

As far as jam-syncing timecode goes, that will drift, as long as we're trying to be crazy in sync here. The normal procedure for multicamera work would be to genlock the cameras to a master genlock source AND feed them all timecode from one source as well. As a practical matter, though, I would rarely bother with this. I've done too many multicamera concerts with non-broadcast cameras where there is just never an issue with sync. Yes, you might have to slip the picture from one camera one frame up or back, but that should do it, and that will change every time the camera stops.
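The partial-frame flash described earlier in the thread can be modeled with a toy rolling-shutter sketch. All timing values here are hypothetical illustrations, and whether the lit band lands at the top or bottom of the frame depends on the sensor's readout direction:

```python
# Toy model of why a brief flash lights only part of a rolling-shutter
# frame: each sensor row starts exposing at a slightly different time,
# so a flash shorter than the readout time only overlaps the exposure
# window of some rows. All timing values are hypothetical.

def rows_lit(flash_start_ms, flash_ms, exposure_ms, readout_ms, rows=1080):
    """Count rows whose exposure window overlaps the flash interval."""
    lit = 0
    for r in range(rows):
        row_open = r / rows * readout_ms   # this row's exposure start
        row_close = row_open + exposure_ms  # this row's exposure end
        if flash_start_ms < row_close and flash_start_ms + flash_ms > row_open:
            lit += 1
    return lit

# A 5 ms flash arriving late in a 30 ms readout, with an 8 ms exposure:
frac = rows_lit(flash_start_ms=25, flash_ms=5, exposure_ms=8,
                readout_ms=30) / 1080
print(f"{frac:.0%} of rows see the flash")  # -> 43% of rows see the flash
```

With these made-up numbers roughly a third to a half of the frame is lit, which is qualitatively what the original poster saw: one camera with a full-frame flash, the other with only a band of the frame illuminated.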