Multi Camera Musings ...

PeterWright wrote on 11/17/2009, 2:55 AM
On average I do a multi camera shoot about once a year - I did one last Sunday and I am so impressed with two plug-ins to Vegas.

I have often written about how easy it is to sync cameras up "manually" in Vegas, and it is, BUT for the first time I gave PluralEyes a run - what a fantastic piece of software! The occasion was a live music gig. I had 3 cameras plus an audio track from the mixing desk fed into my Zoom H4.

Before the main music started, and before I started recording onto the Zoom, the "star" who was launching an EP surprised the rock 'n' roll crowd with a brief demo of Tai Chi with one of his guests - 80-year-old Grand Master Fu Sheng Yuan ...

I had three camera tracks with sound, plus the desk audio track, which didn't start until after the Tai Chi and ran through the remaining 90 mins. I wondered how PluralEyes would handle this.

It took a while to do its analysis, then - zap - the music part was all synced up, and the Tai Chi part, without the mixing desk track, was moved to the end of the timeline but was also perfectly synced. It just sorted everything out itself - I'll definitely buy it, even for once a year!

The other thing was the mixing desk track. Even though I set the Zoom H4 to Low for the stereo line inputs, it came out much too hot and distorted.
Enter iZotope RX - its Declipping module rebuilt all those horrible flattened waveforms and made the track bearable again. If it hadn't, I would have had to make some kind of audio mix from the three cameras' audio, but the desk mix is usually the main one - and iZotope made it usable again.

I love good software ......

Comments

CClub wrote on 11/17/2009, 3:14 AM
Very helpful feedback. I often use multi-cam shoots, and I need to give PluralEyes a whirl. I'm wondering how it addresses the delay between video and audio due to varying camera distances from a concert stage. I find that I can easily sync the audio tracks via track peaks, but then the video is 1-2 frames off due to the difference in the time it takes audio vs light to travel. Does PluralEyes sync to the audio peaks, and if so, has anyone noticed that the video may be a few frames off?
farss wrote on 11/17/2009, 3:28 AM
One tip for anyone recording to tape. I've found the Sony HDV 85min tapes worth the money. Two of those typically means no tape changes during the show.

Always check feeds from desks once the show is running. Watch where you're getting it from. I try to make certain it's pre master faders. It pays to get into the sound guy's good books, trust me on this one. Try to seem willing to help anyone working front of house, chat them up. I've managed to get the lighting gal to nudge her dimmers up a stop when asked very nicely.

Bob.
rs170a wrote on 11/17/2009, 3:55 AM
I'm wondering how it addresses the delay between video and audio due to varying camera distances from a concert stage.

CClub, I asked this same question on another forum and was told that it wasn't able to adjust for this difference.
I was curious as, when I shoot plays and musicals, I take a feed from the audio board into one channel and a shotgun mic at the back of the room/hall into the other and I know that there is a timing difference.
The delay would always be the same though so, once figured out, it's not too hard to slip the other audio track a few milliseconds manually.

Mike
rs170a wrote on 11/17/2009, 3:59 AM
I agree 100% with Bob.
I've worked enough shows with the local IATSE folks (and always bring a few coffees/donuts the first day) that I have no problems asking for the occasional favour.
They appreciate it and my recordings look and sound much better as a result.

Mike
PeterWright wrote on 11/17/2009, 4:00 AM
CClub - yes, PluralEyes does use the audio wave form to do the synching. In my case, no cameras were more than 10 metres from the stage, so there was negligible variation.

If you have a different situation - a big outdoor gig, for instance - I would let PluralEyes do its work first, then, knowing how far away each camera was, do some trial-and-error relative shifting of the video a frame at a time to see if it improves, BEFORE clicking the multicamera function.
johnmeyer wrote on 11/17/2009, 8:01 AM
I just watched the demos, and if this product really works as advertised, it is an absolute must-have for anyone doing even the once-a-year multi-cam shoot, like Peter.

As for the audio delay with distance, I looked for that in the demos and didn't see it specifically mentioned. However, they do slip the audio within a frame to make it match, so you would have that as a starting point.

It looks to me as though ALL the syncing is done on the audio. I think (but I am far from sure) that if you want to compensate for camera distance, you would take the sound board feed as a reference. You would then take the distance from each camera to the speakers and use this formula to slip the video (with the quantize frame turned off):

One frame for every 11.4 meters (NTSC)
One frame for every 13.7 meters (PAL)

The software appears to work entirely on sound and doesn't look at the video at all. Thus, there would be no way for it to get any information from the video (like a clapboard or flash), and even if it could, this information is so "sloppy" (you only have information in discrete intervals every 1/30 or 1/25 or 1/24 of a second) that it wouldn't be useful.

BTW, when I do multicam, during setup I use the manual focus on each camera to focus on the stage speakers (if I am doing stage work) and write down the distance shown in the viewfinder or lens. I use this later during post to adjust for the differential sound/distance delays. If the camera doesn't have a readout, I use my SLR as a rangefinder. If you are going to do closeups of an actor speaking lines using a telephoto from the balcony, you absolutely must make this correction.

For those who haven't thought much about this, one example: when I shoot a fashion show in the high school gym, I set up under one basket, and the stage is at the other end of the court. High school courts are 84 feet (25.6 meters). That means when the narrator is shown on camera, he is almost 2.3 frames ahead of the soundboard audio. This is noticeable.
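For anyone who wants to automate the arithmetic, the rule of thumb above can be sketched in a few lines of Python. This is just a hypothetical helper for the math, not part of PluralEyes or Vegas; it assumes a speed of sound of 343 m/s (room temperature):

```python
SPEED_OF_SOUND = 343.0  # metres per second at roughly 20 degrees C

def video_slip_frames(distance_m, fps):
    """Frames to slip the video so the picture lines up with close sound."""
    return distance_m * fps / SPEED_OF_SOUND

# Sanity check against the rules of thumb above:
print(round(video_slip_frames(11.4, 29.97), 2))  # ~1 frame (NTSC)
print(round(video_slip_frames(13.7, 25.0), 2))   # ~1 frame (PAL)

# The high-school gym example: 84 ft (25.6 m) at NTSC rates
print(round(video_slip_frames(25.6, 29.97), 2))  # ~2.24 frames
```

The result matches the "almost 2.3 frames" figure for the gym; with Quantize to Frames off you can slip by the fractional amount directly.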
musicvid10 wrote on 11/17/2009, 8:30 AM
I just watched the demos, and if this product really works as advertised, it is an absolute must-have for anyone doing even the once-a-year multi-cam shoot, like Peter.
Yes it is. I can't say enough good things about it. For best results, one needs to take a considered approach and have good quality source audio.
My only suggestion is to grab the beta for Vegas, play with it, and give the authors your individual feedback. They are remarkably responsive and committed to their product.

See my post on the Pluraleyes forum for a discussion and response from the author on this point and a few others:
(Suggestions for development)
The full thread, which pretty much shows where the beta development is at, can be seen here:
http://www.singularsoftware.com/forum/viewtopic.php?f=4&t=67&p=234#p234

I am currently working with them on a small bug, and have been told to expect a new beta with the improvements soon.
rs170a wrote on 11/17/2009, 8:36 AM
You would then take the distance from each camera to the speakers and use this formula to slip the video (with the quantize frame turned off):

John, you're making my brain hurt :-)
What I do is set up a short loop region, turn off Quantize to Frames, hit play and nudge the other audio track(s) until everything is in sync.
Turn QtF back on again and start editing.

Mike
musicvid10 wrote on 11/17/2009, 8:48 AM
Mike, Here's what I do:
1) During a rehearsal, take a low-level board feed and a mic in the back of the room into my H4. Start recording, and have someone who is miked on stage clap three times (notice I did not say blink!)
2) Dump the audio clip in Sound Forge and determine the delay between channels, example = 22.5ms.
3) My timeline "nudge" for my mid-auditorium delay (aesthetically correct) is 11ms. Done it this way for years, and makes stunning 5.1 surround.
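Step 2 can also be done programmatically rather than by eye. Here is a minimal sketch using numpy cross-correlation in place of inspecting the clip in Sound Forge; the "clap" here is a synthetic noise burst, and `channel_delay_ms` is a hypothetical helper, not a tool mentioned in this thread:

```python
import numpy as np

def channel_delay_ms(board, ambient, rate):
    """Estimate how far the ambient channel lags the board feed, in ms."""
    corr = np.correlate(ambient, board, mode="full")
    lag = int(np.argmax(corr)) - (len(board) - 1)
    return 1000.0 * lag / rate

# Synthetic check: build an "ambient" track that is the board feed
# delayed by 22.5 ms, then recover that delay.
rate = 8000
board = np.random.default_rng(0).standard_normal(rate // 2)
d = int(0.0225 * rate)  # 180 samples = 22.5 ms
ambient = np.concatenate([np.zeros(d), board])[:len(board)]
print(channel_delay_ms(board, ambient, rate))  # 22.5
```

With a real stereo recording you would load the two channels from the clap section and pass them in; halving the result gives the mid-auditorium "nudge" described in step 3.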

If one wanted a camera-specific delay, say for close and long camera angles, I like John's method of slipping the video by a frame or two, but I have not had much need to do so, because I work on the theory of the observer at a stationary vantage point, not a dynamic one as might be inferred by wide and close shots. IOW, the main audio track and talent's lips stay in synch from the viewer's "theoretical" vantage point regardless of camera placement and focal length. I suppose in the echoey auditorium situation John described, some compensation would be necessary.
rs170a wrote on 11/17/2009, 10:45 AM
Thanks for the suggestions musicvid.
In my case, the bulk of my work is a single camera shoot with me at the back of the hall/theatre.
The board feed goes to one camera channel with the shotgun going to the other channel.
When I dump it into Vegas, I lower the shotgun level and correct it enough to give me the room ambience that I don't get from the performer's body mics.

Mike
musicvid10 wrote on 11/17/2009, 10:58 AM
Right, and that is "exactly" what I do, with the added tweak for two cameras and a longer shoot:

-- The board feed (vocals L, orchestra R) goes to both cameras identically. When I need to do tape changes in the middle of a program, I have continuous audio on the running cam. After synch, those tracks are mixed and rendered into a pleasing (pseudo) stereo master.
-- The ambient mic is also at the back of the room, recorded in stereo on a portable.
-- The issue with PluralEyes in my situation is that it does the cleanest possible sync between the board and the ambient, so the natural delay is lost and the resulting mix may sound too "clean." This is different from syncing by video frames, because PluralEyes does subframe shifting to accomplish a near-perfect sync. Determining that delay and dividing by 2 is my normal "bump" for the ambient track once it has been synced on the Vegas timeline. In all except very large auditoriums, 5-15 ms is adequate and pleasing. In your situation, the natural acoustic delay is preserved because the L/R tracks are not being shifted in relation to each other.

If you use a shotgun at the back of the room, a short one or even an SM81 will give you a more natural sound than a longer Senny, for instance.
rs170a wrote on 11/17/2009, 11:06 AM
Thanks musicvid.
I've always had the tracks exactly in sync and find that, with the shotgun (basic on-camera mic) track, I get the sound I want.
I have a Christmas concert to shoot next month so I'll try your trick and see which one I like better.

Mike
musicvid10 wrote on 11/17/2009, 11:11 AM
Well, my way takes longer, because the tracks are not necessarily in sync to begin with, and I often split my portable (ambient) track into 10 min chunks to eliminate drift.

If I had the advantage of shooting programs less than an hour in length, I would probably do it your way (yes, I know, LP . . .)
rs170a wrote on 11/17/2009, 11:33 AM
No LP mode for me.
I bring along a JVC deck (BR-DV3000U) that can handle a 276 min. miniDV tape.
Firewire out of the camera into the deck and all is well with the world :-)
Spoiled? Heck yes!!

Mike
musicvid10 wrote on 11/17/2009, 11:53 AM
Spoiled? Perhaps, but it does not diminish my envy.
rs170a wrote on 11/17/2009, 11:57 AM
It was a part of a rather large equipment package purchase at the community college I work for about 5 years ago.
We haven't gotten any new gear since then (and don't expect anything anytime soon either) and are still shooting SD 4:3 on 8 yr. old JVC camcorders :-(

Mike
farss wrote on 11/17/2009, 12:40 PM
You don't need to go to LP; the Sony HDV 85 min tapes are the go. Expensive but....

Feeds from desks can be very dry, my audio guy uses outboard FXs on his outputs and my feed is pre that, also it's mono. I've found that a stereo mic at the front of stage can work wonders. I use a Rode NT4 for music and the Sanken CMS-10 for drama. The stereo mic goes into channels 1+2 of the Edirol R4 and the desk into channel 3. I'll try a boundary mic into channel 4 as well maybe next show, just to see what it gets.
In post I mix the desk and the stereo mic carefully - they can be out of phase, plus the desk is digital, so there's some delay there too.
The Edirol is pretty old and expensive, a much cheaper way to get 4 channels is the newer Edirol R-44 which records to SDHC cards and doesn't eat batteries so quickly.


If you've only got the mono desk feed, I find the Multi-Tap Delay plug-in in Sound Forge pretty good for adding some fake width. A little goes a long way - not to be used at "11".

Bob.
arenel wrote on 11/17/2009, 6:08 PM
One of the schools that I tape shows for has about 20 wireless mics, and the board operator rides each pot on cue, then closes the master during musical interludes. Needless to say, some cues are missed, so to save myself I use two wired shotguns on stage, over the orchestra pit. Shooting with two cameras - a PDX-10 locked wide in 16x9, and a DSR-250 in 16x9 for mediums and close-ups - I feed the stage audio L&R to the wide camera, use a shotgun on the 250, and send the board feed to the other 250 channel. I sync the two by lining up the audio peaks, then adjust the 250 picture to sync visually, and advance the 250 shotgun track about 2 frames. When the board operator misses a cue, I have the stage stereo feed to bring up as a safety, and I use it for ambience and applause as well as the orchestra. Many times I wonder why I need good audio of a high school orchestra, but that is another discussion.

Ralph
Jeff9329 wrote on 11/19/2009, 8:01 AM
I think you will always have to fine tune the audio sync manually because there are so many variables including normal echoes in the venue.

PluralEyes sounds good, but syncing by audio only is just a start if you have any multi-cam PIP/split screen going on.
musicvid10 wrote on 11/19/2009, 8:21 AM
I think you will always have to fine tune the audio sync manually because there are so many variables including normal echoes in the venue.

Nope, it's the end product if approached sensibly. Much better results than doing it by hand, and in 1/10th the time. Even with echoey audio from the back of the auditorium -- one can quickly add a delay offset after synchronizing (read my posts).

And I've been doing audio production for forty years.

Jay Gladwell wrote on 11/19/2009, 8:31 AM

MV, would you PM me please?

Thanks!


musicvid10 wrote on 11/19/2009, 8:36 AM
Done.
A question about PE?
farss wrote on 11/19/2009, 11:33 AM
The new mixers coming onto the market that let you record each input as a separate track prefader will see the end of these kinds of problems.
The problem I strike is certain instruments not being in the house mix as they were already loud enough. Also house mixes are rarely in stereo.

Bob.
rs170a wrote on 11/19/2009, 11:55 AM
One problem I face almost every single time (and is the reason I now always use a shotgun to feed another camera track) is that house audio ops almost always kill the master fader at the end of a number/set/act.
That means no applause :-(
It only took me one show to figure that out.

Mike