Remove child's crying - edit audio track

Comments

B.Verlik wrote on 5/13/2006, 8:31 PM
So, maybe somebody needs to design a special digital delay that uses specific frequency filters that will cause all or most frequencies to hit the recorder at the exact same time.
Or maybe by applying a delay to one of the mics, you can fine-tune it to eliminate certain frequencies, and most of that child's crying (or whatever you want to eliminate) will disappear.
Or maybe you can put a clear, molded plexiglass shape of a human face in front of the 2nd mic to mimic the sound bouncing off the face. (I'm sure the person speaking would love to have something like that a foot from their head.)
Now, I'm mostly kidding about that, but if it's so critical, why don't bass players sound like they're lagging in big arenas? (And yes, I know, technically they do.)
I've been to huge outdoor concerts in the past, and even though the sound seems like it's being blown around by the wind, it doesn't sound out of sync from even a mile away. I can definitely see that it's all delayed since it left the stage, but... blah blah blah.
We're talking about a room.
When you start breaking down the math to the nano level, anything seems huge.
I'm not arguing so much as wondering about the 'whys'.
I would rather just try it, but I don't have any situations coming up that I can apply that to. (or the patience to create a situation, at this time.)
I know different frequencies travel at different speeds, but if the mics are one foot apart, most frequencies probably hit both mics very close to the same time, just with imperfections. In my mind, maybe it's not perfect, but it seems somewhat doable. (But I'd bet you'd have better luck on stage with the Grateful Dead than in a small room full of screaming kids. Too much ambient room echo that would come close to the speaking person's volume.)
farss wrote on 5/13/2006, 10:19 PM
There is, as far as I know, almost no difference in the speed of sound frequency-wise. The point here is that sound travels SLOWLY.
So the sound will hit one mic before the other. At any frequency where that extra travel time is half a wavelength, the sound at the first mic is out of phase with the second, and the summed signals cancel. For a sound of double the frequency (1 octave higher) or half the frequency (1 octave lower), the opposite happens: the sounds add.

You don't really need to go out and try this. You can simulate it in Vegas; it's a great tool for trying things out!

Use the Simple Delay FX.

Make two copies of the same audio track, reduce the level of each by 6 dB (just so you don't clip) and add Simple Delay to one. Set the delay to 1 millisecond; that's roughly the same as two mics 1 foot apart.
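If you'd rather see the numbers than trust your ears, here's a rough NumPy sketch of the same experiment (the 1 ms delay and the -6 dB trims come from the recipe above; the test tones are just convenient picks):

import numpy as np

fs = 48000                      # sample rate
delay = 0.001                   # 1 ms, roughly two mics a foot apart
t = np.arange(0, 0.5, 1 / fs)   # half a second of audio

for freq in (250, 500, 1000):   # arbitrary test tones
    direct = np.sin(2 * np.pi * freq * t)
    delayed = np.sin(2 * np.pi * freq * (t - delay))
    mix = 0.5 * (direct + delayed)   # the two tracks, each trimmed about 6 dB
    print(f"{freq:5d} Hz  summed peak = {np.max(np.abs(mix)):.2f}")

# Prints roughly 0.71 at 250 Hz (partial cancellation), 0.00 at 500 Hz
# (half a wavelength late, full cancellation) and 1.00 at 1000 Hz
# (a whole wavelength late, the tracks add right back up)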

And yes, for large concerts delay lines are used to stop phase-cancellation problems, even within a single speaker setup; they're also used to 'steer' the sound from line-array speakers.
Serena wrote on 5/13/2006, 10:21 PM
In a homogeneous medium (air, water, steel) at constant temperature the speed of sound is constant for all frequencies (large-amplitude waves -- shock waves -- travel faster). So the sound from your bass player gets to your ear at the same time as all the other instruments, no matter how far away the stage is. If you want to experiment you don't need anything special. Place a radio in the room, place a room mike, speak into another mike (in the room) and look at the tracks in Vegas (or Acid or Sound Forge).
The figures farss discussed were concerned with phase. If you could see sound (which propagates as a longitudinal series of compressions and expansions -- as I'm sure you know) then you would see regions of maximum compression one wavelength apart. These are roughly 22 ft apart at 50 Hz and a bit over two feet apart at 500 Hz, and of course the maximum expansions are at mid-wavelength. A bit of a play with a diagram will show the validity of farss's argument, and why I said each frequency has to be treated separately to make it work. Fortunately our hearing doesn't seem to attach much importance to phase relationships (other than for direction).
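For anyone who wants to check the arithmetic, a couple of lines of Python will do it (1125 ft/s is the approximate speed of sound in air at room temperature):

SPEED_OF_SOUND_FT = 1125.0   # ft/s in air at roughly room temperature

for freq_hz in (50, 100, 500, 1000):
    wavelength_ft = SPEED_OF_SOUND_FT / freq_hz
    print(f"{freq_hz:5d} Hz -> wavelength {wavelength_ft:6.2f} ft")

# Prints 22.50 ft at 50 Hz and 2.25 ft at 500 Hz; a path difference of half
# these figures is what puts the two mic signals fully out of phase.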

Edit: Bob got in first! I like his proposed test method.
Grazie wrote on 5/13/2006, 11:37 PM
The only time the frequency changes, as I understand it, is when the wavelength is being "stretched" or "compressed", as demonstrated by the Doppler effect. Meaning that if the energy-emitting source were approaching or receding at some speed - ambulance/police siren; train whistle; falling Stuka bomber; blubbering child - then this frequency shift could be noted. I suppose IF the child were traversing the air in either a circular or linear motion, the bawling could have been registered as a wailing siren? Maybe not . .

. .anyways, I just love this demonstration!

http://lectureonline.cl.msu.edu/~mmp/applist/doppler/d.htm

Grazie

(Bob, "on average" I liked the candle reference!)
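Just to put rough numbers on that Doppler shift, here's a tiny sketch using the textbook formula f' = f*c/(c - v) for a source moving straight toward (or away from) the listener; the 440 Hz cry and the 10 m/s child are made-up figures:

SPEED_OF_SOUND = 343.0   # m/s in air at about 20 C

def doppler(freq_hz, source_speed_ms):
    """Observed frequency for a source moving directly toward the listener
    (use a negative speed for a source moving away)."""
    return freq_hz * SPEED_OF_SOUND / (SPEED_OF_SOUND - source_speed_ms)

# A hypothetical 440 Hz wail from a child sprinting at 10 m/s:
print(f"{doppler(440, 10):.1f} Hz approaching, {doppler(440, -10):.1f} Hz receding")
# -> roughly 453.2 Hz approaching, 427.5 Hz receding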

Serena wrote on 5/14/2006, 1:25 AM
Grazie, what about candles in the wind? The frequency changes because the speed of sound isn't affected by the relative motion of the source or receiver. If the child were traversing from inside to far outside the room, then the problem becomes much more manageable.
B.Verlik wrote on 5/14/2006, 3:26 AM
I think they're talking about us suckers going for this phase-cancellation theory like a moth to a candle.
(spelling correction)

PS: It sounds good on paper, but in real life, the sound of a screaming kid in a room, with his head spinning from side to side, is going to be bouncing off every wall and floor and hitting those mics in immeasurable ways.
B.Verlik wrote on 5/14/2006, 4:17 AM
How about this angle?
Your calculations are as if one mic were one foot closer to the source, and I'm thinking both mics are left and right, but at about the same distance. Only room ambience would make the difference. I think this is where my problem is. Does that make any sense?
Again, trying to understand...
JJKizak wrote on 5/14/2006, 5:47 AM
What would be cute is if Forge or Vegas had, on their equalizer FX, a phase-reversal function (180 degrees) where you could select a certain range of frequencies on the fly to reverse phase and hear the difference as you change the criteria. It wouldn't be perfect, but then again a percentage factor could also be selected, let's say 70% reduction or 40% reduction.
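Something like that is easy to rough out offline. Here's a small Python/SciPy sketch of the idea (the band edges and the 70% amount are just example settings, not an actual Forge or Vegas feature):

import numpy as np
from scipy.signal import butter, filtfilt

def phase_reverse_band(audio, fs, low_hz, high_hz, amount=0.7):
    """Invert (phase-reverse) one frequency band and mix it back in,
    attenuating that band by roughly `amount` (0.0 to 1.0)."""
    b, a = butter(4, [low_hz, high_hz], btype="bandpass", fs=fs)
    band = filtfilt(b, a, audio)   # zero-phase bandpass of the offending range
    return audio - amount * band   # adding the inverted band = subtracting it

# Hypothetical example: knock about 70% out of the 1-4 kHz range of a test mix.
fs = 48000
t = np.arange(0, 1.0, 1 / fs)
mix = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 2000 * t)
out = phase_reverse_band(mix, fs, 1000.0, 4000.0, amount=0.7)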
JJK
Serena wrote on 5/14/2006, 7:10 AM
I don't think farss was intending to be derogatory in his candle reference (at least that's the way I took it). If you use mikes side by side you will be able to get cancellation for sources along the perpendicular bisector of the line joining the mikes (ignoring room reflections -- better to do this in free field). Remember all that interference-pattern stuff in the old physics lessons? For any given frequency you will also get cancellation at specific angles off axis (the first-order minimum is at angle alpha, where cos(alpha) = lambda/D, with D the mike spacing). But I wouldn't expect this to be much good for rejecting broad-source, wide-spectrum noise. And, you might argue, if the Grateful Dead are doing it then so can we. But do we know what they're doing with two mikes? And if they are doing noise cancellation, they probably can afford the signal processing. But I reckon you should try it out and report back on your findings.
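Plugging numbers into that expression gives a feel for how frequency-dependent the nulls are. The sketch below assumes the 1 ft spacing discussed earlier and a few arbitrary frequencies:

import math

SPEED_OF_SOUND_FT = 1125.0   # ft/s in air
D = 1.0                      # mic spacing in feet (the figure used earlier)

for freq_hz in (1000, 2000, 5000):
    wavelength = SPEED_OF_SOUND_FT / freq_hz
    if wavelength > D:
        print(f"{freq_hz} Hz: no off-axis null (wavelength {wavelength:.2f} ft exceeds the spacing)")
    else:
        alpha = math.degrees(math.acos(wavelength / D))
        print(f"{freq_hz} Hz: first-order null roughly {alpha:.0f} degrees off the mic axis")

# At 1000 Hz the wavelength (about 1.1 ft) exceeds the 1 ft spacing, so there is
# no extra null; at 2000 Hz it sits about 56 degrees off axis, at 5000 Hz about 77.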
farss wrote on 5/14/2006, 7:22 AM
You CAN do all this in Vegas; you can even add a negative time delay to offset the spacing between the mics. You can run multiple copies of one track, feed each one through a bandpass FX, add delay and phase inversion, and then sum it all back together again.

Sadly, though, this is up there with perpetual motion.

As someone pointed out, my example showing why it wouldn't work grossly simplified the real-world situation. Depending on what direction the sound is coming from, the delay between the mics will vary. Within a reflective room you have the same sound coming from every direction.

But there's another HUGE problem. We're assuming one mic will pick up the sound from the speaker plus the unwanted noise, and the other mic just the unwanted noise. I fear not, not unless you have the talent almost swallow the mic. Both mics will pick up the person speaking.
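A quick way to convince yourself is to fake the two mic feeds in NumPy; every delay and level below is invented purely for illustration:

import numpy as np

fs = 48000
t = np.arange(0, 0.5, 1 / fs)

def arrival(freq, delay=0.0, level=1.0):
    """One source as heard by one mic: a sine at a given arrival delay and level."""
    return level * np.sin(2 * np.pi * freq * (t - delay))

VOICE, CRY = 300.0, 1200.0   # stand-ins for the speaker and the crying child

# Both mics hear BOTH sources, just at slightly different delays and levels.
mic_a = arrival(VOICE, 0.0000, 1.0) + arrival(CRY, 0.0005, 0.8)
mic_b = arrival(VOICE, 0.0003, 0.9) + arrival(CRY, 0.0000, 1.0)
diff = mic_a - mic_b         # the hoped-for "noise cancelling" subtraction

def rms(x):
    return np.sqrt(np.mean(x ** 2))

# Break the difference signal down by source (linearity lets us do this):
voice_left = arrival(VOICE, 0.0000, 1.0) - arrival(VOICE, 0.0003, 0.9)
cry_left = arrival(CRY, 0.0005, 0.8) - arrival(CRY, 0.0000, 1.0)
assert np.allclose(diff, voice_left + cry_left)

print(f"voice: {rms(arrival(VOICE)):.2f} -> {rms(voice_left):.2f}  (damaged)")
print(f"cry:   {rms(arrival(CRY)):.2f} -> {rms(cry_left):.2f}  (louder, not gone)")
# Prints roughly: voice 0.71 -> 0.38, cry 0.71 -> 1.21 for these made-up numbers.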

Noise-cancelling mics do exist, do work and are in common use, mostly as headset mics for use around airports. They work because the two mics are back to back: one is within an inch of the mouth, the other faces the other way and picks up very little of the voice (relatively). Getting your talent to wear one would probably work fairly well, although they're really best at getting rid of low-frequency noise. Of course it might look a bit weird.
You could also try those Countryman mics that sit very close to the mouth; no noise cancelling as such, just getting really, really close to the source.
farss wrote on 5/14/2006, 7:33 AM
It could well work for The Grateful Dead. Most rock singers sing right into the mic. That means the level of the voice in one mic will be way higher than it is in the other mic. Get your talent to scream into the mic and, even without the second mic, your crying-baby problem would be solved.

Then again the other mic might just be there in case they blow the first one.
apit34356 wrote on 5/14/2006, 8:17 AM
The dual mics can be used to control feedback; 6000+ watts blasting from a speaker array can induce all forms of vibrations... There is software, like Sony's Impulse and Sony's Noise Reduction 2, that can reduce the baby's crying, but it's not cheap, not perfect, and still requires a skilled audio editor. Phase cancellation is not new science, but as pointed out above, the math starts simple, while room walls... plus moving objects all add to the problem. But as with Sony's Noise Reduction 2, once you ID the frequency range and location of the source, a lot can be done.