final plugin chain ideas?

larryo wrote on 1/6/2003, 4:41 PM
Currently when I'm ready to render to 16/44.1, I add the following plugs to the final chain: paragraphic EQ with low/high shelf (remove <20 Hz and >20 kHz), Wave Hammer preset "Master for 16-bit", and finally dither (I record in 24/48). My ears say a-ok, but I'm wondering if the pros out there can tell me if I'm neglecting anything. Usually afterwards, I'll normalize, trim, and remove DC offset from the rendered file and call it a day. Obviously with this program there are many ways to get good results, but I would appreciate constructive tips from any expert users. Thanks.

Comments

ibliss wrote on 1/6/2003, 6:16 PM
If you are using the peak limiter in the Wave Hammer plugin then you should have no need to normalize - especially as dithering should be the last thing in the 'mastering' chain. You should set up the limiter for a max output of -0.3 dB.

I can't see why you need DC offset correction either - Vegas won't (or shouldn't) generate rendered files with this problem. DC offset correction should be applied to the source WAVs, if anywhere (and even they might not need it).

Most importantly "if it sounds right, it is right".

Mike K
Geoff_Wood wrote on 1/6/2003, 8:06 PM
Any reason not to record at 24/44.1, saving time and potential sonic degradation from SRC?

g.
ibliss wrote on 1/6/2003, 9:16 PM
Good point, Geoff. Why sample-rate convert AS WELL AS reduce bits? Maybe it's unavoidable if you use a Sound Blaster card. Otherwise stick to 44.1 all the way through.
Studio_de_Lara wrote on 1/6/2003, 10:42 PM
I agree. Staying at 44.1 kHz is a lot easier in the long run. Since all the ACID loops I have are 44.1 kHz, and you need a really good program to do SRC from 48 kHz to 44.1 kHz, it is better to stay at 44.1 kHz. If you can go 24-bit, then do that. There are a lot of good dither options now.
Regards,
Rich
larryo wrote on 1/7/2003, 7:43 AM
Thanks, all. I've read some debate on these forums regarding 48 vs 44.1 - I'll try a few projects at 24-bit/44.1... actually, some of the best recordings I've done have been 16-bit/44.1. My main concern was using the Wave Hammer "Master for 16-bit" preset. Regarding DC offset, I do find some DC offset on my rendered files after tweaking them with normalization. While the initial render comes close to my desired RMS levels, they generally need a dB or so of boost. Afterwards, DC offset usually shows some need for correction. Last question: do any of you guys use the low/high shelf cut on the final plug? I'm not hearing or seeing any results, so for now I've abandoned that practice. Regards, Larry.
ibliss wrote on 1/7/2003, 8:44 AM
I still think you can bypass the normalization step if you use Wave Hammer appropriately. Choose the "Master for 16-bit" preset, then select the 'Volume maximizer' tab.
Set the Output fader to -0.3, then GRADUALLY reduce the Threshold slider. You will easily be able to coax a couple of extra dBs out of your audio.

BUT DON'T OVERDO IT!

I guess there's no harm in a bit of bass roll-off at the LOW low end of the spectrum, to filter out the stuff that's gonna soak up amp power without being reproduced by the speakers. But it helps to have a decent set of monitors to start with, and also a variety of systems to try the mix on.
Geoff_Wood wrote on 1/7/2003, 4:40 PM
I can't see any point whatsoever in your high-low rolloffs. Maybe the low one, if you have LF issues that were not dealt with elsewhere, but my speakers *do* go down to approaching that frequency, so if it's meant to be there I do like to hear (rather, feel) it!

Also the DC offset stuff - if processing is producing a DC offset, then something is broken! If you have a DC offset problem, maybe it is coming from your A-D? I wonder why Vegas doesn't have a calibration doohickey like SForge?

geoff
Rednroll wrote on 1/7/2003, 6:10 PM
I hardly ever use anything on my master bus plugin chain, unless I'm trying to create a special effect. I would never put an EQ on the master bus, be it low-pass, high-pass or paragraphic. If you mix the song correctly, then there's no need for a master-bus EQ. Mix it properly, and leave the final tweaks of EQ...IF NEEDED...to the mastering process, when all of your songs are complete.

Try these settings: change the Vegas properties to 8-bit, 22 kHz sampling rate. Pull the master fader down to -55 dB... render your mix to a new track and then normalize. That will give you a cool, grainy retro sound like you'd get from an E-mu SP-1200 sampler.
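(If you're curious what that recipe is actually doing to the samples, here's a rough numpy sketch of the same kind of degradation - coarse requantization plus naive downsampling. The tone and values are made up for illustration; Vegas of course does its own processing internally.)

```python
# Rough sketch (toy signal, assumed values) of the kind of degradation the
# 8-bit/22 kHz trick produces: coarse requantization plus naive decimation.
import numpy as np

def bitcrush(samples, bits=8, decimate=2):
    """Crush a float signal in [-1, 1] to fewer bits and a lower rate."""
    levels = 2 ** (bits - 1)                       # 128 steps per polarity at 8-bit
    crushed = np.round(samples * levels) / levels  # coarse quantization = the "grain"
    return crushed[::decimate]                     # drop samples: 44.1k -> ~22k,
                                                   # no anti-alias filter, so it aliases

t = np.linspace(0, 1, 44100, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)           # 440 Hz test tone at -6 dBFS
print(bitcrush(tone)[:5])
```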
larryo wrote on 1/7/2003, 6:38 PM
Thanks all for the feedback. The biggest problem I've been having is the level of the rendered file. I'll try the Wave Hammer/Volume Maximizer thing. The frustrating thing has been the inconsistencies using normalization in RMS mode. Basically, I don't understand why the levels scanned by Normalize vary so much when I re-scan using "Statistics" in Sound Forge. The peaks are right, but the RMS levels don't match.
drbam wrote on 1/7/2003, 8:02 PM
It didn't occur to me until Red's post that you weren't specific about whether your original question related to a "final mix" or the mastering stage. If it's a final mix, then Red is obviously correct and you should not insert anything into the master bus.

drbam
ibliss wrote on 1/7/2003, 9:05 PM
...and render to 24-bit, too, if the files are to be mastered later.

Dithering should be the last thing you do, after all your wavehammering/normalizing/DC-offsetting.
Rednroll wrote on 1/8/2003, 10:18 AM
"The frustrating thing has been the inconsistencies using Normalization in the RMS mode. Basically, I don't understand why the scanned levels using Normalize vary so much when I re-scan using "statistics" in Sound Forge."

Well, if all your songs sounded the same and had the same dynamic range throughout the ENTIRE song, then they would all be the same volume after RMS normalization. RMS normalization scans the entire song, finds an AVERAGE RMS value, and then applies one gain to the entire song to try to hit that average. Just think about the first part, when it does its scan. If you have one song that is very soft throughout and then very loud during the chorus, there is a big variance in dynamic range, and this will lower the scanned average RMS value. Then if you have another song which stays at mostly the same level throughout, its scanned RMS value may be much higher. But let's say both songs are actually at the same level during their choruses... how can a program tell the difference between your chorus and your verse? The answer is that it can't. It just scanned the entire song and found an average RMS value. Basically, it's an average of an average.
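(Here's a tiny numpy sketch of that point, with made-up "songs" built from noise at fixed levels - not real program material, just enough to show the scan numbers diverge even though the loud sections match.)

```python
# Toy numbers for the "average of an average": two fake "songs" whose loud
# sections sit at the same level still scan to different whole-file RMS values.
import numpy as np

def rms_db(x):
    """Whole-file RMS level in dB - what an RMS-normalize scan measures."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

rng = np.random.default_rng(0)
verse = 0.05 * rng.standard_normal(44100 * 60)   # song A: quiet verses...
chorus = 0.50 * rng.standard_normal(44100 * 30)  # ...chorus at song B's level
song_a = np.concatenate([verse, chorus])
song_b = 0.50 * rng.standard_normal(44100 * 90)  # song B: loud the whole way

print(rms_db(song_a), rms_db(song_b))
# Song A scans several dB lower, so RMS-normalizing both to one target
# pushes A's chorus well past B's - the scan can't tell verse from chorus.
```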

You need to read up on mastering and some techniques used, try them all out, and then develop your own skill. This will take time. I would recommend a couple of subscriptions to EQ magazine and Keyboard magazine. They regularly have articles by people who do mastering for a living, who write very simple-to-understand articles describing the mastering process, and they review mastering plugins and programs and such.

I've said this in the past: mastering is NOT a one-button-press processing of a file. Each song must be processed individually, and you must understand, by listening to each song, what the required processes are before you even start. This is a skill that is developed over time with practice, just like anything else. If you want lifeless, homogeneous-sounding songs, then keep using the one-button-push processes like "RMS Normalization" and "Wave Hammer/L1 Maximizer" plugins, and keep scratching your head and being "frustrated" that your songs don't sound as good as anything released and mastered by a professional who understands mastering. These plugins have their place in the mastering process and are very useful... but they're in no way the "all in one" plugin for every song.

red
Cold wrote on 1/8/2003, 3:56 PM
Sometimes I find it worth it to add a compressor to the master channel fader just for an "A/B" of the mix, with the compressor set roughly where I would have it for mastering. I don't mix with this compressor on, but I do find it useful for final tweaks such as checking the level of the low end, making sure the lead vox sits in the mix properly, etc. Red is quite correct, though, in saying to put nothing on the master fader during render, and to treat mastering on a song-by-song basis. Personally, if I've mixed the project, I prefer a different person doing the mastering, monitoring through different speakers and listening with unbiased ears. This is what I suggest to my clients anyway... Most of the time this falls on deaf ears and I end up doing the mastering as well as the mixing, though. The joys of being a small studio dealing with poor musicians. Steve S.
larryo wrote on 1/8/2003, 5:29 PM
My initial post was to see if frequent visitors to this forum use any plugins on the final buss before rendering a 2-track master. My use of Vegas/Sound Forge is for a home project studio as a singer/songwriter, not commercial use. I find that I generally get good results, and most of this comes from having what I consider a good ear. My objective is to have Vegas play me something that sounds as good as it gets just before I render the track to a final mix, and to do so without getting overwhelmed by the tools. I certainly don't expect any of these plugs to be a one-click mastering tool; rather, I was seeking advice on using final-buss effects to optimize the rendered master. Thanks to all for the responses and shared information. Regards, LarryO.
fishtank wrote on 1/9/2003, 3:39 PM
What I do not understand is why so many people love to *normalize* everything. From what I have heard from the more knowledgeable folks I know, normalizing is a bad thing, as the level changes that are applied will cause quantization errors.

I have always used the Waves L1/L2 as the final part of the chain during *mastering*, before I burn a CD. Applying dither with the L1/L2 reduces the quantization distortion, and I can set the limiter for the desired loudness at that time. I have not bothered to try the Wave Hammer in SF 6, but I assume it is similar to the L1. I agree that the mix bus in Vegas is NOT the place for processing in most cases, and the project should be rendered as 24-bit. The conversion to 16-bit for CD can be done in Sound Forge after processing with an L1 or equivalent, with the dither level set for 16-bit, etc.
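(For reference, here's roughly what that final 24-bit-to-16-bit step with dither amounts to, sketched in numpy - a generic TPDF dither-and-round, not the actual L1/L2 or Wave Hammer algorithms, and the test signal is made up.)

```python
# Generic sketch of a dithered 16-bit conversion: add ~1 LSB of triangular
# (TPDF) noise at the 16-bit level before rounding, which decorrelates the
# quantization error instead of letting it distort low-level detail.
import numpy as np

def dither_to_16bit(samples):
    """samples: float array in [-1, 1]. Returns dithered int16 samples."""
    rng = np.random.default_rng()
    lsb = 1.0 / 32768.0                              # one 16-bit LSB
    tpdf = (rng.random(samples.size) - rng.random(samples.size)) * lsb
    noisy = samples + tpdf                           # ~2 LSB p-p triangular noise
    return np.clip(np.round(noisy * 32767.0), -32768, 32767).astype(np.int16)

# e.g. a quiet fade-out, where plain truncation would be most audible:
t = np.linspace(0.0, 1.0, 48000, endpoint=False)
fade = np.linspace(0.001, 0.0, 48000) * np.sin(2 * np.pi * 440 * t)
print(dither_to_16bit(fade)[:8])
```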

What is really difficult for me to understand is why someone would normalize the multitrack files. What good could this possibly do??? If you need them louder, you can just turn up the fader. You can also use an L1 or some other compressor/limiter on the insert if you want to squash the track. IMHO, I do not see a benefit in normalizing the tracks, not to mention the added quantization distortion you will have by doing so.

I can understand why people would normalize as a final step if they do not have an L1/L2/Wave Hammer or similar-type plug-in and they are not sending their material out for professional mastering. Other than that, I can see no use for it. Someone please correct me if I am wrong.

larryo wrote on 1/9/2003, 3:59 PM
"What I do not understand is why so many people love to *normalize* everything. From what I have heard from the more knowledgeable folks I know, normalizing is a bad thing, as the level changes that are applied will cause quantization errors. "

I never normalize my multitracks, either directly or using the Vegas track tool. I agree - that's what the fader is for. I do, however, hear no artifacts from the process when applied to my final 2-track mixes, and I've been doing this since Sound Forge 4.0. Then again, I'm not pumping out my product for clients. It's just me and my ears, and those nearby I like to torture....
Geoff_Wood wrote on 1/9/2003, 4:03 PM
fishtank said:

"What I do not understand is why so many people love to *normalize* everything. From what I have heard from the more knowledgeable folks I know, normalizing is a bad thing, as the level changes that are applied will cause quantization errors."

Normalisation is a completely linear function and does (should!) not generate *any* quantisation or other errors. Like if you have 3, 4, 5, and 6, and want to make the biggest number 10, you add 4 to everything and will get *exactly* 7, 8, 9, and 10. That simple. No dithering or anything 'vague' comes into it.

It does, however, exacerbate the potential for overpeaking the headroom in subsequent processing, though this isn't a problem with today's higher-bit-depth processing. And it possibly adds a little extra processing load.

Why do we do it in the first place? Because when you set the slider to "-12", you know the loudest peak will be exactly "-12" and not "-37.5". That is, unless you have *anything* active in the track FX....


geoff
Rednroll wrote on 1/9/2003, 5:04 PM
"Like if you have 3,4,5, and 6, and want to make the biggest number 10, you add 4 to everything and will get *exactly* 7,8,9, and 10. That simple."

I disagree....any process will cause quantization errors!! What happens to your "3, 4, 5, and 6" when you have to add 3.9 to it instead of a nice even integer like 4? The answer is that it rounds up to 7, 8, 9 and 10... therefore, "quantization errors". What happens to the audio that is currently at a level near -inf, down in the lower bits of resolution, like a fade-out? It also suffers "quantization error", because there are not enough bits to do an accurate calculation. "Quantization errors" are a form of distortion, and as Fishtank stated, more knowledgeable folks will try to avoid an additional process like normalization if it is not needed.
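(You can watch this happen with a few lines of numpy, using the same toy numbers from Geoff's example - except doing what peak normalization actually does, which is scale by a gain factor rather than add a constant, a point Red makes again further down.)

```python
# Toy demonstration: a non-integer gain on integer samples has to round
# somewhere, and those roundings are exactly the quantization error.
import numpy as np

samples = np.array([3, 4, 5, 6], dtype=np.int16)
gain = 10.0 / 6.0                          # bring the peak of 6 up to 10
exact = samples * gain                     # 5.0, 6.67, 8.33, 10.0
stored = np.round(exact).astype(np.int16)  # 5, 7, 8, 10 - forced to whole steps
print(exact - stored)                      # [0, -0.33, 0.33, 0]: the error
```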

Additionally, I never use the normalization function when mixing. That just means you will have to lower most of your channel faders so that you don't go over 0 dB on your master bus, and the lower your channel fader is, the lower your bit resolution will be when doing the mixing. I had to prove a similar point awhile back to good ole Pipeline on a mastering subject, when he tried to tell everyone to send their stuff to a mastering engineer peaking at -.3 dB to get the maximum bit resolution, because that's what his "Mastering Guy said". I disagreed: it is better to send it at -6 dB to get maximum resolution and yet give the mastering engineer some headroom to do their job. Below is that discussion and a test that I described so you can prove this to yourself. I never did hear anything back from that one..... I think he just whimpered away with his tail between his legs.

Here read it for yourself....there's some good info there:
http://www.sonicfoundry.com/forums/ShowMessage.asp?ForumID=19&MessageID=101299

Here's another post I did in the past on the same subject matter that you can get some good stuff outta:
http://www.sonicfoundry.com/forums/ShowMessage.asp?ForumID=19&MessageID=135026

"Why do we do it in the first place?"

Because MOST people don't fully understand the consequences or how digital audio works, but will tend to give information to other people even though it is false; therefore it becomes a practice for some people, although they don't really understand why they're doing it. Quite the same reason the Yamaha NS-10 with Kleenex placed over the tweeters became an essential set of monitors for every studio in the past.

In that scenario, some great mix engineer produced a mix that everybody liked. Some magazine came in and did an interview with him. Placed behind him in a photo was a set of NS-10s with Kleenex draped over the tweeters. So everybody concluded.... HEY!! I've got to have a set of NS-10s with Kleenex over the tweeters to get a good mix. After this trend took off, they later talked to that engineer, and he admitted that he didn't use NS-10s to do that mix in the first place.... On the day they came to do the interview, he had been working all day on some audio in another room with the NS-10s, and from working so long his ears were killing him, so he placed the Kleenex over the tweeters to reduce the high end that was fatiguing his ears..... But at this point everybody had already drawn their false conclusions.

Hope you enjoy!!
Red
Geoff_Wood wrote on 1/9/2003, 11:18 PM
Red sed

"I disagree....any process will cause quantization errors!! What happens to your "3,4,5,and 6" and you have to add 3.9 to it instead of a nice even integer like 4? "

I am, of course, only referring to 'peak' normalisation rather than RMS.

There is no such thing as 0.9 of a bit. As I said (incompletely), peak normalisation is a pure linear additive process. Whole bits only are involved, not like complex processing where the 'answer' may well be a fraction of a bit and benefit from dither.

16,382+2 = 16,384; 253+2 = 255; 8,030+2 = 8,032; 0+2 = 2. No error.

geoff
Rednroll wrote on 1/9/2003, 11:52 PM
You're obviously missing the point and are blatantly wrong, even in your own example: "0+2=2". If a sample is at 0 bits and you add 2, as in your scenario, then it shows right there that there is an error, because you just added "2" to "0" and the REAL answer you want is ZERO, not "2"..... or did you really want that silence in your song to be making noise? AND I WAS referring to "peak normalization". You must be using that new math that I don't understand. Take an extreme case: let's say your original .WAV file only had a peak level of -70 dB, which uses very few bits to represent that level, and you use peak normalization to raise that peak to 0 dB. What was really represented by maybe 4 bits at a level of -70 dB has now had a gain of +70 dB applied to it, and is being represented by the entire bit depth you're working at, 24 bits, peaking at 0 dB. Do you really think those 24 bits have no errors, when the original file was only at a level of 4 bits? Of course they do.

Here... prove it to yourself. Take a 16-bit file that is currently peaking at 0 dB, and lower the level by -75 dB (i.e., a "linear" function). Now take that -75 dB file and do a peak normalization back to 0 dB... and tell us all what you hear when it's done. From your explanation, you should hear exactly what you started with, because it's only had "linear" processes done to it.
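(The same experiment is easy to run in numpy if you'd rather see numbers than trust your ears - a made-up 16-bit test tone here, but the mechanism is the same.)

```python
# The round-trip test on a toy 16-bit tone: -75 dB down, peak-normalize back
# up. Only a handful of levels survive, so the residual error is enormous.
import numpy as np

t = np.linspace(0, 1, 44100, endpoint=False)
x = np.round(32000 * np.sin(2 * np.pi * 440 * t)).astype(np.int16)

down = np.round(x * 10 ** (-75 / 20)).astype(np.int16)  # peak drops to ~6 counts
gain = np.max(np.abs(x)) / np.max(np.abs(down))         # normalize back to old peak
up = np.round(down * gain).astype(np.int16)

print(np.max(np.abs(x - up)))   # thousands of counts of error - the low-level
                                # detail never comes back
```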

You are MULTIPLYING when you do "peak normalization"; you are NOT doing addition, and from what I remember from those calc courses in college, with that kind of multiplication you always end up with some kind of round-off "error". Where did you go to school at?

Do you really want to go on?
fishtank wrote on 1/10/2003, 9:03 AM
Good explanation, Red.

Geoff wrote:

Why do we do it in the first place? Because when you set the slider to "-12", you know the loudest peak will be exactly "-12" and not "-37.5". That is, unless you have *anything* active in the track FX....

-----------------------------------------------------

I still do not see what normalizing a track gets you. Even if you assume that you will not introduce quantization errors by normalizing (which is NOT the case), how is setting the fader to "-12" and knowing the loudest peak will be "-12" dB useful? So what. You mix the tracks until it sounds like you want. If you are clipping the mix bus, you can select all the tracks and move the faders down together until you are where you need to be. Knowing that the peaks of a certain track will be right where the fader is set offers no real advantage as far as I am concerned. The fact that you are indeed adding quantization errors (whether they are audible to you or not) is a reason not to do it.

I believe that quantization errors will be less of a problem when working with 24-bit files than with 16-bit, but why introduce distortion unnecessarily?

This argument reminds me of the classic "I'm showing clipping on my mix bus but do not hear any distortion - so it must be ok". Technically, it is not. I agree that slight clipping may not be audible and will not necessarily ruin your mix, but there is no reason to let it happen. If you want your mix louder, use a compressor and peak limiter when you *master*, and think about compressing tracks individually more when you mix and/or track, if needed.



Rednroll wrote on 1/10/2003, 9:52 AM
Fishtank wrote:
This argument reminds me of the classic "I'm showing clipping on my mix bus but do not hear any distortion - so it must be ok".

That's really ironic, because Geoff made this exact statement in another post and assumed it was something the program must be doing wrong. And people wonder how these things eventually come to be accepted as true, even though anyone who truly understands them knows that this is not the case.

http://www.sonicfoundry.com/forums/ShowMessage.asp?ForumID=19&MessageID=147646

As a side note, a lot of users criticize me for slinging mud and starting arguments in these forums. My point is not to start an argument, but if I feel someone is giving blatantly wrong information that is contrary to the way I have learned and understand it, then I will correct them on their mistake and prove to them that they are indeed wrong. There is a lot of this stuff that happens in the recording industry... thus my example of the NS-10 monitors.... and I feel we could all benefit and learn from a good debate, and truly understand how things work. Otherwise, we're just mixing on autopilot and doing things because "someone" told us this is the correct way, and as you can see from this discussion, that really isn't the case.... so instead of having one engineer doing something wrong, we now have 30 doing the same thing wrong, and the disease continues to grow.

My philosophy has always been to understand how something works, including its strong AND weak points, because you never know when there comes a special case where that weak point actually becomes a strong point for your particular application. See my original post in this thread about making a retro, grainy-sounding audio sample. So basically you've doubled the arsenal you're using with the same amount of tools you started with.
Geoff_Wood wrote on 1/10/2003, 3:16 PM
OK, 0+2=2 was wrong - 1+2=3 would have been better, cos the bottom 2 become zeros!

How about a new normalisation preset - "Normalise to highest whole bit", where every binary digit has another (inherently whole) binary digit added, to peak the loudest one, and keep the rest un-quantise-distorted and linear. Remember that every dB of the original recorded signal is in reality a whole binary digit - at least at the first stage, 'as recorded'. It's only after processing that quantising/dithering decisions must be made, as a result of a function on that original 'whole signal'.
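(If I read that right, the idea is gain in exact powers of two, which on integer samples really is a lossless bit shift - unlike an arbitrary dB gain, which must round. A small numpy sketch under that assumption; this isn't an actual Vegas or Sound Forge feature.)

```python
# "Whole bit" gain as an interpretation of Geoff's preset: a power-of-two
# scale is an exact left shift on integers, while an arbitrary dB gain rounds.
import numpy as np

x = np.array([100, -250, 4095, -8000], dtype=np.int32)

shifted = x << 2                          # x4, i.e. +12.04 dB: exact while it fits
print(np.array_equal(x, shifted >> 2))    # True - perfectly reversible

gained = np.round(x * 10 ** (3 / 20)).astype(np.int32)   # +3 dB: must round
print(gained / 10 ** (3 / 20) - x)        # nonzero residue once you undo it
```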

In my case, normalisation enables straightforward application of known desired threshold levels in compression, as a starting point, without having to find a separate threshold point for each separate track. Am I lazy? Individual track play-level meters would reduce the 'necessity' to normalise...

My point with the 'overload' indicators is that in many cases *there appears to be no resulting distortion*, or maybe I just can't hear it.

geoff
Rednroll wrote on 1/10/2003, 5:25 PM
You can't hear it because, in fact, it isn't there in this case - the meters are WRONG within the track inserts. See the explanation of this, and how to test it for yourself, in the other post relating to it. You actually have another 3 dB to go before you start to hear it, but unfortunately those meters won't show you that.

Feel better that you're not crazy now? :-)