Red meter peaks on plugins, master bus, etc. The only red that seems to make an audible (or visible) difference is on 'record' into an event (i.e. soundcard input level). Should I not worry about red anywhere else, ever?
If it's red, you're clipping the digital signal. How far "over" 0dB (the red threshold) you are determines whether you hear it or not. A little bit probably won't be heard; more definitely will.
But no. Everywhere I see red there is *no* visible or audible distortion happening. The 'red' is obviously related to the original file's bit depth, and seems to be 'absorbed' into the higher bit depth of the processing path.
I would like to have this confirmed by somebody authoritative though...
You will not hear distortion until you clip more than 10 consecutive samples, and Vegas might have some kind of detector in it that looks for that type of thing and invokes a limiter. Maybe Peter can confirm that for you, but I know from my personal experience of designing hardware that our software has look-ahead features, detects that type of thing, and does exactly this.
I was led to believe it was more than 6 consecutive samples...but anyway.
Don't forget that the channel/buss meters hold peaks, so if they hit red once, they will stay red even if there is no further clipping on that channel/buss. Clear the reading by clicking on it.
You may be correct... the magic number slips my mind at this point. I believe it is 4-10 consecutive samples before you are actually able to hear it. After I posted that message, the number "4" came to mind, but you may be correct with 6.
But yes. If you see red it means some of your samples are being rendered at or above 0dB, and that by definition can mean you're clipping. Render some of these clipped files and examine them in Sound Forge (or Vegas). You should see the clipping. Just because you don't hear it doesn't mean it isn't there.
I think you mean 10, 6, 4, whatever samples at FS before a 'Red Overload' is indicated - nothing to do with 'what you can hear', or not.
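For what it's worth, the "N consecutive samples at full scale" rule is easy to sketch. This is only an illustration of how such an over-detector might count (the function name, the run length of 4, and the NumPy modelling are my own assumptions, not how Vegas actually works):

```python
import numpy as np

def count_overs(samples, full_scale=1.0, run_length=4):
    """Flag an 'over' when run_length or more consecutive samples sit at
    (or beyond) full scale -- one way a digital meter can distinguish a
    true overload from a single sample that merely touches the peak."""
    clipped = np.abs(samples) >= full_scale
    overs = 0
    run = 0
    for c in clipped:
        run = run + 1 if c else 0
        if run == run_length:   # count each run exactly once
            overs += 1
    return overs

# A sine that just touches full scale once per half-cycle: no 'over'.
t = np.arange(1000)
sine = np.sin(2 * np.pi * t / 100)
print(count_overs(sine))            # 0

# The same sine boosted 6 dB and hard-clipped: long flat tops trigger overs.
clipped_sine = np.clip(sine * 2.0, -1.0, 1.0)
print(count_overs(clipped_sine))    # 20 (one per clipped half-cycle)
```

A single full-scale sample never registers, which matches the idea that an over-indicator is about consecutive samples at FS, not about audibility.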
I have done some 'controlled experiments' and verified that two identical 'normalised' tracks, when rendered, do in fact visibly clip (maybe audibly, maybe not) unless the master slider is dropped 6dB. With dissimilar 'real world' signals, the distortion is sometimes beyond my ability to hear. The circumstance where this usually manifests itself is a bass drum with a good 'click'.
What has confused me, and what I can now only rationalise as a "faulty plugin", is when a normalised mono track (only normalised for the purpose of this experiment) is put through the stock Track Compressor, in my case with Thresh '-10', Amount 'x4.6' and Auto Gain Compensation 'On'. For some reason the plugin's output meters show 3.1dB into the 'red'. This is reflected by the Master meters as a 3.1dB overload.
*(Aside) Without gain comp the output is -3.4dB. Surely they should show 0dB on peaks if the auto gain compensation is correct, not +3.1dB? Or don't I understand compression either?
A render to new file with the Master left at 0dB does give a clipped waveform as a result.
Now set the Master at -3.1dB, and render to a new track. You (well, I do) get a new, compressed version of your original track, peaking at 0dB, but with NO *audible* or *visible* distortion on those peaks, even when that peak is zoomed right in on. Opening it in Sound Forge and zooming right down to the FS peak shows only one individual sample at that peak level.
So that 3.1dB of 'redness' (which is indicated within the plugin, and verified externally to the plugin as being 'presented to' the Master Bus meter) is being absorbed somewhere, transparently (?).
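For what it's worth, the textbook static-gain arithmetic for those compressor settings can be sketched (assuming 'Amount x4.6' means a 4.6:1 ratio, and that correct auto gain compensation should return a 0dB input peak to 0dB; the real plugin may define these controls differently):

```python
def static_gain_db(level_db, threshold_db, ratio):
    """Textbook downward compressor: levels below the threshold pass
    unchanged; the excess above it is divided by the ratio."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# A 0 dB input peak through Thresh -10, ratio 4.6:1:
peak_out = static_gain_db(0.0, -10.0, 4.6)
print(round(peak_out, 2))                 # -7.83

# Correct auto gain compensation would add exactly -peak_out of makeup,
# bringing the peak back to 0 dB -- never +3.1 dB into the red:
print(round(peak_out + (-peak_out), 2))   # 0.0
```

On these numbers a correct makeup stage leaves the peak at exactly 0dB, which supports the suspicion that the +3.1dB reading is a plugin or metering fault rather than a misunderstanding of compression.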
I've addressed a similar problem before. First of all, the meters in the track inserts are wrong: they are -3dB off and unresponsive to the audio. While investigating this issue I've also noticed there are other bugs within those compressors, and I have decided not to use them. Rather than using normal music, this is easier to demonstrate using a 1kHz sinewave normalized to 0dB.
Set the Input and Output volume adjustments to 0dB
Set the Threshold to "0"
Set the Compression to 1:1
This should give you no compression
On the Input and output meters it reads "-3dB", thus the input meter is -3dB off.
Further on with the test: now pull the Threshold back to -3dB. The input meter rises to "0dB". Lowering the Threshold should not affect the input meter, but it does, so there's an obvious bug. Pull the Input gain back to -6dB and you can see the same thing: when you pull the threshold down, the input meter will rise by +3dB and then stop. This is what you are seeing when you're seeing red. It may just be the metering, which is why you don't hear any distortion, but it makes you feel really uncomfortable.
The Gain Reduction meter seems OK to me, though. Set the Compression ratio to 2:1 and lower the Threshold to -6dB. The gain reduction reads -3dB, which is correct for these settings. So maybe the plugins are functioning properly, but the Input meter, which affects the output meter, definitely is not.
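The expected readings for that test can be checked against a minimal software compressor (my own Python sketch, not the Vegas implementation; static gain only, no attack/release):

```python
import numpy as np

def peak_dbfs(x):
    """Peak level in dBFS, where full scale = 1.0 = 0 dB."""
    return 20 * np.log10(np.max(np.abs(x)))

def compress(x, threshold_db, ratio):
    """Static compressor applied per sample: the magnitude above the
    threshold is divided by the ratio (no attack/release smoothing)."""
    mag_db = 20 * np.log10(np.maximum(np.abs(x), 1e-12))
    over = np.maximum(mag_db - threshold_db, 0.0)
    gain_db = over / ratio - over        # reduction applied above threshold
    return x * 10 ** (gain_db / 20)

t = np.arange(48000)
sine = np.sin(2 * np.pi * 1000 * t / 48000)
sine /= np.max(np.abs(sine))             # 1 kHz test tone at exactly 0 dBFS

# Threshold 0, ratio 1:1 -> no compression; a correct meter reads 0 dB:
print(round(peak_dbfs(compress(sine, 0.0, 1.0)), 1))      # 0.0

# Threshold -6 dB, ratio 2:1 on a 0 dB peak -> 3 dB of gain reduction:
reduction = peak_dbfs(sine) - peak_dbfs(compress(sine, -6.0, 2.0))
print(round(reduction, 1))                                 # 3.0
```

So the -3dB gain-reduction reading in the test above is what the math predicts; it's only the -3dB offset on the input/output meters that has no justification.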
There is something obviously wrong with this and I know Sonic Foundry is aware of it, because I've had this discussion with Peter in the forums.
Well, I tried, but this bug still appears in V4. I'm not sure why getting an accurate compressor meter is still a problem. It was the first thing I checked in V4 beta testing, and I reported it as a bug. The input meter is still 3dB off and has the same symptoms I described above. It must be within the compressor Track Effect: I put my TC Native Reverb in as a track insert, and it also has an input meter, which reads correctly. I even put the TC Native Reverb after the Sonic Foundry Track Compressor. The input and output of the Track Compressor read -3dB playing back a 0dB sinewave; the TC Native Reverb which followed read 0dB on its input meter. Maybe we have to wait until Vegas 5 for this to get addressed?
If they use 32-bit processing internally, they could reserve a few bits for overhead. Software could use that to get accurate metering of overload levels, which has the advantage that you see how much you're overloaded, so you know how much to drop your levels by. I'm not an expert on audio DSP, but this overhead may be required for things such as limiter plugins to work, so it makes sense that it's passed down the signal chain.

When rendering to the output bit depth (16 or 24 bits), the overhead bits are probably chopped off, and thus clipped. But before that, the overload may not be an issue if everything in the chain processes at the higher bit depth. In this case, it would be possible to have reds on plugin meters with no clips in the master. Of course, if you keep raising the levels, you will eventually exceed the overhead and get clipped waveforms (and you can then probably still adjust the master so that no clipping shows).

This would mean that to be safe you should keep your meters green, but a bit of red on the plugin chain will not harm you (and who wants to check every plug-in meter on every track for the duration of every song they're mixing?). By the way, if this is how the processing works, the signal is actually processed at a higher dynamic range when the plugin meters are red, but this shouldn't result in any audible improvements.
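The scenario above is easy to sketch, assuming a 32-bit float pipeline where clipping only happens at the final integer conversion (the 3.1dB figure is borrowed from the earlier experiment; the NumPy modelling is mine, nothing confirmed about Vegas internals):

```python
import numpy as np

# A 0 dBFS sine pushed 3.1 dB 'into the red' by some plugin stage.
sine = np.sin(np.linspace(0, 2 * np.pi, 1000, endpoint=False)).astype(np.float32)
hot = sine * np.float32(10 ** (3.1 / 20))

# In a float pipeline the 'over' is still fully represented, so a meter
# can report exactly how far over you are:
over_db = 20 * np.log10(float(np.max(np.abs(hot))))
print(round(over_db, 1))                         # 3.1

# Pull the master down 3.1 dB before the 16-bit conversion: no clipping.
master = hot * np.float32(10 ** (-3.1 / 20))
safe16 = np.clip(np.round(master * 32767), -32768, 32767).astype(np.int16)

# Convert the hot signal directly instead: the peaks hard-clip, and many
# consecutive samples end up pinned at full scale.
clipped16 = np.clip(np.round(hot * 32767), -32768, 32767).astype(np.int16)
print(bool(np.sum(np.abs(clipped16) == 32767) >
           np.sum(np.abs(safe16) == 32767)))     # True
```

The red on the plugin meter is harmless right up until the conversion; drop the master first and the overload never reaches the rendered file.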
All of this is just guesswork, however. Maybe someone who has experience with DirectX can comment?
j
Your understanding of extra bits and overhead is not correct. 0dB is 0dB in 32-bit, 20-bit, 24-bit, or 16-bit. The analogy I always like to use is to think of sampling as putting a waveform on a piece of graph paper. The Y-axis is your bit depth, where -inf is the minimum value and 0dB is the maximum value. Increasing your bit resolution increases the number of values on your graph paper between those two points, so it's more accurate and there is less quantization error. The X-axis has to do with your sampling frequency: the higher the sampling frequency, the higher the density of gridlines on the X-axis.
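The graph-paper picture can be put into numbers (a Python sketch of my own, nothing Vegas-specific): the maximum value is the same at every bit depth, and only the size of one quantisation step, i.e. the gridline spacing on the Y-axis, changes.

```python
def step_size(bits):
    """Size of one quantisation step for a signed integer format,
    with full scale (0 dB) mapped to 1.0 at every bit depth."""
    return 1.0 / (2 ** (bits - 1))

def dynamic_range_db(bits):
    """Rough dynamic range: about 6.02 dB per bit."""
    return 6.02 * bits

# Same 0 dB ceiling, progressively finer gridlines and more range:
for bits in (16, 20, 24):
    print(bits, step_size(bits), round(dynamic_range_db(bits), 1))
```

Each extra bit halves the step size (adds roughly 6dB of dynamic range at the bottom), without moving 0dB at the top.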
Limiters work because they have "look-ahead processing." They actually detect that there is distortion before the audio is output. They can look ahead because the processing runs far faster than real time, so they detect that clipping would occur; the limiter kicks in and turns down the level to avoid the distortion before you hear the audio.
I don't know if this is relevant or not, but I remember reading a technical post on the cubase.net forums where it was explained by Steinberg that you could go 'into the red' on any of the meters within the program without fear of distortion, because the 32-bit float processing could accommodate it. The ONLY place where the meter could not go into the red was the final bus meter where the audio left the program on its way to the audio hardware. Red at that point would result in a distorted file.
Of course Vegas may work completely differently. And your point about the compressor readings still seems valid.
First: The Track Compressor level meter is definitely wrong. So don’t use it.
Second: 0dB in the good old analog world was a reference level, and you had headroom and signal-to-noise ratio giving a dynamic range; we used VU meters (0dB = reference level) and PPMs (-6dB = reference level).
With the introduction of digital gear, there were no standards for the reference level, and on most (low-end Japanese) gear 0dB became the maximum level – and now we have to live with that until we get full 32-bit floating point audio.
This has led to 0dB becoming the 'reference level' that you try to get as close to as possible with compressors etc. Sad, sad, sad.
Third: You can only use a 'look ahead' limiter/compressor on hard-disk-stored sound, where the sound can be read 'ahead of real time' – or you must introduce a delay in the limiter, which is acceptable when rendering sound.
The use of a look-ahead limiter/compressor has the advantage that the attack on the sound can be handled intelligently, to reduce the side effects of lowering the level.
The use of a limiter in digital equipment to avoid clipping should not be necessary, because the only way to clip the sound is to use FXs that push the sound above maximum level. Most DXi FXs (if not all) use an internal 32-bit floating-point format where clipping is not possible, so clipping is only introduced when the sound is converted to 16-bit (or 24-bit), and it can be avoided by taking care with the FX levels.
But, of course if you don’t have the time/possibility to play with FX levels, a limiter at the end of the chain could be practical.
I strongly believe that Vegas keeps the 32-bit floating format between FXs!
I concede that limiting does not require a bit-depth overhead. Red, your explanation of the difference between bit depths is of course correct, provided that you are mapping the full dynamic range of one bit depth to another. What I was referring to was the possibility of reserving some overhead when going to a higher bit depth. This means that the 'all-bits-on' value of the smaller bit depth is not mapped to 'all-bits-on' of the larger bit depth, but to a somewhat smaller (arbitrary, but fixed) value. Then, if you have a full-scale signal in the smaller bit depth and convert it to the larger bit depth, you can multiply it by a factor larger than 1 without clipping.

Using your graph paper example, if you convert data this way, the y-axis of the larger bit depth is slightly longer than that of the smaller bit depth, and the larger bit depth does not provide as large a resolution increase (increase in the number of values between two points of source data) as it would if you mapped 'all-bits-on' to 'all-bits-on'. However, if your final output bit depth is always smaller than the processing bit depth, it would make sense to use some overhead when converting to the processing bit depth, because the reasons for using a higher processing bit depth are to reduce artefacts of digital signal processing, such as rounding errors and overflow (clipping).
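As a concrete (and entirely hypothetical) version of that mapping, here is a 16-bit to 24-bit conversion that reserves one bit (~6dB) of overhead; the amount is arbitrary, as described above:

```python
import numpy as np

FS16 = 32767            # 'all bits on' (positive full scale) in 16-bit
FS24 = 8388607          # 'all bits on' in 24-bit
HEADROOM_BITS = 1       # reserve one bit: 16-bit FS lands ~6 dB below 24-bit FS

def to_24bit_with_headroom(x16):
    """Map 16-bit full scale to half of 24-bit full scale, leaving
    room to double the signal (about +6 dB) before anything clips."""
    scale = (FS24 / FS16) / (2 ** HEADROOM_BITS)
    return np.trunc(x16 * scale).astype(np.int32)

x16 = np.array([FS16, -FS16, 16384])   # a full-scale 16-bit signal
x24 = to_24bit_with_headroom(x16)

# A gain of x2 (~ +6 dB) now fits without exceeding 24-bit full scale:
boosted = x24 * 2
print(bool(np.max(np.abs(boosted)) <= FS24))   # True
```

The cost is exactly the trade-off described: with one bit reserved, the conversion gains only 7 bits of effective resolution instead of 8.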
The last two posts have mentioned 32-bit floating-point processing. Floating point would give you more overhead, but less resolution increase; the general point is the same. I don't think it matters much whether floating-point or integer arithmetic is used. This may be mainly a choice of which mode suits the hardware better.
The Steinberg quote mentioned in one of the above posts would confirm what I said in my earlier post.
Well, all in all this is some good information. The thing to keep in mind is that one person's poison is another person's sugar. I don't believe that if someone doesn't know how to watch their levels and understand digital audio, we should make band-aids for the unknowledgeable. If I want to USE digital distortion as an EFFECT, I have that option. If I don't want digital distortion, then I know how to watch my meters and mix like the trained engineer I am. In short, don't take away my sugar so the average Joe idiot doesn't have to worry about anything.