Normalize vs Manual Gain Staging – any difference?
Easy one for you vets:
I want to gain stage my tracks before applying any mixing plugins and I have a handful of questions:
1 – is there any reason that SWS/BR: Normalize loudness of selected items/tracks… is a worse choice than manually gain staging each track with a VU meter on the master? It seems to me this would do the same thing much more quickly – is there something I’m missing or am I good to go on this?
I have not gain staged until now, just as I’m about to balance volumes, add compression, EQ, saturation, etc. and really get into my mixing phase. That is to say, I already used the following plugins without having normalized volume first:
a – Melodyne
b – Vocal Rider
c – Izotope RX8 de-click and mouth de-click
And all seems to have worked out well enough so far.
2 – Should I have indeed gain staged before putting any of the above plugins on the tracks? At this point I’ve used the plugins and rendered stems out of the finished products so this is more to know for next time around.
3 – Is there any danger in rendering a track too many times? As in, if I render after Melodyne, then again after Vocal Rider, then a 3rd time after RX8, then a 4th time after using the split/trim tool to insert silence does this at all degrade the signal? I’m under the impression it does not degrade the signal.
Thanks for all your help – I’ve learned TONS since starting to talk to you geniuses on this forum!
Just to get a good signal-to-noise ratio. I've read that some plugins are calibrated to affect the signal best at about -18 dBVU or thereabouts. Is that unreliable advice, to your mind?
I just typed out a 3 paragraph response and answered most of your questions. Then I decided that I really don't want to get into any conversation on the topic of "Gain staging" any more. It's not worth the potential bullshit.
I'm sure you'll get help from a few people who do know what they're talking about. Hopefully Karbomusic, Ashcat, or maybe even White Tie will help you out. There are many others. You probably know who not to listen to, but that's up to you.
Lol, I can understand your reluctance but I would have read it. If you reconsider I'm here for it.
Cool. To be honest, I think there's quite a bit of useful information in at least 3 or 4 threads that were started in the last 10 days or so. Anyone coming in here will probably be repeating something that they or someone else already said in at least one of those other threads.
Not trying to be un-helpful. But I just don't want to be part of a whole new conversation on it.
That's Kooky talk.
I'll do a search then - my bad. Not trying to spam the forum with repeats. I'll search "normalize" and see where that gets me. The last one I tried got waaay over my head and this is a pretty basic set of questions. Stay up, my man....
Well after a lot of over-my-head and more-complex-than-I-need (or understand) reading, I still have the same question with an added feeling that "one day" I'll understand these more techy threads.
For today I'm still not getting why I'd manually gain stage over normalizing in Reaper. I should add that I'm speaking of Loudness Normalization, not Peak Normalization.
I read the following, which is where the question came from:
Normalizing audio and gain staging in Reaper
There’s a bit of a heated discussion going on in the industry regarding both normalizing your audio before mixing, as well as about gain staging. Some people say you shouldn’t normalize your tracks because the recording engineer and the producer set the levels right for what the rough balance should be, in accordance with the vision the latter has. Truth be told, for a lot of projects, the tracks aren’t necessarily recorded with the end result in mind. Normalization helps bring all the tracks to a level that’s easier for us to use in our mix template.
When mixing with plugins, gain staging matters because plugins are designed to operate at a “nominal level,” which is generally equivalent to an average (not peak!) level of -18 dBFS. Audio files that are too soft won’t trigger a compressor or gate properly and extremely loud files may trigger a compressor or gate too much and may limit the amount of EQ you can apply before overloading a plugin.
There are two types of normalization available: peak normalization and loudness normalization. Loudness normalization brings the average loudness of the file to a specified level, which is what we want for gain staging purposes.
I generally go for a loudness for all tracks (average level—not peak!) of -20 LUFS. This seems to satisfy the plugins’ need to be fed audio at the correct level. After normalizing everything, to avoid peak overloads with the drum and percussion tracks, I’ll bring down their level by a few dBs. Then I might bring up the lead vocal tracks by a couple of dBs. This gets me to a good place to start the static mix where my plugins behave well and my faders are in a usable place (not too low).
If I don't want to overthink this, is the above sound advice?
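For what it's worth, the -20 LUFS step described in that quote can be approximated in a few lines. A minimal sketch, assuming NumPy is available; note that real loudness normalization (LUFS, as SWS/BR: Normalize loudness uses) adds K-weighting and gating per ITU-R BS.1770, so the plain RMS measurement here is only a rough stand-in for illustration:

```python
import numpy as np

def rms_dbfs(x):
    """Average (RMS) level of a signal in dBFS, where +/-1.0 full scale = 0 dBFS."""
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))))

def gain_to_target(x, target_dbfs=-20.0):
    """Linear gain that brings the signal's RMS level to target_dbfs.
    NOTE: true LUFS loudness adds K-weighting and gating (ITU-R BS.1770);
    plain RMS is used here only as an approximation."""
    return 10 ** ((target_dbfs - rms_dbfs(x)) / 20)

# Example: a full-scale 440 Hz sine has an RMS level of about -3 dBFS.
t = np.arange(48000) / 48000.0
sine = np.sin(2 * np.pi * 440 * t)
normalized = sine * gain_to_target(sine, -20.0)  # RMS now sits at -20 dBFS
```

This is essentially what the batch action does for you per item, which is why it's so much faster than watching a VU meter track by track.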
^Ha! There's nothing wrong with being organized with levels; it usually makes the entire job easier with less jumping around and compensating, but it isn't hurting the audio and...
Gain staging is really a historical analog term that doesn't align well with digital, and for this reason we try not to call it that, because it causes misinformed confusion that spreads like a virus, which is what most of us want to avoid. That's right, we are in the middle of a gain staging pandemic as I type this.
Regarding non-linear plugins that are "happy with minus some amount of dBFS": all that means is that if you give the plugin more signal, it does more of what it does; give it less, it does less of what it does. But at the end of the day, you and your ears are what decides what that should be. They have knobs; turn them until you like what you hear.
__________________
Music is what feelings sound like.
Cool. To be honest, I think there's quite a bit of useful information in at least 3 or 4 threads that were started in the last 10 days or so. Anyone coming in here will probably be repeating something that they or someone else already said in at least one of those other threads.
Not trying to be un-helpful. But I just don't want to be part of a whole new conversation on it.
That's Kooky talk.
you say you don't want to get involved but you've already posted three times in this thread. Why not just stay away from it lol
__________________
Have a GOOD time....ALL the time !
I want to gain stage my tracks before applying any mixing plugins and I have a handful of questions:
1 – is there any reason that SWS/BR: Normalize loudness of selected items/tracks… is a worse choice than manually gain staging each track with a VU meter on the master?
Is there something I’m missing or am I good to go on this?
I have not gain staged until now; I already used the following plugins without having normalized volume first:
a – Melodyne
b – Vocal Rider
c – Izotope RX8 de-click and mouth de-click
And all seems to have worked out well enough so far.
2 – Should I have indeed gain staged before putting any of the above plugins on the tracks? At this point I’ve used the plugins and rendered stems out of the finished products so this is more to know for next time around.
3 – I’m under the impression it does not degrade the signal.
Since "gain staging" in a DAW is nowadays a bit of a misnomer (my proposal is to substitute the term "gain back-staging"), you do not need to normalise by loudness each time you process your track/item/recording.
I have never used Melodyne, Vocal Rider, or RX8, so I cannot help you with those. Vocal Rider seems like a cool auto-automation tool, though.
To make a good comparison between 'Normalising (by Loudness)' and 'Manual RMS setup' (gain back-staging), maybe it is better to show us some examples (3–5 sec. of the performance in the original format: .wav, .aiff, .flac, or .wv) before/after this normalisation.
What value would you like to achieve (in terms of loudness and RMS)?
"To degrade the signal" after multiple renderings in what sense? As in stems for stem mastering or as in pre-mixing.
It will depend on the settings of the plugins that take part in the preparation for rendering. I am sure the performance will be shite anyway if you really needed Melodyne (unless it was for some "robotic" vocal effect). So, how good is the source? Is it degraded or faulty in the first place?
...it seems that as long as I'm operating at 24 bit (which I'll check that I am), I should have no problem over 4 or 5 renders.
Any time you’re rendering anything other than the final distribution master, you really should render to a floating point format. That way you don’t have to worry at all about clipping or quantization distortion or level in general. You can do that about infinitely without anything like “generation loss”.
In 24 bit, there actually are limits in both directions, and even if you watch the levels so it doesn’t clip, you will start to accumulate distortion/noise/errors near the zero crossings (of the waveforms, not talking dB). Those things usually aren’t a big deal until you start adding a bunch of gain and compression and stuff, and yeah, you probably can “get away with” several consecutive renders as long as you’re not doing truly insane things in between. You just have to watch for clipping.
But since even 32 bit FP means you don’t have to watch or worry about anything ever, it’s just the obvious answer. It’s as close to not rendering as you can get, iykwim.
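The "generation loss" question can be sanity-checked numerically. A rough sketch, assuming NumPy, that simulates a 24-bit fixed-point render as simple round-and-clip (real converters and renders may add dither, which is ignored here):

```python
import numpy as np

FULL = 2 ** 23  # 24-bit signed full scale: sample values map to [-2^23, 2^23 - 1]

def render_24bit(x):
    """Simulate a render to 24-bit fixed point: scale up, round, clip, scale back."""
    return np.clip(np.round(x * FULL), -FULL, FULL - 1) / FULL

rng = np.random.default_rng(42)
x = rng.uniform(-0.5, 0.5, 48000)  # one second of arbitrary non-clipping audio

y = x.copy()
for _ in range(5):  # five consecutive renders, no processing in between
    y = render_24bit(y)

worst_error = np.max(np.abs(y - x))
# The first render rounds each sample to the nearest 24-bit step (~1.2e-7,
# around -138 dBFS); re-rendering already-quantized samples changes nothing,
# so the error does not grow with extra passes. Add gain or processing
# between passes and it can, which is why 32-bit float renders are the
# worry-free option.
```

In other words, four or five plain 24-bit renders are effectively inaudible, and floating point removes even that tiny rounding step.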
Yeah, same here. After recording at decent levels, if I'm mixing I generally don't do anything. In the tape days many people would gain stage with the trims so that there was a rough unity fader level when the tracks were roughly in balance, but there's no reason for that now. As far as the plugins liking a certain level, my personal feeling is it's pointless to do it at the file level. If something's so low that I had concerns about how it was hitting a plugin, I might do something, maybe at the file level but usually in an insert, to put it in the ballpark of where it's most helpful. Otherwise I just don't bother. It's putting the cart before the horse in the grand scheme of the secretarial work one has to do to shore up the mix, IMO.
And if I won't be mixing I won't do anything like that. I give the mixer the original files, and if they like to get into that then they will. Since it's more a mixing function than a recording one, it's not on my radar to get into gain staging recorded files. And if I'm mixing what someone else has recorded, I'd rather they left such gain staging to me. YMMV.
__________________
The reason rain dances work is because they don't stop dancing until it rains.
You should not manually render any intermediate audio data to files.
If Reaper does intermediate renders (e.g. subprojects), it uses floating point format, hence gain does not matter at all.
-Michael
Generally, when mixing I use my ears. That's those sticky outy things on the side of your head you use to keep the mask on.
This is my first gain staging thread so I'll tell you what I do. Gain staging is as obvious as it appears. Check the efx to make sure nothing's going over zero.
If you hear distortion make sure you don't like the distortion before chasing it down. Which is another way of saying there are no rules and use your ears to guide your mix. Distortion is often your friend when mixing. Like... run a feed off your snare into a track with a guitar overdrive plug on it. Then slide up the overdrive to taste. Sure it's an old trick but if you're here wondering about gain staging...
I might be more interested to hear how you guys deal with heavy gain changes, like a very quiet vocal that gets very loud when they open the pipes.
But even then, I might split the track into two, then apply different compression to each. Or sometimes I put two compressors on the track.
Not sure why I joined in on this thread as I have no worthy input. Morning coffee I guess.
I might be more interested to hear how you guys deal with heavy gain changes, like a very quiet vocal that gets very loud when they open the pipes.
Usually, I'll just split the item and turn those parts down or up depending on which there are more of to be mostly even with the rest - that can be done visually by just looking at the waveform and is a pretty quick process (no need for meters to do this). I'm a fan of item-level changes early on as it removes a lot of work or processing that would need to happen further down the signal path.
Then after that, whatever compression I want to smooth the overall performance out + some character, then at the end of that some type of vocal rider if needed just to keep the overall level consistent throughout the mix. If I nail that, it removes the need for any automation.
Last edited by karbomusic; 08-11-2021 at 11:33 AM.
Gain staging in Reaper is no more than cosmetic to make level meters look nicer.
Of course the final gain when auditioning or when rendering to a non-floatingpoint file format does matter.
-Michael
It is gain back-staging. And please, if it is only "cosmetics to make meters look nicer", let's make a feature request for an option to hide the meters.
In Reaper, track meters are peak meters, showing the "ceiling limits in digital". Floating point internal processing in modern DAWs, which gives us more dynamic range, does not mean we should use it to mix in the red.
Most likely they won't mix or master in order to print to CDs. They will use 24-bit. For 16-bit there's dithering, which is more or less a placebo, especially with modern dense or crazy-loud masterings, though dither is free.
But the OP is about Normalising by Loudness (an objective value) vs. a manual RMS level ("gain staging"), which is subjective.
Just call it adjusting levels, because that's all it is.
Technically Gain is what is coming in at the Input.
Volume is what goes out from the Output.
In pre-mixing, the people who understand deal with what is on the input: the recording. It is usually raw and quiet (low RMS, high peaks).
The people who understand what all this "stage" thing is about treat the gain (incoming peaks and RMS) in order to get a meaningful and (more often than not) higher perceived volume level at the output.
'Adjusting levels' might be the greater set that covers all levels: from the recording, pre-mix (plus editing), mixing, pre-master, and master. It always concerns levels.
Hence it is not specific.
"Gain staging", rather gain back-staging in digital is about ADC conversion (usually what is the raw recording) and pre-mixing (+ editing).
In Reaper Track meters are Peak meters, to show "ceiling limits in digital".
This is nonsense when stated like that, as there is no "ceiling limits in digital" within Reaper.
The Reaper meters show "the ceiling limits in digital, in case you would render the audio stream to a fixed-point file" at this point of the signal chain.
-Michael
This is nonsense when stated like that, as there is no "ceiling limits in digital" within Reaper.
The Reaper meters show "the ceiling limits in digital, in case you would render the audio stream to a fixed-point file" at this point of the signal chain.
-Michael
Yes, correct. I meant the conversion from digital to analogue.
But the meters are not "scaled". Pretty much, they are absolute with regard to the so-called "digital ceiling".
they are absolute with regards to the so called "digital ceiling".
Nope. They are scaled to the peak level of the floating point number, +/- 1, also called 0 dB, which is just some arbitrary value on the scale of an FP number, ranging from minus to plus many billions, with a resolution that always fits our needs.
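That point, that 0 dBFS is just a reference value in floating point rather than a hard ceiling, is easy to demonstrate. A small illustrative sketch in plain Python (the dbfs helper is my own naming, not a Reaper API):

```python
import math

def dbfs(sample):
    """Level of a single sample value relative to full scale (+/-1.0 == "0 dB")."""
    return 20 * math.log10(abs(sample))

# In floating point, 1.0 is just a reference point, not a hard ceiling:
inside = dbfs(0.5)   # about -6 dB below "full scale"
over = dbfs(8.0)     # about +18 dB "over" full scale: perfectly representable
                     # in FP; it only clips when rendered to a fixed-point
                     # format or sent out to a DAC
```

So a meter reading over 0 dB inside Reaper is a warning about what would happen at a fixed-point boundary, not a sign that the internal signal is already damaged.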
Where? On the final mix, the master, my guitar's volume knob, the gain of the amp, the mic preamp, or the plugins?
Confusion.
Did you 'gain digi-stage'?
Ah, I forgot. I went straight to mixing... and thus never finished it.
Irrelevant: your ongoing mission has been promoting "gain staging", which is a confusing term we'd like to avoid. Don't start moving the goalposts now by talking about a knob called gain on an amp sim or the volume knob on a guitar.
And to be clear, I could record 20 tracks right now, go "straight to mixing" while ignoring every one of your gain stage suggestions, and the mix will be as good as it can be when I'm done. This is precisely why your convoluted and overly complex explanations are simply not helping beginners, if not outright discouraging the ones trying to digest them.
??? ADC happens before the signal reaches Reaper and hence Reaper can't do anything about it.
-Michael
Yes, but it is still part of the process and it is in the pre-mixing (before the actual mixing takes place).
Also, you can do some old-school gain staging (analogue, or whatever pre-ADC chains you have) and record as intended.
I usually record with some analogue pre-amps or gentle overdrive and some compressors, sometimes EQ if needed (usually a low-cut). That needs some minuscule gain staging in order to keep the peaks of the signal just a notch or two below the red of the ADC (thus avoiding such clipping from happening).
But let's stick to the Normalise by Loudness vs Manual RMS Gain.
Irrelevant: your ongoing mission has been promoting "gain staging", which is a confusing term we'd like to avoid. Don't start moving the goalposts now by talking about a knob called gain on an amp sim or the volume knob on a guitar.
And to be clear, I could record 20 tracks right now, go "straight to mixing" while ignoring every one of your gain stage suggestions, and the mix will be as good as it can be when I'm done. This is precisely why your convoluted and overly complex explanations are simply not helping beginners, if not outright discouraging the ones trying to digest them.
So now tell me: how many years of mixing experience have you gained, compared to a beginner who bought his or her mic/guitar/keyboard and audio interface just two weeks ago?
So now tell me: how many years of mixing experience have you gained, compared to a beginner who bought his or her mic/guitar/keyboard and audio interface just two weeks ago?
You are strengthening my point: that simpler, less complex explanations are better for beginners. Secondly, the last thing you want to do is pollute beginner minds with known-confusing terms that get conflated with analog terms, which they then have to unlearn later; or, even worse, make up your own terms as you go.
That said, I can tell them not to clip when recording, on the master, or when rendering, and they can do an awful lot of recording and increase their knowledge as they go. That means they can be somewhat successful and inspired from the moment they start, which is very important.
And that is what makes all this gain staging crap a myth: as beginners, they aren't going to be doing 80-track mixes with 100 plugins, delivering them to CD Baby for duplication, and needing to meet some professional target for a producer. As you said, they just got the interface two weeks ago.