How to get golden ears in one easy step (seriously)
Level-match playback anytime you are making any kind of comparative decision. The world of making good audio decisions will become an open book. This is going to be a long post, but it's important. Bear with me.
"Level-matching" does NOT mean making it so that everything hits the peak meters at the same level. Digital metering has massacred the easiest and most basic element of audio engineering, and if you're using digital systems, you have to learn to ignore your meters, to a great degree (even as it is has now become critical to watch them to avoid overs).
Here's the thing-- louder sounds better. Always. Human hearing is extremely nonlinear, thanks to what's known as the "Fletcher-Munson effect" (the equal-loudness contours). In short, the louder a sound is, the more sensitive we are to highs and lows. And as we all know from the "jazz" curve on stereo EQs, exaggerated highs and lows mean a bigger, more dramatic, more detailed sound.
Speaker salesmen and advertising execs have known this trick for decades-- if you play back the exact same sound a couple dB louder, the audience will hear it as a more "hifi" version and will remember it better. This is why TV commercials are compressed to hell and so much louder than the programs. This is why record execs insist on compressed-to-hell masters that have no dynamics (this "loudness race" is actually self-defeating, but that's a topic for another thread).
What this means for you, the recordist, is that it is essentially impossible to make critical A/B judgments unless you are hearing the material at the same apparent AVERAGE PLAYBACK VOLUME. It is very important to understand that AVERAGE PLAYBACK VOLUME is NOT the same as the peak level on your digital meters, and it absolutely does not mean just leaving the master volume knob set to one setting.
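To make the idea concrete, level-matching for an A/B comparison boils down to equalizing average (RMS) level, not peak level. Here's a minimal numpy sketch -- the names `rms_db` and `match_gain_db` are made up for illustration, and serious loudness matching today would use LUFS per ITU-R BS.1770 rather than raw RMS, but the principle is the same:

```python
import numpy as np

def rms_db(x):
    """Average (RMS) level of a signal, in dB relative to full scale."""
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))))

def match_gain_db(reference, candidate):
    """Gain in dB to apply to `candidate` so its AVERAGE level matches `reference`."""
    return rms_db(reference) - rms_db(candidate)

# Toy stand-ins: a sustained tone, and the same tone with a fast decay.
t = np.linspace(0, 1, 44100, endpoint=False)
sustained = 0.5 * np.sin(2 * np.pi * 220 * t)
decaying = 0.5 * np.sin(2 * np.pi * 220 * t) * np.exp(-8 * t)

# Both peak at -6 dBFS, but the decaying one averages much quieter,
# so it needs a healthy gain boost before a fair comparison.
gain = match_gain_db(sustained, decaying)
matched = decaying * 10 ** (gain / 20)
```

Point being: the gain you need for a fair comparison comes from the average levels, and it can be a big number even when the peak levels are identical.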
Forgive me for getting a little bit technical here, but this is really, really important.
In digital recording, the golden rule is never to go over 0dBFS for even a nanosecond, because that produces digital clipping, which sounds nasty. Modern 24-bit digital recording delivers very clean, very linear sound at all reasonable recording levels* right up to the point where it overloads and then it sounds awful. So the critical metering point for digital recording is the instantaneous "peak" level. But these instantaneous "peaks" have almost nothing to do with how "loud" a thing sounds in terms of its average volume.
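Here's a toy numpy example (my own, not from any manual) of just how little the peak reading tells you about loudness: a single near-full-scale click pins the peak meter just under 0 dBFS while the average level of the same second of audio sits tens of dB down.

```python
import numpy as np

def peak_dbfs(x):
    """Instantaneous peak level, where |sample| == 1.0 is full scale (0 dBFS)."""
    return 20 * np.log10(np.max(np.abs(x)))

def rms_db(x):
    """Average (RMS) level in dB relative to full scale."""
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))))

# One near-full-scale click in a second of near-silence.
clip = 1e-4 * np.ones(44100)
clip[100] = 0.99

# The peak meter reads just under 0 dBFS; the average level is
# tens of dB lower -- two completely different stories about "loud."
```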
The old analog consoles did not use the "peak" level meters that we use in digital, and they did not work the same way. Analog recordings had to thread the needle between hiss on the low end, and a more gradual, more forgiving kind of saturation/distortion on the high end (which is actually very similar to how we hear). Peaks and short "overs" were not a big deal, and it was important to record strong signal to avoid dropping below the hissy noise floor. In fact, recording "hot" to tape could be used to achieve a very smooth, musical compression.
For these reasons, analog equipment tended to have adjustable "VU" meters that tracked an "average" signal level instead of instantaneous peaks. They were intended to track the average sound level as it would be perceived by human hearing. They could be calibrated to the actual signal voltage so that you could configure a system that was designed to have a certain amount of "headroom" above 0dB on the VU meter, based on the type of material and your own aesthetic preferences when it came to hiss vs "soft clipping."
In REAPER's meters, the solid, slower-moving "RMS" bar is similar to the old analog VU meters, but the critical, fast-moving "peak" indicator is something altogether different. If you record, for instance, a distorted Les Paul on track 1 so that it peaks at -6dB, and a clean Strat on track 2 so that it also peaks at -6dB, and you leave both faders at 0, then the spiky, dynamic Strat is going to play back sounding a lot quieter than the fatter, flatter Les Paul.
The clean Strat has big, spiky instantaneous peaks that might be 20dB higher than the average sustained volume of the notes and chords, while the full, saturated Les Paul might only swing 6dB between the peak and average level. If these two instruments were playing onstage, the guitarists would adjust their amplifiers so that the average steady-state volume was about the same-- the clean Strat would sound punchier and also decay faster, the dirty Les Paul would sound fuller and have more sustain, but both would sound about the same AVERAGE VOLUME.
Not so when we set them both according to PEAK level. Now, we have to turn down the Strat to accommodate the big swings on the instantaneous peaks, while we can crank the fat Les Paul right up to the verge of constant clipping. This does not reflect the natural balance of sound that we would want in a real soundstage; it is artificially altered to fit the limits of digital recording.
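This gap between peak and average level has a name -- "crest factor" -- and it's easy to demonstrate with synthetic stand-ins for the two guitars. The waveforms below are my own invention (a flat-topped clipped sine for the saturated Les Paul, a plucky decaying tone for the clean Strat), not measurements of real instruments:

```python
import numpy as np

def crest_factor_db(x):
    """Peak-to-average ratio in dB: how far instantaneous peaks sit above RMS."""
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(np.square(x))))

t = np.linspace(0, 2, 88200, endpoint=False)

# Saturated "Les Paul": a sine driven into hard clipping -- fat and flat.
les_paul = np.clip(2.0 * np.sin(2 * np.pi * 110 * t), -0.5, 0.5)

# Clean "Strat": the same pitch, re-plucked every quarter second with a fast decay.
strat = 0.5 * np.sin(2 * np.pi * 110 * t) * np.exp(-6 * (t % 0.25))

# Both peak at about -6 dBFS (0.5), so the peak meters agree --
# but the spiky Strat's crest factor is several dB larger, which is
# exactly why it plays back quieter at the same fader setting.
```

Set the faders by peak and the high-crest-factor signal always loses; set them by average level and you get the stage balance the guitarists would have dialed in themselves.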
To be continued...
*Note that, contrary to a lot of official instruction manuals, it is not always good practice to record digital right up to 0dBFS. Without getting too far off-topic, the reality is that the analog front-end is susceptible to saturation and distortion at high signal levels even if the digital recording medium can record clean signal right up to full scale. The practice of recording super-hot is one of the things that gives digital a reputation for sounding "harsh" and "brittle." Start a new thread if you want more info.