Loudness normalization, beating the system
One of the questions I hear quite often is: how can I make my music appear louder on streaming services, because this or that track sounds louder than mine? You will most likely hear or read two answers: 'you can't make it louder, because it will all be normalized to the same level' and/or 'just make it louder, that will make it sound louder'. Actually, neither is true. In this article I will explain how the loudness normalization system works and how you can 'beat the system'.
First of all, I think it's good to know how the system works and where it came from. Some years ago in the broadcast world there was no standard except for a maximum peak level of something like -9/-10 dBFS. Because of that, commercials were louder than the program material: they were compressed and limited, so the maximum peak was where it 'should be' but the RMS level was way higher. In the USA the CALM Act was introduced, which stated that commercials needed to be equally loud compared to the program material. That's where loudness normalization started, in the form of ITU-R BS.1770 and, for the EU, the EBU R128 standard. Basically that standard is -23 LUFS* (Loudness Units relative to Full Scale) integrated. Integrated means that the full program material, measured from start to end, should average -23 LUFS. Peaks can be higher than -23 and softer parts can be way lower than -23, but the average level needs to be -23 LUFS integrated. The measurement is done using the K-weighting curve rather than a flat/simple RMS measurement. The K-weighting curve is pretty much based on how human hearing and the brain work: you are less sensitive to low frequencies and more sensitive to mid and high frequencies. The curve attenuates part of the low frequencies and boosts the mid and high frequencies (see image). Next to the integrated measurement there are also short-term (measured over a sliding window of 3 seconds) and momentary (measured over 400 ms) measurements. The whole system is far more complex than this brief and simple explanation, and covering every detail would be too much information, but I think it's good to know the basics and where it came from to understand what is happening and why.
* Note that because of the K-weighting curve, an LUFS value is not the same as a flat RMS measurement in dBFS. In practice most people treat an LU (Loudness Unit) as a dB, but the two are not identical and that should be kept in mind.
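To make the measurement side a bit more concrete, here is a minimal sketch of how you could read a file and get its integrated loudness in Python. This assumes the third-party soundfile and pyloudnorm packages are installed (pyloudnorm implements a BS.1770-style K-weighted meter); the file name is just a placeholder.

```python
import soundfile as sf       # reads the audio file into a float array
import pyloudnorm as pyln    # BS.1770 / EBU R128 style loudness meter

# Placeholder file name; replace with your own master.
data, rate = sf.read("my_master.wav")

meter = pyln.Meter(rate)                      # K-weighted BS.1770 meter
integrated = meter.integrated_loudness(data)  # integrated loudness, start to end

print(f"Integrated loudness: {integrated:.1f} LUFS")
```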
Loudness normalization as used by streaming platforms is based on this system, with the biggest difference being that it's not normalized to -23 LUFS but (for most services) to -14 LUFS. That's also where people make a wrong judgement: they think they should master to -14 LUFS because otherwise they will get a 'penalty' and their track will be limited. Sure, if your track is louder than -14 LUFS integrated it will be turned down in level, but it's nothing more than a 'volume knob'. If your music sounds great at, let's say, -10 LUFS, it will still sound great when Spotify turns the volume knob down by 4 dB to make it -14 LUFS. Loudness normalization will NOT limit your music; it will just turn the level down to reach its target loudness.
So... will all music sound equally loud after loudness normalization? No, not at all, as you might have found out yourself by now. I will explain how this works and how you can 'beat the system'.
Always keep in mind that the measurement uses the integrated value, the average from start to end. So if a track is -12 LUFS integrated, it will be turned down by 2 dB to reach the target loudness of -14 LUFS. If your track is -10 LUFS, it will be turned down by 4 dB to make it -14 LUFS. But now it gets interesting, because two versions can still differ in how loud they sound. It could even be the same master, yet the version with an integrated loudness of -12 LUFS could sound louder in the chorus than the version at -10 LUFS. How can that happen? It's basically the arrangement and the mix that make the difference, not just the master.
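As a quick sketch of what that 'volume knob' amounts to (plain Python, with a hypothetical helper name, not any platform's actual code), the normalization gain is simply the target loudness minus the track's integrated loudness, applied as a single static gain change:

```python
def normalization_gain_db(integrated_lufs: float, target_lufs: float = -14.0) -> float:
    """Static gain (in dB) a platform would apply to reach its loudness target."""
    return target_lufs - integrated_lufs

print(normalization_gain_db(-12.0))  # -2.0 -> turned down by 2 dB
print(normalization_gain_db(-10.0))  # -4.0 -> turned down by 4 dB

# Applying the gain is just scaling the samples; nothing gets limited or compressed.
def apply_gain(samples, gain_db):
    return [s * 10 ** (gain_db / 20.0) for s in samples]
```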
If you look at the image above, you can see that in the top version (marked with green lines) the intro and breaks are lower in level compared to the choruses, while in the bottom version (marked with orange lines) the same intro and breaks are louder. As you can see, the choruses of both versions are equally loud.
The integrated loudness of the version with the lower-level intro and breaks is -11.5 LUFS, while the one with the louder intro and breaks is -10.3 LUFS. So even though the loudest section, the chorus, is equally loud in both versions, the difference in integrated loudness is 1.2 dB. What happens when this music is uploaded to streaming services? This is where it gets interesting, because both versions will be normalized to -14 LUFS. Let's look at the numbers in more depth: the version with the lower-level intros and breaks will be turned down by 2.5 dB, while the version with the louder intro and breaks will be turned down by 3.7 dB. So the difference between the two versions is 1.2 dB.
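Running the same arithmetic on the two example versions (the numbers come from the article; the script itself is just an illustrative sketch) shows where the 1.2 dB advantage comes from:

```python
TARGET = -14.0  # typical streaming target in LUFS

versions = {
    "softer intro/breaks": -11.5,  # integrated LUFS before normalization
    "louder intro/breaks": -10.3,
}

for name, integrated in versions.items():
    gain = TARGET - integrated  # static gain the platform applies
    print(f"{name}: {gain:+.1f} dB")
# softer intro/breaks: -2.5 dB
# louder intro/breaks: -3.7 dB
# The choruses were equally loud before normalization, so afterwards the
# 'softer intro/breaks' version's chorus ends up 1.2 dB louder.
```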
Now let's have a look at the combined waveforms after loudness normalization: on the left side (green lines) the 'softer intro/breaks' version, on the right side (orange lines) the 'louder intro/breaks' version. You can clearly see the difference in level; the chorus on the left side of the image is 1.2 dB louder than on the right side. That 1.2 dB difference is actually quite a lot when you listen to it. It gets even more interesting, because you perceive the level difference between intro/break and chorus as being bigger, which gives the track even more impact. Win-win.
In the figures below, you can see the values before and after loudness normalization. Take a close look at the differences in short term max and momentary max.
| | More dynamic intro/breaks | Louder intro/breaks |
|---|---|---|
| Before normalization | integrated: -11.5 LUFS | integrated: -10.3 LUFS |
| After normalization | integrated: -14 LUFS | integrated: -14 LUFS |
To sum things up: if you want to beat the system, you can do so by making your arrangement/mix 'more dynamic', with bigger level differences between breaks/softer passages and choruses/louder passages. Think of lower-level intros, outros and breaks, things like that. Another big advantage is that the brain perceives a track as louder when the contrast between loud and softer sections is bigger, so it will simply have more impact too. Do you really 'beat the system' this way? Make up your own mind... But is it really about the numbers? No way! It's always about the music and finding the sweet spot, the yin/yang of sound, balance and levels, by listening, not by using numbers or meters.