Introduction
This is a guide to the essential ideas of audio mixing, targeted specifically at computer-based producers. The internet has an incredible wealth of information on this subject, but it is scattered across a disorganized body of articles and tutorials. Our goal with this course is to consolidate the most important of that information in one place.
This guide will not give you recipe-style techniques, such as how to track vocals or which frequency to boost to make your drums really knock. Nor does it assume that you are making club-oriented dance music: the advice here is certainly applicable to mixing electro house or hip-hop, but it is equally applicable to mixing ambient. That said, dance music as a whole does pose special mixing challenges, such as the tuning of percussion tracks and the achievement of loudness, and these challenges are given adequate time, since they are relevant to many students.
This course assumes that students have a very basic prior knowledge of the concepts of mixing. You should know your way around your DAW. You should know what a mixer is, what an effect is, and how to use them. You should have at least heard of the terms compression, equalization, and reverb. You should have done some mixdowns yourself, so you know how the whole process works. But that's really all you need to know at this point.
1.1 Frequency Domain
If you continue to subdivide physical objects into smaller pieces, you will eventually arrive at atoms, which cannot be further subdivided. There is a similarly indivisible unit of sound: the frequency. All sounds can ultimately be reduced to a collection of frequencies. The difference is that, where an object may be composed of billions of atoms, a sound typically consists of no more than a thousand or so frequencies. This makes frequencies a very practical way of analyzing sounds in the everyday context of music.
What is a frequency, anyway? A frequency is simply a sine-wave-shaped disturbance in the air: an oscillation, in other words. Frequencies are typically described by the rate at which they oscillate, measured in cycles per second (Hz). The human ear can hear frequencies in the approximate range of 20Hz to 20,000Hz, and this range comfortably encompasses all the frequencies we commonly deal with in our day-to-day lives.
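To make the idea concrete, here is a minimal sketch of generating a single frequency in code, assuming a Python environment with NumPy; the 440Hz pitch, two-second duration, and 44,100Hz sample rate are arbitrary example values, and writing the result to an audio file (for instance with the soundfile package) is left as an exercise.

```python
import numpy as np

sample_rate = 44100          # samples per second (the CD-quality rate)
duration = 2.0               # seconds
frequency = 440.0            # Hz -- the A above middle C, chosen arbitrarily

t = np.arange(int(sample_rate * duration)) / sample_rate   # time axis in seconds
sine = np.sin(2 * np.pi * frequency * t)                    # one pure frequency

# 'sine' now holds two seconds of a single 440Hz oscillation; played back,
# it is the sound of one isolated frequency and nothing else.
```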
Let's take a look at the various frequency ranges:
Subsonic (20Hz - 40Hz) - This frequency range sits at the extreme low end of human hearing and is almost never found in music, as these frequencies require an extremely high volume to be heard, particularly when other sounds are playing at the same time. Even then, they are more felt than heard. Most speakers can't reproduce this range.
Sub Bass (40Hz - 100Hz) - This relatively narrow frequency range marks the beginning of musical sound and is what most people think of when they hear the term "bass." It accounts for the deep booms of hip-hop and the hefty power of a kick drum. These frequencies are a full-body experience, and they carry the weight of the music. Music lacking in sub-bass will feel lean and wimpy; on the other hand, music with an excess of it will feel bloated and bulky.
Bass (100Hz - 300Hz) - Still carrying a hint of the feeling of the sub-bass range, this frequency range evokes warmth and fullness. It provides the body, stability, and comfort of the music, and it is also the source of the impact of drums. An absence of this frequency range makes music feel cold and uneasy. An excess of these frequencies makes music feel muddy and indistinct.
Lower Midrange (300Hz - 1kHz) - This frequency range is rather neutral in character. It serves to anchor and stabilize the other frequency ranges. Without this range, music sounds pinched and unbalanced.
Upper Midrange (1kHz - 8kHz) - These frequencies attract attention. The human ear is quite sensitive in this range, so it is wise to pay special attention to whatever you place in it. These frequencies provide presence, clarity, and punch. An absence of upper midrange frequencies makes music feel dull and lifeless. An excess of upper midrange frequencies makes music feel piercing, overbearing, and tiring.
Treble (8kHz - 20kHz) - Another extreme frequency range in terms of human hearing. These frequencies provide detail, sparkle, and sizzle to your sounds. An absence of treble makes music feel muffled and boring. An excess of treble makes music harsh and uncomfortable to listen to. These frequencies, by their presence or absence, make music exciting or relaxing: music that is meant to be exciting, such as dance music, contains a large amount of treble, while music that is meant to be relaxing contains very little.
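As a rough way of seeing how a given sound's energy is spread across the ranges above, the sketch below estimates the share of spectral energy in each band with an FFT. It assumes you already have a mono signal as a NumPy array along with its sample rate; the band edges simply mirror the ones used in this section.

```python
import numpy as np

BANDS = {
    "subsonic":       (20, 40),
    "sub bass":       (40, 100),
    "bass":           (100, 300),
    "lower midrange": (300, 1000),
    "upper midrange": (1000, 8000),
    "treble":         (8000, 20000),
}

def band_energy_report(signal, sample_rate):
    """Print what fraction of the signal's spectral energy falls in each band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2                 # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)   # bin frequencies in Hz
    total = spectrum.sum()
    for name, (low, high) in BANDS.items():
        mask = (freqs >= low) & (freqs < high)
        share = spectrum[mask].sum() / total if total > 0 else 0.0
        print(f"{name:>15}: {share:6.1%}")
```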
So, now we understand the effects of individual frequencies on the human psyche. But sounds rarely consist of single frequencies; they are composed of multitudes of frequencies, and the way in which those frequencies are organized also has an effect on the human psyche.
When multiple frequencies occur simultaneously in the same frequency range, their slightly different rates of oscillation drift in and out of phase, causing periodic fluctuations in volume known as beating. Beating is more noticeable in lower frequencies than in higher frequencies. In the sub-bass range, any beating at all becomes quite dominating and often disturbing, while in the treble range, frequencies are typically packed quite densely with no ill effect.
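The sketch below, assuming NumPy and arbitrarily chosen frequencies, builds two pairs of sines a few hertz apart. Both pairs swell and fade three times per second, and listening to them back to back illustrates why the effect is far more intrusive in the sub-bass than in the treble.

```python
import numpy as np

sample_rate = 44100
t = np.arange(sample_rate * 3) / sample_rate        # three seconds

# Two sub-bass frequencies 3Hz apart: the 3Hz beat is slow, heavy, and obvious.
low_pair = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 53 * t)

# Two treble frequencies 3Hz apart: the same 3Hz beat, but far less intrusive.
high_pair = np.sin(2 * np.pi * 8000 * t) + np.sin(2 * np.pi * 8003 * t)
```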
Beating is also the underlying principle of the formation of musical chords. Combinations of tones which produce subtle beating are considered "consonant," while combinations of tones which produce pronounced beating are considered "dissonant." When considering chords in terms of beating, it is important to note that beating occurs not only between the fundamental frequencies of the tones involved, but also between their harmonics. Thus, for instance, while two individual frequencies a major ninth apart are too far apart to beat against each other directly, some of their harmonics can land close enough together to beat audibly.
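A small sketch of that last point, assuming equal temperament and an arbitrary 110Hz starting pitch: it lists the harmonics of two tones a major ninth (fourteen semitones) apart and flags any pair close enough to beat.

```python
import numpy as np

f1 = 110.0                          # arbitrary example fundamental
f2 = f1 * 2 ** (14 / 12)            # a major ninth above, in equal temperament

# The fundamental and harmonics of each tone (integer multiples of the fundamental).
partials1 = f1 * np.arange(1, 13)
partials2 = f2 * np.arange(1, 13)

for p1 in partials1:
    for p2 in partials2:
        diff = abs(p1 - p2)
        if 0 < diff < 8:            # close enough to produce audible beating
            print(f"{p1:7.1f}Hz vs {p2:7.1f}Hz -> beating at roughly {diff:.1f}Hz")
```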
Beating also contributes to the character of many non-tonal sounds. For instance, the sound of a cymbal is partially due to the beating of the countless frequencies which it contains. Similarly, the "thump" sound of the body of an acoustic kick drum is partially due to the beating of low frequencies.
1.2 Patterns of Frequency Distribution
Having considered in general the psychological effects of individual frequencies and combinations of frequencies, let us now examine the specific frequency distribution patterns of common sounds. Obviously, it would be impossible to describe the frequency distribution pattern of every possible sound: every conceivable frequency distribution describes one sound or another. So in this section, we'll examine the frequency distribution patterns of the sounds most commonly found in music. We will examine only four categories of sounds, but they cover a surprisingly large amount of ground; with them, we will be able to account for the majority of sounds found in most music.
1.2.1 Tones
The simplest frequency organization structure is the tone. Tones are very common in nature, and our brains are specially built to perceive them. A tone is a series of frequencies arranged in a particular, mathematically simple pattern. The lowest frequency in the tone is called the fundamental, and the frequencies above it are called the harmonics. Each harmonic is a whole-number multiple of the fundamental: the fundamental counts as the first harmonic, the second harmonic is twice the fundamental frequency, the third harmonic is three times the fundamental frequency, and so forth. The series could theoretically extend to infinity, but because the harmonics of a tone typically fall steadily in volume with increasing frequency, they eventually fade out.
The character of a particular tone, often called its "timbre," is partially determined by the relative volumes of its harmonics; these differences are a big part of what distinguishes a clarinet from a viola, for instance. The reedy, hollow tone of a clarinet is partially due to its heavier emphasis on the odd-numbered harmonics. The bright tone of a trumpet is due to the high volume of its upper, treble-range harmonics, while the mellower tone of a french horn has much more subdued upper harmonics.
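Here is a minimal additive-synthesis sketch of that idea, assuming NumPy; the fundamental, the number of harmonics, and the two weighting schemes are illustrative choices, not measurements of real instruments.

```python
import numpy as np

sample_rate = 44100
t = np.arange(sample_rate * 2) / sample_rate      # two seconds
fundamental = 220.0                               # Hz, chosen arbitrarily

def build_tone(harmonic_amplitudes):
    """Sum the harmonic series with the given relative volumes (n=1 is the fundamental)."""
    tone = np.zeros_like(t)
    for n, amp in enumerate(harmonic_amplitudes, start=1):
        tone += amp * np.sin(2 * np.pi * fundamental * n * t)
    return tone / np.max(np.abs(tone))            # normalize to full scale

# All twelve harmonics falling off smoothly in volume: a full, bright character.
full_tone = build_tone([1 / n for n in range(1, 13)])

# Odd-numbered harmonics only: a hollower, clarinet-like character.
hollow_tone = build_tone([1 / n if n % 2 == 1 else 0.0 for n in range(1, 13)])
```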
Tones are the bread and butter of most music. All musical instruments, except for percussion instruments, primarily produce tones. Synthesizers also mostly produce tones.
1.2.2 The Human Voice
The human voice produces tones, and thus could justifiably be lumped into the previous section. But there is a lot more to it than that, and since the human voice is such an important class of sounds, central to so much music, it is worth examining more closely.
The human voice can make a large variety of sounds, but the most important sounds for music are those used in speech and singing: specifically, vowels and consonants.
A vowel is a tone. The specific vowel that is intoned is defined by the relative volumes of the different harmonics: the difference between an 'ehh' and an 'ahh' is a matter of harmonic balance. In speech, vowel tones rarely stay at one pitch; they slide up and down. This is why speech does not sound "tonal" to us, though it technically is. Singing is conceptually the same as speaking, with the difference being that the vowels are held out at a constant pitch.
A consonant is a short, non-tonal noise, such as 't,' 's,' 'd,' or 'k.' Consonants are found mostly in the upper midrange. The fact that they carry most of the information content of human speech may well account for the human ear's bias toward the upper midrange.
So, we can see that the human voice, as it is used in speech and singing, is composed of two parts: tonal vowels and non-tonal consonants. That said, the human voice is very versatile, and many of its possible modes of expression are not covered by these two categories of sound. Whispering, for instance, replaces the tones of vowels with breathy, non-tonal noise, with consonants produced in the normal manner. Furthermore, many of the noises made, for instance, by beatboxers defy analysis in terms of vowels and consonants.
1.2.3 Drums
So far, we have examined tones and the human voice. The human voice is quite tonal in nature, so in a certain sense we are still looking at tones. Now we will look at drum sounds, which, though not technically tones, are still somewhat tonal in nature.
A "drum" consists of a membrane of some sort stretched across a resonating body. It produces sound when the membrane is struck. A drum produces a complex sound, the bulk of which resides in the bass and lower midrange.
This lower component of the sound, often referred to as the "body," does not technically fit the frequency arrangement of a tone, but it usually bears a greater or lesser resemblance to that arrangement, and thus the sound of a drum is somewhat tonal.
In addition to the body component of the sound, which is created by the vibrations of the membrane, part of the sound of a drum is created by the impact between the membrane and the striking object. This part of the sound, which we will refer to as the "beater sound," has energy across the frequency spectrum, but is usually centered in the upper midrange and the treble.
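To make the body/beater split concrete, here is a hedged synthesis sketch assuming NumPy: a low, pitch-dropping sine stands in for the body, and a very short noise burst stands in for the beater sound. The envelope rates and frequencies are arbitrary starting points rather than measurements of any real drum.

```python
import numpy as np

sample_rate = 44100
t = np.arange(int(sample_rate * 0.5)) / sample_rate          # half a second

# Body: a sine whose pitch drops quickly from 120Hz toward 50Hz, with a decaying envelope.
pitch = 50 + 70 * np.exp(-30 * t)                             # instantaneous frequency in Hz
phase = 2 * np.pi * np.cumsum(pitch) / sample_rate
body = np.sin(phase) * np.exp(-8 * t)

# Beater: a very fast-decaying burst of noise supplying the click of the impact
# (in a real mix this component might be high-passed toward the upper midrange).
beater = np.random.uniform(-1, 1, len(t)) * np.exp(-120 * t) * 0.5

drum = body + beater
drum /= np.max(np.abs(drum))                                  # normalize to full scale
```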
1.2.4 Cymbals
Now, having examined tones in general, the human voice, and drums, we come to the first (and only) completely non-tonal sounds that we will examine: cymbals. Cymbals are thin metal plates that are struck, like drums, with beaters. The vibrations of the struck plates create extremely complex patterns of frequencies, hence the non-tonal nature of cymbals.
Cymbals have energy throughout the entire frequency spectrum, but the bulk of said energy is typically in the treble range, or in the midrange in the case of large cymbals such as gongs. There is also reason to believe that cymbals have significant sonic energy above the range of human hearing, since their energy shows no signs of fading out near 20kHz. In any case, because cymbals have so much treble energy, they are a very exciting type of sound.
1.4 Loudness Perception
Since loudness is such an important topic in mixing, it seems appropriate at this point to discuss the perception of loudness in general.
Loudness is measured in decibels (dB). Decibels are a relative, logarithmic measurement.
Decibels are a logarithmic measurement in that the underlying sound power grows exponentially with decibel value. Specifically, every 10dB increase or decrease corresponds to a factor-of-ten increase or decrease in power (in terms of amplitude, a factor of ten corresponds to 20dB). In other words, increasing a sound's level by 10dB multiplies its power by ten, increasing it by 20dB multiplies its power by a hundred, and decreasing it by 30dB reduces its power to one thousandth. Decibels are a relative measurement in that a decibel figure does not tell you precisely how loud a sound is; it only tells you how loud it is relative to some reference level, usually designated as 0dB. So, for instance, a level of 3dB is three decibels louder than the reference level, and a level of -3dB is three decibels quieter than the reference level.
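The arithmetic is easy to sanity-check in code. This is a generic sketch of the power-ratio formulas, not tied to any particular meter or standard.

```python
import math

def db_from_power_ratio(ratio):
    """How many dB louder (or quieter) one power level is than a reference level."""
    return 10 * math.log10(ratio)

def power_ratio_from_db(db):
    """The power multiplier corresponding to a change of 'db' decibels."""
    return 10 ** (db / 10)

print(power_ratio_from_db(10))     # 10.0   -> +10dB is ten times the power
print(power_ratio_from_db(20))     # 100.0  -> +20dB is a hundred times the power
print(power_ratio_from_db(-30))    # 0.001  -> -30dB is one thousandth of the power
print(db_from_power_ratio(2))      # ~3.01  -> doubling the power adds about 3dB
```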
When discussing real-world sounds traveling through the air, loudness is most often measured in dBSPL, or "decibels of sound pressure level." This is a unit of measure based on the decibel, with the reference level of 0dBSPL being roughly the quietest sound audible to a young adult with undamaged hearing. The threshold of pain is generally placed around 120dBSPL. This range of 0dBSPL to 120dBSPL gives us the practical dynamic range of human hearing; around 80dBSPL is a good listening level for music.
Loudness can be measured in two ways: it can be measured in terms of peak loudness, or in terms of average loudness. Peak loudness measures the amplitude of the highest instantaneous peaks in the sound. Average loudness measures the overall average amplitude level, taking into account all of the loud peaks and the quiet in-between spaces. Peak loudness is good to know because peaks that are too loud will often cause audio equipment to overload. Average loudness is good to know because it reflects, more accurately than peak loudness, the human ear’s actual perception of loudness.
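The sketch below shows one simple way to compute both measurements for a signal stored as a NumPy array of samples between -1.0 and 1.0; real meters (and standards such as LUFS) are considerably more elaborate, so treat this only as an illustration of the distinction.

```python
import numpy as np

def peak_db(signal):
    """Level of the highest instantaneous peak, in dB relative to full scale."""
    return 20 * np.log10(np.max(np.abs(signal)))

def average_db(signal):
    """RMS (root-mean-square) level -- a basic measure of average loudness."""
    return 20 * np.log10(np.sqrt(np.mean(signal ** 2)))

# A spiky signal and a dense one can share the same peak level yet have very
# different average levels, which is why both measurements are worth knowing.
```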
The level meters on most audio mixers measure peak loudness. Average loudness, when measured as described above, will still not be a terribly accurate measurement of human loudness perception. Loudness perception is complicated by the fact that the ear has a bias towards certain frequency ranges and away from others. The ear is most insensitive in the subsonic range, and becomes progressively more sensitive into the upper midrange, after which its sensitivity rapidly rolls off. The sensitivity also varies with volume, with the ear being less sensitive to bass and treble at lower volumes.