Mixing and Mastering


During the first day of my Introduction to Music Technology class, I always ask my students what it means for an album to be remastered and how such an album differs from one that’s been remixed.

I am usually greeted by an ocean of silence.

This unawareness about mixing and mastering—even among music technology students—is the proximate cause for this blog post. In it, I hope to demystify these two components of professional audio and to provide a concise guide for doing each.

It is my belief that mixing and mastering are the heart and soul of music production, so anyone who wishes to inhabit the realm of professional audio simply must have a working understanding of these processes.

Let’s begin with the definitions.

Definitions of Mixing and Mastering

Mixing is the art and craft of balancing and assembling individual audio tracks into a cohesive whole—usually an audio file; mastering is the art and craft of refining and finalizing that audio file for commercial consumption.

Once basic tracking, editing, and signal processing have been completed, audio projects can move on to the mixing stage of production. Once mixing is completed, audio projects can move on to the mastering stage of production.

In case you are unfamiliar with tracking, editing, and signal processing—the precursors of mixing and mastering—the following overview will fill in those blanks.

Tracking, Editing, and Signal Processing

Before any mixing or mastering takes place, tracking, editing, and signal processing are usually completed. Here’s how the process typically proceeds:

  • First, musical sounds are converted via microphone (or other transducer) into electricity and the resulting audio signals are captured via multitrack recorder or DAW (digital audio workstation) in an operation called tracking. 
  • Then, the individual audio tracks are manipulated and altered to eliminate mistakes and improve clarity in an operation called editing.
  • Last, the individual audio tracks are run through electronic components or software plugins that adjust, refine, and enhance their character in a process called signal processing.

These three activities—tracking, editing, and signal processing—routinely overlap, and it is the rare project in which the three processes happen serially. Yet, to maintain clarity pedagogically, it is helpful to think of them as distinct elements in the step-by-step process of audio production.

Once tracking, editing, and signal processing are complete, a project can move on to the mixing stage of production.


Mixing

One of the goals of mixing is to construct an aesthetically appealing audio recording designed to connect with your fellow human beings. Such recordings include songs, podcasts, audio books, sound designs, and film scores.

Constructing a professional-sounding piece of audio is contingent upon wisely adjusting the following parameters:

  1. Volume
  2. Pan
  3. Dynamics
  4. Equalization
  5. Effects

Volume refers to the amplitude balance between tracks, panning refers to spatial positioning between the speakers, dynamics refers to variation in amplitude over time, equalization refers to volume at specific frequencies, and effects refers to signal processors like delay, chorus, and modulation.

Perhaps the first parameter to focus upon is equalization, for this adjustable factor is the key to creating intelligibility. An intelligible mix is one in which the sounds are plain, clear, and understandable. Even if your tones and timbres are raucous and rowdy, the raucousness and rowdiness should be clear in the mix.

Masking, whereby one sound covers up and renders inaudible other sounds of similar frequency, is the prime suspect in a muddled mix.

Proper use of an equalizer can help add clarity and minimize the deleterious effects of masking. 

The best way to avoid masking is to use subtractive equalization, which entails turning certain frequencies down instead of turning certain frequencies up. This maneuver maximizes the aural space in a mix.

Subtractive equalization
When using an equalizer, employ more subtractive equalization than additive equalization. This method leaves more space for sounds to exist. Most amateur music technologists pollute their mixes with a tsunami of EQ boosts, which is a recipe for a muddled mix beset by the phenomenon of masking.
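As a concrete illustration of a subtractive move, here is a peaking-EQ cut sketched with the biquad formulas from the widely circulated Audio EQ Cookbook. The function names are my own, and the specific frequency and cut amount are only examples:

```python
import cmath
import math

def peaking_eq(fs, f0, gain_db, q):
    """Biquad coefficients for a peaking EQ band (Audio EQ Cookbook).

    A negative gain_db yields a subtractive cut centered at f0 Hz.
    Returns (b, a) with a0 normalized to 1.
    """
    A = 10 ** (gain_db / 40)          # amplitude factor
    w0 = 2 * math.pi * f0 / fs        # center frequency in radians/sample
    alpha = math.sin(w0) / (2 * q)    # bandwidth term

    a0 = 1 + alpha / A
    b = [(1 + alpha * A) / a0, (-2 * math.cos(w0)) / a0, (1 - alpha * A) / a0]
    a = [1.0, (-2 * math.cos(w0)) / a0, (1 - alpha / A) / a0]
    return b, a

def gain_at(b, a, fs, f):
    """Magnitude response (in dB) of the biquad at frequency f."""
    z = cmath.exp(1j * 2 * math.pi * f / fs)
    h = (b[0] + b[1] / z + b[2] / z**2) / (a[0] + a[1] / z + a[2] / z**2)
    return 20 * math.log10(abs(h))

# A gentle 4 dB cut at 300 Hz to carve mud out of a crowded midrange
b, a = peaking_eq(fs=48000, f0=300, gain_db=-4.0, q=1.0)
print(round(gain_at(b, a, 48000, 300), 2))   # ≈ -4.0 dB at the center frequency
```

The point of the sketch is that a cut removes energy only around the chosen frequency, leaving the rest of the spectrum, and therefore the rest of the mix, untouched.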

Your next concern is to construct a hierarchy of sounds to establish which play leading roles and which play supporting roles.                

Sounds that are front and center in a mix (vocals, lead guitar, piano, and strings) typically carry a song’s melody—the most important aspect of most music.

Sounds that play supporting roles include drums, bass, rhythm guitar, piano, synth pads, etc. This part of the mix should form an aural architecture that can prop up, and highlight, the foreground sounds.

In most cases, sounds that play leading roles should happen sequentially and without overlap. So, when the vocal phrase ends, the lead guitar lick should enter at the same level of loudness established by the vocals. If the two sounds are ever happening simultaneously, then one or the other needs to be turned down.

Mix Hierarchy
To create an aesthetically appealing mix designed to connect with your fellow human beings, it is necessary to arrange sounds into a hierarchy. Typically, lead vocals take the top position. All other sounds are meant to prop up, and support, the lead vocal. Sometimes the top role is occupied by a lead instrument like an electric guitar.

Creating a good sounding mix entails the following:

  • Maintaining the integrity and coherence of a performance
  • Establishing a sound hierarchy
  • Placing sounds in a space like a room or a hall

According to Stanley Alten, author of Audio in Media, one way to create perspective is to “position the sounds to create relationships of space and distance” (448).

The relationship of space and distance is accomplished with reverb, which can be used to simulate a performance space, and with panning, which can be used to simulate a performance configuration.

For example, you may use reverb to place your ensemble in a hall, and you may use panning to configure your ensemble—left-to-right—on that hall’s stage.

Vocals, bass, kick, and snare are ordinarily placed in the center of a mix. Guitars, keyboards, brass instruments, and backing vocals are ordinarily placed slightly to the left or to the right in a mix. Sounds are rarely panned hard left or hard right. With panning, subtlety is the key.
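Pan knobs in most DAWs implement some variant of a constant-power pan law, which keeps a sound's perceived loudness steady as it moves across the stereo field. A minimal sketch of the common -3 dB-center version (the function name is my own):

```python
import math

def constant_power_pan(pan):
    """Constant-power pan law: pan in [-1.0 (hard left), +1.0 (hard right)].

    Returns (left_gain, right_gain). Total power L^2 + R^2 stays at 1,
    so a sound keeps the same perceived loudness wherever it sits.
    """
    angle = (pan + 1) * math.pi / 4   # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

# A rhythm guitar nudged slightly left, per the subtlety advice above
left, right = constant_power_pan(-0.3)
print(round(left, 3), round(right, 3))   # left is a bit louder than right
```

At center (pan = 0) each channel sits about 3 dB down, which is why a mono sound panned to the middle does not jump out louder than the same sound panned to a side.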

Reverb and delay effects can be panned to increase the width of a sound’s perspective. This is done by routing the effected sound to another track and panning the new track opposite the source track. The sound will then have no single focal point in the mix; it will seem to come equally from the left, right, and center.

Strategies for simulating the scope and perspective of a performance space work best if clarity is first achieved in mono and without reverb (Prochak 115). 

The Problems of Mixing

Mixing is a rigorous endeavor that takes years to master. It’s likely that you will devise many dull and uninteresting mixes before creating ones of commercial quality.  

Here are a few things to keep in mind: first and foremost, the music must be good. A good mix can’t save a poor song that was poorly recorded. A bad mix, though, can ruin a well-recorded song.

There are far more ways to construct a bad mix than a good one.  

One of the reasons is a phenomenon called accommodation. Accommodation occurs when, upon continuous listening, the brain begins to fill in missing sounds with self-generated ones and to ignore existing sounds (Alten 448).

Also, hearing fatigues during long mixing sessions. Consequently, audio will be perceived differently at the end of a session than it was at the beginning, leaving the listener with an imperfect awareness of the sounds in the mix.

There are three ways to prevent hearing fatigue and accommodation: (1) take frequent breaks, (2) listen in a quiet environment, and (3) avoid overexposure to midrange sounds.

Also, it is key to listen to the track on as many loudspeaker sources as possible.

This is done by creating a working mix (a rough draft of the song) and listening to that mix in the car, on a set of headphones, on a phone, and through a proper system.


Mastering

Mastering is the art and craft of finalizing audio projects for commercial consumption. It involves (1) cleaning up beginnings and endings, (2) sculpting timbre with EQ, and (3) increasing loudness with compression/limiting.

  1. Cleaning up beginnings and endings entails eliminating extraneous noises and standardizing the length of silence that unfurls between songs. Attention to these details ensures that song transitions proceed smoothly and without distraction.
  2. Sculpting timbre with EQ entails altering a project’s sonic character to more perfectly align with the sound-quality norms of professional audio. This usually requires an engineer with an especially well-tuned set of “ears.” Such an engineer is often intimately familiar with the sound frequency spectrum and can use an equalizer to dial in timbres like “raw,” “polished,” “vintage,” and so on.
  3. Applying compression/limiting entails using dynamics processors to boost the song’s amplitude to commercial levels. An album of songs should have a consistent volume, so all songs should employ similar levels of compression/limiting. The problem with dynamics processing is that it flattens music’s naturally occurring differences in volume. The normal experience of listening to live music, which is replete with amplitude variation, is replaced with a sort of sonic hyper-reality: tracks mastered with heavy limiting hit the listener as a tsunami of unrelenting sound. For pop music, this is not a problem, but for classical music, jazz, and other styles, heavy limiting annihilates the music’s main aesthetic—the drama of going from soft to loud and back again.

Mastering studios often possess high-quality gear, exquisitely tuned listening rooms, and expert sound engineers.


Understanding Loudness

To master digital audio, it is important to have a working understanding of loudness, the measurement of a sound’s intensity.

Loudness is measured using decibels (dB), and a decibel expresses the ratio between two physical quantities of the same kind, such as two sound pressures, two powers, or two voltages.

The name decibel combines two parts: (1) deci, the metric prefix meaning one-tenth, and (2) bel, a unit originally defined as the amount of signal lost over a mile of telephone wire. The bel is too large a unit for pro audio applications, so it is divided into tenths; hence, the name decibel.

Four Kinds of Decibels

There are many distinct types of decibels, but there are four that are especially useful.

  1. Decibels that measure acoustic sound pressure (dB–SPL)
  2. Decibels that measure voltage (dBu)
  3. Decibels that measure digital audio (dBFS)
  4. Decibels that measure the perceived loudness of pro audio (LUFS)

Following is a description of these four decibel systems:

Acoustic Sound Pressure (dB–SPL)

The acoustic intensity of sound is specified by how many times greater that sound is than a reference level of 10⁻¹² watts per square meter, which is the threshold of human hearing.

10⁻¹² is equal to a decimal point followed by eleven zeros and a 1. It looks like this when written out: 0.000000000001 W/m². And, it represents 0 dB SPL.

Remember, decibels are the ratio between two physical properties. When calculating dB-SPL, the two physical properties are as follows:

  1. The reference number of 10⁻¹² watts per square meter 
  2. Some other sound’s watts-per-square-meter intensity

For example:

If you divide 10⁻¹¹ by the reference number 10⁻¹², then the quotient is 10. Decibels are ten times the base-10 logarithm of that quotient, and 10 × log₁₀(10) = 10 dB–SPL.

So, 10⁻¹¹ W/m², also known as 0.00000000001 W/m², is equal to 10 dB–SPL.

Here are some more decibels:

  • 30 dB–SPL = 0.000000001 W/m² (10⁻⁹ W/m²)
  • 40 dB–SPL = 0.00000001 W/m² (10⁻⁸ W/m²)
  • 50 dB–SPL = 0.0000001 W/m² (10⁻⁷ W/m²)
  • 60 dB–SPL = 0.000001 W/m² (10⁻⁶ W/m²)
  • 70 dB–SPL = 0.00001 W/m² (10⁻⁵ W/m²)
  • 80 dB–SPL = 0.0001 W/m² (10⁻⁴ W/m²)
  • 90 dB–SPL = 0.001 W/m² (10⁻³ W/m²)
  • 100 dB–SPL = 0.01 W/m² (10⁻² W/m²)
  • 110 dB–SPL = 0.1 W/m² (10⁻¹ W/m²)
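The whole table above can be reproduced with a couple of lines of Python. The helper name is my own, but the formula is the standard intensity-ratio definition of dB–SPL:

```python
import math

REF_INTENSITY = 1e-12  # W/m^2, the threshold of human hearing (0 dB-SPL)

def db_spl(intensity):
    """Convert an acoustic intensity in W/m^2 to dB-SPL.

    Decibels are ten times the base-10 logarithm of the ratio between
    the measured intensity and the 10^-12 W/m^2 reference.
    """
    return 10 * math.log10(intensity / REF_INTENSITY)

print(db_spl(1e-11))   # ≈ 10 dB-SPL
print(db_spl(0.1))     # ≈ 110 dB-SPL
```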



These kinds of decibels, dB–SPL, are not the ones used for pro audio applications because there is no way to get an accurate reading on sound pressure levels in a pro audio context. There is too much variation in equipment and listening environments: speakers, cables, amplifiers, and room sizes all vary widely from studio to studio. However, dB–SPL is often encountered in news articles or laws about loud sounds. If you hear a journalist or a lawyer talking about decibels, these are the kind they are referring to. 

A different type of decibel is used to measure loudness in a pro audio context: decibel volts (dBu).

Decibel Volts (dBu)

dBu is a relative unit that measures the input and output signals of sound equipment against a standard voltage. In most cases, the standard is 0.775 volts, which corresponds to a reading of 0 dBu. Signals quieter than this value read as negative numbers, and signals louder than this value read as positive numbers.
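A sketch of the dBu calculation (the helper name is my own). Because voltage is an amplitude rather than a power, the ratio is scaled by 20 instead of 10:

```python
import math

REF_VOLTAGE = 0.775  # volts; the 0 dBu reference

def dbu(voltage):
    """Convert an RMS voltage to dBu.

    Voltage is an amplitude (not a power), so the logarithm
    is multiplied by 20 rather than 10.
    """
    return 20 * math.log10(voltage / REF_VOLTAGE)

print(round(dbu(1.228), 2))   # ≈ +4 dBu, the common professional line level
```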

The dBu unit evolved in the era of analog gear and was displayed on devices called VU meters; old-school mixing consoles were adorned with many of them.

Following is a chart of the logarithmic nature of decibel volts. Notice that the difference between 0 dB and -3 dB is much greater than the difference between -3 dB and -6 dB.

Not all decibels are created equal.

Decibel Volts

In this diagram, the logarithmic nature of the decibel is displayed. Notice the difference between -3 dB and 0 dB is massive, but the difference between -3 dB and -6 dB is much smaller comparatively—even though both differences represent a change of 3 dB.

Decibels Relative to Full Scale (dBFS)

Using analog gear, engineers can push past 0 dBu without clipping. But digital audio has an absolute upper limit.  Consequently, a different unit for measuring digital audio was necessary.

That unit, developed in the late 1970s, is dBFS, which stands for “decibels relative to full scale.” The threshold of clipping is labeled 0 dBFS.

There are two ways that dBFS is metered: (1) peak metering and (2) RMS metering.

Peak meters display the highest amplitude level produced by an audio signal and are good for showing when a sound has eclipsed the 0 dBFS threshold. RMS meters calculate the average, overall loudness of a signal and are good for displaying the perceived intensity of the sound.

RMS meters display the signal as being softer than peak meters do. This is because transients are averaged with the sustained sounds. For example, the combination of a snare drum and a sustained organ note will read lower than the snare drum alone but higher than the organ alone when measured by RMS.
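The difference between the two metering styles is easy to demonstrate. In this sketch (helper names are mine), a full-scale sine wave peaks at about 0 dBFS but reads roughly -3 dBFS on an RMS meter:

```python
import math

def peak_dbfs(samples):
    """Peak meter: the single largest absolute sample value, in dBFS."""
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_dbfs(samples):
    """RMS meter: average signal power over the window, in dBFS."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(mean_square)

# One second of a full-scale 440 Hz sine at 48 kHz
sine = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
print(round(peak_dbfs(sine), 1))  # ≈ 0 dBFS
print(round(rms_dbfs(sine), 1))   # ≈ -3.0 dBFS (RMS reads lower than peak)
```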

Many loudness meters combine the two systems in one display. For example, channel strips in Studio One are equipped with a peak/RMS (PkRMS) meter:

0 dBFS
The right edge of the blue line—where it begins to turn red—is the peak value, and the white dashes represent the RMS value.

Loudness Unit Full Scale (LUFS)

The newest loudness unit is LUFS, which stands for “loudness unit full scale.” This unit was developed by the International Telecommunication Union to provide a consistent method of measuring the loudness of television broadcasts. A law passed in the U.S. against loud commercials was the impetus for the development of this unit.

Metering with LUFS is like metering with RMS—that is, both provide a measure of average sound intensity. However, LUFS provides a reading closer to the human perception of loudness, whereas RMS provides a direct reading of signal power.

Roughly, 1 loudness unit (LU) is equal to one decibel (dB).  

A good LUFS reading for a mastered song is about -11 to -13 LUFS. YouTube normalizes its audio to around -14 LUFS. The broadcast law for television stipulates a -23 LUFS reading.

Level Meter
Well-mastered professional audio has a LUFS reading between -11 and -13. LUFS function similarly to RMS, but they approximate the human perception of loudness instead of measuring output power.

In my experience, LUFS and RMS track close to one another. So, if you’ve got a reading of -15 RMS, then your LUFS will likely be fairly close to the same number.
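Since 1 LU corresponds to roughly 1 dB, the gain a streaming platform applies during loudness normalization is simple arithmetic. A small sketch (function names are mine; the -14 LUFS figure is the YouTube target mentioned above):

```python
def normalization_gain_db(measured_lufs, target_lufs):
    """Gain (in dB) needed to move a master from its measured loudness
    to a platform's playback target. 1 LU corresponds to about 1 dB."""
    return target_lufs - measured_lufs

def db_to_linear(gain_db):
    """Convert a dB gain to the linear factor applied to each sample."""
    return 10 ** (gain_db / 20)

# A master at -11 LUFS uploaded to a platform that normalizes to -14 LUFS
gain = normalization_gain_db(-11.0, -14.0)
print(gain)                          # -3.0 dB of turn-down on playback
print(round(db_to_linear(gain), 3))  # ≈ 0.708 linear gain
```

This is why mastering hotter than the platform target buys no extra playback loudness; the platform simply turns the track back down.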

Volume Maximizing

To successfully master music, it pays to be familiar with your DAW’s limiter, which is the most important effect used during mastering.

A limiter is a compressor set with a ratio of 10:1 or higher. It produces the effect of increasing a sound’s loudness without allowing it to clip.

Limiters perform this dark magic by reducing the sound’s volume when it crosses a specific decibel value—a parameter known as the ceiling.  

A good starting point is to set the ceiling somewhere between 0 dB and -1 dB. Studio One for Engineers and Producers by William Edstrom Jr. suggests starting at -0.3 dB (124).

Once the ceiling is set, most limiters work by allowing you to boost the input level—thus, increasing the mix’s sound up to the level of the ceiling. The net result is dramatically increased amplitude.
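The ceiling-plus-input-gain workflow described above can be sketched in a few lines. This toy limiter is a deliberate simplification (names are mine): real limiters use look-ahead and smooth gain reduction, whereas this one boosts the mix and then hard-clamps anything that would cross the ceiling:

```python
def limit(samples, input_gain_db, ceiling_db=-0.3):
    """Toy zero-attack brickwall limiter.

    Boosts the whole mix by input_gain_db, then clamps any sample that
    would cross the ceiling. Real limiters apply look-ahead and gradual
    gain reduction instead of a hard clamp; this only sketches the idea.
    """
    gain = 10 ** (input_gain_db / 20)
    ceiling = 10 ** (ceiling_db / 20)
    out = []
    for s in samples:
        boosted = s * gain
        out.append(max(-ceiling, min(ceiling, boosted)))
    return out

quiet_mix = [0.25, -0.4, 0.1, -0.05]
loud_mix = limit(quiet_mix, input_gain_db=9.0)
print(max(abs(s) for s in loud_mix))  # never exceeds the -0.3 dB ceiling
```

Note the trade-off the sketch makes visible: the more input gain you add, the more samples slam into the ceiling, and the more of the original dynamics are erased.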

Limiter
Limiters increase the output amplitude of a mix. These devices operate by setting a ceiling value (-0.3 dB is standard) before increasing the input level. A good measure of volume maximizing is an RMS reading around -13 dBFS.

The Problem of Loudness

Through the years, loudness has increased markedly on professionally produced recordings. The result of this amplitude arms race has been the elimination of dynamics from modern recordings.

It is the opinion of most engineers that this has been a bad development for professional audio. A cursory listen to old recordings will reveal that, although they are not as loud, they are much more vibrant in their dynamic qualities.

To ensure vibrant audio in your own work, it is wise to avoid squashing your master track with extreme limiting. However, since some limiting is necessary on master tracks, it is wise to learn how to make sense of loudness. This means having a working understanding of the common scales used to measure loudness: PkRMS and LUFS.  


Conclusion

Music recordists and music technologists must have a grasp of mixing and mastering to function in the professional audio world.

The best way to improve at these skills is to take on a large-scale project and see it through to completion. Perhaps you can offer to record a friend’s EP or album for free. Because you’re working pro bono, your friend won’t be too upset if you produce a less-than-stellar product, which will almost certainly be the case.

It takes years of experience to produce commercial-quality audio. But, if you practice mixing and mastering as much as you can, then you will slowly—but steadily—develop your aptitude in this field.

Works Cited

Alten, Stanley R. Audio in Media. 9th ed., Wadsworth, Cengage Learning, 2011.

Edstrom, William, Jr. Studio One for Engineers and Producers. Hal Leonard Books, 2013.

Prochak, Michael. Cubase SX Official Guide. Sanctuary Publishing, 2002.


