Introduction to Music Technology

Fundamental Concepts, Key Definitions, and Specific Forms

Introduction

The purpose of this blog post is to define the concept of music technology and to outline its various forms.

Following are three definitions: number 1 is Wikipedia’s, number 2 is my parsing and rewording of that definition, and number 3 is my abstract:

  1. Music technology is the use of any device, mechanism, machine or tool by a musician or composer to make or perform music; to compose, notate, playback or record songs or pieces; or to analyze or edit music (“Music Technology” para. 1).
  2. Music technology is a classification of things with the following referents:
    1. The use of devices, machines, and computer programs to assist musicians in recording, composing, storing, and performing music. (activity)
    2. An object used to do these activities. (device)
    3. Any thought experiment in which previously unrealized concepts are envisioned as a means to support, reinforce, or bolster a musical idea. (concept)
  3. Music technology is any activity, device, or concept that aids the production of music.

Some specific forms of music technology include the following:

  • Electronic instruments like the Hammond organ and the Theremin
  • Transducers
  • The computer-language protocol called MIDI
  • The method of arranging sampled sounds along a timeline known as sequencing
  • The process of microphone capture and arrangement of audio signals known as multitrack recording
  • Digital audio workstations

The rest of this blog post examines the above devices and activities.

Music-Technology Devices

Since the late nineteenth century, electricity has been used to generate musical sound by way of the electronic instrument. There are two broad categories of such instruments: (1) electromechanical and (2) purely electronic.

Instruments that employ moving parts and machinery to generate their electrical signal are considered to be electromechanical.

Two examples of this sort of technology are (1) the Hammond organ, which uses an electrically motorized tonewheel to generate a pitch, and (2) the electric guitar, which uses magnetically activated transducers to convert the vibrational energy of a guitar’s strings into electricity.

The Hammond organ and the electric guitar are both examples of electromechanical instruments.

The Hammond organ uses a technique called additive synthesis, which creates different timbres by adding various sine waves together. This is accomplished by tonewheels and transducers: the transducer converts the mechanical energy of the spinning tonewheel into electrical energy, and the resulting signal is then routed through circuitry and amplified.
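
The additive-synthesis idea can be sketched in a few lines of Python. The partial frequencies and amplitudes below are illustrative, not the Hammond's actual drawbar settings:

```python
import math

def additive_sample(t, partials):
    """Sum of sine waves, given as (frequency_hz, amplitude) pairs, at time t seconds."""
    return sum(a * math.sin(2 * math.pi * f * t) for f, a in partials)

# Illustrative timbre: a fundamental at 220 Hz plus two quieter harmonics.
partials = [(220.0, 1.0), (440.0, 0.5), (660.0, 0.25)]

sample_rate = 44100
# One second of the combined tone, sampled digitally.
samples = [additive_sample(n / sample_rate, partials) for n in range(sample_rate)]
```

Changing the amplitude of each partial is exactly what the Hammond's drawbars do mechanically: each drawbar mixes in more or less of one tonewheel's sine wave.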

A tonewheel is a relatively primitive apparatus for generating electronic musical notes. It consists of an electric motor and a gearbox that drives a series of rotating disks. Each disk contains a set of bumps that cause it to generate a specific frequency. The frequency depends on the speed of rotation and on the number of bumps.
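
That frequency relationship is simple arithmetic: bumps passing the pickup per second equals rotations per second times the number of bumps. A quick sketch (the 137.5 RPM / 192-bump figures below are illustrative numbers chosen to land on concert A, not measured Hammond specs):

```python
def tonewheel_frequency(rpm, bumps):
    """Pitch in Hz: how many bumps pass the pickup each second."""
    return rpm * bumps / 60.0

# Illustrative: a wheel with 192 bumps spinning at 137.5 RPM
# produces 440 Hz (concert A).
print(tonewheel_frequency(137.5, 192))  # 440.0
```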

The Telharmonium was an early electronic musical instrument that used tonewheels and the additive-synthesis principle. It was developed by Thaddeus Cahill in 1897. The electrical signal from the Telharmonium, generated with tonewheels and transducers, was transmitted over wires and heard on the receiving end through a loudspeaker or a telephone handset (Manning 3-4).

The Telharmonium, also known as the Dynamophone, was the first electronic instrument (Manning 3).

Purely electronic devices, which have no moving parts, can be subcategorized by whether they generate their signal with vacuum tubes or with solid-state semiconductors.

An example of vacuum tube technology is the Theremin, which is an electronic musical instrument in which tube-powered oscillators are controlled by the player’s hands as he or she waves them at various distances from two metal antennas. The player does not touch either antenna.

One antenna controls the frequency (pitch), and the other controls the amplitude (volume). The electronic signal generated by the Theremin is then amplified and routed to a loudspeaker to create audio.

In 1924, Leon Theremin (1896-1993) invented the first purely electronic musical instrument (Manning 4).

An example of solid-state technology is the modern variant of the Theremin. Produced by the company Moog, these devices use circuit boards with solid-state semiconductors instead of vacuum tubes to carry the flow of electrons through the oscillators and amplifiers.

The synthesizer and the setup known as a DAW (digital audio workstation) also fall into the solid-state subcategory.

The taxonomy of electronic instruments looks like this:

  • Electromechanical
    • No subcategory
      • Hammond Organ
      • Electric Guitar
  • Purely Electronic
    • Vacuum Tube
      • Theremin
      • Tube radios
    • Solid State
      • Modern Theremin
      • Synthesizer
      • Computer (DAW)

MIDI is a computer protocol that allows the interconnection of all sorts of electronic instruments and computer interfaces.

Sound (audio) is not captured via the MIDI language. Instead, a MIDI controller transmits specific information about pitch, duration, whether a sustain pedal was used, how hard a note was played, etc. The point of transmitting this information is to capture the nuance of a player’s musical action.

The most common types of MIDI data are note, duration, velocity, and patch. Note refers to pitch, duration refers to when a note turns on and off, velocity refers to how loudly or softly it is played, and patch refers to which sound from a MIDI synth (or other sound-generating device) is to be used.
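
Concretely, each MIDI message is only a few bytes. A note-on message, per the MIDI 1.0 specification, is a status byte (0x90 plus the channel number), a note number (middle C is 60), and a velocity, the latter two each in the range 0-127. A minimal sketch:

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI note-on message (channels numbered 0-15)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

# Middle C (note 60) on channel 0, played fairly hard (velocity 100):
msg = note_on(0, 60, 100)
print(msg.hex())  # 903c64
```

Note that nowhere in those three bytes is any audio: the receiving synthesizer decides what middle C at velocity 100 actually sounds like.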

MIDI becomes sound when played by a synthesizer or by a virtual instrument within a digital audio workstation (DAW). The sounds are often generated by samples. A sample is a short snippet of audio that can be activated by keyboard, drum pad, or data.

A digital audio workstation, or DAW, is a computer-based music-production system. It usually combines hardware, software, and electronic instruments. Although DAWs of the past were housed in electronic instruments, like digital keyboards, nowadays the term DAW almost always describes software-based platforms like Pro Tools, Cubase, and Studio One.

Most modern-music productions are generated using digital audio workstations and some combination of audio, MIDI, and samples.

Music-Technology Activities

A music sequencer (or just sequencer) is a device designed to play musical sounds via a pre-planned instruction set or data set. Such pre-planned instructions or data sets are called sequences, hence the name.

Musical notation can be thought of as a sequence: the dots (note heads) on a page are played in order by a performer. With electronic sequencing, sounds are organized along a timeline and played back by a machine or a computer.

Sequencing is an extremely common form of music technology. Making beats = sequencing.

The first sequencers were primitive devices that played rigid patterns of notes or beats using a grid of 16 buttons (steps), each representing 1/16th of a musical measure. Groups of measures, each with their own combinatorial possibilities, could be compiled to form longer compositions.
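
That 16-step grid can be sketched as a row of on/off flags per instrument. The pattern below is a made-up example, not any particular drum-machine preset:

```python
# One measure = 16 steps; each row marks which 1/16th notes an
# instrument fires on. 'x' = hit, '.' = rest (illustrative pattern).
pattern = {
    "kick":  "x...x...x...x...",
    "snare": "....x.......x...",
    "hat":   "x.x.x.x.x.x.x.x.",
}

def events(pattern):
    """Yield (step, instrument) pairs in playback order."""
    for step in range(16):
        for name, row in pattern.items():
            if row[step] == "x":
                yield step, name

hits = list(events(pattern))
```

Looping this measure, or chaining it with differently-filled grids, is all a step sequencer does.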

An example of a step sequencer is the Roland TR-808.


The TR-808 was a drum machine manufactured by Roland between 1980 and 1983. Its deep bass boom and unusual drum sounds were integral to 1980s electronica and hip-hop (“Roland TR-808” para. 1-3).

Nowadays, sequencing is usually performed with a digital audio workstation (DAW), like Reason, Pro Tools, or Cubase.

Multitrack recording is a method of sound recording that allows multiple sounds to be combined to make one cohesive sound. Multitrack recording, or multitracking, entails recording different audio channels on separate tracks.

The first step in the music-recording process is to use microphones (and other transducers) to convert musical sound vibrations into electricity. Next, the audio signal is captured using an analog or digital multitrack recorder. Last, the captured music is finalized as a single audio file called a master track.

Mixing is the process of balancing individual audio tracks stored on a multitrack recorder; mastering is any further adjustment made to the master track.
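
At its simplest, mixing is a weighted sum: each track's samples are scaled by a gain (its fader level) and added together. A minimal sketch with made-up sample values:

```python
def mix(tracks, gains):
    """Mix equal-length mono tracks into one track by a weighted sum."""
    return [
        sum(g * t[i] for t, g in zip(tracks, gains))
        for i in range(len(tracks[0]))
    ]

# Two illustrative tracks: the second is turned down to half volume.
vocals = [0.2, 0.4, -0.1]
guitar = [0.6, -0.2, 0.3]
master = mix([vocals, guitar], [1.0, 0.5])
```

Real mixing adds panning, equalization, and effects on top of this, but the balancing act itself is just these per-track gains.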

The master is usually a computer file containing audio in stereo. This means that master tracks are two-track audio files. They are meant for commercial consumption. Spotify, SoundCloud, Apple Music, Amazon Music, and every other song aggregator on Earth host stereo audio files that have been produced through mixing and mastering.
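
A sketch of what “two-track” means in practice, using Python's standard wave module. The filename and the silent audio below are placeholders standing in for a real mix:

```python
import struct
import wave

SAMPLE_RATE = 44100

# One second of silence on each of the two channels, as a stand-in
# for real left/right master audio.
left = [0] * SAMPLE_RATE
right = [0] * SAMPLE_RATE

with wave.open("master.wav", "wb") as wav:
    wav.setnchannels(2)             # stereo: exactly two tracks
    wav.setsampwidth(2)             # 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    # Each stereo frame interleaves one left and one right sample.
    frames = b"".join(struct.pack("<hh", l, r) for l, r in zip(left, right))
    wav.writeframes(frames)
```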

There is much overlap between multitrack recording and sequencing. In fact, the two activities routinely overlap when making music. Most modern productions are combinations of microphone-captured sound and sampled sound.

Conclusion

The point of this blog post was to acclimate you to the realm of music technology. Its most important details are (1) the definition of music technology, (2) the difference between electromechanical and purely electronic instruments, and (3) common music-technology activities like multitracking and sequencing.

You should be able to define music technology; list and define the common forms of music technology (including sequencing, multitrack recording, and MIDI); and differentiate between electromechanical and purely electronic musical instruments.

Now explain this stuff to your grandma.

Works Cited

Manning, Peter. Electronic and Computer Music. New York: Oxford University Press, 2013.

“Music Technology.” Wikipedia, The Free Encyclopedia. Wikimedia Foundation, 19 June 2016.

“Roland TR-808.” Wikipedia, The Free Encyclopedia. Wikimedia Foundation, 28 May 2017.
