The purpose of this blog post is to define the concept of music technology and to outline its various forms.
Following are three definitions: number 1 is Wikipedia’s, number 2 is my parsing and rewording of that definition, and number 3 is my abstract:
- Music technology is the use of any device, mechanism, machine or tool by a musician or composer to make or perform music; to compose, notate, playback or record songs or pieces; or to analyze or edit music (“Music Technology” para. 1).
- Music technology is a classification of things with the following referents:
- The use of devices, machines, and computer programs to assist musicians in recording, composing, storing, and performing music. (activity)
- A particular object used to do these activities. (device)
- Any thought experiment in which previously unrealized concepts are envisioned as a means to support, reinforce, or bolster a musical idea. (concept)
- Music technology is any activity, device, or concept that aids the production of music.
Like many areas of human inquiry, the definition of music technology is always shifting because new forms are continually devised.
Some specific forms of music technology include the following:
- Electronic instruments such as the Hammond organ and the Theremin
- The computer-language protocol called MIDI
- The method of arranging sampled sounds along a timeline known as sequencing
- The process of capturing audio signals with microphones and arranging them on separate tracks, known as multitrack recording
In order to understand technologies such as these, it’s important to explore the prime mover of them all: electricity.
The phenomenon of electricity arises from electromagnetism, which is one of the four fundamental forces of the universe.
At the atomic level, electrical phenomena arise from the transfer and exchange of electrons. Atoms tend toward equilibrium between the positive charge of their protons and the negative charge of their electrons.
Protons exist inside the nucleus of the atom, and electrons exist as an orbiting cloud outside the nucleus.
These clouds of electrons are arrayed in layers called shells, and the electrons in the outermost shell are responsible for electrical interactions. In a conducting material, these outer electrons occupy an energy range known as the conduction band.
If an atom has a surplus of electrons, a condition that puts it out of balance with the number of protons in its nucleus, then it has a negative charge. If it has fewer electrons than protons, it has a positive charge; if the numbers are equal, the atom is electrically neutral.
Electrical charges may be generated by friction, by the variation of magnetic forces upon a conductor such as copper, or by chemical interaction.
Electrical charge can accumulate or flow, producing effects such as static electricity, lightning, electromagnetic induction, and electric current.
Electricity can make magnetism, and magnetism can make electricity.
The International System of Units (SI) measure for a quantity of electric charge is the coulomb.
The standard unit for the flow of charge is the ampere, which measures a current of one coulomb per second.
Another concept related to this business about electricity is the volt, which measures the potential difference (electromotive force) between two points on a conductor.
You can think of it this way: volts measure the potential for electric current, and amperes measure the actual electric current.
Copper and other conductors can transport an electrical charge. This is because the conduction band of the copper atom is only half full of electrons, so it is primed to carry an electrical charge or to form bonds with adjacent atoms.
The final electrical unit to be familiar with is the watt, which measures the amount of work performed by a current of 1 ampere across a potential difference of 1 volt. You can visualize the watt by comparing it to horsepower: one horsepower is roughly 746 watts, so one watt is about 0.00134 horsepower. In the realm of music technology, wattage comes up in amplifier and speaker systems.
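The units just described relate by simple arithmetic: an ampere is a coulomb per second, and a watt is a volt times an ampere. A minimal sketch (the amplifier figures below are invented for illustration):

```python
# How the electrical units relate:
#   1 ampere = 1 coulomb of charge per second
#   1 watt   = 1 volt x 1 ampere
# The values below are illustrative, not specs of any real device.

def power_watts(volts, amperes):
    """Electrical power delivered across a potential difference."""
    return volts * amperes

def watts_to_horsepower(watts):
    """One mechanical horsepower is roughly 746 watts."""
    return watts / 746.0

# A hypothetical amplifier power stage: 40 volts driving 4 amperes.
p = power_watts(40, 4)        # 160 watts
hp = watts_to_horsepower(p)   # about 0.21 horsepower
print(p, round(hp, 2))
```

This is why a 160-watt amplifier is doing only a fraction of a horsepower of work, even though it can be very loud.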
The point of this information about the physics of electricity is that understanding such concepts will help you clarify what is happening with signal flow, cables, and power adapters.
One final aspect of electricity to take into consideration is terminology.
The words electric and electrical are synonyms and encapsulate the concepts just covered. On the other hand, the word electronic has a more specific meaning that pertains to devices that process electricity with amplifiers, vacuum tubes, or transistors so as to maintain a consistent current through various electrical routes (Berube 159).
Electronic devices include such things as radios, computers, synthesizers, cell phones, and many other technologies.
To clarify this matter of terminology, think about it this way: a stove may be electric, but if it’s using transistors and microprocessors to operate temperature regulators and cooking timers, then it’s also electronic.
Since the late nineteenth century, electricity has been used to generate musical sound by way of the electronic instrument. There are two broad categories of such instruments: electromechanical and purely electronic.
Devices that employ moving parts and machinery to generate or shape their electrical signals are considered electromechanical.
Two examples of this sort of technology are (1) the Hammond organ, which uses an electrically motorized tonewheel to generate a pitch, and (2) the electric guitar, which uses magnetically activated transducers to convert the vibrational energy of a guitar’s strings into electricity.
The Hammond organ uses a technique called additive synthesis, which creates different timbres by adding various sine waves together. In the Hammond, this is accomplished with tonewheels and transducers: each transducer converts the mechanical energy of a spinning tonewheel into an electrical signal, which is then routed through circuitry and amplified.
A tonewheel is a relatively simple apparatus for generating electronic musical notes. It consists of an electric motor and a gearbox that drives a series of rotating disks. Each disk carries a set of bumps that cause it to generate a specific frequency, which depends on the speed of rotation and the number of bumps.
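Both ideas can be sketched numerically: a tonewheel's frequency is its rotation rate times its bump count, and additive synthesis sums sine waves of different frequencies and amplitudes. The rotation speed and partial amplitudes below are illustrative, not actual Hammond specifications.

```python
import math

def tonewheel_frequency(rotations_per_second, bumps):
    """Each bump passing the pickup produces one cycle of the waveform."""
    return rotations_per_second * bumps

def additive_sample(t, partials):
    """Additive synthesis: sum sine waves, given as
    (frequency_hz, amplitude) pairs, at time t seconds."""
    return sum(amp * math.sin(2 * math.pi * freq * t)
               for freq, amp in partials)

# A hypothetical wheel spinning 20 times per second with 11 bumps
# sounds 220 Hz (the A below middle C).
f = tonewheel_frequency(20, 11)

# A simple additive timbre: a fundamental plus two quieter harmonics.
partials = [(f, 1.0), (2 * f, 0.5), (3 * f, 0.25)]
sample = additive_sample(0.001, partials)
```

Changing the amplitudes in `partials` is the additive-synthesis equivalent of pulling different drawbars on the organ.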
The Telharmonium, developed by Thaddeus Cahill in 1897, was an early instrument built on the additive synthesis principle. Its electrical signal, generated with tonewheels and transducers, was transmitted over wires to be heard on the receiving end through a loudspeaker or a telephone handset (Manning 3-4).
Purely electronic devices, which have no moving parts, can be subcategorized by whether they generate their electronic properties by the use of vacuum tubes or by the use of solid-state semiconductors.
An example of vacuum tube technology is the Theremin, which is an electronic musical instrument in which tube-powered oscillators are controlled by the player’s hands as he or she waves them at various distances from two metal antennas. The player does not touch either antenna.
One antenna controls the frequency (pitch), and the other controls the amplitude (volume). The electronic signal generated by the Theremin is then amplified and routed to a loudspeaker to create audio.
An example of solid-state technology is the modern variant of the Theremin. Produced by the company Moog, these instruments use circuit boards with solid-state semiconductors instead of vacuum tubes to carry the flow of electrons through the oscillators and amplifiers.
The synthesizer, and the setup known as a DAW (digital audio workstation), are in the solid state subcategory.
The taxonomy of electronic instruments looks like this:
- Electromechanical
  - Hammond organ
  - Electric guitar
- Purely electronic
  - Vacuum tube
    - Theremin
    - Tube radios
  - Solid state
    - Modern Theremin
    - Computer (DAW)
Analog and Digital
Analog audio is a continuous signal; digital audio is a discrete stream of 1s and 0s.
The technical description for analog (which I adapted from Wikipedia) is as follows: An analog signal is any continuous signal for which the time varying feature is a representation of some other time varying quantity. This means that the artificially produced, time-varying signal is analogous to the original, naturally occurring time-varying signal.
For example, recording audio with a microphone is analog because the fluctuations in air pressure as registered by the movement of the mic’s diaphragm are converted to an electric current via a transducer. The current produced by the transducer is perfectly analogous to the original sound pressure wave. To visualize this, imagine jolts and spurts of electricity moving in perfect synchronicity with the movements of a diaphragm.
A digital signal is one in which the original signal is represented by numerical values, sampled at discrete points in time and rounded to discrete amplitude levels.
The process of making an analog signal cohere to specific number values along a timeline is called quantization.
Quantization is measured in bits (the bit depth) and determines how “pixelated” the audio replication will be.
The number of times per second that a computer translates analog signal into a digital stream is known as the sample rate. You can think of the sample rate as how often a computer is listening and converting electricity into information. At any given stretch of time, a computer sampling audio will listen in punctuated blips. Computers do not listen in perpetuity.
Sample rate and bit depth are two important measurements that determine the quality of a digital source. For example, a compact disc has a sample rate of 44.1 kHz and a bit depth of 16 bits.
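Sampling and quantization can be sketched in a few lines. The sketch below uses a deliberately low sample rate and bit depth (nothing like CD quality) so the resulting numbers stay readable:

```python
import math

def sample_and_quantize(freq, sample_rate, bit_depth, duration):
    """Sample a sine wave and round each sample to the nearest of
    2**bit_depth discrete levels, as an analog-to-digital converter does."""
    levels = 2 ** (bit_depth - 1)  # signed range: -levels .. levels - 1
    samples = []
    for n in range(int(sample_rate * duration)):
        t = n / sample_rate                    # sampling: discrete time
        value = math.sin(2 * math.pi * freq * t)
        samples.append(round(value * (levels - 1)))  # quantization: discrete amplitude
    return samples

# A 1 kHz tone captured at an 8 kHz sample rate with 4-bit depth,
# for one millisecond of audio (eight samples).
digital = sample_and_quantize(1000, 8000, 4, 0.001)
print(digital)  # [0, 5, 7, 5, 0, -5, -7, -5]
```

Raising the sample rate adds more points in time; raising the bit depth adds more amplitude levels, so the staircase hugs the original curve more closely.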
A digital audio signal is one that can be created, and decoded for playback, by a computer.
A simplified definition of each concept reads as follows: analog is a continuous signal, digital is a bit stream.
MIDI, Sequencing, and Multitrack Recording
MIDI (Musical Instrument Digital Interface) is a communications protocol that allows all sorts of electronic instruments to interconnect.
Sound (audio) is not captured via the MIDI language. Instead, a MIDI controller transmits specific information about pitch, duration, whether a sustain pedal was used, how hard a note was played, etc. The point of transmitting this information is to capture the nuance of a player’s musical action.
The most common types of MIDI data are note number, duration, velocity, and patch. Duration is defined by note-on and note-off messages, velocity corresponds to how loudly or softly a note is struck, and patch selects which sound from a MIDI synth (or other sound-generating device) is to be used.
MIDI becomes sound when played by a synthesizer or by a virtual instrument within a digital audio workstation (DAW).
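Because MIDI carries instructions rather than audio, a note is just a few bytes on the wire. A minimal sketch of building a standard MIDI Note On message (channel, note number, and velocity are the real fields of the message; the particular values are illustrative):

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On message.
    The status byte 0x90 marks Note On; its low four bits carry
    the channel. Note and velocity are 7-bit values (0-127)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

# Middle C (note number 60) on channel 0, struck fairly hard (velocity 100).
msg = note_on(0, 60, 100)
print(msg.hex())  # 903c64
```

A synthesizer receiving these three bytes looks up the current patch and starts sounding middle C; a matching Note Off (or a Note On with velocity 0) later ends the note, and the gap between the two defines the duration.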
A music sequencer (or just sequencer) is an application or a device designed to record, edit, and play back sequences of musical events.
The first sequencers were primitive devices that played rigid patterns of notes or beats using a grid of 16 buttons (steps), each representing 1/16th of a musical measure. Groups of measures, each with its own combinatorial possibilities, could be compiled to form longer compositions. An example of a step sequencer is the Roland TR-808.
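A 16-step grid of this kind is easy to model: each slot is a sixteenth note, either armed or silent. A minimal sketch (the pattern and tempo are invented, not an 808 preset):

```python
def pattern_events(pattern, tempo_bpm):
    """Convert a 16-step pattern ('x' = hit, '.' = rest) into
    (time_in_seconds, step_index) events for one 4/4 measure."""
    step_duration = 60.0 / tempo_bpm / 4  # a sixteenth note at this tempo
    return [(i * step_duration, i)
            for i, step in enumerate(pattern) if step == "x"]

# Hits on every quarter note (steps 0, 4, 8, 12) at 120 BPM.
kick = "x...x...x...x..."
events = pattern_events(kick, 120)
print(events)  # [(0.0, 0), (0.5, 4), (1.0, 8), (1.5, 12)]
```

Chaining several such measures, each with a different pattern string, is the step-sequencer version of compiling measures into a longer composition.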
Nowadays, sequencing is best done with a DAW such as Reason, Pro Tools, or Cubase because these programs allow the user to work with both audio and MIDI.
Sequencing is one of the most widely used forms of music technology, and therefore one of the fundamental skill sets for any music technologist. It requires some knowledge of music theory.
Multitrack recording is a method of sound recording that allows multiple sound sources to be combined into one cohesive whole.
Multitracking usually entails recording different audio channels onto separate tracks. Mixing refers to adjustments made to these individual audio tracks while constructing the master track.
The master is usually made into a stereo file meant for commercial consumption. The process of making further adjustments to the master track is called mastering.
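At its simplest, mixing down is a weighted sum of the individual tracks, sample by sample, with each track's gain fader setting its weight. A minimal sketch (the track data and gain values are invented):

```python
def mix(tracks, gains):
    """Combine parallel lists of samples into one track,
    scaling each track by its mixing gain first."""
    return [sum(gain * track[i] for track, gain in zip(tracks, gains))
            for i in range(len(tracks[0]))]

# Two short tracks of equal length: the second is mixed in at half volume.
drums  = [0.5, -0.5, 0.5, -0.5]
guitar = [0.2, 0.2, -0.2, -0.2]
master = mix([drums, guitar], [1.0, 0.5])
```

Real DAWs add panning, equalization, and effects on top of this, but the core of the mix bus is still this gain-and-sum operation.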
The point of the material covered by this lecture is to acclimate you to the realm of professional audio, digital recording, and electronic instruments. These subjects are the primary focus of this class.
Concerning the material covered here, be sure to pay special attention to the following details: (1) the definition of music technology, (2) the difference between electromechanical and purely electronic instruments, and (3) the distinction between analog and digital.
Berube, Margery S., editor. The American Heritage Guide to Contemporary Usage and Style. Boston: Houghton Mifflin, 2005.
Manning, Peter. Electronic and Computer Music. New York: Oxford UP, 2013.
“Music Technology.” Wikipedia, The Free Encyclopedia. Wikimedia Foundation, 19 June 2016.
“Roland TR-808.” Wikipedia, The Free Encyclopedia. Wikimedia Foundation, 28 May 2017.
“Stradivarius.” Wikipedia, The Free Encyclopedia. Wikimedia Foundation, 28 May 2017.