This blog post corresponds to a podcast I produced. It is (more or less) a transcript of that audio program.
I’m going to tackle the future of music from two angles: First I’ll imagine the changes that artificial intelligence might inflict upon music, then I’ll speculate about the direction of music theory within the context of human tastes and preferences.
I’ll begin with the artificial intelligence angle.
Artificial intelligence refers to computer systems designed to approximate human-like intelligence. These kinds of computers can make decisions based on data from cameras, microphones, and other sources, and they can accomplish jobs that typically require human operators.
Today AI can be found in driverless cars, military drones, and personal assistant applications like Apple’s Siri or Microsoft’s Cortana.
It seems likely that artificial intelligence will exert some kind of influence on the future of music, perhaps through Elon Musk’s Neuralink technology, which is designed to be a brain/computer interface powered by AI. This would entail having a computer chip physically installed in your brain.
Neuralink’s website describes the venture thusly: “Neuralink is developing ultra-high bandwidth brain-machine interfaces to connect humans and computers.”
To me, what’s interesting here is the implications this technology may have on the separate ways that musicians and nonmusicians listen to, and interpret, music.
For nonmusicians, the experience of music is mostly emotional; for musicians, the experience is mostly analytical. What if a musician could be made to hear music again from the mostly emotional perspective of a non-musician rather than the mostly analytical perspective that’s been wrought through decades of musical training?
Many musicians (myself included) have become purely analytical music machines—numbed to the emotional content of music. When I listen to music, my brain is usually busy figuring out the details of the song: What chords are those? What key are they in? What meter is this song in? What instruments are making those sounds? The analysis never ends.
Very rarely do I sit back and enjoy the way a song is making me feel. I’m pretty sure that most musicians are like me in this sense. Perhaps Neuralink could induce a brain state that dulls the analytical abilities of musicians so that they may hear music from a non-musician’s perspective.
Now imagine the implications this could have for songwriting and composition. What if Mozart had had a Neuralink implant and such a state had been induced in him? I’d be pretty interested in hearing what he came up with.
On the other hand, imagine a non-musician being able to learn music during the course of one afternoon—Matrix style. Or maybe non-musician concert-goers could be made to understand complex symphonic pieces with all the nuance and comprehension of a master musician.
Perhaps this technology could be exploited by music venues to manipulate audience members’ emotional reactions in sync with the performance: I’m imagining something like an internal light show that goes along perfectly with the music. And maybe these venues feature artificially intelligent super bands that can improvise and execute music of a superhuman caliber.
I can also imagine a scenario where listeners, in private or with a partner, could be brought to orgasm through content-informed musical manipulation of their brains. Just picture a version of “Let’s Get It On” that features an actual sexual component and not just an imagined or inferred one.
AI need not be implanted into our brains to have an influence on the future of music. Someday, probably soon, computer scientists will hatch a version of artificial intelligence that will be able to convincingly compose good music. At present, AI can compose, but it all sounds quite weird.
Google has a research project known as Magenta that is dedicated to creating art using machine intelligence.
Magenta composed a piano tune in 2016 by using a neural network, which is a computer system designed to operate like an animal brain. Progress has continued with this project, and a posting on their website from this June (2017) features a piano piece of much greater sophistication. To me, it sounds like the raving doodling of a madman, but it’s hard to deny progress here.
This system can make polyphonic music with some manner of expression, realized through changes in volume during its performance and through phrasal structures that organize its melodies. Its ideas in this area are based upon human conventions of timing and expression.
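To give a feel for what sequential note generation involves, here’s a toy sketch of my own. It is not Magenta’s actual model (Performance RNN uses a recurrent neural network trained on real performances); it just wanders through a scale and attaches a duration and a loudness to each note, as a crude stand-in for the expressive timing and dynamics described above.

```python
import random

# Toy melody generator (my illustration, NOT Magenta's code):
# each event gets a pitch, a duration, and a velocity (loudness).

E_MINOR = [64, 66, 67, 69, 71, 72, 74]  # MIDI pitches of an E natural-minor scale fragment

def generate_melody(length=16, seed=0):
    rng = random.Random(seed)
    melody = []
    index = 0  # start on the tonic
    for _ in range(length):
        # Wander stepwise up or down the scale: a crude stand-in for
        # the learned transition probabilities of a neural network.
        index = max(0, min(len(E_MINOR) - 1, index + rng.choice([-1, -1, 0, 1, 1])))
        melody.append({
            "pitch": E_MINOR[index],
            "duration": rng.choice([0.25, 0.5, 1.0]),  # "expressive timing"
            "velocity": rng.randint(50, 100),          # "expressive dynamics"
        })
    return melody

notes = generate_melody()
# each note looks like {'pitch': ..., 'duration': ..., 'velocity': ...}
```

The real system replaces the random stepwise walk with probabilities learned from thousands of human piano performances, which is where the human-sounding timing and phrasing come from.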
The difference between the 2016 melody and the 2017 melody is significant, so who knows where this will be in five years. Composers may be out of a job.
The Future of Music Theory
Okay, now I’m going to look at the future of music through the lens of human preferences and tastes. I’ll consider whether the future will impart any change on our system of music theory.
I took a listen to the top three songs in the world according to “The Hot 100” on Billboard.com, which uses radio airplay, data from music streaming services, album sales, and information from the Nielsen data measurement company to compile its list.
For the week of October 7, 2017, the top three were: “Bodak Yellow” by Cardi B, “Rockstar” by Post Malone, and “Look What You Made Me Do” by Taylor Swift.
- Okay, the Cardi B song, which is one of the worst things I’ve ever heard, consists of a song-length drone in E minor and a single melodic hook that repeats forever while a caustic, smug woman spouts a rap about how great she is and how much money she makes at the clubs.
- And the Post Malone song, which is somewhat better than the Cardi B song, still only has a two-chord progression. It’s in G minor, and it never changes. It goes Eb Gm Eb Gm ad infinitum. The lyrics are about having sex with prostitutes, taking pills, and threatening the use of an Uzi.
- The Taylor Swift song, which is also bad but a little better than the others, at least lyrically, consists of a chordless drum beat for the verses and the four-chord “Hit the Road Jack” progression for the chorus. The lyrics are about how she doesn’t like some unnamed, odious person and how she got wise in the nick of time.
None of the songs have a bridge, and all are repetitive to the point of tediousness.
I can imagine Beethoven analyzing the Cardi B song: “One chord, Em, and a single motif—mi, fa, la— why is that woman talking over it all? This is the future of music?”
The top three songs should make clear that most people prefer simple music to complex extravaganzas of theoretical experimentation. So, the notion that music theory is going to move on to some higher and more evolved territory in the future is simply delusional.
I think the music of the future is going to sound basically the same as the music of the present, which is only superficially different from the music of the past. This is due to the enduring nature of music theory, which, by and large, has remained unchanged since Greek and Roman times.
There’s been some tweaking and adjustments made, of course, but—by and large—it’s the same. The octave, the perfect fifth, and the pitch set known as the major scale, represent the backbone of most music made on Earth.
The ancient Greeks and Romans had scales, we have scales; they had chords, we have chords; they assembled vocal melodies into units called songs, and so do we. Music theory is like grammar in this way. At bottom, both music theory and grammar are descriptivist theories. This means that they describe what is going on instead of prescribing what should be going on.
They operate under the dictum that a theory is only valuable insofar as it describes reality. Music theory developed to maturity during the Common Practice Period of the seventeenth and eighteenth centuries.
The term Common Practice Period refers to the harmonic and melodic conventions that people found, and continue to find, aesthetically pleasing. In addition to classical music, jazz and popular music also rely heavily on the theories developed during the Common Practice Period.
20th-century music saw the advent of atonalism, a form of music so dissonant and alien that it can only be accurately described as an audible equation. Almost no listeners find this sort of music compelling. It appears only to be interesting to those who can create it and a select few connoisseurs.
If you view atonal music as successful, then you have a warped idea of success and reality. I‘ve yet to meet a non-musician who finds anything of value here.
Now, I agree that such music often finds a home in horror movies, but most people don’t listen to horror movie music for enjoyment. Its only value, in my opinion, is that it can increase one’s blood pressure.
So, if your goal is to do this to your listeners, or it’s to increase their anxiety level, then kick out the jams with some Arnold Schoenberg or Paul Hindemith. Me personally, I’ll be over here listening to Beethoven, Robert Schumann, and the Beatles and enjoying my music.
Here’s the thing about the tonal music theories developed during the Common Practice Period: these music theories outline a systematic view of the facts about sound and describe the reality of the kind of music that most people enjoy hearing. The techniques figured out during the Common Practice Period will continue to hold sway in the future. Most people just don’t like atonal music.
This is hardly a controversial position. In fact, it’s backed up by comprehensive research done in this area.
In 1983, linguist Ray Jackendoff and music theorist Fred Lerdahl worked out a theory of universal musical grammar for a book they co-wrote called A Generative Theory of Tonal Music.
This theory stipulates that, while listening to music, the human brain unfurls an ordered series of cognitive actions designed to automatically discern musical idioms. A musical idiom is a sequence of sounds that possess musical content such as a melody (a succession of discrete notes), harmony (a simultaneous cluster of consonant notes), and rhythm (a repeating pattern of notes or percussive impulses).
Human brains organize and interpret these things into hierarchical structures using a universal musical grammar that piggybacks off the cognitive ability for language.
The theory also stipulates that when specific notes are collected into pitch assemblies called scales, which are the collections used to construct melodies, they begin to take on a hierarchical structure in which some notes sound stable and other notes sound unstable.
Music then becomes a game of moving from moments of stability to moments of instability and back again.
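This stability-instability game can be made concrete with a toy scoring function. The numbers below are my own rough illustration, not Lerdahl and Jackendoff’s formal model: each degree of the major scale gets a stability rank, and a melody becomes a contour of tension values that rise and fall.

```python
# Toy illustration of tonal stability (my numbers, not the authors' formalism):
# rank each major-scale degree by how "at rest" it sounds.

STABILITY = {        # scale degree -> stability (higher = more stable)
    1: 5,            # tonic: maximally stable
    3: 4, 5: 4,      # mediant and dominant degrees: fairly stable
    2: 2, 4: 2, 6: 2,  # less stable diatonic degrees
    7: 1,            # leading tone: maximally unstable, "wants" to resolve
}

def tension_contour(degrees):
    """Map a melody (given as scale degrees) to tension values
    (the inverse of stability): the game of moving between
    stability and instability, rendered as a list of numbers."""
    max_stability = max(STABILITY.values())
    return [max_stability - STABILITY[d] for d in degrees]

# The opening of "Twinkle, Twinkle" as scale degrees: 1 1 5 5 6 6 5
print(tension_contour([1, 1, 5, 5, 6, 6, 5]))  # -> [0, 0, 1, 1, 3, 3, 1]
```

Even in this crude sketch you can see the shape the theory predicts: the tune departs from rest, peaks in tension on the sixth degree, and settles back toward stability.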
No matter what superficial stylistic direction future music takes, it will almost certainly continue to employ this foundational concept. Another thing that will likely continue to be the same, and another detail figured out by Jackendoff and Lerdahl, is music’s propensity for phraseology.
Music’s phraseology is determined by dovetailing short groups of notes, called motifs, into slightly longer strings of notes, called phrases, into collections of phrases, called lines or sections, into collections of sections, called movements or songs.
This is the part of music theory that relates overtly to humans’ capacity for language. The hierarchy I just outlined is perfectly analogous to letters assembled into words, assembled into phrases, assembled into sentences, assembled into paragraphs, assembled into articles, books, or blogs.
This convention is likely not going to go away.
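The motif-to-phrase-to-section nesting can be pictured as a simple recursive data structure. This is my own sketch of the analogy, not a formalism from A Generative Theory of Tonal Music:

```python
# Sketch: music's phrase hierarchy as nested lists, analogous to
# letters -> words -> sentences -> paragraphs in language.

motif_a = ["C", "D", "E"]       # a short group of notes
motif_b = ["E", "D", "C"]
phrase_1 = [motif_a, motif_b]   # motifs dovetail into a phrase
phrase_2 = [motif_b, motif_a]
section = [phrase_1, phrase_2]  # phrases group into a line or section
song = [section, section]       # sections group into a movement or song

def count_notes(unit):
    """Recursively count the notes at the bottom of the hierarchy."""
    if isinstance(unit, str):
        return 1
    return sum(count_notes(child) for child in unit)

print(count_notes(song))  # 2 sections x 2 phrases x 2 motifs x 3 notes = 24
```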
I submit to you that the music of the future is going to be wholly recognizable in today’s terms. In my opinion, composers of atonal, microtonal, or avant-garde music might as well be playing Dungeons and Dragons in their underwear: Almost no one cares about accumulating hit points and defeating fantastical beasts.
The most famous modern-day composers—those who operate in film (like John Williams, Hans Zimmer, and James Newton Howard)—by and large, employ the harmonic and melodic principles of the Common Practice Period.
They, like modern pop musicians, generally steer clear of music theories that desperately try to reinvent the wheel.
If you disagree with me here, then I’ll prove it to you. I looked up some YouTube videos of Schoenberg’s Pierrot Lunaire, which is widely regarded to be an atonal masterpiece: one video had 39,000 views, another had 47,000 views, and another had 51,000 views. Some of these videos had been up for five years or more, and not one of them had cracked 100,000 views.
Then I found the video for “Play that Song” by Train, which literally uses the melody and chord progression from the song, “Heart and Soul.” The Train song had 27.5 million views, and it’s only been out for ten months.
In case you think I’m making an unfair comparison between art music and pop music, consider that in Beethoven’s day, he was pop music. Today, classical music and jazz basically exist only in the academic departments of universities and colleges. I read somewhere that these styles account for something like 3 percent of downloads, streams, and album sales.
So, if we’re going to consider the future of music, we should consider the music that most people actually listen to, and this likely doesn’t entail something that’s being kept on life support in the conservatories.
Sources
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.
“Neuralink.” Wikipedia, Wikimedia Foundation, 10 Sept. 2017.
Simon, Ian, and Sageev Oore. “Performance RNN: Generating Music with Expressive Timing and Dynamics.” Magenta Blog, 2017.
Lerdahl, Fred, and Ray Jackendoff. A Generative Theory of Tonal Music. Cambridge, MA: MIT Press, 1983.
Rowell, Lewis. Thinking About Music: An Introduction to the Philosophy of Music. The University of Massachusetts Press, 1983.