This is an excerpt from Even no. 3, published in spring 2016.
There’s something not to like about getting music recommendations from a machine. Music recommendation, or music discovery as it is also known, has become central to the business model of music streaming services — the market leaders Spotify and Pandora, the recently launched Apple Music, Google’s and Amazon’s efforts, the Jay Z-backed Tidal, and half a dozen others — each of them striving to distinguish itself through its superior ability to deliver an endless supply of music we’ll love. Discussions of these systems regularly pit humans against algorithms, like a musical version of John Henry’s showdown with a steam-powered hammer. Surveying the state of music discovery this past June in the New York Times, Ben Ratliff counterposed playlists made up of hand-picked tracks with those determined by “purely algorhythmic [sic] logic,” and his conclusion was definitive: “That’s where discovery will always lie: In the suggestions of actual human beings.” The forces behind Apple’s new streaming service, launched around the same time, evidently concurred. According to its spokespersons, the best playlists come from knowledgeable music lovers. As head of the service Jimmy Iovine explained, “Algorithms alone can’t do that emotional task. You need a human touch.”
It thus came as the ultimate twist when three months later, writing of Spotify’s new Discover Weekly feature for the website The Verge, Ben Popper lovingly recalled the playlist that introduced him to the Senegalese artist Aby Ngana Diop. “It felt like an intimate gift from someone who knew my tastes inside and out, and wasn’t afraid to throw me a curveball,” he wrote. “But the mix didn’t come from a friend — it came from an algorithm.”
The frisson of disquiet we might experience at that revelation — this feeling of intimacy, of someone truly understanding me, came from an algorithm — taps into deep historical anxieties about the boundary between humans and machines. Today, these anxieties often play out as fears that humans will become obsolete, replaced, surpassed, or destroyed by the artificial intelligences we are creating. Art, literature, and especially music have had a major role to play in mobilizing such fears, for over the past two centuries they have provided an uneasy meeting place for the human values of emotional depth, individuality, and creativity on the one hand, and the mechanical values of precision, repetition, and automation on the other.
While human and machine represent familiar poles in this discussion, a historical look brings into view just how slippery, mutable, and contingent those two concepts really are. Even the question of where “actual music” resides amid its various material embodiments and sonic manifestations proves open to debate. Consider, as an example, how the Talking Heads frontman David Byrne describes the change wrought by phonography in his book How Music Works (2012). “Before recording technology existed,” Byrne writes, “you couldn’t take [music] home, copy it, sell it as a commodity (except as sheet music, but that’s not music), or even hear it again…. Technology changed all that in the 20th century. Music (or its recorded artifact) came to be regarded as a product — a thing that could be bought, sold, traded, and replayed endlessly in any context.”
Today we might readily go along with Byrne’s precepts. Sound, not notation, is what music “really” is; hence recordings, not sheet music, make it possible to purchase music and take it home. A hundred years ago, however, this view would not have been so intuitive. In the 19th and early 20th centuries, it was perfectly self-evident that sheet music was indeed music. Jane Austen expressed such a relationship to the printed score in Sense and Sensibility (1811). When the lovesick Marianne goes to the piano, she finds there, printed, purchased, and taken home, nothing less than music itself: “The music on which her eye first rested was an opera, procured for her by Willoughby.” Further unsettling Byrne’s intuitions, the scene continues: Marianne “put the music aside, and, after running over the keys for a minute complained of feebleness in her fingers and closed the instrument again.” For Austen, the printed score was the music, more so than the auditory phenomenon of improvised playing.
Not only was sheet music, before the 20th century, “music” in senses supposedly unique to recordings; it was also far from preordained that purchasing recordings would constitute purchasing music. It took a substantial amount of clever marketing by Edison and other pioneers of phonography to convince consumers that sound recordings were, in fact, music. As one 1895 advertisement assured an uncertain public, the Edison phonograph provided “not merely an imitation of music, but indeed real music, performed by the artist as in one’s own presence.” Twenty years later, in 1915, Edison launched a “tone test” campaign, which consisted of carefully staged concerts in which audiences were asked to compare live and recorded performances — and to conclude that the two were indistinguishable….
By the 18th century, engineers began to make playback devices in human form, using pinned barrels and cams to control the movements of the fingers, limbs, and even mouth of a doll-like figure playing a musical instrument. The first major success of this kind was a flute-playing automaton, built and exhibited in the 1730s by the French inventor Jacques de Vaucanson. Thereafter, discussions of musical performance often pitted humans against automata, with an eye to establishing what set the former apart from, and of course above, the latter. In 1752, while Vaucanson’s automatic flute-player was on tour in Germany, the German flautist and music teacher Johann Joachim Quantz acknowledged the potential superiority of machines in speed and precision, but reserved to humans the capacities for emotional understanding and connection: “With skill a musical machine could be constructed that would play certain pieces with a quickness and exactitude so remarkable that no human being could equal it either with his fingers or with his tongue. Indeed it would excite astonishment, but it would never move you....” The sense of rivalry, and of musical priorities newly motivated by the need to differentiate human from machine, is palpable in Quantz’s conclusion: “Those who wish to maintain their superiority over the machine, and wish to touch people, must play each piece with its proper fire.” Half a century later, no less an intellectual luminary than Hegel cast the job of the musical performer in terms of what lay beyond the machine: “If...art is still to be in question, the executant has a duty, rather than giving the impression of an automaton...to give life and soul to the work.”
But what if a machine could play music such that it touched us emotionally or seemed endowed with “life and soul”? By the 1810s, that potential breakdown of asserted differences between humans and machines had become a source of worry. “The Sandman” (1816), a story by the German composer, music critic, and fairytale author E.T.A. Hoffmann, explored one such nightmare scenario. Its protagonist, Nathanael, attends a party where he hears Olympia play the harpsichord and sing in a way that enraptures him. Olympia’s musical performance, combined with her steadfast gaze and sighs of “Ah, ah!”, helps convince Nathanael of her deep soul, of the sweet harmony between their minds — and Nathanael falls in love. But Olympia is in fact an automaton of wood, metal, and glass. When this fact is revealed to Nathanael — gruesomely, by the sight of gaping black holes where her eyes should be — he goes mad. With the benefit of hindsight, others who had also failed to detect Olympia’s mechanical nature identify a telltale sign: at tea parties, Olympia sneezed more often than she yawned. Thereafter, tea partiers were careful to yawn frequently, and “there was no sneezing at all, that all suspicion might be avoided.”
Emotional attachment to a mere machine, and especially love or sexual desire for a machine, has endured as one of the most disturbing forms of boundary trespassing. Likewise, Hoffmann’s tongue-in-cheek account of the rise of yawning and the demise of sneezing points to a recurring phenomenon around mechanical simulations of human functions: each major crossing of the boundary between humans and machines has served not to dissolve the boundary, but rather to modify it. Speech, complex computations, winning at chess, winning on Jeopardy: like generating a good music discovery playlist, each had once been considered beyond the reach of machines, as exemplary of some special human endowment of intelligence, responsiveness to the environment, emotion, or creativity. Yet once each feat is mechanically achieved, the focus turns elsewhere. Mastery of chess is no longer considered exemplary of intelligence; perhaps excellent music recommendations will soon no longer be considered exemplary of taste.