What Happens When a Microchip Inside Your Brain Dictates What You Listen To


In a new piece for WSJ, Stephen Witt imagines the AI-driven future of personalized listening.
Photo by Uwe Hermann

In a new piece for the Wall Street Journal, Stephen Witt—author of this year's How Music Got Free—dishes out some speculative fiction on what music consumption might look like in 2040. He imagines that in just 25 years it will be commonplace for an "algorithmic DJ [to open] a blended set of songs, incorporating information about your location, your recent activities and your historical preferences," a set that updates in real time based on biofeedback.

By then, "Even the concept of a 'song' is starting to blur," he writes. "Instead there are hooks, choruses, catchphrases and beats—a palette of musical elements that are mixed and matched on the fly by the computer, with occasional human assistance." If this sounds interesting, we encourage you to check out the work of innovators like The League of Automatic Music Composers and contemporary boundary-pushers like TCF.

Witt also imagines the possibility of "digitally resurrect[ing]" "long-dead voices from the past," giving artists like Etta James and Frank Sinatra newly composed hits. This kind of vocal synthesis technology eerily reminds us of computer-generated pop star Hatsune Miku.

To read more, check out the whole piece here.

Follow Alexander on Twitter.