Can The Echo Nest Stay Aloft in the Turbulent Music-Recommendation Industry?

(Page 2 of 4)

new listeners. And Westergren has become a poster boy and activist for Internet radio, the existence of which has been threatened by attempts in Washington, D.C., to hike the royalties musicians can earn on streaming music.

But as cool as Pandora is, it suffers from a major limitation: human analysts can only listen to so many songs every day. In its eight years, the Music Genome Project has catalogued only 1.5 million songs. That’s a small fraction of the total amount of music available over the Internet: the social music community iMeem, for example, has more than 5 million tracks.

It’s a great idea to base music recommendations on the actual, measured characteristics of a given song, rather than on guesses about whether one person shares musical tastes with another they’ve never met. But to scale this up to the entire universe of music, Whitman argues, you have to turn to automated techniques. One is “machine listening,” the technology Jehan was investigating at the Media Lab, which can quickly produce a boiled-down quantitative representation of a song. But equally important is basic text-search technology, which is needed to identify the artists associated with each song and to find out what people are saying online about the music they like.

After all, “You can’t assume that just because two songs sound the same to a computer, you will like both of them,” Whitman points out. Look at mainstream rock and Christian rock: they may share similar tempos and rhythms, but they appeal to very different audiences. “The difference there is completely cultural,” says Whitman. “The text-retrieval stuff I do can tell you a lot about the music that the audio will never tell you.”

Whitman and Jehan hung out together a lot at the Media Lab, and after they both got their PhDs in 2005, they decided to join forces to try to solve the music recommendation problem. Vercoe, Whitman’s thesis advisor, became their first angel investor—“The first time he’s ever invested in a student,” according to Lucchese. And The Echo Nest became the first music-discovery company to emphasize what Whitman calls “this combination of acoustic and cultural information.”

What they spent three years building—and finally unveiled at the Demo 08 conference in San Diego in September (watch a video of their presentation here)—is the “Musical Brain,” a software-as-a-service platform that developers building music-driven websites can tap into for recommendations, as well as automated analysis of new songs and feeds of music-related media from across the Internet.

The Musical Brain has too many features to describe here, but for the most part they fall into Whitman’s two categories—the acoustic and the cultural. On the acoustic side, the software can listen to an entire song in about 2 seconds, then use various digital signal processing techniques to identify every unique “segment”—every drum beat, trumpet note, or lyrical syllable—plus factors like pitch, loudness, and timbre. The average pop song yields 2,000 to 3,000 segments, which are summarized in a single XML file that goes into the Musical Brain’s database.
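The Echo Nest has not published how the Musical Brain’s listener actually works, but the general recipe the article describes (split a track into segments at sudden onsets, measure each segment, and summarize everything in one XML file) can be sketched with a toy, energy-based detector. The window size, the onset rule, and the XML element names below are all illustrative assumptions, not the company’s real design.

```python
import math
import xml.etree.ElementTree as ET

def segment_audio(samples, window=256, jump=4.0):
    """Toy onset detector: start a new segment wherever the short-window
    energy jumps sharply relative to the previous window."""
    energies = []
    for start in range(0, len(samples) - window, window):
        chunk = samples[start:start + window]
        energies.append(sum(x * x for x in chunk) / window)
    boundaries = [0]
    for i in range(1, len(energies)):
        if energies[i] > jump * (energies[i - 1] + 1e-9):
            boundaries.append(i * window)
    return boundaries

def summarize_to_xml(samples, rate, boundaries):
    """Emit one <segment> element per detected segment, with its start
    time, duration, and RMS loudness in decibels."""
    root = ET.Element("analysis", rate=str(rate))
    bounds = boundaries + [len(samples)]
    for start, end in zip(bounds, bounds[1:]):
        chunk = samples[start:end]
        rms = math.sqrt(sum(x * x for x in chunk) / max(len(chunk), 1))
        ET.SubElement(root, "segment",
                      start=f"{start / rate:.3f}",
                      duration=f"{(end - start) / rate:.3f}",
                      loudness_db=f"{20 * math.log10(rms + 1e-12):.1f}")
    return ET.tostring(root, encoding="unicode")

# Synthetic signal: silence, a tone burst, silence, a second tone burst.
tone = ([0.0] * 4096 + [math.sin(i / 5) for i in range(4096)]
        + [0.0] * 4096 + [math.sin(i / 3) for i in range(4096)])
xml_doc = summarize_to_xml(tone, 8000, segment_audio(tone))
```

On this synthetic signal the detector marks a boundary at each burst, yielding three segments in the XML summary; a real pop song, as Whitman notes, would yield thousands.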

On the text side—which “we have a lot more people working on, because it’s huge and messy,” in Whitman’s words—the company is attempting to catalog every document on the Internet that’s about music. Every time someone writes a blog post about an album or a concert, every time a college newspaper publishes a music review, the Musical Brain finds it and uses natural language processing techniques to figure out what artist or song the text is about, then pick out the specific terms each writer is applying to that artist or song. In this way, the Brain slowly builds up a picture of what Internet users are thinking and saying about music. For example, says Whitman, “We could tell you with a certain probability that today on the blogs, people thought Radiohead was ‘angular’ or ‘emotional.’”
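The pipeline Whitman describes (find a post, identify the artist it mentions, pull out the descriptive terms applied to that artist) maps onto a very small sketch. Real named-entity recognition is far harder than this; the artist list, the descriptor vocabulary, and the blog posts below are invented stand-ins.

```python
import re
from collections import Counter, defaultdict

# Invented examples: a tiny gazetteer of artists and descriptive terms.
ARTISTS = {"radiohead": "Radiohead", "bjork": "Björk"}
DESCRIPTORS = {"angular", "emotional", "glitchy", "icy", "anthemic"}

def artist_terms(posts):
    """Map each recognized artist to a Counter of descriptors that
    co-occur with the artist in the same sentence."""
    terms = defaultdict(Counter)
    for post in posts:
        for sentence in re.split(r"[.!?]", post.lower()):
            words = re.findall(r"[a-z]+", sentence)
            mentioned = {ARTISTS[w] for w in words if w in ARTISTS}
            for artist in mentioned:
                terms[artist].update(w for w in words if w in DESCRIPTORS)
    return terms

posts = [
    "Radiohead sounded angular and emotional last night.",
    "That new Radiohead cut is angular, almost glitchy!",
    "Bjork was icy as ever.",
]
profile = artist_terms(posts)
```

Dividing each term count by the total mentions would give the “certain probability” Whitman mentions; here, two of two Radiohead posts call the band “angular.”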

One of the hardest parts of the problem, Whitman says, is parsing names—figuring out which bands and artists bloggers, reviewers, and commenters are actually talking about. That’s partly because “there are bands named after just about everything you can imagine,” he says, from The Grizzly Bears to CSS. (That one gives the programmers at The Echo Nest fits, since CSS also stands for Cascading Style Sheets, the style sheet language widely used to control the appearance of Web pages.) The Musical Brain also has to compensate for people’s inability to spell the names of certain artists, like Björk or Britney Spears—there are at least 30 common variations on Britney’s name alone.
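For the misspelling problem specifically, a standard starting point is fuzzy string matching against a canonical artist list. The sketch below uses Python’s stdlib difflib purely for illustration; the canonical list and the similarity cutoff are invented, and note that string similarity alone cannot resolve the CSS ambiguity, which requires looking at the surrounding context.

```python
import difflib

# Invented canonical artist list for the sketch.
CANONICAL = ["Britney Spears", "Björk", "Grizzly Bear", "CSS"]

def reconcile(raw_name, cutoff=0.6):
    """Map a possibly misspelled artist name to its canonical form,
    or None if nothing is similar enough."""
    matches = difflib.get_close_matches(raw_name.title(), CANONICAL,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else None

fixed = reconcile("britny spears")   # "Britney Spears"
ascii_fix = reconcile("Bjork")       # "Björk", despite the dropped umlaut
```

A production system would also need per-artist alias tables (to cover all 30-odd Britney variants the article mentions) and disambiguation by context, but edit-distance matching catches the easy cases cheaply.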

“It’s the least sexy part of the digital music market, and no one likes to talk about it, but reconciling artist names is a huge …”


Wade Roush is a freelance science and technology journalist and the producer and host of the podcast Soonish. Follow @soonishpodcast
