Apple Buys Emotient, A Digital Analyst of Consumer Feelings
Apple has snapped up Emotient, a startup that scrutinizes human faces to gauge feelings and reactions, the Wall Street Journal and other media outlets are reporting.
San Diego-based Emotient, founded in 2012, had been marketing its services to advertisers and researchers as a tool to predict how consumers would react to digital ads, movies, and other content. The company also developed a prototype emotion-sensing app for Google Glass, the device created by Google to display digital content to wearers as they interact with people or navigate through their environments.
Apple isn’t saying how it plans to incorporate Emotient’s technology into its business plans or products, though it confirmed the acquisition to media outlets Thursday. Image recognition companies such as Emotient and its rival, Waltham, MA-based Affectiva, have analyzed millions of facial expressions from video inputs—some from study participants using their own webcam-equipped devices such as laptops or smartphones. The companies have correlated minute facial muscle movements called “microexpressions” with specific human feelings such as surprise, dislike, and confusion. Detecting such emotions in response to marketing content or app experiences could help guide business decisions.
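The frame-by-frame approach described above can be sketched in miniature. The snippet below is purely illustrative and not Emotient’s actual method: the emotion labels and the majority-vote aggregation are assumptions, showing only how per-frame classifications might be collapsed into a single overall viewer reaction.

```python
from collections import Counter

# Hypothetical per-frame emotion labels, as an expression classifier
# might emit for one viewer watching an ad (labels are assumptions).
FRAME_LABELS = [
    "surprise", "surprise", "dislike", "confusion",
    "surprise", "dislike", "surprise", "surprise",
]

def dominant_reaction(frame_labels):
    """Collapse per-frame emotion labels into one overall
    reaction by simple majority vote (illustrative only)."""
    counts = Counter(frame_labels)
    label, _ = counts.most_common(1)[0]
    return label

print(dominant_reaction(FRAME_LABELS))  # surprise
```

A real system would weight frames by classifier confidence and track emotions over time rather than taking a flat vote, but the aggregation step is the same in spirit.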
But the technology raises privacy concerns, in an era when cameras have become ubiquitous in public places such as shopping malls, and when most smartphones and tablets have two camera lenses—one pointing to the world outside, and another focused on the user. One possibility is that companies such as Apple will harness the user-facing camera to capture real-time reactions to on-screen events, either to serve advertisers or to adapt the device’s user interface according to each individual’s preferences—whether conscious or not. Such uses would likely raise questions about the user’s ability to opt out.
Emotient CEO Ken Denman told The San Diego Union-Tribune in March 2014 that the company did not store images of people’s faces, and used its technology to tap into broad sentiments, such as the reaction of a crowd to ads shown during the Super Bowl. Emotient co-founder Marian Bartlett told the newspaper that the company’s technology has helped assess how engaged students are with online tutoring programs. Some applications might eventually focus on individuals, however. Denman said the company’s technology could someday help doctors diagnose patients they “see” through remote video appointments.
In August, Emotient claimed to be the first to analyze candidates’ expressions during a presidential primary debate. Among the 10 Republican contenders, Emotient said it clocked mostly “anger” from Donald Trump, and Ted Cruz’s face most often reflected “sadness.”
Emotient’s technology was initially based on research by two of its co-founders, Bartlett and Javier Movellan at U.C. San Diego, according to a former version of the company’s website, which has now been substantially pared down. The Emotient technology was later adapted to work as a Web-based application accessible through Internet browsers.
Emotient’s scientific advisory board included Paul Ekman, who developed the Facial Action Coding System (FACS), the scientific foundation for interpreting subtle facial expressions; and Terry Sejnowski, a pioneer in machine learning who helped Ekman start automating the FACS method, according to the former Emotient website. The old site included the following details:
“Emotient Analytics delivers >95% accuracy in the detection of primary emotions, expressed in as little as a single video frame, in both real-world conditions as well as controlled environments.
These brief flashes of emotion, or “microexpressions”, were shown by Dr. Ekman’s work to be important to the understanding of people’s gut reactions to stimuli. Practically speaking, they reveal quite a bit about a person’s beliefs and their propensity to act or buy. These are the tiny “tells” that occur in the short period between unfiltered reaction and conscious masking of emotions.
Emotient’s technology not only captures these single-frame expressions, but does so in the face of obstacles such as poor lighting, consumer-grade webcams, facial occlusions such as eyeglasses and facial hair. It also achieves this level of accuracy for multiple faces in a frame, potentially up to hundreds of faces in a 1080p video.”
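The FACS system mentioned above codes facial muscle movements as numbered “action units” (AUs), and combinations of AUs are conventionally associated with basic emotions. The sketch below is a simplified illustration of that lookup, not Emotient’s classifier; the AU pairings follow commonly cited examples, such as AU6 (cheek raiser) plus AU12 (lip-corner puller) for happiness.

```python
# Simplified, illustrative mapping from FACS action-unit (AU)
# combinations to basic emotions. A production system would score
# dozens of AUs per frame with intensities, not a binary lookup.
EMOTION_SIGNATURES = {
    frozenset({6, 12}): "happiness",       # cheek raiser + lip-corner puller
    frozenset({1, 4, 15}): "sadness",      # inner brow raiser, brow lowerer, lip-corner depressor
    frozenset({1, 2, 5, 26}): "surprise",  # brow raisers, upper lid raiser, jaw drop
}

def classify(active_aus):
    """Return the emotion whose AU signature is fully present
    among the detected action units, else "neutral"."""
    active = set(active_aus)
    for signature, emotion in EMOTION_SIGNATURES.items():
        if signature <= active:  # all AUs in the signature detected
            return emotion
    return "neutral"
```

Detecting which AUs are active in a single video frame, under poor lighting and partial occlusion, is the hard computer-vision problem the company’s claimed accuracy figures refer to; the mapping from AUs to emotions is the comparatively simple final step.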
The purchase of Emotient, on undisclosed terms, adds another company to Apple’s stable of acquired artificial intelligence businesses that plumb complex aspects of natural human interaction. In October, Apple acquired Perceptio, whose technology facilitates image recognition on smartphones, according to Bloomberg. In the same month, Apple acquired VocalIQ, a Cambridge, U.K.-based speech recognition company that industry observers saw as a possible way to enhance Apple’s personal assistant Siri, which interacts with device users through voice commands.
Emotient had raised a total of $8 million in outside capital. Most recently, it raised a $6 million Series B round in March 2014 led by Seth Neiman of Handbag, and joined by previous investor Intel Capital.