People often reveal their private emotions in tiny, fleeting facial expressions, visible only to a best friend — or to a skilled poker player. Now, computer software is using frame-by-frame video analysis to read subtle muscular changes that flash across our faces in milliseconds, signaling emotions like happiness, sadness and disgust.
With face-reading software, a computer’s Web cam might spot the confused expression of an online student and provide extra tutoring. Or computer-based games with built-in cameras could register how people are reacting to each move in the game and ramp up the pace if they seem bored.
However, the rapidly developing technology is far from infallible, and it raises many questions about privacy and surveillance.
Ever since Darwin, scientists have systematically analyzed facial expressions, finding that many of them are universal. Humans are remarkably consistent in the way their noses wrinkle, say, or their eyebrows move as they experience certain emotions. People can be trained to note tiny changes in facial muscles, learning to distinguish common expressions by studying photographs and video. Now computers can be programmed to make those distinctions, too.
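In spirit, that kind of programming can be as simple as comparing measured muscle movements against stored examples of known expressions. The sketch below is purely illustrative: the features (brow raise, lip-corner pull, nose wrinkle), the prototype values, and the nearest-match rule are all invented for this example, not drawn from any real company's software.

```python
# Illustrative only: classify an expression from hypothetical
# facial-muscle measurements by finding the nearest stored prototype.
import math

# Invented "prototype" feature vectors, one per expression, in the order
# (brow_raise, lip_corner_pull, nose_wrinkle), each scaled 0..1.
PROTOTYPES = {
    "happiness": (0.2, 0.9, 0.1),
    "sadness":   (0.7, 0.1, 0.1),
    "disgust":   (0.3, 0.2, 0.9),
}

def classify(features):
    """Return the expression whose prototype is nearest in Euclidean distance."""
    return min(PROTOTYPES, key=lambda label: math.dist(features, PROTOTYPES[label]))

print(classify((0.25, 0.85, 0.15)))  # nearest to the happiness prototype
```

Real systems use far richer features and statistical models trained on large databases, but the underlying task — mapping measured facial changes to emotion labels — has this shape.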
Companies in this field include Affectiva, based in Waltham, Massachusetts, and Emotient, based in San Diego, California.
Affectiva used Web cams over two and a half years to accumulate and classify about 1.5 billion emotional reactions from people who gave permission to be recorded as they watched streaming video, company cofounder and chief science officer Rana el-Kaliouby said. These recordings served as a database to create the company’s face-reading software, which it will offer to mobile software developers starting in the middle of next month.
The company strongly believes that people should give their consent to be filmed, and it will approve and control all of the apps that emerge from its algorithms, el-Kaliouby said.
Face-reading technology may one day be paired with programs that have complementary ways of recognizing emotion, such as software that analyzes people’s voices, technology forecaster Paul Saffo said.
If computers reach the point where they can combine facial coding, voice sensing, gesture tracking and gaze tracking, a less stilted way of interacting with machines will ensue, he said.
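One simple way such signals could be combined is a weighted average of per-modality emotion scores. The sketch below is a hypothetical illustration of that idea; the modalities, weights, and readings are all invented, and real fusion systems would be far more sophisticated.

```python
# Illustrative only: fuse hypothetical emotion scores from four
# modalities (face, voice, gesture, gaze) with a fixed weighted average.
WEIGHTS = {"face": 0.4, "voice": 0.3, "gesture": 0.15, "gaze": 0.15}

def fuse(scores_by_modality):
    """scores_by_modality: {modality: {emotion: score in 0..1}}.
    Returns a fused {emotion: score} dictionary using the fixed weights."""
    fused = {}
    for modality, scores in scores_by_modality.items():
        weight = WEIGHTS[modality]
        for emotion, score in scores.items():
            fused[emotion] = fused.get(emotion, 0.0) + weight * score
    return fused

readings = {
    "face":    {"bored": 0.7, "engaged": 0.3},
    "voice":   {"bored": 0.6, "engaged": 0.4},
    "gesture": {"bored": 0.5, "engaged": 0.5},
    "gaze":    {"bored": 0.8, "engaged": 0.2},
}
fused = fuse(readings)
print(max(fused, key=fused.get))  # the modalities agree: "bored"
```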
For some, this type of technology raises an Orwellian specter — and Affectiva is aware that its face-reading software could stir privacy concerns. However, el-Kaliouby said that none of the coming apps using its software will be able to record video of people’s faces.
“The software uses its algorithms to read your expressions, but it doesn’t store the frames,” she said.
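That analyze-then-discard pattern can be sketched in a few lines. The code below is a minimal illustration of the general idea, not Affectiva's actual implementation: each frame is scored by a stand-in classifier and then dropped, so only aggregate tallies survive.

```python
# Illustrative only: score each video frame, keep running per-emotion
# tallies, and never retain the frames themselves.
def detect_emotion(frame):
    # Stand-in classifier; a real system would analyze facial muscles here.
    return frame["label"]

def analyze_stream(frames):
    tallies = {}
    for frame in frames:
        emotion = detect_emotion(frame)
        tallies[emotion] = tallies.get(emotion, 0) + 1
        # The frame is not stored anywhere; only the counts persist.
    return tallies

stream = [{"label": "happiness"}, {"label": "happiness"}, {"label": "sadness"}]
print(analyze_stream(stream))  # {'happiness': 2, 'sadness': 1}
```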
So far, the company’s algorithms have been used mainly to monitor people’s expressions as a way to test ads, movie trailers and television shows in advance. (It is much cheaper to use a program to analyze faces than to hire people who have been trained in face-reading.)
Affectiva’s clients include Unilever, Mars and Coca-Cola. The advertising research agency Millward Brown says it has used Affectiva’s technology to test about 3,000 ads for clients.
Face-reading software is unlikely to infer precise emotions 100 percent of the time, said Tadas Baltrusaitis, a doctoral candidate at the University of Cambridge who has written papers on the automatic analysis of facial expressions.