HUMAN BEINGS ARE champions at spotting patterns, especially faces, in inanimate objects—think of the famous "face on Mars" in images taken by the Viking 1 orbiter in 1976, which was essentially a trick of light and shadow. And people are always spotting what they believe to be the face of Jesus in burnt toast and many other (so many) ordinary foodstuffs. There was even a (now defunct) Twitter account devoted to curating images of the "faces in things" phenomenon.
The phenomenon's fancy name is facial pareidolia. Scientists at the University of Sydney have found that not only do we see faces in everyday objects, but our brains also process those objects for emotional expression much as they do real faces, rather than discarding them as false detections. This shared mechanism perhaps evolved out of the need to quickly judge whether a person is a friend or foe. The Sydney team described its work in a recent paper published in the journal Proceedings of the Royal Society B.
Lead author David Alais, of the University of Sydney, told The Guardian: “We are such a sophisticated social species, and face recognition is very important … You need to recognize who it is, is it family, is it a friend or foe, what are their intentions and emotions? Faces are detected incredibly fast. The brain seems to do this using a kind of template-matching procedure. So if it sees an object that appears to have two eyes above a nose above a mouth, then it goes, 'Oh I'm seeing a face.' It’s a bit fast and loose, and sometimes it makes mistakes, so something that resembles a face will often trigger this template match.”
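To make the template-match idea concrete, here is a toy sketch of a geometric face template: it accepts any four points arranged as two eyes roughly level with each other, above a nose, above a mouth. The function, coordinates, and tolerance are hypothetical illustrations of "fast and loose" matching, not a model from the research.

```python
# A toy geometric "face template": two eyes roughly level, above a nose,
# above a mouth. Purely illustrative; the thresholds are made up, and this
# is an analogy for fast, loose matching, not a model from the paper.

def looks_like_face(left_eye, right_eye, nose, mouth, tol=0.25):
    """Each argument is an (x, y) point, with y increasing downward.
    Returns True if the four points fit the crude face layout."""
    eye_gap = abs(left_eye[0] - right_eye[0])
    if eye_gap == 0:
        return False
    # Eyes roughly level with each other
    if abs(left_eye[1] - right_eye[1]) > tol * eye_gap:
        return False
    eye_y = (left_eye[1] + right_eye[1]) / 2
    eye_x = (left_eye[0] + right_eye[0]) / 2
    # Nose below the eyes, mouth below the nose
    if not (eye_y < nose[1] < mouth[1]):
        return False
    # Nose and mouth roughly centered between the eyes
    return (abs(nose[0] - eye_x) < tol * eye_gap
            and abs(mouth[0] - eye_x) < tol * eye_gap)

# Two dots and two smudges on a bell pepper can pass the template:
print(looks_like_face((0, 0), (2, 0.1), (1, 1), (1, 2)))  # True
```

The point of the sketch is how permissive such a template is: it never asks whether the "eyes" are actually eyes, which is exactly why a towel dispenser can trigger the match.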
Alais has been interested in this and related topics for years. For instance, in a 2016 paper published in Scientific Reports, he and several colleagues built on prior research involving rapid sequences of faces, which demonstrated that perception of face identity, as well as attractiveness, is biased toward recently seen faces. Alais et al. designed a binary task mimicking the selection interface of online dating websites and apps (like Tinder), in which users swipe left or right depending on whether they deem the profile pictures of potential partners attractive or unattractive. The team found that many stimulus attributes—including orientation, facial expression, attractiveness, and perceived slimness—are systematically biased toward recent experience.
This was followed by a 2019 paper in the Journal of Vision, which extended that experimental approach to our appreciation of art. Alais and his coauthors found that we don't assess each painting we view in a museum or gallery on its own merits. Instead, we're prone to a “contrast effect,” and our appreciation of art shows the same serial-dependence bias. We judge paintings as more appealing if we view them after seeing an attractive painting, and we rate them as less attractive if the prior painting was less aesthetically appealing.
The next step was to examine the specific brain mechanisms behind how we "read" social information from the faces of other people. The phenomenon of facial pareidolia struck Alais as being related. "A striking feature of these objects is that they not only look like faces, they can even convey a sense of personality or social meaning," he said, such as a sliced bell pepper that seems to be scowling or a towel dispenser that seems to be smiling.
Facial perception involves more than just the features common to all human faces, like the placement of the mouth, nose, and eyes. Our brains might be evolutionarily attuned to those universal patterns, but reading social information requires being able to determine whether someone is happy, angry, or sad, or whether they are paying attention to us. Alais' group designed a sensory adaptation experiment and determined that we do indeed process facial pareidolia in much the same way as we do real faces, according to a paper published last year in the journal Psychological Science.
This latest study admittedly has a small sample size: 17 university students, all of whom completed practice trials with eight real faces and eight pareidolia images prior to the experiments. (The practice data were not recorded.) The actual experiments used 40 real faces and 40 pareidolia images, selected to cover expressions ranging from angry to happy and sorted into four categories: high angry, low angry, low happy, and high happy. During the experiments, subjects were briefly shown each image and were then asked to rate its emotional expression on an angry-to-happy scale.
The first experiment was designed to test for serial effects. Subjects completed a sequence of 320 trials, with each of the images shown eight times in randomized order. Half of the subjects rated the real faces first and the pareidolia images second; the other half did the opposite. The second experiment was similar, except that real faces and pareidolia images were randomly intermixed within the trial sequence. Each participant rated a given image eight times, and those results were averaged into a mean estimate of the image's expression.
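For readers curious what testing for serial effects can look like in practice, here is a minimal sketch of the general analysis idea: check whether each trial's rating error leans toward the expression shown on the previous trial. The simulated data, the 0.15 "pull" parameter, and the simple regression are illustrative assumptions for this sketch, not the authors' actual code, dataset, or statistical method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stimuli: true expression values on an angry (-1) to happy (+1) axis.
n_trials = 320
true_expr = rng.uniform(-1, 1, n_trials)

# Simulated ratings: they mostly track the current stimulus but are pulled
# slightly toward the PREVIOUS stimulus (an assimilative serial dependence),
# plus response noise. The pull strength is an assumption for illustration.
pull = 0.15
ratings = true_expr.copy()
ratings[1:] += pull * (true_expr[:-1] - true_expr[1:]) \
    + rng.normal(0, 0.1, n_trials - 1)

# Analysis: does the rating error on trial t grow with how the previous
# stimulus differed from the current one? A positive slope means ratings
# are biased toward the preceding expression.
error = ratings[1:] - true_expr[1:]
prev_diff = true_expr[:-1] - true_expr[1:]
slope = np.polyfit(prev_diff, error, 1)[0]
print(f"serial-dependence slope: {slope:.3f}")  # ~0.15 recovers the pull
```

A slope near zero would mean each image was judged on its own; the positive slope is the signature of the bias the experiments were designed to detect.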
“What we found was that actually these pareidolia images are processed by the same mechanism that would normally process emotion in a real face,” Alais told The Guardian. “You are somehow unable to totally turn off that face response and emotion response and see it as an object. It remains simultaneously an object and a face.”
Specifically, the results showed that subjects could reliably rate the pareidolia images for facial expression. The subjects also showed the same serial-dependence bias as Tinder users or art gallery patrons: a happy or angry illusory face in an object is perceived as more similar in expression to the face that preceded it. And when real faces and pareidolia images were mixed, as in the second experiment, that serial dependence was more pronounced when subjects viewed the pareidolia images before the human faces. Alais et al. concluded that this is indicative of a shared underlying mechanism between the two; "expression processing is not tightly bound to human facial features," they wrote.
"This 'cross-over' condition is important, as it shows that the same underlying facial expression process is involved, regardless of image type," said Alais. "This means that seeing faces in clouds is more than a child's fantasy. When objects look compellingly facelike, it is more than an interpretation: They really are driving your brain's face-detection network. And that scowl or smile—that's your brain's facial expression system at work. For the brain, fake or real, faces are all processed the same way."
This story was originally published on Ars Technica.