Masks, which became part of everyone's daily life during the COVID-19 period, brought many limitations and regulations with them, yet facial recognition is not among those limitations.
Even when our acquaintances have their masks on, we can usually still recognize them. How the human brain achieves this recognition, with or without a mask, is a question asked not only in neuroscience but also by researchers in computer science, mathematics, engineering, software, and artificial intelligence. For instance, the facial recognition performance of AI-based cameras declined during the COVID-19 period because of masks. Nevertheless, scientists working in machine learning are continuing their efforts to solve this problem.
This past April, scientists uploaded 1,200 Instagram selfies to a large dataset hosted on GitHub, tagging them to distinguish the masked faces. Researchers at Wuhan University in China also contributed to solving the cameras' facial recognition problem, in the interest of public safety, by adding 525 tagged pictures of masked faces and 90,000 pictures of unmasked faces to the same dataset.
But how do the brain and artificial intelligence recognize faces, and are there any similarities between them?
A group of scientists from neurosurgery, neurobiology, mathematics, and computer science at the Weizmann Institute of Science (Grossmann et al., 2019) published findings that address this question. The study found similarities in the way human faces are encoded by the brain and by artificial intelligence.
When we look at a human face, a group of neurons in the visual regions of the brain that respond only to faces is activated and fires signals. However, how these neurons work together to perceive, recognize, and distinguish faces has not yet been fully explained, although there are various views on the matter.
Thanks to the deep convolutional neural network, recently developed in the field of artificial intelligence, AI performance has grown to rival human performance on certain tasks that require complex mental processes, such as facial recognition. This development has also made itself felt in neuroscience, a field that artificial intelligence has interacted with for quite a long time: the neural networks developed were used to model the visual processes of the brain. The idea underlying this practice is that when two systems perform the same process well, the structures that make up those systems may show similar characteristics, and those characteristics may play a significant part in performing that process.
The study conducted by Grossmann and his team follows the same idea. It investigated whether the two systems, one biological and one artificial, would show any similarities while solving a mental task such as facial recognition. Participants were shown a series of face and non-face pictures, including celebrities they might recognize, and their brain activity was recorded with intracranial EEG (iEEG). These recordings showed that each face produced a different activation pattern in the brain's neural networks and that each group of neurons was activated at a different intensity. Moreover, while the neural activity evoked by some faces showed similar patterns, the patterns evoked by other faces differed from one another. The researchers asked whether these neural activity patterns play a significant role in our ability to recognize faces.
The other phase of the study measured the facial recognition performance of deep neural networks, which we could call artificial intelligence, against the facial recognition capacity of humans. These networks are partly similar to the human visual system and share features with its neurons, which are organized in multiple layers. To recognize a human face, the artificial neurons in each layer must select and combine specific facial features. Like the brain's visual system, the network encodes the face from the most basic attributes (lines, simple shapes, etc.) through complex attributes (eyes, eyebrows, forehead line, etc.) to characteristic attributes. The same images shown to the participants were also shown to the artificial neural network, to check whether the neural pattern that appeared during facial recognition in the human brain would also occur in artificial neural networks.
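The layered encoding described above, from simple lines to combinations of them, can be sketched in a toy form. This is a minimal illustration under made-up assumptions, not the network used in the study: two hand-made edge filters stand in for a first layer of simple attributes, and a unit that fires only where both edges meet stands in for a more complex second-layer attribute.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of an image with a small kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

relu = lambda x: np.maximum(x, 0)

# Layer 1: very simple attributes -- two oriented edge detectors.
vertical   = np.array([[1., -1.], [1., -1.]])   # bright-to-dark, left to right
horizontal = np.array([[1., 1.], [-1., -1.]])   # bright-to-dark, top to bottom

# A toy 8x8 "image": a bright square on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0

edges_v = relu(conv2d(img, vertical))    # responds along the right edge
edges_h = relu(conv2d(img, horizontal))  # responds along the bottom edge

# Layer 2: a more complex attribute -- a "corner" unit that fires only
# where a vertical AND a horizontal edge coincide, combining layer-1 outputs.
corner = np.minimum(edges_v, edges_h)

print(np.count_nonzero(corner))  # → 1: only the bottom-right corner fires
```

A real deep convolutional network learns thousands of such filters from data instead of using hand-made ones, but the principle is the same: each layer builds more complex attributes out of the previous layer's simpler ones.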
At the end of the study, the face-specific activation patterns in the artificial network were examined to see whether they showed the same variability and structure as those of the human brain.
The findings revealed specific parallels between the biological and artificial systems, and these parallels were located in the middle layers of the artificial neural network, which encode concrete attributes, such as the pictorial traits of a face, rather than abstract attributes such as the person's identity.
Grossmann, one of the coordinators of the study, stated that the results supported their hypotheses: different faces create distinct neural activity patterns, and the correlations between these patterns play a significant part in how the brain recognizes faces. He emphasized that the study could contribute to our understanding of the brain's facial perception and help improve artificial neural networks.
– Facial recognition adapts to a mask-wearing public
– Artificial networks shed light on human face recognition
– Convergent evolution of face spaces across human face-selective neuronal groups and deep convolutional networks