WASHINGTON: Researchers have developed an artificial intelligence system that can decode what a person is seeing by analyzing scans of their brain.
The development could help efforts to improve artificial intelligence (AI) and lead to new insights into how the brain works. Central to the study is a convolutional neural network, a type of algorithm that has been instrumental in enabling computers and smartphones to recognize faces and other objects.
"That type of network has made an enormous impact in the field of computer vision in recent years," said Zhongming Liu, an assistant professor at Purdue University in the United States. "Our technique uses the neural network to understand what you are seeing," Liu said.
Convolutional neural networks, a form of "deep learning" algorithm, have previously been used to study how the brain processes static images and other visual stimuli.
"This is the first time such an approach has been used to see how the brain processes movies of natural scenes. This is a step toward decoding the brain while people are trying to make sense of complex and dynamic visual surroundings," said Haiguang Wen, a doctoral student at Purdue University. The researchers acquired 11.5 hours of functional magnetic resonance imaging (fMRI) data from each of three women as they watched 972 video clips, ranging from people or animals in action to nature scenes.
The data were used to train the system to predict the activity in the participants' visual cortex while they watched the videos.
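The training step described above is often called an "encoding model": image features (for instance, activations from a convolutional network) are mapped to measured voxel responses. A minimal sketch of that idea, using synthetic data and ridge regression in place of the study's actual pipeline (all names, shapes, and parameters here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: CNN features per movie time point, and fMRI voxel
# responses generated from an unknown feature-to-voxel mapping plus noise.
n_frames, n_features, n_voxels = 200, 50, 30
features = rng.normal(size=(n_frames, n_features))
weights_true = rng.normal(size=(n_features, n_voxels))
voxels = features @ weights_true + 0.1 * rng.normal(size=(n_frames, n_voxels))

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X^T X + alpha*I)^-1 X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

# "Train the system to predict visual-cortex activity" from the features.
W = ridge_fit(features, voxels)
predicted = features @ W

# A common accuracy measure: per-voxel correlation between measured and
# predicted responses.
corr = [np.corrcoef(voxels[:, v], predicted[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean prediction correlation: {np.mean(corr):.2f}")
```

With a fitted encoding model like this, each voxel's weight vector indicates which visual features that brain location responds to, which is what makes the mapping interpretable.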
The model was then used to decode fMRI data from the participants and reconstruct the videos, including clips the model had never seen before.
The model accurately decoded the fMRI data into specific image categories. The actual video images were then presented alongside the computer's interpretation of what the person's brain had seen, based on the fMRI data.
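The decoding direction works in reverse: given a pattern of brain activity, infer which category of image produced it. A toy sketch of that idea with synthetic voxel patterns and a nearest-centroid classifier (the categories, voxel counts, and classifier choice are assumptions for illustration, not the study's method):

```python
import numpy as np

rng = np.random.default_rng(1)
categories = ["face", "animal", "scene"]
n_voxels = 40

# Assume each image category evokes a characteristic voxel pattern;
# individual trials are noisy samples around that pattern.
prototypes = {c: rng.normal(size=n_voxels) for c in categories}

def make_trials(category, n=20, noise=0.5):
    """Simulate n noisy fMRI activity patterns for one category."""
    return prototypes[category] + noise * rng.normal(size=(n, n_voxels))

# "Training": average the trials for each category into a centroid.
centroids = {c: make_trials(c).mean(axis=0) for c in categories}

def decode(pattern):
    """Classify a new activity pattern by its nearest category centroid."""
    return min(categories, key=lambda c: np.linalg.norm(pattern - centroids[c]))

new_trial = make_trials("animal", n=1)[0]
print(decode(new_trial))
```

Real decoders are far more elaborate, but the principle is the same: learn the mapping from stimuli to brain responses, then invert it to read the stimulus category back out of new scans.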
"I think what is unique about this work is that we are decoding the data nearly in real time, as the participants are watching the video. We scan the brain every two seconds, and the model reconstructs the visual experience as it unfolds," said Wen, the first author of the study published in the journal Cerebral Cortex. The researchers were also able to determine how particular regions of the brain were associated with specific pieces of information a person was seeing.
According to Wen, "with our technique, you can visualize the specific information represented by any brain location, and screen through all of the locations in the brain's visual cortex."
"By doing that, you can see how the brain divides a visual scene into pieces, and re-assembles the pieces into a full understanding of the visual scene," he said.
Published: 10 Aug 2022, 10:52 AM (IST)