An Artificial Intelligence is capable of reconstructing human thoughts

An AI has been able to replicate, from brain waves, the images a person is seeing. It decodes your thoughts and reconstructs them outside the brain in real time.

Russian researchers have developed a brain-computer interface that can draw what a person is looking at by decoding their brain waves with the help of Artificial Intelligence.

The interface is based on artificial neural networks and electroencephalography or EEG, a technique that records brain waves through electrodes placed non-invasively on the scalp.
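To make the idea concrete, here is a minimal sketch of the kind of preprocessing scalp EEG recordings typically receive before being fed to a model: a band-pass filter applied to the raw signals. The channel count, sampling rate and frequency band are illustrative assumptions, not details taken from the study.

# Minimal sketch of a typical EEG preprocessing step (not the authors' exact pipeline):
# band-pass filter raw scalp recordings before feeding them to a model.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_eeg(raw, fs=250.0, low=1.0, high=40.0, order=4):
    """Band-pass filter an EEG array of shape (channels, samples)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, raw, axis=-1)

# Example: 8 electrodes, 10 seconds at an assumed 250 Hz sampling rate.
raw = np.random.randn(8, 2500)
clean = bandpass_eeg(raw)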

By analyzing brain activity, the Artificial Intelligence system can reconstruct, in real time, the images seen by a person undergoing EEG.

It is quite a technological feat, because until now it was thought that the signals captured by EEG were insufficient to obtain accurate information about what a person is really seeing or thinking.

But this technology, developed at the Moscow Institute of Physics and Technology (MIPT) together with the Russian company Neurobotics, has confirmed that it is possible to reconstruct in real time an image being observed by a person.

Methodology

The scientists verified this by placing an array of electrodes on the scalps of the participants in the experiment in order to record their brain waves.

They then asked the volunteers to watch 10-second video clips for 20 minutes.

The team selected five arbitrary video categories: abstract shapes, waterfalls, human faces, motion mechanisms and motorsports.

When analyzing the EEG data, the researchers observed that the brain wave patterns differ for each category of video. This allowed the team to identify the brain's reaction to the different videos.
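As an illustration of what "different brain wave patterns per category" can mean in practice, the sketch below classifies toy EEG epochs into video categories from simple per-channel power features. The feature choice and the classifier are assumptions made only for demonstration; the study's own analysis is not specified here.

# Hedged sketch: telling video categories apart from EEG features.
# Feature extraction (log band power per channel) and classifier are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def band_power_features(epochs):
    """epochs: (n_trials, n_channels, n_samples) -> mean log power per channel."""
    return np.log(np.mean(epochs ** 2, axis=-1) + 1e-12)

# Fake data standing in for filtered EEG epochs and their video-category labels.
epochs = np.random.randn(100, 8, 2500)
labels = np.random.randint(0, 5, size=100)   # five video categories

X = band_power_features(epochs)
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, labels, cv=5).mean())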

Neural networks

In the second phase of the experiment, the researchers focused on only three of the five video categories and trained two neural networks for two specific tasks.

The first neural network had to generate random images from the "noise" of the images in the selected videos; this "noise" refers to the graininess that appears in images when they lose sharpness.
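A hedged sketch of what such a generator can look like: a small network that maps a low-dimensional "noise" vector to an image. The architecture and layer sizes are illustrative assumptions, not the authors' model.

# Hedged sketch (not the authors' model): an image generator mapping a latent "noise" vector to a picture.
import torch
import torch.nn as nn

class ImageGenerator(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8),
            nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # upsample to 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),    # upsample to 32x32 RGB
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

z = torch.randn(4, 64)          # four random latent vectors
images = ImageGenerator()(z)    # -> (4, 3, 32, 32)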

The second neural network did not work with the videos but with the EEG recordings obtained in the previous phase. Its function was to generate, from the electroencephalography data, a "noise" similar to that of the images in the videos.
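Below is a comparable sketch of the second network's role: mapping an EEG epoch into the same latent "noise" space the generator consumes. Again, the dimensions and layers are assumptions chosen only to illustrate the idea.

# Hedged sketch (not the authors' model): mapping EEG epochs to the generator's latent space.
import torch
import torch.nn as nn

class EEGToLatent(nn.Module):
    def __init__(self, eeg_dim=8 * 2500, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(eeg_dim, 256),
            nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, eeg):
        return self.net(eeg)

eeg = torch.randn(4, 8, 2500)   # four EEG epochs (channels x samples)
latent = EEGToLatent()(eeg)     # -> (4, 64)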

The experiment led to the decisive phase: both networks were trained to work together and convert EEG signals into real images, similar to those that the volunteers had observed in the first phase and that had been processed by the first neural network.
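Put together, the pipeline described above amounts to chaining the two models: EEG in, a latent vector in the middle, an image out. The toy stand-ins below are collapsed into single sequential modules purely to keep the sketch self-contained; they are placeholders, not the published system.

# Hedged, self-contained sketch of the combined pipeline: EEG -> latent vector -> image.
import torch
import torch.nn as nn

latent_dim = 64
eeg_encoder = nn.Sequential(nn.Flatten(), nn.Linear(8 * 2500, latent_dim))
image_generator = nn.Sequential(
    nn.Linear(latent_dim, 3 * 32 * 32), nn.Sigmoid(), nn.Unflatten(1, (3, 32, 32))
)

with torch.no_grad():
    eeg = torch.randn(1, 8, 2500)                  # one EEG epoch
    image = image_generator(eeg_encoder(eeg))      # -> (1, 3, 32, 32)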

Three steps

In other words, what this research has achieved is, first, to scan the brain's reactions to certain images. Second, it has generated blurry images from what the videos showed.

Third, it has generated blurry images from the EEG recordings. And finally, it has combined both results to replicate in real time what the volunteers were seeing, thanks to Artificial Intelligence.

As a result, the neural networks drew, in real time, surprisingly accurate (though blurry) images of what a person was looking at, solely from their EEG data.

This research will make it possible to design brain-computer interfaces that overcome the limitations of current ones, which require brain implants that oxidize and fail within a few months.

A long way to go

The new technique is non-invasive and will be more useful for the treatment of cognitive disorders, as well as for post-stroke rehabilitation, the researchers point out in a press release.

This invention builds on and improves previous ones that approached the reading of human thoughts and their replication by other methods.

For example, using fMRI (changes in blood flow in the brain), albeit less precisely, Dutch scientists have shown that an AI can also read minds.

Likewise, as we reported in another article, MIT engineers have created a wearable device capable of listening to human thoughts, picked up from the neuromuscular signals that underlie spoken language, and transcribing them on a screen.

Reference

Natural image reconstruction from brain waves: a novel visual BCI system with native feedback. Grigory Rashkov, Anatoly Bobe, Dmitry Fastovets, Maria Komarova. bioRxiv. DOI: https://doi.org/10.1101/787101
