“Mind-reading” computers could be routinely used to enhance the human brain in as little as five years, Sky News has been told. The technology, which monitors brain activity, could help people analyse complex information and make critical decisions. Radiologists scanning X-rays for cancer, City traders making multi-million pound transactions and police searching a crowd for a suspect’s face could be the first to benefit. But Dr Davide Valeriani, a senior research officer at the University of Essex, said any task requiring prolonged concentration could be made safer and more accurate with the help of computers.
Brain-computer interfaces (BCIs) are increasingly reliable pieces of technology, changing the lives of patients, particularly those who suffer from paralysis or similar conditions. A BCI is computer technology that interacts with neural structures by decoding information from thoughts (i.e., neuronal activity) and translating it into actions. BCI technology may be used for thought-to-text translation or to control the movements of a prosthetic limb. The umbrella term covers invasive, partially invasive and non-invasive BCIs. Invasive BCIs involve implanting technology within the human body, such as surgically placed electrodes that directly detect electrical potentials. Partially invasive BCIs pair external recorders with superficially implanted devices. An example is electrocorticography (ECoG), which records brain activity via a surgically placed electrode grid; it is considered “partial” because the grid rests directly on the surface of the brain but is not implanted inside the brain tissue itself. Non-invasive BCIs rely on external sensors or electrodes, as with electroencephalography (EEG).
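Real BCI decoding pipelines vary widely, but the core idea of translating neural signals into an executable command can be illustrated with a toy sketch. The example below is purely illustrative and not taken from any system described in this article: it simulates a one-second single-channel EEG window, measures power in the alpha band (8–12 Hz) with a Fourier transform, and maps that to a binary command. The function names, the threshold rule, and the simulated signals are all assumptions made for the sketch.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Mean spectral power of a 1-D sample array in the [low, high] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

def decode_command(eeg_window, fs, threshold):
    """Toy decoder: strong alpha-band (8-12 Hz) power -> 'select', else 'idle'."""
    return "select" if band_power(eeg_window, fs, 8, 12) > threshold else "idle"

# Simulate one second of single-channel EEG at 250 Hz.
fs = 250
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
rest = rng.normal(size=fs)                      # background noise only
focus = rest + 3 * np.sin(2 * np.pi * 10 * t)   # added 10 Hz alpha rhythm

threshold = 2 * band_power(rest, fs, 8, 12)     # calibrate against the rest window
print(decode_command(rest, fs, threshold))      # idle
print(decode_command(focus, fs, threshold))     # select
```

Clinical decoders replace the hand-set threshold with classifiers trained on labelled recordings, but the pipeline shape is the same: acquire a signal window, extract features, map features to an output action.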
For a brain-computer interface (BCI) to be truly useful for a person with tetraplegia, it should be ready whenever it’s needed, with minimal expert intervention, including the very first time it’s used. In a new study in the Journal of Neural Engineering, researchers in the BrainGate collaboration demonstrate new techniques that allowed three participants to achieve peak BCI performance within three minutes of engaging in an easy, one-step process.
One participant, “T5,” a 63-year-old man who had never used a BCI before, needed only 37 seconds of calibration time before he could control a computer cursor to reach targets on a screen, just by imagining using his hand to move a joystick.
Dr. David Brandman, lead author of the study and an engineering postdoctoral researcher at Brown University, said that while further innovations will be needed to move implantable BCIs like BrainGate toward clinical availability, rapid, intuitive calibration is a key advance. It could allow future users and their caregivers to start using the system much more quickly and to keep it calibrated over the long term.
It already seems a little like computers can read our minds; features like Google’s auto-complete, Facebook’s friend suggestions, and the targeted ads that appear while you’re browsing the web sometimes make you wonder, “How did they know?” For better or worse, it seems we’re slowly but surely moving in the direction of computers reading our minds for real, and a new study from researchers in Kyoto, Japan is an unequivocal step in that direction.
A team from Kyoto University used a deep neural network to read and interpret people’s thoughts. Sound crazy? This actually isn’t the first time it’s been done. The difference is that previous methods—and results—were simpler, deconstructing images based on their pixels and basic shapes. The new technique, dubbed “deep image reconstruction,” moves beyond binary pixels, giving researchers the ability to decode images that have multiple layers of color and structure.
In the gleaming facilities of the Wyss Centre for Bio and Neuroengineering in Geneva, a lab technician takes a well plate out of an incubator. Each well contains a tiny piece of brain tissue derived from human stem cells and sitting on top of an array of electrodes. A screen displays what the electrodes are picking up: the characteristic peak-and-trough wave forms of firing neurons.
To see these signals emanating from disembodied tissue is weird. The firing of a neuron is the basic building block of intelligence. Aggregated and combined, such “action potentials” retrieve every memory, guide every movement and marshal every thought. As you read this sentence, neurons are firing all over your brain: to make sense of the shapes of the letters on the page; to turn those shapes into phonemes and those phonemes into words; and to confer meaning on those words.
This symphony of signals is bewilderingly complex. There are as many as 85bn neurons in an adult human brain, and a typical neuron has 10,000 connections to other such cells. The job of mapping these connections is still in its early stages. But as the brain gives up its secrets, remarkable possibilities have opened up: of decoding neural activity and using that code to control external devices.
Researchers, innovators, and entrepreneurs alike are working on developing brain-computer interfaces (BCIs). Among them is Elon Musk, who has been exploring the possibility with Neuralink. Brain-controlled devices in the form of prosthetics have already demonstrated they can do more than turn us into interconnected cyborgs: they can also provide life-changing support for those who have lost the ability to use a limb or other part of their body due to injury or illness.
Prosthetics have been around for a long time, and their design has steadily become less clunky and easier for patients to use. But prosthetics connected directly to the brain wouldn’t just improve mobility and ease of use; they would also drastically improve functionality, perhaps even beyond what is possible with our natural limbs.
When Roman Mazurenko was struck down by a car and killed just before his 33rd birthday, his “soulmate” Eugenia Kuyda memorialised him as a chatbot. She asked his friends and family to share his old text messages and fed them into a neural network built by developers at her artificial intelligence startup, Replika.
“I didn’t expect it to be as impactful. Usually I find showing emotions and thinking about grief really hard so I was mostly trying to avoid it. Talking to Roman’s avatar was facing those demons,” she told the Guardian.
Kuyda discovered that talking to the chatbot allowed her to be more open and honest. She would head home after a party, open the app and tell him things she wouldn’t tell her friends. “Even things I wouldn’t have told him when he was alive,” she said.
The chatbot, documented in great detail by the Verge, might be a crude digital resurrection, but it highlights an emerging interest in the digital afterlife, and how technology such as artificial intelligence and brain-computer interfaces could one day be used to create digital replicas of ourselves or loved ones that could live on after death.
It’s a topic that Black Mirror returns to repeatedly, extrapolating from current technologies into characteristically dystopian scenarios where our brains can be read, uploaded to the cloud and resurrected digitally as avatars or robots.
Magic Leap updated its website on Wednesday morning, revealing its highly anticipated augmented-reality smart glasses for the first time.
Billed as the Magic Leap One Creator Edition, the smart glasses feature an array of sensors on the front, connected via a wire to a battery and computing pack designed to be worn on the belt, matching the details first reported by Business Insider earlier this year. A wireless controller is used as input.
Magic Leap’s glasses will integrate computer graphics into the real world, a technology often called “augmented reality” by other companies. Magic Leap calls its technology “mixed reality.”
Magic Leap calls the glasses Lightwear, the battery pack Lightpack, and the controller Control.
Human beings will become indistinguishable from robots if they allow microchips to be implanted into their brains. That’s according to a claim by Dr. Mikhail Lebedev, a senior neuroscientist at Duke University in Durham, North Carolina. Dr. Lebedev told CNET that improvements in brain implant technology could go awry if human beings start acting like machines.
“It is even possible that ‘humanity’ will evolve into a community of zombies,” he said. “Luckily this is not a problem as of yet.”
Brain implants may sound like something straight out of a science fiction movie, but they aren’t as far-fetched as you might think. Several companies are working on a human brain-computer interface that can make it easier for humans to communicate with computers. One of these companies is backed by none other than Tesla and SpaceX CEO Elon Musk.
A recent article published in BMC Medical Ethics explores the ethical aspects of brain-computer interfaces (BCI): an emerging technology where brain signals are directly translated to outputs with the help of machines. Here, two of the authors of the paper tell us more about the applications of BCI, its portrayal in the media, and some of the key ethical issues it raises.
Brain-computer interfaces (BCIs) are devices that measure signals from the brain and translate them into executable output with the help of a machine, such as a computer or prosthesis. This technology has varied uses, from assistive devices for disabled individuals to advanced video game control.