From outsmarting the self check-out to providing options for patients with communication disorders, merging mind with machine could be the next frontier for humanity. How close are we to controlling machines with our minds?

Read more:

This New Mind-Controlled Robot Arm Works Without a Brain Implant
https://www.sciencealert.com/new-robo…
“A team from Carnegie Mellon University (CMU) is… creating the first noninvasive mind-controlled robot arm that exhibits the kind of smooth, continuous motion previously reserved only for systems involving brain implants – putting us one step closer to a future in which we can all use our minds to control the tech around us.”

China Unveils First Chip Designed Specifically for Mind-Reading
https://futurism.com/the-byte/brain-c…
“The signals transmitted and processed by the brain are submerged in the background noise,” Tianjin University researcher Ming Dong said in a press release. “This BC3 [Brain-Computer Codec Chip] has the ability to discriminate minor neural electrical signals and decode their information efficiently, which can greatly enhance the speed and accuracy of brain-computer interfaces.”

Dilemmas in regulating brain-computer interface devices
https://www.medicaldevice-network.com…
“A major challenge for regulators will be to resolve issues around data and the threat of malicious hacking. Research has found that brain-computer interface devices could be vulnerable to cybercrime; termed ‘neurocrime’ and ‘brain hacking’ by ETH Zurich research fellow Dr Marcello Ienca and Radboud University Nijmegen associate professor Pim Haselager.”
Bionic prostheses have made enormous strides in recent years — and the concept of a mind-controlled robot limb is now very much a reality. In one example, engineers at Johns Hopkins built a successful prototype of such a robot arm that allows users to wiggle each prosthetic finger independently, using nothing but the power of the mind.
Perhaps even more impressively, earlier this year a team of researchers from Italy, Switzerland, and Germany developed a robot prosthesis which can actually feed sensory information back to a user’s brain — essentially restoring the person’s sense of touch in the process.
“We ‘translate’ information recorded by the artificial sensors in the [prosthesis’] hand into stimuli delivered to the nerves,” Silvestro Micera, a professor of Translational Neuroengineering at the Ecole Polytechnique Fédérale de Lausanne School of Engineering, told Digital Trends. “The information is then understood by the brain, which makes the patient feel pressure at different fingers.”
Brain-computer interfaces (BCIs) are seen as a potential means by which severely physically impaired individuals can regain control of their environment, but establishing such an interface is not trivial. A study published May 10 in the open access journal PLOS Biology, by a group of researchers at the École Polytechnique Fédérale de Lausanne in Geneva, Switzerland, suggests that letting humans adapt to machines improves their performance on a brain-computer interface. The study of tetraplegic subjects training to compete in the Cybathlon avatar race indicates that the most dramatic improvements in computer-augmented performance are likely to occur when both human and machine are allowed to learn.
BCIs, which use the electrical activity in the brain to control an object, have seen growing use in people with high spinal cord injuries, for communication (by controlling a keyboard), mobility (by controlling a powered wheelchair), and daily activities (by controlling a mechanical arm or other robotic device).
Typically, the electrical activity is detected at one or more points on the surface of the skull, using non-invasive electroencephalographic electrodes, and fed through a computer program that, over time, improves its responsiveness and accuracy through learning.
As machine learning algorithms have become both faster and more powerful, researchers have largely focused on increasing decoding performance by identifying optimal pattern recognition algorithms. The authors hypothesized that performance could be improved if the operator and the machine both engaged in learning their mutual task.
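The idea of mutual learning can be made concrete with a toy simulation. The sketch below is purely illustrative (it is not the study's actual protocol or model): a simple online decoder updates its weights on mistakes, while a simulated "user" produces progressively more separable signal features as correct feedback reinforces their control. All parameters here are invented for illustration.

```python
# A minimal sketch (not the study's method) of mutual human-machine learning:
# the decoder adapts online while the simulated user's signals sharpen.
import random

random.seed(0)

def simulate_trial(intent, user_skill):
    # One noisy scalar feature per trial; separation between the two
    # intents grows with user_skill (the human learning better control).
    center = 1.0 if intent == 1 else -1.0
    noise = random.gauss(0.0, 2.0 - user_skill)  # less noise as skill grows
    return center * user_skill + noise

def run_session(n_trials=2000):
    w, b = 0.0, 0.0      # decoder weights (online perceptron)
    lr = 0.05            # decoder learning rate
    skill = 0.2          # user starts with weak control
    correct_late = 0
    for t in range(n_trials):
        intent = random.choice([0, 1])
        x = simulate_trial(intent, skill)
        pred = 1 if w * x + b > 0 else 0
        if pred == intent:
            skill = min(1.5, skill + 0.001)  # feedback reinforces the user
        else:
            err = intent - pred              # perceptron update on mistakes
            w += lr * err * x
            b += lr * err
        if t >= n_trials // 2:
            correct_late += (pred == intent)
    return correct_late / (n_trials // 2)

accuracy = run_session()
print(f"second-half accuracy: {accuracy:.2f}")
```

Because both sides adapt, second-half accuracy ends up well above what either the fixed decoder or the unpracticed user could achieve alone, which is the qualitative point the researchers' hypothesis makes.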
Recently, the film industry has been showing interest in emerging technologies, such as Virtual Reality (VR). A milestone in this direction was the special award presented by the Academy of Motion Picture Arts and Sciences Board in 2017 to Carne y Arena, directed by Alejandro G. Iñárritu. Carne y Arena is a VR installation that was said to be opening “new doors of cinematic perception”. This follows on from the work of an increasing number of festivals (like the Berlinale and the Venice Film Festival), filmmakers and researchers who are investigating the potential of using new interactive technologies in cinema.
Among the most recent innovations are new wireless Brain-Computer Interfaces, which are now available on the market as low-cost headsets. They are already used in computer games and the arts, but more recently they have been applied in interactive filmmaking as well. For example, Hollywood studios like Universal and 20th Century Fox have released interactive versions of their films, in which the spectator can control key moments of the plot using a BCI headset.
Gand calls it a neural operating system. “Just like DOS worked with a keyboard, Windows with a mouse, iOS with touch, Nuos is another level of evolution where the human being is now able to communicate and compute using neurological signals,” he says. The system uses artificial intelligence to adapt to a patient. Someone who has just suffered a stroke, for example, would start with a simplified user interface that gradually becomes more advanced. It can be customized for various settings, from an ICU to someone’s home. The interface can allow someone to browse the internet, connect with external systems like robotics, and supports a wide range of other uses.
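The adaptive-interface behavior described above — starting a recovering patient on a simplified UI that gradually becomes more advanced — can be sketched as a small state machine. Everything here (the level names, the promotion threshold, the window size) is a hypothetical illustration, not Nuos's actual design.

```python
# Hypothetical sketch of an adaptive interface that unlocks complexity
# as a user's command accuracy improves. Levels and thresholds invented.
UI_LEVELS = ["yes/no prompts", "menu navigation", "full browsing"]

class AdaptiveInterface:
    def __init__(self):
        self.level = 0
        self.history = []          # recent command successes (True/False)

    def record(self, success, window=20, promote_at=0.8):
        """Log one command attempt; promote after a reliably accurate window."""
        self.history.append(success)
        recent = self.history[-window:]
        if (len(recent) == window
                and sum(recent) / window >= promote_at
                and self.level < len(UI_LEVELS) - 1):
            self.level += 1
            self.history.clear()   # the next promotion must be re-earned

    @property
    def mode(self):
        return UI_LEVELS[self.level]
```

A stroke patient in this sketch would begin at "yes/no prompts" and only see "menu navigation" once they sustain 80% accuracy over twenty commands — mirroring the gradual-advancement idea the article describes.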
Last week, residents of the Morningside of Fullerton senior community were given a preview of the cutting-edge technology. Once perfected, the brain-, eye- and sound-controlled devices could help seniors and people with disabilities by performing regular tasks in their daily lives. In addition to the robot arm, students, with the help of their senior volunteers, demonstrated a facial-recognition program that would allow seniors, particularly those with symptoms of Alzheimer’s and other cognitive diseases, to put names to faces via a mobile phone. There was also a mind-controlled wheelchair, as well as a device that uses a computer-generated voice to communicate with home-assist systems such as Alexa and Google Home, performing tasks such as turning electronics on and off.
The ability to control the physical world with your mind using a brain-computer interface or a mind machine has traditionally been focused on health care, and more recently the gaming industry. Now, thanks to cutting-edge technology pioneered by Altran, these applications are set to transform the way man and machine communicate on the factory floor.
Hey you! Ever wish your technology was more invasive? You love voice-to-text, but it’s just too public?
Some researchers at MIT Media Lab have come up with the perfect gadget for you. And it looks like a Bane mask crossed with a squid. Or, if you prefer: like a horror movie monster slowly encompassing your jaw before crawling into your mouth.
The researchers presented their work at the International Conference on Intelligent User Interfaces (yes, such a thing exists) in March in Tokyo.
Whenever you think of words, they’re silently, imperceptibly transmitted to your mouth. More specifically, signals arrive at the muscles that control your mouth. And those signals aren’t imperceptible to a highly sensitive computer.
The researchers call this device the AlterEgo. It’s got seven electrodes positioned around the mouth to pick up these signals. The data that the electrodes pick up goes through several rounds of processing before being transmitted wirelessly to a device awaiting instruction nearby. Oh, and it’s got bone-conduction headphones so that devices can respond.
AlterEgo is a wearable system that allows a user to silently converse with a computing device without any voice or discernible movements — thereby enabling the user to communicate with devices, AI assistants, applications, or other people in a silent, concealed, and seamless manner. A human user could transmit queries, simply by vocalizing internally (subtle internal movements) and receive aural output through bone conduction without obstructing the user’s physical senses and without invading a user’s privacy. AlterEgo aims to combine humans and computers—such that computing, the internet, and AI would weave into human personality as a “second self” and augment human cognition and abilities.
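The pipeline described — seven electrode signals, several rounds of processing, then a decision sent onward — can be sketched in miniature. The feature choice (per-channel RMS) and the nearest-centroid classifier below are common surface-signal techniques used here purely as illustrative assumptions, not MIT's actual implementation.

```python
# Toy sketch of a silent-speech pipeline in the spirit of AlterEgo:
# multi-electrode windows -> per-channel features -> word classifier.
# Feature and classifier choices are illustrative assumptions.
import math

N_ELECTRODES = 7  # AlterEgo positions seven electrodes around the mouth

def rms_features(window):
    # window: list of N_ELECTRODES channels, each a list of raw samples.
    # Root-mean-square per channel is a standard surface-signal feature.
    return [math.sqrt(sum(s * s for s in ch) / len(ch)) for ch in window]

class NearestCentroid:
    """Classify a feature vector by its closest per-word centroid."""
    def __init__(self):
        self.centroids = {}

    def fit(self, examples):
        # examples: {word: [feature_vector, ...]}
        for word, vecs in examples.items():
            n = len(vecs)
            self.centroids[word] = [sum(v[i] for v in vecs) / n
                                    for i in range(len(vecs[0]))]

    def predict(self, vec):
        def dist(c):
            return sum((a - b) ** 2 for a, b in zip(vec, c))
        return min(self.centroids, key=lambda w: dist(self.centroids[w]))

# Demo with synthetic windows: "yes" articulations are higher-amplitude.
train = {
    "yes": [[1.0 + 0.1 * i] * N_ELECTRODES for i in range(3)],
    "no":  [[0.2 + 0.1 * i] * N_ELECTRODES for i in range(3)],
}
clf = NearestCentroid()
clf.fit(train)
window = [[0.9, -0.9] * 25 for _ in range(N_ELECTRODES)]  # 50 samples/channel
print(clf.predict(rms_features(window)))  # prints "yes"
```

In the real device the decoded result would then be sent wirelessly to the target device, with any response returned through the bone-conduction headphones.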
Every so often a news article appears that shows a disabled person directing movement of a computer cursor or a prosthetic hand with thought alone. But why would anyone choose to have a hole drilled through his or her skull to embed a computer chip in the brain unless warranted by a severe medical condition?
A more practical solution may now be here that lets you hook up your brain to the outside world. CTRL–Labs, a start-up launched by the creator of Microsoft Internet Explorer, Thomas Reardon, and his partners, has demonstrated a novel approach for a brain-computer interface (BCI) that ties an apparatus strapped to your wrist to signals in the nervous system.
Physiologically, Reardon observes, all transfer of information among humans is carried out via fine motor control. Motor control of the tongue and throat gives us speech. Facial expression and body posture convey emotion and intention. Writing takes place by controlling fingers that scrape chalk on a blackboard, stroke paint, manipulate pen or pencil, or punch keys. If everything the brain does to interact with the world involves muscles, why not use the motor system to more directly interface mind and machine?
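One way to picture using the motor system as an interface is continuous control: two muscle-signal channels at the wrist, smoothed into envelopes, steering a cursor. The sketch below is an assumption-laden illustration of that general idea — the filtering constants, gain, and dead zone are invented, and this is not CTRL-Labs' actual method.

```python
# Minimal sketch (assumptions, not CTRL-Labs' method) of turning two
# antagonist wrist-muscle signal envelopes into a 1-D control signal:
# flexor activity drives the cursor one way, extensor the other.
def envelope(samples, alpha=0.1):
    """Rectify and low-pass filter a raw signal into a smooth envelope."""
    level = 0.0
    for s in samples:
        level = (1 - alpha) * level + alpha * abs(s)
    return level

def cursor_velocity(flexor, extensor, gain=5.0, deadzone=0.05):
    """Differential control: net drive = flexor minus extensor envelope."""
    drive = envelope(flexor) - envelope(extensor)
    if abs(drive) < deadzone:      # ignore resting-level noise
        return 0.0
    return gain * drive
```

The dead zone keeps the cursor still while the muscles are at rest; the differential design means co-contraction of both muscles cancels out, so only an intentional imbalance moves the cursor.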