"The Defense Advanced Research Projects Agency (DARPA) is an agencyof the U.S. Department of Defenseresponsible for the development of emerging technologies for use by the military."
Last summer we reported on a new project called the Neural Engineering System Design (NESD), brought to you by the acronym-happy spooks at DARPA. The project's goal is to create an implantable, wireless, wideband brain-computer interface capable of reading from neurons as well as “writing” to them by sending signals the neurons accept. The device is called the Neural Input-Output Bus (NIOB). Now DARPA has picked six dream-team research groups that will split $65 million in funding to develop the NIOB, each by way of its own approach.
The NESD program aims to develop advanced neural devices that offer improved fidelity, resolution, and precision sensory interfaces for therapeutic applications, said Phillip Alvelda, the founding NESD Program Manager. “By increasing the capacity of advanced neural interfaces to engage more than one million neurons in parallel, NESD aims to enable rich two-way communication with the brain at a scale that will help deepen our understanding of that organ’s underlying biology, complexity, and function,” he said in a statement.
The NESD group includes a team each from Brown, Columbia, and UC Berkeley, as well as Silicon Valley startup Paradromics, a research presence from the Fondation Voir et Entendre, and a team from the John B. Pierce Laboratory.
The NIOB device will act as a “cortical modem” capable of recording and stimulating brain activity with an effective data rate of over 1 Gbps. The different research groups are pursuing different interfaces, including tissue-thin flexible circuits, wireless “neurograins” the size of a grain of sand, holographic microscopes capable of monitoring thousands of neurons at once, and even a net of LEDs covering the cortex. But they’ll all be capable of doing sensory I/O.
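For a sense of scale, here is a rough back-of-envelope sketch of the raw data a million-neuron interface could generate. The sampling rate and bit depth below are assumptions for illustration, not numbers from DARPA's spec:

```python
# Rough back-of-envelope estimate of raw neural data throughput.
# All parameters here are illustrative assumptions, not DARPA's spec.
neurons = 1_000_000        # NESD target: ~1 million neurons read in parallel
sampling_rate_hz = 1_000   # assumed: ~1 kHz sampling per neuron
bits_per_sample = 10       # assumed: ADC resolution per sample

raw_bps = neurons * sampling_rate_hz * bits_per_sample
print(f"Raw stream: {raw_bps / 1e9:.0f} Gbps")  # -> 10 Gbps
```

Even with modest assumptions the raw stream overshoots the quoted 1 Gbps figure, which hints at why the program calls for breakthroughs in low-power electronics and photonics (see the list of disciplines below) just to move that data around.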
Paradromics, for its part, intends to build a device that can function as a speech prosthetic. “Together with our public and private partners we will be providing the NIOB to patients with ALS who have lost the ability to speak, allowing them to communicate fluently through the aid of the implant,” the company said in a statement.
The Paradromics device will record signals from the superior temporal gyrus, a region of the brain that decodes speech by parsing the audio stream into phonemes. The device design is a brushlike implant made of bundled nanowires, reminiscent of fiber-optic cables, where each fiber in the brush would (ideally) interact with a single neuron. The end of each fiber is finely shaped and polished, and the bundle is also carefully shaped to nudge neurons apart without doing too much damage.
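To make the decoding step concrete, here is a deliberately toy sketch of the problem's shape: many parallel channels of firing rates in, a stream of phoneme labels out. This is not Paradromics' actual pipeline; the channel count, phoneme set, and "templates" are hypothetical stand-ins:

```python
import numpy as np

# Toy illustration only: NOT Paradromics' decoder. It sketches the general
# shape of the task: per-fiber firing rates in, phoneme labels out.
PHONEMES = ["AA", "IY", "S", "T"]      # tiny example inventory (assumed)
CHANNELS = 64                          # assumed number of recording fibers
rng = np.random.default_rng(0)
templates = rng.normal(size=(len(PHONEMES), CHANNELS))  # stand-in for learned means

def decode_window(firing_rates: np.ndarray) -> str:
    """Nearest-template classification of one CHANNELS-wide feature window."""
    dists = np.linalg.norm(templates - firing_rates, axis=1)
    return PHONEMES[int(np.argmin(dists))]

window = rng.normal(size=CHANNELS)     # stand-in for one window of recorded rates
print(decode_window(window))
```

A real decoder would learn this mapping from recorded neural data rather than random templates, and would then assemble the phoneme stream into words, the step discussed at the end of this article.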
NESD project specs demand that whatever the use case, the whole package should take up about a cubic centimeter: in DARPA's words, the volume of two nickels stacked back to back.
The data throughput afforded by such a device is a function of how well we understand the idioms in the electrochemical language of the brain. And indeed, DARPA’s description page for the project explains that successfully developing a device like this will require “integrated breakthroughs across numerous disciplines including neuroscience, synthetic biology, low-power electronics, photonics, medical device packaging and manufacturing, systems engineering, and clinical testing.”
NIOB is scheduled to go to clinical trials in 2021. But the implications are much wider than just the hardware and software developments. MIT Tech Review points out that if the project is successful, the resulting theory and tech will also expand the ability of neuroscientists to listen in as groups of neurons generate complex behaviors, knit together sensory stimuli, and even create consciousness itself. It will also clearly result in a legal battle when the FBI and/or CIA demand warrantless wiretap authority and inbuilt backdoors. These modern times.
For more, we’ve previously covered the semantic atlas that shows where and how your brain stores the meanings of words. Parsing the audio stream we hear into phonemes is the step that comes before parsing phonemes into words and their semantic meanings.