Neuromorphic computing, an approach that attempts to duplicate the human brain in an artificial intelligence (AI) construct, has proven elusive to execute.
A neuromorphic computer combines many simple processors with memory structures that exchange simple messages, mimicking the behavior of neurons and synapses. Like the brain, the result isn't a computer but something very different. The same is true of the human brain itself: neuroscience has discovered that this organ isn't a computer either. It mediates between the body and the environment but does not, as we once thought, command the body outright.
If the brain isn't a computer but one element of a body that collectively performs computer-like tasks, then creating a computer that emulates a human requires more than recreating the brain. You also have to recreate the other parts of the body that share in that computational work.
This shortcoming doesn't mean that neuromorphic platforms can't do interesting things. They appear to excel at modeling complex dynamics using a small set of computational primitives: neurons, synapses, and spikes. But on their own, they cannot replace the human brain and achieve the goal of general AI as once thought, at least not on their current trajectory.
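To make those primitives concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the kind of simple spiking unit neuromorphic chips implement in hardware. This is an illustrative toy, not any vendor's actual API; all function names and parameter values are hypothetical.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# All names and parameter values are illustrative, not from any real chip.

def simulate_lif(inputs, threshold=1.0, leak=0.9, weight=0.5):
    """Integrate weighted input spikes, leak over time, fire on threshold."""
    potential = 0.0
    spikes = []
    for spike_in in inputs:
        potential = potential * leak + weight * spike_in  # integrate + leak
        if potential >= threshold:  # fire and reset
            spikes.append(1)
            potential = 0.0
        else:
            spikes.append(0)
    return spikes

# A steady input train makes the neuron fire periodically.
print(simulate_lif([1, 1, 1, 1, 1, 1]))  # → [0, 0, 1, 0, 0, 1]
```

The appeal is that rich temporal behavior emerges from networks of such trivially simple units, which is exactly the "complex dynamics from simple primitives" property described above.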
The key problem: understanding the brain
When trying to emulate or replicate anything, the first hurdle is understanding intimately the thing you wish to replicate. As applied to the brain, we don't even understand the difference between how men and women think. Even the concept of a linear life is questionable, given that our cells are replaced, on average, every 10 years. For instance, if we accepted that a person jailed 10 years ago is physically a different person today, we likely wouldn't impose sentences longer than 10 years. And, indeed, given the dramatic differences between children and adults, giving a child a life sentence doesn't make ethical sense at all.
If you don't fundamentally understand a thing, you can't effectively replicate it, and we fundamentally don't understand the human brain. If we fully understood the problem, we'd be looking not at the human brain but at the human mind. As long as we don't understand the full role of the brain, any effort to build general intelligence on that foundation is critically flawed.
At the core of the AI problem is the term "artificial intelligence" itself. We don't use this qualifier for any other measure of performance. There isn't artificial strength; something is either strong or not. There is no artificial light; you can see or not see. And there is no artificial touch; you can either feel something or not. It's binary, as it should be with intelligence. A better term than "artificial" would likely have been "machine" intelligence, because we don't call other creatures artificially intelligent; they either are (dolphins, monkeys) or aren't (inanimate objects). We've recently found that slime mold is intelligent even though it doesn't have a brain, which raises even more flags about our attempts to reproduce a brain to get to machine intelligence.
Our discussions of AI are also convoluted, with one taxonomy distinguishing strong from weak AI and another ranging from narrow to general AI. My point is that we not only fail to understand how human intelligence works, but we also have the hubris to infer that machine intelligence is somehow artificial while biological intelligence isn't. Neuromorphic computing attempts to duplicate the human brain, but that isn't where intelligence comes from; it comes from the human mind, and the two aren't interchangeable.
Deep learning and general intelligence
The problem with neuromorphic computing is that it assumes the human brain is a computer, and it isn't. Our understanding of the human brain has not reached a point that allows us to duplicate it. Furthermore, the brain is only one significant part of the human mind, the thing we actually want to emulate. As a result, neuromorphic computing is ill-conceived and likely not a viable path to creating an intelligent computer.
The most promising path to creating an intelligent computer is deep learning (DL), which uses a process that trains itself much as a biological system does. This approach steps away from trying to duplicate the brain. Instead, it focuses more tightly on the goal of general-purpose intelligence, making that goal more attainable because we understand how this kind of decision-making works.
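To make the "trains itself" point concrete, here is a minimal sketch of learning by gradient descent, the mechanism underlying deep learning. The data, learning rate, and epoch count are illustrative assumptions; a real deep network stacks many thousands of such trainable weights, but the principle is the same: the model adjusts itself from examples rather than being explicitly programmed.

```python
# Minimal gradient-descent sketch: a one-weight model learns y = 2x
# from examples. Data and learning rate are illustrative assumptions.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs
w = 0.0    # the model starts knowing nothing
lr = 0.05  # learning rate

for epoch in range(200):
    for x, target in data:
        pred = w * x                    # forward pass: make a prediction
        grad = 2 * (pred - target) * x  # gradient of squared error w.r.t. w
        w -= lr * grad                  # nudge the weight toward less error

print(round(w, 3))  # converges near 2.0
```

No one told the program that the rule was "multiply by 2"; it discovered that from data, which is the sense in which deep learning "trains itself."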
I'm not saying that neuromorphic computing has no value. It has medical uses, particularly in mental health; it can aid research into technologies that improve our thinking processes (anticipated with the singularity); and it could serve as a stepping stone toward digital immortality. It just isn't on the critical path to developing a generally intelligent computer, and it probably never should have been.