The question “what is neuromorphic computing” has a textbook answer: it’s research into computing methods which are inspired by the brain. The problem with this definition is that there are so many ways to be inspired by the brain that it is often debatable whether a given approach is neuromorphic.

For instance, as their name suggests, the first neural networks were inspired by rate-based interpretations of a biological neuron’s firing. Under this interpretation, a biological neuron which fires more impulses in a period of time corresponds to an artificial neuron with a higher ‘activation’ value computed from its inputs. And yet, it is rare that these highly-successful networks or their hardware accelerators are referred to as ‘neuromorphic.’

From this outlook, it might appear that what is declared a neuromorphic architecture reflects only the intent of a researcher to place a work within a certain field or market it to a specific audience. While there may be some truth to that, in my experience there are clear markers of what constitutes a modern neuromorphic strategy: a focus on exploiting forms of sparsity, achieving distributed computing, and applying novel techniques in hardware. Below, I explain each of these principles in further detail.

The Motivation for Neuromorphic

Biological computation is highly efficient, both in terms of energy usage and in learning new information; the human brain uses only tens of watts and can flexibly learn new tasks. This contrasts sharply with modern artificial intelligence (AI) models, which consume vast amounts of energy to train and still face the challenge of ‘catastrophic forgetting,’ meaning they cannot easily learn from new situations or apply their existing knowledge to new tasks.

For example, if a purely hypothetical game ‘Starcraft 3’ were released tomorrow, I am confident I could play through its campaign on a normal difficulty level by applying my experience with its predecessor, Starcraft 2. New units or graphics would doubtless be included with the new entry, but I would be able to incorporate these new elements as I played. In contrast, AI models such as AlphaStar, which is currently used to play Starcraft 2, would likely require moderate or extensive reworking as well as re-training to incorporate these new elements.

While this inefficiency might not be a problem for well-defined applications with huge amounts of data available, in other areas where AI could be deployed it’s a deal-breaker. Robotics is one key example: a robotic ‘agent’ deployed in the field needs to be able to reason about new objects and scenarios it hasn’t encountered before, and do so safely and reliably. Furthermore, it needs to do this without a cable tethering it to a power plant and/or supercomputer.

AI faces another challenge: historically, its progress has been linked to an increasing number of model parameters, where each parameter is essentially a ‘dial’ which needs to be turned to the correct position for the model to produce the right answer. The networks which revolutionized image classification in 2013 used millions of parameters; by 2021, networks advancing ‘natural language processing’ (NLP) tasks such as translation used billions or even trillions of parameters.

Needing to tune this many parameters puts designing and training these models out of reach for all but the largest corporations and governments. Furthermore, the slowdown in the transistor scaling which provided faster and cheaper computers for decades makes it likely that these models will remain a challenge to train and deploy even with specialized hardware.

Beyond these, modern AI faces other problems such as bias, a lack of robustness, and a lack of explainability. To realize the full potential of AI as reliable, autonomous systems which can improve human quality of life in areas such as medicine, manual labor, and household assistance, these issues must be addressed.

While resolving all these issues may seem insurmountable, we know it must be possible, since the human brain already addresses them more effectively than any artificial system. In the current era of AI, neuromorphic computing seeks to tackle these issues by applying principles of biological computation. Here, I’ll focus on two broad properties which neuromorphic researchers often aim to achieve: sparsity and distribution.

Sparsity

Sparsity is a general concept which implies that, out of a large set of elements, only a small fraction carry meaningful values. All other elements are inactive, which often means their values are zero or undefined. Either way, computations over the set depend only on the ‘active’ elements.

Having very few active elements is desirable in computation, as it reduces the amount of information which must be computed and transported. The more non-zero values a computation requires, the more energy must be spent communicating those values to the downstream processes which require them. Often, the expense of moving information throughout a computing system is greater than the energy required for the arithmetic itself - particularly in AI.

Sparse matrices are a common example of sparse computation. In a sparse matrix, only a small proportion of a large, 2-D grid of numbers have values which are not zero. As a result, to save space in computer memory these matrices are often stored in compressed form as lists or dictionaries of their non-zero entries, rather than in their full, original form.
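To make this concrete, here is a minimal Python sketch of the idea (the matrix size, density, and dictionary-of-keys format are arbitrary choices of mine for illustration; libraries such as SciPy offer more sophisticated compressed formats):

```python
import numpy as np

# A large matrix in which only ~0.1% of entries are non-zero.
rng = np.random.default_rng(0)
dense = np.zeros((1000, 1000))
rows = rng.integers(0, 1000, size=1000)
cols = rng.integers(0, 1000, size=1000)
dense[rows, cols] = rng.standard_normal(1000)

# Compressed 'dictionary of keys' form: store only the active entries.
sparse = {(i, j): dense[i, j] for i, j in zip(*np.nonzero(dense))}
print(f"dense storage:  {dense.size} values")
print(f"sparse storage: {len(sparse)} values")

# A matrix-vector product only has to touch the active entries.
x = rng.standard_normal(1000)
y = np.zeros(1000)
for (i, j), value in sparse.items():
    y[i] += value * x[j]
assert np.allclose(y, dense @ x)
```

Only about a thousand of the million stored values ever contribute to the result, so both the memory footprint and the number of operations shrink by roughly the same factor.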

Sparsity is also encountered within biological systems. The brain is one example of a highly sparse system: at any given moment in time, it is estimated that only 1% of neurons in a human brain are in an ‘active’ state, in which they are sending out a voltage pulse or ‘spike.’ Given that on average, each neuron has thousands of inputs, biological computation is highly sparse.

One example of sparsity applied to create a neuromorphic system is event-based vision. These systems date back to the origin of the field, when it was first formally defined in the 1980s by Carver Mead. Conventional visual representations of the world around us consist of static images taken many times per second: the analog version of this is traditional film, where a reel holds thousands of images which are shown in quick succession to create the illusion of movement. Digital systems are similar, but instead of a physical reel of film, a matrix of color intensities stores a virtual copy of each image at each instant in time, creating a 3-D structure representing video. However, much of the information in this structure is redundant; many aspects of a scene change little over time. Video compression algorithms therefore often focus on capturing only the differences between successive frames.

Event-based vision takes a different view of visual representation, inspired by the retina. These image sensors do not produce a continuous series of two-dimensional images; instead, each pixel produces an ‘event’ when it detects that the intensity of light falling on it has changed by more than a certain threshold. As a result, the sensor can react very quickly to changes in a scene; instead of waiting for an entire image frame to be read out, the detected changes are sent out immediately. This allows event-based sensors to capture very fast changes efficiently. Another advantage of these sensors is their very high dynamic range, letting them handle scenes which contain both very bright and very dark regions. These clear advantages have motivated a number of commercial ventures into event-based vision, from start-ups such as Prophesee to established companies such as Samsung. However, the unfamiliar format of these sensors’ visual data and the relative lack of powerful software tools for it (there is no Adobe Premiere for event-based data) may be a hurdle to more widespread adoption, even as sensor costs decrease.
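As a rough illustration of how such a sensor differs from a frame-based camera, here is a toy Python sketch that converts a stack of frames into events; the function name, threshold, and log-intensity model are my own simplifications, and real sensors perform this comparison asynchronously, per pixel, in analog circuitry:

```python
import numpy as np

def frames_to_events(frames, threshold=0.2):
    """Toy emulation of an event camera: compare successive frames and emit an
    event (t, x, y, polarity) wherever the log-intensity change exceeds a
    threshold. This batch version only illustrates the idea of reporting
    changes rather than full frames."""
    events = []
    log_prev = np.log(frames[0] + 1e-6)
    for t, frame in enumerate(frames[1:], start=1):
        log_curr = np.log(frame + 1e-6)
        diff = log_curr - log_prev
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        for y, x in zip(ys, xs):
            events.append((t, x, y, 1 if diff[y, x] > 0 else -1))
            log_prev[y, x] = log_curr[y, x]  # reset reference only where an event fired
    return events

# A mostly static 32x32 scene with one brightening pixel produces very few events.
frames = np.full((10, 32, 32), 0.5)
frames[5:, 10, 10] = 0.9
print(len(frames_to_events(frames)), "events from", frames[1:].size, "pixel samples")
```

A mostly static scene generates almost no data, while any rapid change is reported as soon as it happens.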

Other sparse approaches to sensory encoding and data representation exist, covering senses such as smell, hearing, and touch. Sparsity also provides a guiding principle for the design of neuromorphic algorithms, including pattern recall, coding, and graph search.

Distribution

Compared to a computer chip, the human brain is remarkably resilient. Individual sections of the brain can be damaged or removed, yet after a period of recovery overall activity returns and a patient can often resume a relatively normal lifestyle. In contrast, computers are fragile; making a small, random cut across a processor would almost certainly cause it to fail completely.

If you have a task which is very important to complete, you can take an alternative to executing it on a single computer: send the same task to multiple computers and examine the answers you get back. If the majority of the answers agree, you can assume that answer is correct and use it. This way, even if individual parts of the system fail, the overall process can overcome those failures. This is a very simple example of what’s known as ‘distributed computing,’ but the principle generalizes: networks of components that pass messages can form an overall system which carries out its task correctly even when individual components fail. It’s hypothesized that in many ways, the processing the brain carries out is distributed: certain components (neurons and synapses) can fail, but the overall computation remains the same.
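A minimal sketch of this replicate-and-vote idea, with a made-up ‘worker’ and failure model purely for illustration:

```python
import random
from collections import Counter

def unreliable_worker(task, failure_rate=0.2):
    """A worker that usually computes the right answer but sometimes fails.
    Here the 'task' is just squaring a number; the failure model is invented
    purely to illustrate redundancy through replication."""
    if random.random() < failure_rate:
        return random.randint(0, 100)  # a corrupted or wrong result
    return task ** 2

def replicated_compute(task, n_workers=5):
    """Send the same task to several workers and keep the majority answer."""
    answers = [unreliable_worker(task) for _ in range(n_workers)]
    answer, votes = Counter(answers).most_common(1)[0]
    return answer, votes

print(replicated_compute(7))  # almost always (49, votes >= 3)
```

With five replicas and a 20% failure rate, it is very unlikely that the wrong answers both outnumber and agree against the correct one.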

Neuromorphic approaches often include a distributed approach to computing, both physically and conceptually. Neuromorphic hardware is often massively parallel, with a large number of identical computing cores emulating ‘neurons’ which can send out messages analogous to ‘spikes’ in the brain. Each core can carry out the same overall computation as any other, and there may be hundreds or thousands of these cores (compared to most traditional central processors, where dozens of cores are the upper end of what is found). Conceptually, this is similar to the approach taken in a graphics processing unit (GPU), but neuromorphic cores are often less flexible in the computations they can carry out, aiming to achieve higher efficiency through more highly specialized components. Sometimes these components may even be analog, meaning that no two cores carry out exactly identical computations, but each can operate highly efficiently.

Distributed hardware increases the chance that a computation will complete, but if the algorithm running on it is ‘fragile’ - if a small error can cause a large change in output - the overall computation is still highly susceptible to noise or attacks. By making the algorithm itself distributed, we can gain additional protection against getting the wrong answer.

Many approaches to providing redundancy in computing exist, but neuromorphic computing has developed several specialized ones which are posited to relate to how the brain represents information. Vector-symbolic approaches are one example, in which high-level information is represented by very long arrays of values. Each value in such an array can be roughly thought of as a ‘neuron’ in the brain. Information from a single source can be distributed along the array in a way which takes advantage of the mathematics of very high-dimensional spaces. Individual components of the array can be disrupted or removed, but the correct answer can still be recovered with very high probability. These arrays can be combined and manipulated to represent new concepts and relationships, providing a novel and resilient way to do distributed computation. A good portion of my own research currently focuses on leveraging this approach to do neuromorphic computing.
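To give a flavor of how this works, here is a small Python sketch of one common variant of vector-symbolic computation (bipolar hypervectors, elementwise multiplication for binding, and an elementwise majority for bundling); the dimensionality, names, and the toy ‘record’ are all illustrative choices of mine:

```python
import numpy as np

D = 10_000  # dimensionality; high dimension is what gives the robustness
rng = np.random.default_rng(1)

def random_hv():
    """Random bipolar hypervector representing an atomic symbol."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):      # elementwise multiply: associates two vectors ("role * filler")
    return a * b

def bundle(*vs):     # elementwise majority: superposes several vectors into one
    return np.sign(np.sum(vs, axis=0))

def similarity(a, b):
    return float(a @ b) / D   # ~0 for unrelated vectors, ~1 for identical ones

# Encode a tiny record, {color: red, shape: circle}, as one distributed vector.
color, red, shape, circle = random_hv(), random_hv(), random_hv(), random_hv()
record = bundle(bind(color, red), bind(shape, circle))

# Corrupt 20% of the components, then ask "what is the color?" by unbinding.
noisy = record.copy()
flip = rng.choice(D, size=D // 5, replace=False)
noisy[flip] *= -1
query = bind(noisy, color)   # binding is its own inverse for bipolar vectors

print(similarity(query, red))     # well above zero: the answer survives the damage
print(similarity(query, circle))  # near zero: unrelated symbol
```

Even after a fifth of the components are corrupted, querying the record for its color still points clearly to ‘red’ rather than to any unrelated symbol.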

Novel Hardware

Digital computers have evolved greatly over the past 50 years, but in some ways they haven’t fundamentally changed during that period. Almost all computers sold for business, government, and personal use rely on digital logic built from silicon transistors and a von Neumann architecture (a central processor which fetches instructions from a separate memory). However, this configuration has a number of fundamental limitations, many of which have already been reached or will become pressing in the near future as scaling hits its limits. Effective alternative approaches to computing are becoming highly sought-after as companies seek to meet the ever-present demand for more powerful computing resources. Field-programmable gate arrays (FPGAs), photonic computers, and other alternative approaches are entering the market, both in the high-performance computing space and, increasingly, in consumer markets.

The fundamental calculation carried out in neuromorphic computing is often to update the state of an artificial neuron given a set of inputs. Since this domain is quite different from the huge variety of tasks a modern general-purpose processor may be asked to do, the hardware approaches taken for neuromorphic chips can be quite different as well. As previously mentioned, many neuromorphic chips are massively parallel, with many artificial neurons spread out across a chip. However, the physical mechanisms these neurons use to compute can vary widely.

One approach is to use well-developed digital logic to implement artificial neurons: minuscule transistors represent ones and zeros, and these symbols are manipulated through operations such as multiply-and-accumulate (MAC) to update each neuron’s internal state. The advantages of this approach are the widespread availability of hardware and of engineering talent to create and program it, as well as a high degree of understandability and reproducibility. Neuromorphic chips such as Intel’s Loihi and IBM’s TrueNorth use this digital approach.
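As a rough sketch of what such a digital update looks like, here is a discrete-time leaky integrate-and-fire step in Python; the leak, threshold, and weights are arbitrary toy values, and chips like Loihi implement this kind of update in dedicated parallel hardware rather than software:

```python
import numpy as np

def lif_step(v, spikes_in, weights, leak=0.9, threshold=1.0):
    """One discrete update of a layer of leaky integrate-and-fire neurons.
    v:         membrane 'state' of each neuron
    spikes_in: binary vector of input spikes at this time step
    weights:   synaptic weight matrix (inputs x neurons)"""
    v = leak * v + spikes_in @ weights          # accumulate weighted input, with leak
    spikes_out = (v >= threshold).astype(int)   # fire wherever the threshold is crossed
    v = np.where(spikes_out == 1, 0.0, v)       # reset neurons that fired
    return v, spikes_out

# Tiny example: 4 inputs driving 3 neurons over a few sparse time steps.
rng = np.random.default_rng(2)
weights = rng.uniform(0.0, 0.6, size=(4, 3))
v = np.zeros(3)
for t in range(5):
    spikes_in = (rng.random(4) < 0.3).astype(int)   # sparse input spikes
    v, spikes_out = lif_step(v, spikes_in, weights)
    print(t, spikes_in, spikes_out)
```

The inner product spikes_in @ weights is exactly the multiply-and-accumulate step the digital logic performs, and because the input spike vectors are sparse, most of those multiplications involve zeros and can be skipped.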

Alternate approaches to neuromorphic computing seek to use the power of analog systems to compute. In analog systems, values are represented by physical quantities, such as the voltage on a capacitor, rather than by strings of digital bits. These systems seek to link physical variables together in such a way that the interactions of the system’s components themselves naturally carry out a useful computation, rather than manipulating digital symbols in a sequence defined by an internal program. Slide rules and mechanical watches are examples of analog computers, which were used effectively long before digital logic reached the degree of miniaturization which made it useful.

An advantage of analog systems is that their precision isn’t defined by the number of bits representing a value; instead it is constrained by less easily defined factors such as noise and losses through the system. They can also be extremely low-power, as analog systems can exploit alternate computation methods such as optical processing, in which some computing steps can be entirely passive.

The BrainScaleS system is one neuromorphic platform which uses analog components to accelerate its computation, combined with digital weights and control values. While potentially powerful and efficient, analog systems can be very difficult to engineer, analyze, and debug given the complex interactions between their components. However, their ability to exploit fundamental physical quantities and interactions to compute is generating renewed interest at a time when digital logic is reaching some of its fundamental limits.

Conclusion

Currently, a large divide exists between the capabilities of artificial intelligence and those of biology. As long as this gap exists - while AI systems can’t adapt to new information efficiently or extend their knowledge to new situations - there is room for neuromorphic research. Despite rapid progress over recent decades, the brain still has many interactions and processes which are not well understood. As we improve our understanding of the brain and its capabilities, it’s natural to try to apply this biological knowledge to improve the capabilities of artificial systems. In my view, much of current neuromorphic engineering focuses on applying the principles of sparse and distributed computation to improve the capabilities of AI and to allow it to run on novel, efficient hardware with improved resilience. This does not cover all neuromorphic research, and these specific approaches do not address every issue facing AI today. However, the expansion and shifting of neuromorphic research in the past few years has demonstrated the huge potential of this area, and I expect many methods developed within it to reach the mainstream in the near future.