Posts

  • Vector-Symbolic Architectures, Part 4 - Mapping

    In this series, we’ve introduced vector symbolic architectures (VSAs) and the operations which allow them to robustly and efficiently represent complex structures such as graphs. However, in all the examples we’ve explored so far, we’ve started out by defining our problem in terms of symbols. In other words, we haven’t used any ‘real-world’ data such as images as the input for a vector-symbolic computation. In this tutorial, we’ll explore how to accomplish this and how neural networks can play a role in this process.

  • Vector-Symbolic Architectures, Part 3 - Binding

    So far, we’ve introduced several concepts in vector-symbolic architectures (VSAs): vector-symbols, similarity, and bundling. As a quick refresher, symbols represent concepts via long vectors of values, and the ‘closeness’ between these symbols is measured via a value, their similarity. We can use bundling to create a new symbol which is similar to several inputs.

    At a high level, bundling does two things: it combines information from two or more inputs into a single output, and that output will be as similar as possible to each of its inputs. The operation we’ll introduce now does something different: it also combines information from two or more inputs into a single output, but this output is not similar to its inputs. In this tutorial, we’ll show how we can accomplish this and why it’s so useful.
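
    As a rough preview (a sketch of my own, not code from the notebook), binding in the FHRR representation introduced in Part 1 can be implemented as elementwise complex multiplication. The dimensionality and the symbol names below are assumptions chosen purely for illustration; the point is that the bound output is dissimilar to both inputs, yet either input can be recovered by "unbinding" with the other's conjugate:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dim = 4096  # assumed dimensionality, for illustration only

    def fhrr_symbol():
        # Random FHRR symbol: unit-magnitude complex phasors with uniform random phases
        return np.exp(1j * rng.uniform(-np.pi, np.pi, dim))

    def similarity(a, b):
        # Normalized similarity: mean real part of the elementwise product a * conj(b)
        return np.mean(a * np.conj(b)).real

    color, red = fhrr_symbol(), fhrr_symbol()  # hypothetical symbols

    # Binding in FHRR: elementwise complex multiplication (the phases add)
    bound = color * red
    print(similarity(bound, color), similarity(bound, red))  # both near zero

    # Unbinding: multiply by the conjugate (inverse) of one factor to recover the other
    recovered = bound * np.conj(color)
    print(similarity(recovered, red))  # ~1.0
    ```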

  • Vector-Symbolic Architectures, Part 2 - Bundling

    In the last tutorial, we introduced some basic concepts of vector-symbolic architectures (VSAs): the use of a vector of values on a particular domain to represent a symbol, the measurement of similarity between vector-symbols, and the fact that as these symbols grow longer, similarity between random symbols approaches zero.

    In this tutorial, we’ll go further by introducing a simple operation called ‘bundling’ (alternatively, ‘superposition’ or ‘addition’), which creates a single symbol that represents a set of others.
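
    As a minimal sketch of the idea (my own illustration, using the FHRR representation from Part 1; the dimensionality and symbol names are assumptions for illustration), bundling can be implemented as elementwise addition. The bundle stays similar to each of its members, but not to unrelated symbols:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dim = 4096  # assumed dimensionality, for illustration only

    def fhrr_symbol():
        # Random FHRR symbol: unit-magnitude complex phasors with uniform random phases
        return np.exp(1j * rng.uniform(-np.pi, np.pi, dim))

    def similarity(a, b):
        # Normalized similarity: mean real part of the elementwise product a * conj(b)
        return np.mean(a * np.conj(b)).real

    apple, pear, plum, truck = (fhrr_symbol() for _ in range(4))  # hypothetical symbols

    # Bundle three symbols by elementwise addition, then renormalize back to unit phasors
    bundle = apple + pear + plum
    bundle = bundle / np.abs(bundle)

    print(similarity(bundle, apple))  # well above zero: apple is 'in' the bundle
    print(similarity(bundle, truck))  # near zero: truck was never bundled
    ```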

  • Vector-Symbolic Architectures, Part 1 - Similarity

    In my opinion, a collection of techniques for manipulating symbols known collectively as “vector-symbolic architectures” (VSAs) (or equivalently, “hyperdimensional computing”, HDC) provides an exciting set of methods for representing and manipulating information. Much of my own research makes use of VSAs, but when presenting it I find that, outside of a small (but growing) community, VSAs are not well known (for instance, in conventional AI and computer science).

    Several good articles exist to introduce those interested to VSAs - my two favorites are An Introduction to Hyperdimensional Computing for Robotics and its companion article A Comparison of Vector Symbolic Architectures. These articles do an excellent job of going into the technical details of VSAs, how they can be applied, and the different ways in which they are implemented.

    However, I found that the best way to begin understanding VSAs was to simply begin using them. Many of the core concepts of VSAs are relatively simple to implement and have clear analogs to traditional computer science tools. In this notebook, I include code for computing with one implementation of a VSA, the Fourier Holographic Reduced Representation (FHRR).
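
    To give a flavor of that code (a simplified sketch of my own, not copied from the notebook; the dimensionalities below are assumptions for illustration), an FHRR symbol is just a vector of random unit-magnitude complex phasors, and similarity is the real part of the normalized inner product. As the vectors grow longer, the similarity between unrelated random symbols concentrates around zero:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def fhrr_symbol(dim):
        # Random FHRR symbol: unit-magnitude complex phasors with uniform random phases
        return np.exp(1j * rng.uniform(-np.pi, np.pi, dim))

    def similarity(a, b):
        # Normalized similarity: mean real part of the elementwise product a * conj(b)
        return np.mean(a * np.conj(b)).real

    x = fhrr_symbol(1024)
    print(similarity(x, x))  # 1.0: a symbol is maximally similar to itself

    for dim in (16, 256, 4096):
        sims = [abs(similarity(fhrr_symbol(dim), fhrr_symbol(dim))) for _ in range(200)]
        print(dim, np.mean(sims))  # shrinks toward zero as the symbols grow longer
    ```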

  • An Introduction to Neuromorphic Computing in 2021

    The question “what is neuromorphic computing” has a textbook answer: it’s research in computing methods which are inspired by the brain. However, the problem with this definition is that because there are so many ways to be inspired by the brain, it often appears debatable whether a certain approach is neuromorphic.

    For instance, as their name suggests, the first neural networks were inspired by rate-based interpretations of a biological neuron’s firing. Under this interpretation, a biological neuron which fires more impulses in a period of time corresponds to an artificial neuron with a higher ‘activation’ value computed from its inputs. And yet, it is rare that these highly-successful networks or their hardware accelerators are referred to as ‘neuromorphic.’

    From this perspective, it might appear that what gets declared a neuromorphic architecture merely reflects a researcher’s intent to place a work within a certain field or market it to a specific audience. While there may be some truth to that, in my experience there are clear markers of what constitutes a modern neuromorphic strategy: a focus on utilizing forms of sparsity, achieving distributed computing, and applying novel techniques in hardware. Below, I explain each of these principles in further detail.
