Spiking neural network


What are spiking neural networks?

Spiking neural networks (SNNs) are artificial neural networks that more closely mimic natural neural networks. In addition to neuronal and synaptic states, SNNs incorporate the concept of time into their operating model. The idea is that neurons in an SNN do not transmit information at every propagation cycle (as happens in typical multi-layer perceptron networks), but rather transmit information only when a membrane potential – an intrinsic quality of the neuron related to its membrane electrical charge – reaches a specific value, called the threshold.

When the membrane potential reaches the threshold, the neuron fires, generating a signal that travels to other neurons, which, in turn, increase or decrease their potentials in response to this signal. A neuron model that fires at the moment of threshold crossing is also called a spiking neuron model.
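To make the threshold-crossing behavior concrete, here is a minimal sketch of a leaky integrate-and-fire neuron in Python. The parameter values (threshold, leak factor, reset value) are arbitrary choices for illustration, not values prescribed by any particular SNN framework.

```python
import numpy as np

def lif_neuron(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Simulate a single leaky integrate-and-fire neuron over discrete time-steps.

    input_current: 1-D array of input values, one per time-step.
    Returns a binary spike train of the same length.
    """
    membrane = 0.0
    spikes = np.zeros_like(input_current)
    for t, current in enumerate(input_current):
        # Leak a fraction of the membrane potential, then integrate the input.
        membrane = leak * membrane + current
        if membrane >= threshold:
            # The neuron fires and its membrane potential is reset.
            spikes[t] = 1.0
            membrane = reset
    return spikes

# Example: a constant input slowly charges the membrane until it crosses the threshold.
spike_train = lif_neuron(np.full(20, 0.3))
print(spike_train)
```

The output is a sparse train of 0s and 1s: information is carried by when the neuron spikes rather than by a continuous activation value.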

[Image: spiking neural network. Source: Towards Data Science]

Artificial neural networks (ANNs), or conventional deep learning models, emulate the structural features of the visual cortex to a reasonable extent and can demonstrate human-level performance on quite a few tasks. But these networks incur huge computational costs to achieve this level of performance, whereas the human brain operates, on average, within a power budget of roughly 20 W. Most real-world platforms have resource and battery constraints, so if intelligence is to be enabled on such platforms, neural networks need to be implemented without requiring too much power.

Spiking neural networks (SNNs) provide a bio-plausible way to enable low-power intelligence. They emulate biological neuronal functionality by processing visual information as binary events, or spikes, over several time-steps. The discrete spiking behavior of SNNs has been shown to yield high energy efficiency on emerging neuromorphic hardware.

How can spiking neural networks be optimized?

There have been two broad algorithmic optimization methods for SNNs that have greatly contributed towards bringing the performance of SNNs closer to that of ANNs on image classification (even on the ImageNet dataset, which is often regarded as the ‘Olympics’ of image classification tasks).

The first approach is known as Conversion. It converts a pre-trained ANN into a spiking neural network by normalizing firing thresholds or weights so that the ReLU (Rectified Linear Unit) activation is transferred to an Integrate-and-Fire (IF) spiking activation. Conversion techniques have achieved accuracy on par with their artificial neural network counterparts on large-scale architectures and datasets, but they tend to incur large latency, i.e. many time-steps, for processing.
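The sketch below illustrates the general idea behind layer-wise weight normalization for conversion. It is a simplified data-based scheme (rescaling each layer by the maximum ReLU activation observed on a calibration set), not the exact procedure of any specific published conversion method; the function and argument names are illustrative.

```python
import numpy as np

def normalize_weights_for_conversion(weights, ann_activations):
    """Layer-wise weight normalization for converting a ReLU ANN to an IF SNN.

    weights:          list of weight matrices, one per layer.
    ann_activations:  list of activation arrays recorded from the ANN on a
                      calibration set, one per layer.
    Returns rescaled weights so that each IF neuron's firing rate (with a
    threshold of 1) approximates the original normalized ReLU activation.
    """
    normalized = []
    previous_scale = 1.0
    for w, acts in zip(weights, ann_activations):
        scale = np.max(acts)  # largest ReLU activation seen in this layer
        # Rescale so activations map onto firing rates in roughly [0, 1].
        normalized.append(w * previous_scale / scale)
        previous_scale = scale
    return normalized
```

Because the converted network encodes activations as firing rates, it typically needs many time-steps to approximate them accurately, which is the latency cost mentioned above.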

The second approach makes use of surrogate gradient descent methods to train spiking neural networks, employing an approximate gradient function to overcome the non-differentiability of the Leaky Integrate-and-Fire (LIF) spiking neuron.

These techniques make it possible for spiking neural networks to be optimized from scratch with lower levels of latency and reasonable degrees of classification accuracy.
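To make the surrogate-gradient idea above concrete, the sketch below (in PyTorch, assuming it is available) keeps the hard threshold in the forward pass but substitutes a smooth approximation for its derivative in the backward pass. The particular surrogate (a fast-sigmoid-style derivative) and its slope are illustrative choices, not a prescribed standard.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate gradient in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_minus_threshold):
        ctx.save_for_backward(membrane_minus_threshold)
        # Spike (1) when the membrane potential exceeds the threshold, else 0.
        return (membrane_minus_threshold > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Fast-sigmoid-style surrogate derivative; the slope of 10 is arbitrary.
        surrogate_grad = 1.0 / (1.0 + 10.0 * x.abs()) ** 2
        return grad_output * surrogate_grad

spike_fn = SurrogateSpike.apply  # can be used in place of a hard threshold in a training loop
```

Gradients can then flow through the spiking non-linearity, which is what allows SNNs to be trained from scratch with backpropagation-style updates.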

Even though there has been a large amount of progress in optimization techniques, the internal spike behavior of SNNs is not as well understood as that of conventional ANNs. Neural networks have long been considered black boxes. Multiple interpretation or ‘visual explanation’ tools have been proposed for ANNs and are used in practice to obtain visual explanations and understand network predictions. An SNN interpretation tool is equally important because low-power SNNs are candidates for deployment in real-world applications like medical robots, self-driving cars, and drones, where explainability in addition to performance is vital. The Spike Activation Map (SAM) is one visualization tool proposed for SNNs.

What are the advantages of spiking neural networks?

A motivation for studying SNNs is that brains exhibit remarkable cognitive performance on real-world tasks. With ongoing efforts toward improving our understanding of brain-like computation, there is an expectation that models staying closer to biology will come closer to achieving natural intelligence than more abstract models, or at least will have greater computational efficiency.

SNNs are ideally suited for processing spatio-temporal, event-based information from neuromorphic sensors, which are themselves power efficient. The sensors record temporally precise information from the environment, and SNNs can exploit efficient temporal codes in their computations. This processing is also event-driven, meaning that when little or no information is recorded the SNN does not compute much, but when sudden bursts of activity are recorded, the SNN generates more spikes. Under the assumption that information from the outside world is typically sparse, this results in a highly power-efficient way of computing. In addition, time-domain input carries valuable extra information compared to frame-driven approaches, where the sensor imposes an artificial time step.
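A toy sketch of this event-driven idea is shown below: work is done only at the time-steps where presynaptic spikes actually arrive, so sparse input translates directly into fewer operations. The layer structure and the numbers in the usage example are purely illustrative assumptions.

```python
import numpy as np

def event_driven_layer(spike_times, spike_neurons, weights):
    """Accumulate postsynaptic input only when presynaptic spikes arrive.

    spike_times:   array of time-steps at which spikes occurred.
    spike_neurons: array of presynaptic neuron indices, aligned with spike_times.
    weights:       (num_pre, num_post) synaptic weight matrix.
    """
    num_post = weights.shape[1]
    postsynaptic = np.zeros(num_post)
    ops = 0
    for t, pre in zip(spike_times, spike_neurons):
        # In a fuller model, t would also drive the membrane dynamics;
        # here we only count the work triggered by each spike event.
        postsynaptic += weights[pre]
        ops += num_post
    return postsynaptic, ops

# With only 3 spikes in a 1000-step window, only 3 weight-row additions are needed,
# instead of a dense matrix multiply at every one of the 1000 time-steps.
w = np.random.randn(100, 50)
out, ops = event_driven_layer(np.array([12, 407, 981]), np.array([5, 17, 42]), w)
print(ops)
```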

 

What are spiking neural networks used for? What are the applications of spiking neural networks?

SNNs can, in principle, be applied to the same tasks as traditional ANNs. In addition, SNNs can model the central nervous system of biological organisms, such as an insect seeking food without prior knowledge of the environment. Due to their relative realism, they can be used to study the operation of biological neural circuits: starting from a hypothesis about the topology and function of a biological neuronal circuit, recordings of that circuit can be compared to the output of the corresponding SNN to evaluate the plausibility of the hypothesis. However, there is a lack of effective training mechanisms for SNNs, which can be a limitation for some applications, including computer vision tasks.

As of 2019, SNNs lag behind ANNs in terms of accuracy, but the gap is decreasing and has vanished on some tasks.

Basically, spiking neural networks (SNNs) emulate biological features of the brain, providing an energy-efficient alternative to conventional deep learning.
