Friday, December 27, 2024

A Framework To Enhance Deep Learning

Researchers develop a technique to achieve fast and energy-efficient computing using spiking neuromorphic substrates.

As silicon-based processors approach their performance limits, researchers are trying to mimic the efficiency and computational architecture of the human brain. Researchers at Heidelberg University and the University of Bern have recently devised a technique to achieve fast and energy-efficient computing using spiking neuromorphic substrates. Their strategy adapts a time-to-first-spike (TTFS) coding scheme, together with a corresponding learning rule, implemented on networks of artificial spiking neurons. TTFS is a temporal coding approach in which the activity of a neuron is inversely proportional to its firing delay: the more strongly a neuron is stimulated, the earlier it fires.
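For readers unfamiliar with TTFS coding, a minimal sketch is shown below. The linear mapping, the time window, and the function name are illustrative assumptions rather than the exact scheme used in the study; the point is simply that stronger inputs translate to earlier spikes.

```python
# Minimal sketch of time-to-first-spike (TTFS) encoding: stronger inputs
# fire earlier. The linear mapping and time window are illustrative
# assumptions, not the exact scheme used on BrainScaleS.
import numpy as np

def ttfs_encode(intensities, t_max=20.0):
    """Map input intensities in [0, 1] to first-spike times in [0, t_max].

    A maximal input fires immediately (t = 0), weaker inputs fire later,
    and a zero input never fires (represented here by infinity).
    """
    intensities = np.clip(np.asarray(intensities, dtype=float), 0.0, 1.0)
    return np.where(intensities > 0.0, (1.0 - intensities) * t_max, np.inf)

print(ttfs_encode([1.0, 0.5, 0.1, 0.0]))  # [ 0. 10. 18. inf]
```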

The researchers set out to develop a mathematical framework for achieving deep learning based on temporal coding in spiking neural networks. Their aim is to then transfer this framework onto the BrainScaleS system, a renowned neuromorphic computing platform that emulates models of neurons, synapses, and brain plasticity.

“Assume that we have a layered network in which the input layer receives an image, and after several layers of processing the topmost layer needs to recognize the image as being a cat or a dog,” Laura Kriener, the second lead researcher for the study, said. “If the image was a cat, but the ‘dog’ neuron in the top layer became active, the network needs to learn that its answer was wrong. In other words, the network needs to change connections—i.e., synapses—between the neurons in such a way that the next time it sees the same picture, the ‘dog’ neuron stays silent and the ‘cat’ neuron is active.”

This problem is known as the ‘credit assignment problem’. To solve it, researchers often use the error backpropagation algorithm, which propagates the error at the topmost layer of a neural network back through the network. This informs each synapse about its own contribution to the error so that it can be adjusted accordingly.
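The same principle can be written down compactly for a conventional rate-based network. Below is a minimal sketch, assuming a small two-layer network with a sigmoid nonlinearity and a squared-error loss (all sizes and values are illustrative); the researchers' spiking framework replaces these continuous activities with spike times.

```python
# Minimal sketch of credit assignment via error backpropagation on a
# conventional (non-spiking) two-layer network. Layer sizes, the sigmoid
# nonlinearity and the squared-error loss are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)                      # input activities (e.g. pixel values)
W1 = rng.normal(size=(3, 4))           # input -> hidden synaptic weights
W2 = rng.normal(size=(2, 3))           # hidden -> output synaptic weights
target = np.array([1.0, 0.0])          # desired output: "cat" active, "dog" silent

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass through the layered network.
h = sigmoid(W1 @ x)
y = sigmoid(W2 @ h)

# The error at the topmost layer (squared-error loss, sigmoid derivative) ...
delta_out = (y - target) * y * (1.0 - y)
# ... is propagated backwards through the same synapses ...
delta_hidden = (W2.T @ delta_out) * h * (1.0 - h)

# ... so that each synapse learns its own contribution to the error and is
# adjusted accordingly.
learning_rate = 0.1
W2 -= learning_rate * np.outer(delta_out, h)
W1 -= learning_rate * np.outer(delta_hidden, x)
```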

When neurons in a network communicate with spikes, each input spike ‘bumps’ the potential of a neuron up or down. The size of the bump, however, depends on the weight of the corresponding synapse.

“If enough upward bumps accumulate, the neuron ‘fires’—it sends out a spike of its own to its partners,” Kriener said. “Our framework effectively tells a synapse exactly how to change its weight to achieve a particular output spike time, given the timing errors of the neurons in the layers above, similarly to the backpropagation algorithm, but for spiking neurons. This way, the entire spiking activity of a network can be shaped in the desired way—which, in the example above, would cause the ‘cat’ neuron to fire early and the ‘dog’ neuron to stay silent or fire later.”
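The toy sketch below illustrates the two mechanics Kriener describes, assuming a simple non-leaky integrate-and-fire neuron with made-up weights and spike times: input spikes bump the potential, the neuron fires once enough upward bumps accumulate, and strengthening a synapse moves the output spike earlier. The actual gradient-based learning rule derived in the study is not reproduced here.

```python
# Toy integrate-and-fire neuron (no leak), with made-up weights and spike
# times, illustrating how weighted input spikes bump the potential and how
# a stronger synapse leads to an earlier output spike.
def first_spike_time(weights, input_times, threshold=1.0):
    """Return the time at which the potential first reaches threshold."""
    v = 0.0
    for t, w in sorted(zip(input_times, weights)):
        v += w                       # each input spike bumps the potential up or down
        if v >= threshold:
            return t                 # enough upward bumps: the neuron fires
    return float("inf")              # the neuron stays silent

times = [1.0, 2.0, 3.0, 4.0]         # firing times of four input neurons
print(first_spike_time([0.5, -0.2, 0.8, 0.6], times))  # fires at t = 3.0
# Strengthening the first synapse (0.5 -> 1.0) moves the output spike earlier:
print(first_spike_time([1.0, -0.2, 0.8, 0.6], times))  # fires at t = 1.0
```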

Using this approach, the researchers achieved notably fast and energy-efficient computation.

“The BrainScaleS hardware further amplifies these features, as its neuron dynamics are extremely fast—1000 times faster than those in the brain—which translates to a correspondingly higher information processing speed,” Kriener explained. “Furthermore, the silicon neurons and synapses are designed to consume very little power during their operation, which brings about the energy efficiency of our neuromorphic networks.”

The research appeared in the journal Nature Machine Intelligence.
