Monday, December 23, 2024

New Training Method For Neural Networks

Researchers at Los Alamos National Laboratory have developed a novel training method for neural networks that assists researchers in comparing and analyzing neural network behavior.

(Credit: Los Alamos National Laboratory)

Neural networks deliver excellent performance when conditions are ideal, but they are vulnerable to misidentification when an input contains even a slight aberration; a sticker on a stop sign, for example, can cause a network to misread the sign and fail to stop. To counter this, a team at Los Alamos National Laboratory implemented adversarially trained neural networks, which are much harder to fool with such irregularities. The researchers deliberately introduced aberrations during training so the AI learns to resist them.
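The article does not spell out the lab's exact training procedure, so the snippet below is only a minimal sketch of the general adversarial-training idea, using the common Fast Gradient Sign Method (FGSM) in PyTorch. The function name, the epsilon value, and the assumption that inputs are scaled to [0, 1] are illustrative choices, not details from the Los Alamos work.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial_loss(model, x, y, epsilon=0.03):
    """Training loss on FGSM-perturbed inputs (one adversarial-training step)."""
    x = x.detach().requires_grad_(True)
    clean_loss = F.cross_entropy(model(x), y)
    # Gradient of the loss with respect to the inputs only; parameters are left untouched
    grad = torch.autograd.grad(clean_loss, x)[0]
    # FGSM: nudge each input in the direction that most increases the loss
    x_adv = (x + epsilon * grad.sign()).clamp(0.0, 1.0).detach()
    # Training on these deliberately corrupted inputs teaches the network to resist such aberrations
    return F.cross_entropy(model(x_adv), y)
```

In a training loop, this loss simply replaces (or is mixed with) the ordinary loss on clean inputs, so the network sees a worst-case perturbation of every batch.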

“The artificial intelligence research community doesn’t necessarily have a complete understanding of what neural networks are doing; they give us good results, but we don’t know how or why,” said Haydn Jones, a researcher in the Advanced Research in Cyber Systems group at Los Alamos. “Our new method does a better job of comparing neural networks, which is a crucial step toward better understanding the mathematics behind AI.”

Researchers are constantly looking for ways to improve network robustness. There has also been an extensive effort to find the “right architecture” for neural networks, but the introduction of adversarial training may reduce that need. The team found that adversarial training drives even very different architectures toward similar solutions: as the magnitude of the attack used in training increases, computer-vision networks converge to very similar data representations regardless of their architecture. This suggests the AI research community may not need to invest as much time in exploring new architectures.
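The article does not describe the team's new comparison method itself, but one widely used way to measure whether two networks have learned similar representations is linear Centered Kernel Alignment (CKA). The sketch below is a hypothetical illustration of that kind of comparison; the function name and the samples-by-features layout of the activation matrices are assumptions for this example.

```python
import torch

def linear_cka(features_a, features_b):
    """Linear CKA similarity between two activation matrices of shape (samples, features)."""
    # Center each feature dimension across the sample axis
    a = features_a - features_a.mean(dim=0, keepdim=True)
    b = features_b - features_b.mean(dim=0, keepdim=True)
    # ||A^T B||_F^2 / (||A^T A||_F * ||B^T B||_F); equals 1 when the representations
    # match up to an orthogonal transformation and scaling
    cross = (a.T @ b).norm() ** 2
    return (cross / ((a.T @ a).norm() * (b.T @ b).norm())).item()
```

Values near 1 would indicate that two networks represent the same data in nearly the same way, which is the kind of convergence the team reports as attack strength grows.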

“We found that when we train neural networks to be robust against adversarial attacks, they begin to do the same things. By finding that robust neural networks are similar to each other, we’re making it easier to understand how robust AI might work. We might even be uncovering hints as to how perception occurs in humans and other animals,” Jones said.

This research could allow neural networks to be used efficiently in real-world applications with far less worry about such aberrations.

