
Imaginary Numbers Protect AI From Adversarial Attacks


Researchers suggest that complex numbers can protect AI-based algorithms from taking a wrong turn in their decision-making processes.

Artificial intelligence (AI) models are used in a variety of applications, such as cognitive manufacturing, safety devices, and identification. But these algorithms can be easily manipulated or fooled through adversarial attacks, which break the AI’s decision-making process.

Computer engineers at Duke University have shown that complex numbers can play an integral part in securing artificial intelligence algorithms against malicious attacks that try to fool object-identifying software by subtly altering the images. According to the researchers, by adding just two complex-valued layers among the hundreds, if not thousands, used during training, the technique can significantly improve performance against such attacks without sacrificing any efficiency.


“We’re already seeing machine learning algorithms being put to use in the real world that are making real decisions in areas like vehicle autonomy and facial recognition,” said Eric Yeats, a doctoral student working in the laboratory of Hai “Helen” Li, the Clare Boothe Luce Professor of Electrical and Computer Engineering at Duke. “We need to think of ways to ensure that these algorithms are reliable to make sure they can’t cause any problems or hurt anyone.”

The attacks modify images to break the decision-making process. The modification can be quite simple, or as sophisticated as adding a carefully crafted layer of static that alters the image in ways undetectable to the human eye. The reason these small modifications, or perturbations, can cause such large problems stems from how machine learning algorithms are trained.
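
One widely known way to craft such a perturbation is the fast gradient sign method (FGSM). The PyTorch sketch below is illustrative only: the toy model, image size, and epsilon value are assumptions, and the attacks considered in the Duke study may differ.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Nudge each pixel slightly in the direction that most increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The resulting static is invisible to the eye but can flip the decision.
    return (image + epsilon * image.grad.sign()).detach()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
image = torch.rand(1, 1, 28, 28)  # stand-in for a real photo
label = torch.tensor([3])         # its correct class
adversarial = fgsm_perturb(model, image, label)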

Machine learning algorithms are typically trained with a method called gradient descent: the algorithm compares the decisions it arrives at to the correct answers, attempts to tweak its inner workings to fix the errors, and repeats the process over and over again until it is no longer improving.
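
For illustration, a bare-bones gradient descent loop in PyTorch might look like the following; the toy model, synthetic data, and learning rate are placeholders, not the study’s setup.

import torch
import torch.nn.functional as F

# Toy classifier and synthetic data stand in for a real network and dataset.
model = torch.nn.Linear(10, 2)
x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for step in range(100):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)  # compare decisions to the correct answers
    loss.backward()                      # work out how to tweak the inner workings
    optimizer.step()                     # apply the fix, then repeat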

According to the researchers, users can keep their algorithms on track despite such perturbations by training them with a technique called gradient regularization.

“Gradient regularization throws out any solution that passes a large gradient back through the neural network,” Yeats said. “This reduces the number of solutions that it could arrive at, which also tends to decrease how well the algorithm actually arrives at the correct answer. That’s where complex values can help. Given the same parameters and math operations, using complex values is more capable of resisting this decrease in performance.”
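
In code, one common form of gradient regularization is a penalty on the gradient of the loss with respect to the input. The sketch below shows the general idea; the penalty weight and toy network are assumptions, not necessarily the exact formulation used in the paper.

import torch
import torch.nn.functional as F

model = torch.nn.Linear(10, 2)  # stand-in for the real network
x = torch.randn(64, 10, requires_grad=True)
y = torch.randint(0, 2, (64,))
lam = 0.1  # illustrative penalty weight

loss = F.cross_entropy(model(x), y)
# Gradient of the loss with respect to the input; create_graph=True lets
# the penalty itself be differentiated during training.
(input_grad,) = torch.autograd.grad(loss, x, create_graph=True)
# Penalize solutions that pass a large gradient back through the network.
total_loss = loss + lam * input_grad.pow(2).sum(dim=1).mean()
total_loss.backward()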

Complex numbers give the network more flexibility in how it adjusts its internal parameters to arrive at a solution. Rather than only being able to multiply and accumulate changes, it can offset the phase of the waves it’s adding together, allowing them to either amplify or cancel one another out.
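
A minimal sketch of what a complex-valued layer might look like in PyTorch follows; the layer name, sizes, and initialization are illustrative assumptions, not the paper’s architecture.

import torch

class ComplexLinear(torch.nn.Module):
    """Illustrative layer whose weights each carry a magnitude and a phase."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.weight = torch.nn.Parameter(
            0.1 * torch.randn(n_out, n_in, dtype=torch.cfloat))

    def forward(self, z):
        # Complex multiply-accumulate: each weight scales the input's magnitude
        # and shifts its phase, so summed contributions can amplify or cancel.
        return z @ self.weight.t()

layer = ComplexLinear(10, 4)
z = torch.randn(5, 10, dtype=torch.cfloat)
out = layer(z).abs()  # take magnitudes to hand back to real-valued layers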

“The complex-valued neural networks have the potential for a more ‘terraced’ or ‘plateaued’ landscape to explore,” Yeats said. “And elevation change lets the neural network conceive more complex things, which means it can identify more objects with more precision.”

The research was presented at the 38th International Conference on Machine Learning (ICML 2021).


 
