During the first leg of its AI for Good sessions, hosted by the International Telecommunication Union (ITU), an expert discussed adversarial examples and the current state of the machine learning (ML) space.

By definition, adversarial examples exploit the way artificial intelligence (AI) algorithms work. As AI continues to grow across many applications, adversarial attacks on machine learning have become an active area of research.

As emphasized during the talk, among the various areas where ML models are used, image classification is where they work best. According to Google Brain Research Scientist Nicholas Carlini, adversarial perturbations can be applied to images, causing models to misinterpret them. In this manner, adversaries can easily fool classifiers, because classifiers focus on features that are ‘completely brittle’.
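To make the idea concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one well-known technique for crafting such image perturbations. The PyTorch usage, the epsilon value, and the fgsm_perturb name are illustrative assumptions for the sketch, not details from the talk:

    # Minimal FGSM sketch (hypothetical helper, not from the talk): nudge each
    # pixel of an image in the direction that increases the classifier's loss.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, images, labels, epsilon=0.03):
        """Return adversarially perturbed copies of `images`; a perturbation
        this small is often invisible to humans yet flips the model's label."""
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        # Step by +/- epsilon along the gradient sign, staying a valid image.
        adv = images + epsilon * images.grad.sign()
        return adv.clamp(0.0, 1.0).detach()

Even with a perturbation budget this small, such examples routinely change a classifier's prediction, which is the brittleness Carlini described.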

Attackers often devise a new approach tailored to break a given defense, so mitigating such attacks is a trial-and-error process. This is why the research community should learn something from every defense that gets broken and work to cover a multitude of possible attacks in the future.

In response to adversaries, one known effective approach to improving the robustness of machine learning is adversarial training, in which a deep neural network is trained on adversarial examples. This defense works like preparing the system to withstand strong attacks by teaching it what an adversarial attack might look like.
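As an illustration of that idea, here is a minimal sketch of one adversarial training step, reusing the hypothetical fgsm_perturb helper above; the even mix of clean and adversarial loss is an assumption for the sketch, not a detail from the talk:

    # Minimal adversarial training step (illustrative): train the network on
    # adversarial versions of each batch alongside the clean images.
    import torch.nn.functional as F

    def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
        adv_images = fgsm_perturb(model, images, labels, epsilon)  # helper above
        optimizer.zero_grad()  # clear gradients left from crafting adv_images
        # Average the loss on clean and adversarial examples so the model
        # learns to classify both correctly.
        loss = 0.5 * (F.cross_entropy(model(images), labels)
                      + F.cross_entropy(model(adv_images), labels))
        loss.backward()
        optimizer.step()
        return loss.item()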

Adversarial training works because it focuses directly on the attacks that affect the network. On the downside, however, it is insufficient to stop all attacks. “The reason why we still have these other defenses being proposed is that adversarial training is not yet a perfect solution. It’s very good but we can still do more.”

Describing the current state of defenses within the ML space, Carlini presented the state of cryptography in 1997 as an analogy. At that time, “everything was bad,” but things are good now: the Advanced Encryption Standard (AES) hasn’t been broken in the last 20 years.

Similarly, the ML space is still in its genesis era. “We’re trying to secure things but we don’t actually have any good definition of what security we want,” Nicholas pointed out. Despite the progress, there is still a lot more work to be done. “In machine learning time, we just need to do what we are doing for 10 more years, and eventually, we’ll solve this problem too.”

Taking this into account, there is still quite a long way to go before we have “robust classifiers” trustworthy enough to use in production environments where human lives are on the line.

The ITU’s AI for Good series is the leading action-oriented and inclusive United Nations platform on AI that aims to identify practical applications of AI and scale those solutions for global impact.
