What Is Adversarial AI and Why Is It Dangerous?

Introduction

In the world of hacking, we have reached the stage of wondering who the better hacker is: humans or machines?

The dawn of artificial intelligence has unleashed a different breed of attacks that make use of adversarial AI.

And this new wave of attacks suggests the answer may indeed be machines.

Adversarial AI is a technique in the field of artificial intelligence that attempts to fool models through malicious input. It can be applied for a variety of purposes, the most common being to attack or cause a malfunction in standard AI models.

Technological innovation is never-ending, but the reality check that comes with it is that attackers continuously study existing security tools, and they will always find loopholes and exploit them.

AI can spoof a phone number so it appears to come from your area code, and it can slip malicious input past your firewall like a machine-learning Trojan horse.

Is fighting a non-human entity easy?

When Humans and Machines pose a Security Threat!

Back in 2016, the cybersecurity firm ZeroFOX asked whether humans or machines were the better hackers.

They used Twitter as a platform for end-to-end spear-phishing attacks. The results were in favour of the machines, which were much better at getting humans to click on malicious links.

Models in AI are built with deep neural networks (DNNs), a type of machine learning model.

DNNs primarily make the machine imitate or mimic human behaviour with respect to decision making, problem solving and logical reasoning.

Let's take the case of generating an image. Researchers and developers start with an object to picture, such as a cup, a signboard or a door.

Machine learning is then used to mimic real data, drawing on both genuine examples and the data that the researchers generate.

With each iteration of the model, the generated image comes closer and closer to the real object. Going a little off topic, this is a huge advantage for the healthcare industry, as this kind of AI makes advanced medical imaging possible.
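
To make this concrete, below is a minimal sketch of one way such a generative loop can work: a generative adversarial network (GAN) in PyTorch. The GAN framing, layer sizes and hyperparameters are assumptions for illustration; the article does not name a specific method.

```python
# A minimal GAN training step: a generator learns to mimic real images
# while a discriminator judges real vs. generated. All shapes and
# hyperparameters are illustrative assumptions, not production settings.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # e.g. flattened 28x28 grayscale images

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """One update each for the discriminator and the generator."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator: learn to separate real images from generated ones.
    fake_images = generator(torch.randn(batch, latent_dim))
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images.detach()), fake_labels)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: produce images the discriminator accepts as real.
    # Each round of updates pushes the fakes closer to the real object.
    g_loss = loss_fn(discriminator(fake_images), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# train_step(torch.rand(32, image_dim) * 2 - 1)  # one update on a toy batch
```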

So what is the issue with security, you may ask? Let's take the same case of the images we spoke about.

An adversarial use would be modifying the image in order to trigger a desired response from a deep neural network.

The difference between the real image and the fabricated one is far too small for the human eye to pick up.

A trained DNN, however, does pick up the difference, and its classification of the image may be totally different from what it is actually supposed to be. With that, the attacker has accomplished his goal.
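
As a rough sketch of how such an attack can be mounted, here is the Fast Gradient Sign Method (FGSM), a well-known way to craft imperceptible perturbations. Choosing FGSM is an assumption for illustration; the article does not name a specific method, and the model and epsilon below are placeholders.

```python
# A sketch of an evasion attack using the Fast Gradient Sign Method
# (FGSM). FGSM is an assumed, representative technique; the article does
# not name one. The epsilon bound keeps the change invisible to humans.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` perturbed to raise the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Nudge each pixel by +/- epsilon in the direction that increases
    # the loss; the result looks identical but can flip the prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # stay in valid pixel range

# Hypothetical usage, assuming a trained `classifier` and a batch (x, y):
# x_adv = fgsm_attack(classifier, x, y)
# classifier(x_adv).argmax(dim=1) may now disagree with the clean prediction
```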

Will Adversarial AI take over?

Using AI, users with malicious intent will deploy new types of attacks extremely efficiently.

This is a result of the exponential increase in available data. Attribution will also be extremely challenging with this tactic in use.

Attacks like these will also become a lot more affordable, which benefits the hackers greatly.

An attacker armed with the right AI system can easily perform functions at a scale that would be impossible for humans, given the technical expertise and brain power that this would otherwise require.

AI Weaponization!

Why are these adversarial AI attacks so different? The intent is the same – malicious. But this is achieved at a greater speed and scale.

Today, AI is not fully accessible to cybercriminals, and even so, AI's weaponization is growing extremely quickly.

The weaponization of AI can increase the volume of attacks, the number of attack variations and more. Setting aside the crazy speed and scale, the tactics used today are pretty much the same.

A clear example of AI being turned is Microsoft's AI Twitter bot Tay, which was designed to converse with users and mimic human behaviour in real time. It was corrupted rather quickly and used to post insensitive and racist tweets.

It was shut down 16 hours into its launch.

Staying Safe from Adversarial AI

Now that the threats are out there, how do organizations defend themselves against them?

IBM has released its Adversarial Robustness Toolbox (ART), which helps defend DNNs against adversarial attacks.

It allows researchers and developers to further inspect their DNNs and measure their robustness.
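
As a hedged sketch of what that inspection might look like, the snippet below wraps a toy PyTorch classifier with ART, generates FGSM adversarial examples, and compares clean versus adversarial accuracy. The model, data and attack strength are placeholder assumptions, not taken from the article.

```python
# A sketch of probing a model with ART's evasion-attack API. The tiny
# classifier and random data are placeholder assumptions; a real test
# would use your trained model and held-out data.
import numpy as np
import torch
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Placeholder model and data (assumed); normally the model is trained first.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x_test = np.random.rand(16, 1, 28, 28).astype(np.float32)
y_test = np.random.randint(0, 10, size=16)

# Wrap the PyTorch model so ART's attacks and metrics can drive it.
classifier = PyTorchClassifier(
    model=model,
    loss=torch.nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Generate FGSM adversarial examples and compare accuracy before and after.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)

clean_acc = (classifier.predict(x_test).argmax(axis=1) == y_test).mean()
adv_acc = (classifier.predict(x_adv).argmax(axis=1) == y_test).mean()
print(f"clean accuracy: {clean_acc:.2%}, adversarial accuracy: {adv_acc:.2%}")
```

A model whose accuracy collapses on the adversarial set is a candidate for the hardening techniques the toolbox also ships, such as adversarial training.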

Intelligence sharing within the cybersecurity community is very important in order to build resilient defences.

The risks of adversarial AI threaten all sectors, private and public, across all domains. Building a secure future will only happen if efforts are combined and the key stakeholders work towards the same common goal.



Author: Abhimanyu Sundar
Abhimanyu is a sportsman and an avid reader with a massive interest in sports. He is passionate about digital marketing and loves discussions about Big Data.