Artificial intelligence is now an arms race. What if the bad guys win?

Image: Robot wars ... no one is safe in a world where AI can hack itself. (REUTERS/Marcos Brindicci)

Mark Hughes
President of security, BT

Unless you’ve had your head in the sand over the past few years, you’ll have heard about the unprecedented, and largely unexpected, advances in artificial intelligence (AI). Perhaps the most public example came in 2016, when Google’s DeepMind used an AI called AlphaGo to beat Lee Sedol, one of the world’s top Go players. But that’s far from the only instance of AI breaking new ground.

Today, it plays a role in voice recognition software — Siri, Alexa, Cortana and Google Assistant. It’s helping retailers predict what we want to buy. It’s even organising our email accounts by sorting the messages we want to see from those we don’t.

Meanwhile, in the world of business, machine learning (the branch of AI focused on algorithms that can learn from, and make predictions based on, data) is pushing the boundaries of what computers can do. As a result, we’re seeing solutions such as Robotic Process Automation (RPA) and big data analytics driving efficiencies and boosting profits.
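
The idea is easy to see in miniature. Here’s a minimal sketch, not from the article, of learning from data to make a prediction, assuming scikit-learn is available; the shopper data is invented purely for illustration.

```python
# A toy "predict what we want to buy" model: learn from past shopper
# behaviour, then predict for a new shopper. All data here is invented.
from sklearn.linear_model import LogisticRegression

# Features per shopper: [items_viewed, minutes_on_site] -> bought (1) or not (0)
X = [[2, 1], [15, 12], [3, 2], [20, 18], [1, 1], [12, 9]]
y = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)                    # learn from historical data

print(model.predict([[18, 14]]))   # predict for a new shopper -> [1]
```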

Overall, AI is doing a fantastic job of transforming the world for the better.

The dangers inherent in AI

But what about the other side of the coin? What negative impact could AI have? It’s clear that AI, like any technology, could be used for malicious ends. Adversarial AI, where inputs are carefully crafted to trick AI systems into misclassifying data, has already been demonstrated. It could, for example, cause a vision system that should recognise a red traffic light to perceive a green one instead, with potentially disastrous ramifications for an autonomous vehicle.
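
The article doesn’t name a technique, but the fast gradient sign method (FGSM) is one well-known way such inputs are crafted. Below is a minimal sketch assuming PyTorch; `model`, `image` and `true_label` are placeholders for any trained image classifier and a correctly labelled input.

```python
# FGSM: nudge each pixel in the direction that most increases the
# model's loss, producing a near-identical image that gets misclassified.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.03):
    """Return a slightly perturbed copy of `image` that the model is
    more likely to misclassify. `epsilon` bounds the per-pixel change."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()  # tiny, targeted nudge
    return adversarial.clamp(0, 1).detach()            # keep valid pixel range
```

The perturbation is small enough that a human still sees a red light, but the gradient-guided noise can be enough to flip the model’s answer.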

The Adversarial AI scenario is an example of AI getting hacked. But let’s take it further: what if AI itself is doing the hacking? That’s not a worst-case scenario; it’s a likelihood.

Cyber criminals are all but sure to get their hands on AI tools, since many are already widely available as open-source software; OpenAI and Onyx are two names that immediately come to mind.

This highlights the need to ensure that AI systems – particularly those used in mission-critical settings – are resilient to such attacks.

A digital arms race

We’re left with a situation where the security industry and cyber criminals (be they organised, state-sponsored or simply lone hackers) are engaged in an escalating arms race. So-called black hats are developing AI to break into systems and cause havoc, while the white hats research ways in which an AI can defend networks against its own kind.

Here’s where we get to the moral question: should we be using AI to these ends? As a technology, we’re only beginning to understand its potential. Theoretically, AI could grow so intelligent that it becomes something completely beyond our control.

That thought makes the idea of an AI arms race sound particularly dangerous. Thankfully, intelligent people — Elon Musk and Stephen Hawking included — are thinking carefully about this topic, and I’m confident that they’ll come up with the necessary safeguards. Plus, companies such as Google and Microsoft have already stated that they feel the opportunities outweigh the risks.

Those opportunities are worth noting. There’s already an abundance of positive developments associated with AI and cybersecurity. AI, for example, can be used to augment (rather than replace) humans in the security space, improving predictive threat monitoring, dynamic response to cyber attacks, secure software development and security training. These are all tools and processes that will help the white hats stay a step ahead.
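
The article doesn’t spell out how such monitoring works, but unsupervised anomaly detection is one common building block. Here’s an illustrative sketch using scikit-learn’s IsolationForest on invented network-flow features; none of these names or numbers come from the article.

```python
# Illustrative predictive threat monitoring: fit an anomaly detector on
# normal traffic, then flag connections that don't fit the pattern.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per connection: [bytes_sent, duration_s, failed_logins]
normal_traffic = rng.normal(loc=[500, 2.0, 0.0],
                            scale=[100, 0.5, 0.2],
                            size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A huge transfer with repeated failed logins should stand out.
suspicious = np.array([[50_000.0, 30.0, 12.0]])
print(detector.predict(suspicious))  # -1 means flagged as an anomaly
```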

The question we’re left with, however, is this: how will the AI arms race end? Well, one side will win, and there’s a chance that it might not be the good guys. Let that sink in for a minute.
