
Experts Warn on Malicious Use of Artificial Intelligence



The world must prepare for potential malicious use of artificial intelligence (AI) by rogue states, criminals and terrorists, according to a report by a group of 26 security experts.

Forecasting rapid growth in cybercrime and the misuse of drones over the next decade, as well as an unprecedented rise in the use of bots to manipulate everything from elections to the news agenda and social media, the report calls for governments and corporations worldwide to address the danger inherent in the myriad applications of AI.

The report – The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation – also recommends interventions to mitigate the threats posed by the malicious use of AI.

The report says that AI has many positive applications, but it is a dual-use technology and AI researchers and engineers should be proactive about the potential for its misuse. Policymakers and technical researchers need to work together now to understand and prepare for the malicious use of AI, according to the authors.

They advise that best practices can and should be learned from disciplines with a longer history of handling dual use risks, such as computer security, and that the range of stakeholders engaging with preventing and mitigating the risks of malicious use of AI should be expanded.

The co-authors come from a range of organisations and disciplines, including Oxford University’s Future of Humanity Institute; Cambridge University’s Centre for the Study of Existential Risk; OpenAI, a leading non-profit AI research company; the Electronic Frontier Foundation, an international non-profit digital rights group; and the Center for a New American Security, a U.S.-based bipartisan national security think tank, among other organisations.

The 100-page report identifies three security domains – digital, physical, and political – as particularly relevant to the malicious use of AI. It suggests that AI will disrupt the trade-off between scale and efficiency, allowing large-scale, finely targeted and highly efficient attacks.

The authors expect novel cyber attacks such as automated hacking, speech synthesis used to impersonate targets, finely targeted spam emails built from information scraped from social media, and attacks that exploit the vulnerabilities of AI systems themselves (e.g. through adversarial examples and data poisoning).
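To make that last point concrete, the sketch below illustrates the basic idea of an adversarial example using the fast gradient sign method (FGSM) against a toy logistic-regression classifier. The model, the input, and the perturbation budget are illustrative assumptions for this sketch and are not taken from the report.

# Minimal sketch of an adversarial example via FGSM against a toy
# logistic-regression classifier. Everything here (weights, input, epsilon)
# is an illustrative assumption, not material from the report.
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" model: weights and bias for a binary classifier over 10 features.
w = rng.normal(size=10)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # Probability of class 1 under the toy model.
    return sigmoid(x @ w + b)

# An input the model assigns to class 1 (prediction well above 0.5).
x = 0.3 * w
y_true = 1.0

# Gradient of the binary cross-entropy loss with respect to the *input*.
# For logistic regression this is (p - y) * w.
p = predict(x)
grad_x = (p - y_true) * w

# FGSM: nudge every feature a small step in the direction that increases the loss.
epsilon = 0.5  # illustrative perturbation budget
x_adv = x + epsilon * np.sign(grad_x)

print(f"original prediction:    {predict(x):.3f}")
print(f"adversarial prediction: {predict(x_adv):.3f}")

Running the sketch shows the prediction dropping sharply after a small, structured perturbation, which is the core weakness that adversarial-example and data-poisoning attacks exploit in far larger models.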

 

Likewise, the proliferation of drones and cyber-physical systems will allow attackers to deploy or repurpose such systems for harmful ends, such as crashing fleets of autonomous vehicles, turning commercial drones into face-targeting missiles or holding critical infrastructure to ransom. The rise of autonomous weapons systems on the battlefield risks the loss of meaningful human control and presents tempting targets for attack.

In the political sphere, detailed analytics, targeted propaganda, and cheap, highly believable fake videos present powerful tools for manipulating public opinion on previously unimaginable scales. The ability to aggregate, analyse and act on citizens’ information at scale using AI could enable new levels of surveillance and invasion of privacy, and threatens to radically shift the balance of power between individuals, corporations and states.

How to Mitigate

To mitigate such risks, the authors explore several interventions to reduce threats associated with AI misuse. They include rethinking cyber-security, exploring different models of openness in information sharing, promoting a culture of responsibility, and seeking both institutional and technological solutions to tip the balance in favour of those defending against attacks.

The report also “games” several scenarios where AI might be maliciously used as examples of the potential threats we may face in the coming decade.

While the design and use of dangerous AI systems by malicious actors have been highlighted in high-profile settings (e.g. in the U.S. Congress and the White House separately), the intersection of AI and misuse writ large had not been analysed comprehensively – until now.

“Artificial intelligence is a game changer and this report has imagined what the world could look like in the next five to 10 years,” said Dr. Seán Ó hÉigeartaigh, executive director of Cambridge University’s Centre for the Study of Existential Risk and one of the co-authors. “We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems – because the risks are real. There are choices that we need to make now, and our report is a call-to-action for governments, institutions and individuals across the globe.”

He said that for many decades hype outstripped fact where AI and machine learning were concerned, but that is no longer the case. This report looks at the “practices that just don’t work anymore – and suggests broad approaches that might help,” he said, citing, for example, how to make software and hardware less hackable, and what types of laws and international regulations might work in tandem with this.

“AI will alter the landscape of risk for citizens, organisations and states – whether it’s criminals training machines to hack or ‘phish’ at human levels of performance or privacy-eliminating surveillance, profiling and repression – the full range of impacts on security is vast,” said Miles Brundage, research fellow at Oxford University’s Future of Humanity Institute.

“It is often the case that AI systems don’t merely reach human levels of performance but significantly surpass it. It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labour.”

source
