Global security experts have highlighted the risks artificial intelligence (AI) could pose if used maliciously by criminals and terrorists.
A total of 26 experts have jointly compiled a report, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation”, analysing how malicious use of AI could affect the world over the next 10 years.
The 100-page report forecasts rapid growth in cyber-crime and the misuse of drones over the next decade, along with an increase in the use of ‘bots’ for manipulation.
The report identifies three security domains – digital, physical and political – as particularly relevant to the malicious use of AI, and urges governments and corporations worldwide to address the dangers inherent in its myriad applications.
It also recommends measures to mitigate the threats posed by malicious use of the technology, saying policy-makers and technical researchers should work together to prepare for such incidents.
Although AI has many positive applications, it is a dual-use technology, and AI researchers and engineers should be mindful of and proactive about the potential for its misuse, the report said.
The report says: “It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification.”
The report said best practices should be learned from disciplines with a longer history of handling dual-use risks, such as computer security. It also called for an expansion of the range of stakeholders involved in preventing and mitigating the risks of malicious use of AI.
The authors suggested several interventions to reduce the threats associated with AI misuse, including rethinking cyber-security, exploring different models of openness in information sharing, promoting a culture of responsibility, and seeking institutional and technological solutions to aid those defending against attacks.
The authors are from various organisations and disciplines, including Oxford University’s Future of Humanity Institute; Cambridge University’s Centre for the Study of Existential Risk; OpenAI, a leading non-profit AI research company; the Electronic Frontier Foundation, an international non-profit digital rights group; and the Center for a New American Security, a US-based bipartisan national security think-tank.