A NEW report by experts, including some from Oxford, has said more effort must go into keeping people safe as artificial intelligence improves.

A total of 26 of the world's leading experts said that both physical and digital attacks could threaten humanity's safety.

The report adds that AI could empower criminals, terrorists and rogue states – especially if researchers and policymakers do not work together to neutralise the threat as soon as possible.

Drones could be used to manipulate news and elections, the experts report.

The report, titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, finds that attacks could come in forms that are difficult to imagine, including through speech synthesis.

Miles Brundage, research fellow at Oxford University’s Future of Humanity Institute, said: “AI will alter the landscape of risk for citizens, organisations and states – whether it’s criminals training machines to hack or ‘phish’ at human levels of performance or privacy-eliminating surveillance, profiling and repression – the full range of impacts on security is vast.

“It is often the case that AI systems don’t merely reach human levels of performance but significantly surpass it. It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labour.”

Seán Ó hÉigeartaigh, Executive Director of Cambridge University’s Centre for the Study of Existential Risk and one of the report’s co-authors, added: "We live in a world that could become fraught with day-to-day hazards from the misuse of AI.

"We need to take ownership of the problems – because the risks are real."

He added: "For many decades hype outstripped fact in terms of AI and machine learning. No longer."