A team of academics, private sector stakeholders, and policy experts has recently pinpointed the 18 most serious AI threats we need to worry about over the next 15 years. The group identified deepfakes, a technology that already exists and is spreading, as the highest-level AI threat.
This deepfake technology will harm people by eroding their trust in what they see and hear, and in society itself. Shane Johnson, Director of the Dawes Centre for Future Crime at UCL, said this threat will only grow in sophistication and in its entanglement with our daily lives.
Study by experts to determine potential AI threat
To identify potential threats, the researchers gathered a team of 30 experts from various fields of study. These experts were divided into groups of four to six people and given a list of potential AI-enabled crimes. These crimes could be physical, such as attacks using autonomous drone vehicles, or digital, such as phishing schemes.
The team then considered four main features of the attacks:
- Harm (physical, mental or social)
- Profitability
- Achievability
- Defeatability
Because these factors interact and cannot be cleanly separated, the experts were asked to rate each criterion separately for every threat. They then combined the scores and ranked the threats to determine which AI-enabled attacks would be most harmful over the next 15 years.
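The rating-and-ranking step described above can be sketched in a few lines of code. Note that the threat names, the individual scores, and the unweighted-sum aggregation below are all illustrative assumptions for the sketch, not figures from the study itself:

```python
# Hypothetical sketch: each threat gets a 1-5 rating on the four criteria,
# and threats are ranked by a combined score. All values are illustrative.

threats = {
    "deep fakes":             {"harm": 5, "profitability": 4, "achievability": 5, "defeatability": 5},
    "driverless-car attacks": {"harm": 5, "profitability": 2, "achievability": 3, "defeatability": 4},
    "burglar robots":         {"harm": 2, "profitability": 2, "achievability": 3, "defeatability": 1},
}

def total_score(scores: dict) -> int:
    """Combine the four criteria; a simple unweighted sum is assumed here.
    The study's actual aggregation method may differ."""
    return sum(scores.values())

# Sort threats from highest to lowest combined score.
ranked = sorted(threats, key=lambda name: total_score(threats[name]), reverse=True)
print(ranked)
```

Under these made-up scores, deepfakes come out on top, matching the study's headline finding; a real replication would use the experts' actual ratings and weighting.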
Results of the study
After comparing the 18 different AI threats, the group concluded that video and audio manipulation in the form of deepfakes was the biggest threat to society. People tend to trust their own eyes and ears, so audio and video evidence carries great credibility. Recent advances in deep learning have made it possible to create convincing fake content at scale.
The group stated that potential impacts range from scams that defraud the elderly by impersonating family members to videos designed to sow distrust in public and government figures. It also noted that these attacks are hard to detect, which makes them difficult to stop.
Other top threats identified include autonomous cars used as remote weapons and AI-authored fake news. Surprisingly, the group judged burglar robots (small robots used to steal keys and assist human burglars) to be among the lowest threats.
The Work Ahead
Now that we are aware of the potential threats that lie ahead, we have work to do. The threat isn't the robots themselves but how we can use them to harm each other and society. Using the results of this study, the best we can do is get ahead of these dangers by building a community and spreading information to deal with this robotic apocalypse.