The majority of cybersecurity experts believe AI will be weaponised for cyberattacks within the next 12 months, and that the shutdown of dark web markets will not reduce malware activity.
Cylance yesterday posted the results of its survey of Black Hat USA 2017 attendees: 62 percent of the infosec experts polled believe cyberattacks will become far more advanced over the course of the next year due to artificial intelligence.
Interestingly, 32 percent said there was no chance of AI being used for attacks in the next 12 months, while the remaining six percent were uncertain.
Following an increasing pace of high-profile and devastating cyberattacks in recent years, law enforcement agencies have been cracking down on dark web marketplaces where strains of malware are often sold. Just last month, two dark web marketplaces known as AlphaBay and Hansa were seized following an international operation between Europol, the FBI, the U.S. Drug Enforcement Agency, and the Dutch National Police.
Despite these closures, 80 percent of the surveyed cybersecurity experts believe the takedowns will not slow down cyberattacks. Seven percent were uncertain, which leaves just 13 percent believing the closures will have an impact.
As for who poses the biggest cybersecurity threat to the United States, Russia came out number one (34%), which is perhaps no surprise considering the ongoing investigations into allegations of the nation's involvement in the U.S. presidential election. This was closely followed by organised cybercriminals (33%), then China (20%), North Korea (11%), and Iran (2%).
On a more positive note, while AI poses a threat to cybersecurity, it's also improving defences, enabling organisations to respond more proactively when attacks occur and limit the potential damage.
“Based on our findings, it is clear that infosec professionals are worried about a mix of advanced threats and negligence on the part of their organizations, with little consensus with regards to which groups (nation-states or general cybercriminals) pose the biggest threat to our security,” wrote the Cylance team in a blog post. “As such, a combination of advanced defensive solutions and general education initiatives is needed, in order to ensure we begin moving towards a more secure future.”
Are you concerned about AI being weaponised? Share your thoughts in the comments.