It’s no secret that the advent of generative AI, widely celebrated as a breakthrough across key industries, has opened new avenues of threat unlike anything most of us have seen before. According to a new cybersecurity survey, eight out of ten people think generative AI will play a larger part in future cyberattacks, and four out of ten expect a major increase in such attacks over the next five years.
With the battle lines drawn — one side using AI to strengthen businesses, the other using it to breach defenses and commit crimes — risk managers must ensure their companies stay ahead in the AI arms race. Two experts in the field, MSIG Asia’s Andrew Taylor and Coalition’s Leeann Nicolo, recently discussed this new landscape, and what the future may hold as AI becomes a more pervasive fixture across businesses, in an interview with Insurance Business’ Corporate Risk channel.
Attackers are smarter than ever, and the evidence is visible firsthand. “We have seen that,” Nicolo remarked. “Let me qualify this by noting that we won’t be able to prove beyond a reasonable doubt that AI is responsible for the observable shifts. We believe, however, that the phenomena we observe are the work of artificial intelligence.”
Nicolo put it down to a few elements, the most common being a general improvement in communication. Only a few years ago, she said, most threat actors lacked a solid grasp of leverage, did not speak English fluently, and were ambiguous about the data they had exfiltrated from clients.
“Now we have threat actors communicating extremely clearly and very effectively,” Nicolo stated. “Often, they produce the legal obligations that the client may face, and it’s clear as a bell that there is some tool they are using to ingest and spit out that information, given the gap between when they take the data and how long it would take them to read, ingest, and understand those obligations.”
The team therefore believes AI is being exploited to ingest stolen data and pressure clients, particularly over legal exposure. Even before that stage, Nicolo suspects AI is already being used to generate phishing emails. There has been a marked improvement in their quality, with modern spammers able to create highly targeted, personalized campaigns written in more sophisticated language. “Without any analysis, some of the phishing emails my team has seen don’t even look like phishing emails,” she continued.
For Taylor, AI is one of those developments that will only loom larger among future hazards in the cyber business. While 5G, telecommunications, and perhaps quantum computing are also worth watching, AI presents a particularly serious threat to cybersecurity because of its ability to speed the spread of malware.
In addition, “we get this trade-off” when using AI as a defense mechanism, Taylor said. “It’s not entirely positive, but it has its downsides. The good guys are employing it to counter and defend against these attacks. In my opinion, businesses in the region need to be mindful of the dangers posed by artificial intelligence, because it may make it simpler or more automated for cybercriminals to plant malware or create phishing emails designed to deceive us into clicking on hazardous links. On the other hand, defensively, there are businesses employing AI to better protect against malware by identifying dangerous emails.”
Unfortunately, AI is not solely a weapon for good; bad actors can use it to enrich themselves at the expense of law-abiding enterprises. But the cyber sector and cyber insurance can help businesses deal with the financial impact of being attacked by cybercriminals, he said.
Despite the risks, artificial intelligence is still worth investigating.
The widespread availability and growing acceptance of AI cannot be undone — Pandora’s box is open, regardless of the consequences. Both experts agree, with Taylor adding that if defenders stop now, the implications will be severe, since threat actors will continue to use the technology however they see fit.
“The reality is that we have no choice but to face the advent of AI in our society. It’s in current use. I fear we are falling behind if we don’t study it and figure out how to use it to our advantage. Should we keep looking at it? Personally, I believe we need to. In today’s digital age, we can’t afford to ignore the advances in technology,” Taylor said, emphasizing the importance of making the most of the opportunity and learning how to do so effectively.
There are ethical concerns around AI as well, among them the fact that these models carry biases inherited from the data they were trained on. “I think we’re all still trying to figure out what these biases are, where they come from, and what they do,” he stated.
As leader of the incident response team, Nicolo has found artificial intelligence invaluable for alerting clients to malicious activity and attack trends. Still, the technology is “not there yet,” she concedes, and more aggressive AI development is needed to protect global networks from intrusion.