AI’s pervasive presence in our lives continues to grow; the AI bubble shows no signs of slowing its current growth, and AI tools are rapidly being integrated into existing software and systems. This has come with an equally explosive growth of AI use in cyberattacks – both on the side of the attacker, the security team, and as a potential weak point.
AI-enabled spearphishing
We have touched on spearphishing and the potential of AI for misuse in this direction several times already, but the topic only grows in relevance. To quickly recap the salient points:
- Phishing is a cyberattack vector that tries to use fake emails, websites, or messages to trick people into sharing personal or financial data
- Spearphishing is phishing which has been personalized to the target in order to increase the believability and improve the success rate of the attack
- AI can be used in multiple parts of this process, including collating target data, drafting texts, or being used for creating deepfaked audio or video
It is an unfortunate reality that a really committed cyberattack will generally succeed; the question is always one of expended effort and resources, with state actors at one end of the spectrum and individual attackers making use of ransomware-as-a-service or similar approaches on the other. Fortunately, most enterprises do not have to worry about the extremes but face something more in the middle.
On the other hand, AI has reduced the amount of effort needed to pull together a successful attack. This is something that has outsize impactespecially on smaller organizations which cannot cope with the new resource-allocation disbalance.
AI as a defensive tool
AI’s primary new use in defending against cyberattacks is, predictably enough, in early detection of breaches. The main advantage of AI is its ability to analyze large volumes of data, including behavior, in turn allowing it to detect abnormalities. This in turn allows security teams to respond far faster than previously to cybersecurity breaches, ideally before any critical damage can be done.
Unfortunately, there are several issues complicating this point. Perhaps the largest is that security is still not receiving the attention it deserves and needs. What’s worse, even when security teams are given the budget and resources they deserve, issues still appear; VikingCloud’s report suggests as many as 48% of cybersecurity higher-ups have failed to report a cyberattack or security breach this year. And what’s worse, a new vector is increasingly being explored by attackers: your AI tools themselves.
AI as a primary target
According to Gartner, 32% of companies reported attacks targeting AI applications, with a further 29% of respondents facing attacks on enterprise Generative AI (GenAI) application infrastructure. Most of this took the form of adversarial prompting, trying to coax AI tools into generating undesirable outputs.
What is perhaps worse is attacks trying to leverage AI tools to access other unrelated systems. An example which has come into prominence recently is the model context protocol, an open-source system used by a plurality of AI tools on the market. With AI tools needing access to internal data in order to be useful on an enterprise level, they can provide attackers an unintended backdoor into your company systems.
As the year enters Q3, the main questions remain if the growing pains of AI are worth the potential gain – and if so, how to build in better safeguards without stifling the innovation and growth this field needs.
Your Data In Your Hands – With TECH-ARROW