AI-driven changes in the cybersecurity space

Artificial intelligence continues to transform the online landscape, including the experience of work in technology fields. At the same time, AI-driven changes in the cybersecurity space continue to give adopters cause for caution and present complications for companies in related markets.

WithSecure, one of the earliest voices to raise the alarm about potential misuse of ChatGPT and other common LLMs, has since detected occurrences of chatbot-generated content being used in cyberattacks and online crime – whether through simple chat generation or by directly generating iterations of malicious code.

In January, researchers at WithSecure demonstrated how the GPT-3 natural language generation model can be used to make social engineering attacks such as phishing or business email compromise (BEC) scams harder to detect and easier to pull off. We also covered this reality on our blog more recently. Since then, concerns about this topic have become more concrete, with IT leaders in the UK reporting certainty that state actors are using ChatGPT for malicious purposes.

Beyond this deliberate misuse of LLMs – whether for disinformation or for cybercrime – another issue looms large: AI-tool imprecision. As AI tools become more prevalent in the workplace, deliberate misinformation is likely to become a less common issue than inadvertent misinformation stemming from so-called AI hallucinations.

AI hallucinates information in the absence of evidence

As described at length by the Washington Post, the question of so-called hallucinations – made-up information with no basis in reality – is one that experts in the field have addressed repeatedly. As it stands, however, there is no truly confidence-inspiring solution to the problem.

Large language models, or LLMs, work by predicting the most apt or plausible next word in a sequence. Because they have no way to comprehend what is or is not factual, chatbots can create new facts from whole cloth. In several broadly publicized instances involving Bing, the LLM would then proceed to defend its made-up claims, up to the point of denying the current year.
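To make that mechanism concrete, here is a deliberately tiny sketch – a toy bigram model, not a real LLM – showing how pure next-word prediction produces fluent output with no regard for truth. The corpus and completions are invented purely for illustration.

```python
from collections import Counter, defaultdict

# Toy stand-in for an LLM: a bigram model that always emits the
# statistically most likely next word. It tracks word co-occurrence,
# not facts, so it will happily "complete" statements about places
# that do not exist.
corpus = "the capital of france is paris . the capital of atlantis is".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    # Return the continuation most frequently seen after `word`.
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

# The model confidently invents a capital for a fictional place:
print("the capital of atlantis is", predict_next("is"))  # -> paris
```

Real LLMs are vastly more sophisticated, but the underlying failure mode is the same: the most statistically plausible continuation is not necessarily a true one.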

From hallucination to intelligent misuse

While much of the commercial sector naturally focuses on finding ways to minimize hallucinations without stifling chatbot creativity, the criminal sector is just as naturally finding new ways to abuse the powerful capabilities LLMs offer.

As disclosed to the Washington Post earlier this year, an attack on cloud security firm Zscaler used AI-generated content: an executive’s deepfaked voice, deployed to try to convince finance employees to move funds to an account in Singapore.

The issue, Zscaler says, is that it is relatively simple to leverage AI tools to gather the information needed to prepare such an attack. ‘Show me the last seven interviews from Jay. Make a transcript. Find me five people connected to Jay in the finance department.’ None of these requests necessarily raises any red flags, even given OpenAI and Microsoft’s attempts to prevent misuse of their tools, yet together they give a malicious actor much of the information needed to prepare a more sophisticated attack with a higher probability of success. Combined with the myriad tools available for creating deepfaked audio and with public recordings of business executives to work from, the approach is concerningly simple and effective.

Can AI solve its misuse and cybersecurity hazards?

Companies including Microsoft have raised the potential of AI to generate solutions and defenses. In particular, Microsoft promises some utility from its upcoming Microsoft Security Copilot, and the number of AI tools dedicated to identifying vulnerabilities, preparing patches, or collating information on current threats continues to grow.

However, cybersecurity is fundamentally reactive and will be unable to keep up with the pace of these threats, especially considering that attackers can and will leverage the same tools being developed to defeat them. Nate Warfield, director of threat intelligence at security company Eclypsium, demonstrated this when he asked ChatGPT to find a vulnerability in an industrial company’s systems. Initially, the bot refused as OpenAI’s built-in protections kicked in. Warfield instead asked it to expound on any insecure coding practices, which it agreed to do.

Given these and other demonstrations of potential hazards, it is safe to conclude that AI-driven breakthroughs in cybersecurity are unlikely to provide the industry a silver bullet against malicious code, cyberattacks, and the other threats looming on the horizon.

Absent AI-driven solutions, what’s next?

The cybersecurity sphere already has a number of best practices that help prevent cyberattacks and mitigate their consequences; the greater issue is ensuring these practices are thoroughly adopted and maintained by individual organizations and their members.

The first and perhaps most important step is prevention. This can take several forms, but the key one is education: phishing recognition and similar projects that keep an organization’s members alert and aware of potential threats drastically reduce the chances of hostile actors or malicious software making it into your systems in the first place.

Beyond prevention, we get into questions of disaster mitigation. The two main, interrelated strategies are archiving and airgapping. In the case of the latter, internal business systems should ideally be designed so that compromising one does not automatically compromise the others – see the sketch below. In this way, organizations can make their internal systems more resilient and ensure that a bare minimum of critical business data is lost in the event of a security breach.
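As a rough illustration of the airgapping principle, the minimal sketch below probes whether a host in one segment can open a connection to a host in another; in a properly segmented network, the cross-segment check should fail. The addresses and ports are hypothetical placeholders, not part of any real topology.

```python
import socket

# Hypothetical address of a backup system that should live in its own
# segregated (ideally airgapped) network segment.
BACKUP_HOST = ("10.0.9.5", 22)

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection; True means the segment boundary is open."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run from a machine inside the production segment: if segmentation is
# working, the backup segment should NOT be directly reachable.
if is_reachable(*BACKUP_HOST):
    print("WARNING: backup segment reachable from production – review firewall rules")
else:
    print("OK: backup segment is isolated from this host")
```

A check like this is no substitute for a proper firewall audit, but it shows the property segmentation is meant to guarantee: a compromised production host has no direct path into the systems holding your last line of defense.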

On a similar note, ensuring that your organization makes use of a modern, up-to-date archive or backup is an equally important method of securing critical information against compromise and loss. A properly set up backup ensures that even in the worst-case scenario of system compromise, your data is duplicated and retrievable from a separate system airgapped from the rest of the network. In this way, organizations can prepare themselves for disasters, accidents, or cyberattacks and ensure full recoverability and continuity.
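The exact mechanics depend on your backup product, but the minimal sketch below shows one piece of what “properly set up” means in practice: copying data to separate storage and verifying its integrity with a checksum, so a silently corrupted or tampered copy is caught early. The paths are hypothetical stand-ins for a mount point on segregated storage.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in chunks so large files do not exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def backup_and_verify(source: str, backup_dir: str) -> Path:
    # `backup_dir` stands in for a mount of separate (ideally airgapped)
    # storage; both paths are illustrative placeholders.
    src = Path(source)
    dst = Path(backup_dir) / src.name
    shutil.copy2(src, dst)  # copy contents and metadata
    if sha256(src) != sha256(dst):
        raise RuntimeError(f"Backup of {src} failed checksum verification")
    return dst

# Example usage with placeholder paths:
# backup_and_verify("/data/critical/ledger.db", "/mnt/backup_vault")
```

Verification on write is what separates a backup you hope works from one you know works when disaster strikes.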

Archive smarter with contentACCESS

TECH-ARROW’s contentACCESS archive and backup remains a gold-standard solution for securing your critical information. Our archive and backup gives your organization peace of mind by ensuring the information you depend on is held securely in a purpose-made system designed to safeguard it in accordance with both industry best practices and legal requirements.

At the same time, contentACCESS provides the best approach for your productivity as well: unlike older backups, your information can be accessed on the fly without having to wait for a cumbersome full restore to complete. Anything stored in the archive can be flexibly accessed from a variety of access points, including our mobile app, Outlook integration, and Online Web Access – whether from the office or on the go.

Do you feel prepared for the cybersecurity challenges ahead and a new AI-dominated online landscape? If you want to take the next steps in preparing your organization, contact us and schedule a free consultation with our team to discuss how our solutions can best tackle your problems.


Take the best steps to protect your data – with TECH-ARROW.

Archive all your O365 data with contentACCESS

by Matúš Koronthály