Unlocking the AI Enigma

AI is revolutionising the way that organisations approach cybersecurity, but criminals are deploying the same technologies for their own ends.

In the summer of 2019, the Wall Street Journal reported what was, at the time, an unusual case: a UK-based energy company executive was tricked into sending €220,000 to a supplier in Hungary.

It was a scam. The executive thought he had received an instruction from the CEO of his company’s German parent telling him to transfer the funds urgently. In reality, the CEO’s voice had been reproduced using an AI program.

To all intents and purposes, this was the kind of “whaling” scam that all employees should be warned to look out for. Usually in this kind of fraud, the victim receives a written instruction to transfer funds, sent from an apparently genuine email address. In this case, the fraud was supercharged by AI.

It’s an example of how the rapid development of AI technology is set to reshape the security landscape. Artificial intelligence will undoubtedly enhance security by enabling organisations to identify actual or potential breaches more quickly. But at the same time, there is an arms race underway. The sophisticated criminal gangs who are responsible for most of the attacks on major businesses and public sector bodies are developing their own AI capabilities.

Powerful Weapons

On the plus side of the equation, AI is providing security teams with new and powerful weapons to deploy in their fight against cyber criminals.

For instance, a recent Forbes article noted that organisations often struggle to keep pace with the “scale and speed” at which cyber attacks can occur. That’s particularly true of large businesses running extensive and complex IT estates. It’s almost impossible to monitor such systems manually.

So the good news is that AI and analytics tools – with their ability to collect, process and interpret huge amounts of data – are making it much easier to identify and deal with threats.

What we have been seeing is the increasing deployment of Security Information and Event Management (SIEM) tools that are capable of monitoring network activity and – importantly – user behaviour, flagging the anomalies and patterns that could indicate an attack is taking place.
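To make the idea concrete, here is a minimal sketch of behaviour-based anomaly detection. It is illustrative rather than a production SIEM: the features (login hour, megabytes transferred, failed authentication attempts) and the sample data are invented, and it assumes the open-source scikit-learn library is available.

```python
# Minimal sketch of behaviour-based anomaly detection (assumes scikit-learn).
# Each row is one login event with illustrative features:
# [hour of day, MB transferred, failed authentication attempts]
import numpy as np
from sklearn.ensemble import IsolationForest

# Baseline of "normal" activity used to train the detector (hypothetical data)
baseline_events = np.array([
    [9, 120, 0], [10, 80, 1], [14, 200, 0], [11, 150, 0], [16, 95, 0],
    [9, 110, 0], [13, 175, 1], [15, 60, 0], [10, 140, 0], [12, 90, 0],
])

detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(baseline_events)

# A new event: a 3am login, a very large transfer and repeated failed attempts
new_event = np.array([[3, 4000, 6]])
if detector.predict(new_event)[0] == -1:
    print("Anomaly flagged for analyst review:", new_event[0])
```

In a real deployment the same pattern runs over millions of events a day, with far richer features drawn from logs across the whole estate.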

But it’s not just a case of ringing alarm bells. In addition to their monitoring functions, AI tools can also execute a response. That could mean preventing files from being tampered with or deleted. Likewise, suspicious users can be knocked off the network.
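What that can look like in practice is a simple playbook that maps alert types to containment actions. In the sketch below the alert names and the actions are placeholders – a real system would call out to an EDR or SOAR platform rather than printing messages.

```python
# Illustrative response playbook: alert types mapped to containment actions.
# The action bodies are placeholders; a real system would call an EDR/SOAR API.

def lock_file(path: str) -> None:
    print(f"Making {path} read-only and taking a snapshot")

def disconnect_user(user: str) -> None:
    print(f"Revoking sessions and network access for {user}")

PLAYBOOK = {
    "ransomware_file_activity": lambda alert: lock_file(alert["path"]),
    "suspicious_login": lambda alert: disconnect_user(alert["user"]),
}

def respond(alert: dict) -> None:
    action = PLAYBOOK.get(alert["type"])
    if action:
        action(alert)
    else:
        print("No automated action defined; escalating to an analyst")

respond({"type": "suspicious_login", "user": "jdoe"})
respond({"type": "ransomware_file_activity", "path": "/finance/q3.xlsx"})
```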

Crucially, machine learning capabilities enable SIEMs and other threat prevention platforms to adapt to changing circumstances. Cybersecurity threats are a moving target: criminals explore new vulnerabilities and deploy previously unseen tools and techniques. Once a system has been trained, machine learning allows it to keep learning in real time, progressively upgrading its own capabilities. One aspect of this learning process is the growing ability of SIEMs to identify and filter out false positives. This matters because human intervention will always be required when a network is attacked, and reducing unnecessary alarms takes pressure off analysts.
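One way this adaptation can work in practice is to feed analyst verdicts back into a model that filters alerts before they reach the queue. The sketch below again assumes scikit-learn and invented features; each time an analyst marks an alert as genuine or as a false positive, the verdict is folded back into an incrementally trained classifier.

```python
# Sketch of learning from analyst feedback (assumes scikit-learn).
# Each alert is reduced to illustrative features: [severity score, events per minute]
import numpy as np
from sklearn.linear_model import SGDClassifier

alert_filter = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])  # 0 = false positive, 1 = genuine threat

# Initial batch of analyst-labelled alerts (hypothetical values)
X0 = np.array([[0.9, 300.0], [0.2, 5.0], [0.8, 250.0], [0.1, 2.0]])
y0 = np.array([1, 0, 1, 0])
alert_filter.partial_fit(X0, y0, classes=classes)

def record_verdict(features, verdict):
    """Fold a single analyst decision back into the model."""
    alert_filter.partial_fit(np.array([features]), np.array([verdict]))

# An alert the analyst judges to be a false positive; the model updates its weights
record_verdict([0.7, 8.0], 0)
print(alert_filter.predict(np.array([[0.7, 8.0]])))
```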

And as Bartosz Skwarczek points out in a Forbes post, AI tools can ultimately take the pressure off human operatives at a time when there are many unfilled vacancies in the sector.

The Counterattack

So is AI the silver bullet that will provide rock-solid protection? Sadly, no.

For one thing, cybercriminals are upping their own collective game. Cybersecurity Ventures estimates that damage from cybercrime could amount to $8 trillion per year by 2025. It’s a staggering sum and one that illustrates just how well-resourced the cybercrime industry actually is.

Just as legitimate organisations are using AI to detect threat patterns, criminals are using variations of the same technology to deceive the defenders. To take one example, attackers are crafting input data specifically designed to fool threat detection platforms – so-called adversarial inputs. In practice, this could mean malware that would normally be detected within a system being cloaked by AI-generated code.
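The principle can be shown with a deliberately crude toy detector that scores a payload by the proportion of “suspicious” bytes it contains. The byte values and threshold below are invented, and real evasion attacks are far more sophisticated, but the mechanics are the same: reshape the input until the model misclassifies it.

```python
# Toy illustration of evasion: padding a payload until a naive detector's
# score drops below its threshold. Detector logic and threshold are invented.
SUSPICIOUS = {0x90, 0xCC, 0xEB}   # bytes the toy detector treats as suspicious
THRESHOLD = 0.3

def detector_score(payload: bytes) -> float:
    """Fraction of bytes the toy detector considers suspicious."""
    return sum(b in SUSPICIOUS for b in payload) / len(payload)

malicious = bytes([0x90, 0x90, 0xEB, 0xCC, 0x41, 0x42])  # score ~0.67: flagged
padded = malicious + bytes([0x41]) * 20                   # benign padding appended

for name, sample in [("original", malicious), ("padded", padded)]:
    score = detector_score(sample)
    verdict = "blocked" if score >= THRESHOLD else "allowed"
    print(f"{name}: score={score:.2f} -> {verdict}")
```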

Then there is the increasing maturity of generative AI. According to McKinsey, one-third of major businesses are using the technology. There are dangers here. Generative AI is generally used to create content – pictures, video, marketing materials and so on – but in the wrong hands it can produce deepfakes. This could be a video of a CEO criticising customers to undermine a share price or – as we’ve seen – a voice message from a senior executive authorising a fraudulent payment. Businesses will have to be wary.

Defensive AI tools will do a lot of the heavy lifting, but their training and threat detection are not yet perfect. Human error can introduce faults into the algorithms, which in turn result in too many false positives or in real threats being missed. Equally, there are ethical and privacy concerns around the monitoring of customers and staff by imperfect systems, and about the quality of the algorithms used for decision-making.

Organisations must therefore be prepared to manage the risks associated with deploying AI. A governance structure should be put in place, along with a clear strategy. For instance, a company might begin with the specific goal of preventing malware or ransomware attacks. Once the tools are in place, it is vital that they – and the policies and processes around them – are continually fine-tuned. All solutions should comply with privacy and data protection laws.

The fight against cybercrime is a long-term battle in which both sides are constantly trying to get the better of each other. AI will revolutionise the work of CISOs and their teams, but continuous investment in tools, system development and training will be required.