Disclaimer: The views expressed in this blog do not promote, and are not directly connected to, any Legal & General Investment Management (LGIM) product or service. Views are from a range of LGIM investment professionals and do not necessarily reflect the views of LGIM. For investment professionals only.
Could you spot an AI-powered cyber scam?
As cybercriminals harness the power of AI to make their attacks more credible, what can be done to distinguish genuine communications from sophisticated scams?
Ever feel like you’re receiving more fishy emails and suspicious text messages than in the past? You’re not imagining things.
The number of cyberattacks globally is rising fast, with research suggesting companies saw a 75% increase in the third quarter of 2024 compared with the same period a year earlier.[1] Unsurprisingly, this has led to a surge in spending on cybersecurity, with two-thirds of chief information security officers reporting their budgets have increased amid the rising digital threat.[2]
Alongside this sharp and sustained rise in the number of attacks, virtual threats are also becoming more sophisticated. Artificial intelligence (AI) and generative AI (GenAI) are among the latest additions to the hacker’s toolkit, and they present significant challenges to existing defences against malicious actors.
The anatomy of a hack
To understand how AI is being harnessed by cybercriminals, let’s look at a recent attack targeting Gmail. Google estimates there are 2.5 billion Gmail accounts[3], giving this particular attack a vast pool of potential victims.
The attack has been analysed by a Microsoft* consultant, who shared his findings after being targeted.[4] As is typical of the advanced threats we’re increasingly seeing, the attack comprised several distinct elements, designed to lure the victim into believing they were engaged in a legitimate process initiated by a trusted company or organisation.
The scam began with a phishing attack: a notification to approve a Gmail account recovery attempt, followed by a phone call 40 minutes later. Exactly a week later, the same sequence of events repeated. Not only did the call appear to come from Google’s real support number, but the ‘person’ calling sounded convincing – the consultant described it as “an American voice, very polite and professional”.
It was only when the voice on the other end of the line repeated certain words a little too perfectly that the Microsoft consultant recognised what was actually going on: the caller’s phone number had been spoofed to appear legitimate, and the ‘caller’ was an AI-powered script intended to persuade the victim to cede control of their email account.
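How might a spoofed message be caught before it ever reaches a human? As a purely illustrative sketch – not drawn from the attack analysis cited above – the Python snippet below shows one basic defensive check a mail filter can apply: inspecting a message’s Authentication-Results header, where receiving mail servers record whether the sender passed SPF, DKIM and DMARC checks. The file name and the simple pass/fail logic are hypothetical.

```python
# Illustrative sketch only - not from the attack analysis cited above.
# One basic defensive check: inspect the Authentication-Results header
# (RFC 8601), where receiving mail servers record SPF/DKIM/DMARC verdicts.
from email import policy
from email.parser import BytesParser

def sender_authentication_failed(raw_message: bytes) -> bool:
    """Return True if any recorded SPF/DKIM/DMARC check failed."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_message)
    headers = msg.get_all("Authentication-Results") or []
    verdicts = " ".join(str(h) for h in headers).lower()
    return any(f"{check}=fail" in verdicts for check in ("spf", "dkim", "dmarc"))

# Hypothetical usage: 'suspicious.eml' is a saved copy of the message.
with open("suspicious.eml", "rb") as f:
    if sender_authentication_failed(f.read()):
        print("Sender authentication failed - treat this message as phishing.")
```

A check like this catches crude forgeries, but note that the Gmail attack above succeeded precisely because its notifications came through genuine channels – a reminder that technical filters are necessary but not sufficient.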
Amid the increase in both the frequency and the sophistication of attacks, we believe several sub-sectors of the cybersecurity industry warrant particular attention. Advanced threat detection, network segmentation and vulnerability management stand out as being key to identifying and isolating attacks, and to prioritising the right tools for the most efficient response.
A new world of cyber threats
The Gmail hack is just one example of how fraudsters are using AI to increase their chances of success.
Other examples include personalised phishing messages that weave in details about your school, workplace or social media accounts, or highly accurate replications of legitimate communications from organisations you trust.
AI can also be used for ‘vishing’, or voice phishing. AI has made it possible to replicate a person’s voice from just a few seconds of audio found in, for example, a voicemail greeting or a video posted to social media. Cybercriminals then identify your friends or family members and use the AI-cloned voice to stage a phone call asking for money or personal details.[5]
Beyond phishing, AI lowers the barriers to entry for would-be cybercriminals by providing a potent source of detailed instructions for criminal activity. Despite efforts to maintain ‘guardrails’ around GenAI interfaces such as ChatGPT, researchers have found these can be bypassed using a variety of techniques.[6] These can be as simple as asking the system to play a game in which its safety guidelines do not apply.
Responding to the threat
There’s no doubt that the sophistication of many of today’s cyberattacks poses a real threat to both individuals and organisations.
Thankfully, cybercriminals aren’t the only ones leveraging AI: the cybersecurity industry is also harnessing the technology to fight back. For example, digital identity specialist GBG* is already using AI to tackle the challenge of verifying a user’s identity, piecing together disparate pieces of information to build a more complete picture and to flag potentially fraudulent digital identities.[7]
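GBG’s actual models are proprietary, so purely by way of illustration, the sketch below shows the general principle at work: fusing disparate identity signals into a single fraud risk score. The signal names, weights and threshold are invented for the example.

```python
# Illustrative only: GBG's real models are proprietary. This sketch shows
# the general idea of fusing disparate identity signals into one risk score.
from dataclasses import dataclass

@dataclass
class IdentitySignals:
    email_age_days: int        # how long the claimed email address has existed
    phone_matches_name: bool   # phone registration matches the claimed name
    address_verified: bool     # postal address found in reference data
    device_seen_before: bool   # device fingerprint known from prior sessions

def fraud_risk_score(s: IdentitySignals) -> float:
    """Combine signals into a 0-1 risk score (weights are invented)."""
    score = 0.0
    if s.email_age_days < 30:
        score += 0.35          # freshly created email is a classic fraud marker
    if not s.phone_matches_name:
        score += 0.25
    if not s.address_verified:
        score += 0.25
    if not s.device_seen_before:
        score += 0.15
    return min(score, 1.0)

applicant = IdentitySignals(email_age_days=4, phone_matches_name=False,
                            address_verified=True, device_seen_before=False)
print(f"Risk score: {fraud_risk_score(applicant):.2f}")  # 0.75 - refer for review
```

In production systems the weights would be learned from historical fraud outcomes rather than hand-set, and the score would feed a review queue rather than an automatic rejection – but the core idea of triangulating many weak signals is the same.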
Together with coordinated efforts to address the cyber skills gap, we believe technology will prove critical in turning the tide in the fight against cybercrime.
*For illustrative purposes only. Reference to a particular security is on a historic basis and does not mean that the security is currently held or will be held within an LGIM portfolio. The above information does not constitute a recommendation to buy or sell any security.
[1] Source: https://blog.checkpoint.com/research/a-closer-look-at-q3-2024-75-surge-in-cyber-attacks-worldwide/
[2] Source: https://www.alvarezandmarsal.com/insights/cybersecurity-budgets-spend-more-or-spend-better
[3] Source: New Gmail Security Alert For 2.5 Billion Users As AI Hack Confirmed
[4] Source: Gmail Account Takeover: Super Realistic AI Scam Call | Sam Mitrovic
[5] Source: Sophisticated Phishing Attacks - Computing and Communications Services - Toronto Metropolitan University (TMU)
[6] Source: https://techcrunch.com/2024/09/12/hacker-tricks-chatgpt-into-giving-out-detailed-instructions-for-making-homemade-bombs/
[7] Source: AI-Powered Customer Intelligence | GBG