In the artificial intelligence age, the promise of efficiency is not reserved for the well-meaning workers of the world. Underground operators also gain access to newer, better ways of doing things, often to the detriment of unwitting victims. In other words, cybercriminals are using AI to execute highly targeted attacks at scale, tricking people into sending money and sensitive information, or simply exposing them to theft through methods they may not even have known to watch for.
Just look at the Hong Kong IT firm worker who recently transferred more than $25 million to criminals after they used a deepfake to impersonate the company’s chief financial officer on a video call. Or the faux Taylor Swift seemingly slinging Le Creuset cookware as a way to scam Swifties. On a simpler level are believable emails, social media posts and advertisements with perfect grammar, sent from accounts that look and feel like the real thing.
A type of social engineering attack known as business email compromise (BEC) grew from a 1% share of all threats in 2022 to 18.6% in 2023, according to cybersecurity firm Perception Point’s latest annual cybersecurity trends report. That 17.6-point jump from a 1-point base works out to a growth rate of 1,760%, and the surge is propelled by generative AI tools.
When it comes to text-based scams, cybercriminals typically aren’t using plain old ChatGPT to formulate language. Instead, they rely on services in the underground cybercrime community. “You have large language models that cyber criminals can rent,” said Steve Grobman, senior vice president and chief technology officer at McAfee. “The cybercrime ecosystem has removed all of the guardrails.”
The output is polished enough to be free of grammatical errors, and it can even imitate the writing style of a specific target.
One common method of cyberattack is brand impersonation. In more than half (55%) of brand impersonation attempts in 2023, attackers posed as the targeted organization’s own brand, according to the Perception Point report. Cybercriminals can pull this off through account takeovers on social media or email. Then there’s a technique called malvertising: planting a malicious ad on Google that impersonates a legitimate brand and intercepts traffic meant for the real site.
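As a concrete illustration of how that kind of impersonation can be caught, defenders often screen the domains that ads and messages actually point to against a brand’s real domain. The short Python sketch below is purely illustrative (the brand list and distance threshold are assumptions, not anything from Perception Point or the report): it flags “lookalike” domains that sit within a couple of typos of a genuine one.

```python
# Illustrative sketch: flag ad landing domains that look like,
# but are not, a known brand's real domain (typosquatting).
# The brand list and threshold below are hypothetical examples.

KNOWN_BRANDS = {"lecreuset.com", "mcafee.com", "mimecast.com"}

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def looks_like_impersonation(ad_domain: str, max_distance: int = 2) -> bool:
    """True if the domain is a near-miss of a known brand, not the brand itself."""
    if ad_domain in KNOWN_BRANDS:
        return False
    return any(edit_distance(ad_domain, brand) <= max_distance
               for brand in KNOWN_BRANDS)

print(looks_like_impersonation("lecreusett.com"))  # True: one character off
print(looks_like_impersonation("lecreuset.com"))   # False: the genuine domain
```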
Tal Zamir, chief technology officer at Perception Point, described how criminals can now use AI and automation to create polymorphic malware (malware that mutates into many variants) at scale. Plus, they’re “getting help in vulnerability research to look for ways to abuse your computer and getting that malware to be more dangerous,” said Zamir.
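A small, harmless demonstration of why that matters: traditional signature-based defenses often key on a file’s cryptographic hash, and mutating even a single byte of a payload yields a completely different hash. The Python sketch below (illustrative only, with made-up demo strings) shows the effect:

```python
# Harmless illustration: a one-byte change completely alters a file's
# SHA-256 hash, which is why hash/signature matching fails against
# malware that mutates itself on every copy.
import hashlib

original = b"benign demo payload v1"
mutated  = b"benign demo payload v2"  # a trivial one-byte mutation

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(mutated).hexdigest())
# The two digests share no resemblance, so a defender matching on the
# original hash would miss the mutated variant entirely.
```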
But just as generative AI is enhancing and scaling social engineering attacks, it is also giving defenders a leg up. Grobman says this is apparent in the simple fact that we can still make full use of digital resources of all kinds. He said, “We have made it such that we can live our lives and fully take advantage of the digital world that we live in, even with the cybercriminal elements at full play, largely because the cyber defense industry is able to play an effective cat-and-mouse game.”
How AI-generated email scams are being stopped
Kiri Addison, senior manager for product management at communication and collaboration security firm Mimecast, says defenders can now use AI to understand the sentiment of messages beyond flagging specific keywords, and they can automate that process for maximum effectiveness. Plus, they can defend against a wider swath of problems by feeding data into their existing models or generating new data sets using AI.
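For the technically curious, the idea can be sketched in a few lines of Python. This is emphatically not Mimecast’s system; the off-the-shelf classifier and threshold below are illustrative assumptions, but they show how a model can flag a message’s coercive, urgent tone even when no blocklisted keyword appears:

```python
# Illustrative sketch only: score a message's tone with an off-the-shelf
# classifier instead of matching fixed keywords. The model choice and
# threshold are assumptions for demonstration, not a vendor's system.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default model

email_body = (
    "Your account will be suspended today unless you verify your "
    "payment details immediately using the link below."
)

result = classifier(email_body)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}

# A keyword filter might miss this wording entirely; a model can still
# flag the urgent, coercive tone for further automated analysis.
if result["label"] == "NEGATIVE" and result["score"] > 0.9:
    print("Flag for deeper inspection:", result)
```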
Addison, whose firm specializes in email security (which remains the top avenue for cybercriminals), said, “You can generate these really great emails, but we can still stop them from getting to the user’s inbox so they never have to even see them.”
To combat misplaced trust in deepfakes, McAfee is one of the firms working on an AI-detection tool. The company unveiled Project Mockingbird at CES 2024, which it claims can detect and expose AI-altered audio within video. Still, Grobman compares AI detection to weather forecasting, saying, “When you’re working in the world of AI, things are a lot less deterministic.”
To deal with quishing (phishing via malicious QR codes), which accounted for 2% of all threats in 2023 according to Perception Point, the firm prioritizes detecting QR codes as soon as a message arrives on a device. But Zamir admitted, “A lot of traditional security systems are not equipped to detect that QR code and follow up on it,” meaning quishing remains prevalent and could be further propelled by AI and automation.
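The general shape of such a defense is easy to sketch: decode any QR code found in an inbound image and vet the URL it carries before the recipient ever scans it. The Python sketch below is one illustrative approach, not Perception Point’s product; the blocklist and file path are stand-ins:

```python
# Illustrative sketch: decode QR codes embedded in an email's image
# attachments and vet any URLs they contain before delivery.
# Uses the open-source Pillow and pyzbar libraries; the blocklist
# is a hypothetical stand-in for real URL reputation services.
from PIL import Image
from pyzbar.pyzbar import decode

SUSPICIOUS_HOSTS = {"example-phish.test"}  # hypothetical blocklist

def vet_qr_image(path: str) -> list[str]:
    """Return any decoded QR payloads that point at flagged hosts."""
    flagged = []
    for symbol in decode(Image.open(path)):
        payload = symbol.data.decode("utf-8", errors="replace")
        if any(host in payload for host in SUSPICIOUS_HOSTS):
            flagged.append(payload)
    return flagged

hits = vet_qr_image("attachment.png")  # hypothetical attachment path
if hits:
    print("Quarantine message; malicious QR payloads:", hits)
```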
Cybercrime is a business
While expert defenders are absolutely critical, public education remains a proactive way to stop threats before they succeed. Much like many parents recalibrated how they raised their children after the latchkey kid era, people can recalibrate their trust in what they see, hear and read.
Individually, Grobman says to ask questions like: Does this make sense? Is the deal too good to be true? Can I validate it on a credible news source or through a separate, trustworthy individual?
At the organizational level, Addison recommends taking a risk-based approach, asking: What do you have of value? What are your assets? Why might an attacker target you? She also recommends keeping one eye on current threats and the other on future ones (like quantum computing attacks, which she says are coming).
“If you can show real examples of these kinds of attacks, it really helps to put things into context,” Addison said.
Despite ongoing and evolving threats, cybersecurity experts remain optimistic. “Defenders have an advantage that attackers just cannot have,” said Zamir. “We know the organization from the inside.”
Ultimately, both teams have reached a new point on the efficiency frontier. “It’s important to think of cybercrime as being a business,” said Grobman. Just as legitimate businesses are looking to AI to be more productive and more effective, so too are cybercriminals.
This article first appeared on CNBC, written by Rachel Curry, on March 11, 2024.