Artificial intelligence (AI) has become a defining topic in the discourse of 2023. As individuals across industries learn to leverage technologies like OpenAI’s ChatGPT, cybercriminals are among the earliest and most aggressive adopters.

AI has allowed threat actors to become more creative and aggressive in how they launch attacks. They can now use automated scripts and algorithms to find and exploit vulnerabilities more quickly than ever before.

This has posed a greater challenge for organizations trying to protect their digital assets. However, AI can also be used to improve cyber defense. By leveraging AI-driven solutions in a proactive approach to cybersecurity, organizations can better safeguard their data and stay ahead of malicious actors.

In this blog post, we examine how attackers are using AI in their cyber offense and how cybersecurity leaders can take advantage of similar models in their defense.

Building a Cyber Attack with AI

First, let’s consider BEC (Business Email Compromise) attacks. BEC is a type of cybercrime in which attackers attempt to acquire sensitive data or money by impersonating a legitimate business entity. Rather than malware, the payload of the attack is the text itself.

To start a BEC attack, attackers often scrape the internet to harvest large numbers of email addresses, or send out cold emails to a large number of people in the hope that someone falls for the scam. They might also create a template with malicious content that they can send out to many people quickly and easily.

Figure 1: Sample attack template

In traditional cyber defense, defenders focus on an attacker’s template. Since the probability of two people independently writing the same sentence in exactly the same way is very low, seeing identical text across many unrelated emails strongly suggests that a template is in use. To differentiate malicious attacks from genuine content, cyber defenders use mechanisms that detect and classify attacks sharing the same structure.

This is possible because many attackers use pre-existing tools and templates, thus generating the same text: a natural language signature that can be easily detected and used to prevent attacks. However, this signature cannot be used to detect new attack templates. 
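To make this concrete, here is a minimal sketch of signature-style detection: the defender hashes a normalized version of known template text and checks incoming messages against that set of hashes. The template string, normalization rules, and hashing choice below are illustrative assumptions, not any vendor’s actual implementation.

```python
import hashlib

# Hypothetical example: exact-match "signatures" for known attack templates.
# The template text below is an illustrative placeholder, not a real attack sample.
KNOWN_TEMPLATE_SIGNATURES = {
    hashlib.sha256(b"please process the attached invoice before end of day").hexdigest(),
}

def normalize(text: str) -> str:
    # Lowercase and collapse whitespace so trivial formatting changes don't evade the match.
    return " ".join(text.lower().split())

def matches_known_template(email_body: str) -> bool:
    digest = hashlib.sha256(normalize(email_body).encode()).hexdigest()
    return digest in KNOWN_TEMPLATE_SIGNATURES

print(matches_known_template("Please process the attached  invoice before end of day"))  # True: same template
print(matches_known_template("Kindly handle the invoice I attached by EOD"))             # False: same intent, different wording
```

Note the second check: a paraphrase with the same intent slips straight past the signature, which is exactly the weakness attackers exploit next.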

Attackers have learned that to evade detection, they must deviate from templates. But rewriting the same sentence many times is tedious, and there is always the risk that some elements stay the same. Even minor repetitions would still allow security systems to detect the attack. That’s where AI enters the picture.

Attackers can turn to AI tools to automatically generate different variations of a given text input based on their provided instructions. By providing a generic phishing template as the input, these tools can generate multiple variations of the text with the same meaning but different wording and structure.

Figure 2: AI input
Figure 3: AI output

Attackers can use AI tools to generate thousands of versions of a phishing attack with the same meaning but different wording, all with the click of a button. This poses a serious problem for traditional, signature-based defenses: with no shared text to match against, each variant looks like a brand-new attack.

How to Build a Cyber Defense with AI 

Now that we have reviewed how AI can be used in cyber attacks, let’s explore how it can be used to defend.

Instead of learning a specific template, as in traditional defense, AI cyber defense focuses on generalization. This involves embeddings, a type of data representation used in natural language processing (NLP) that aims to capture the context and meaning of words. An embedding represents each word or phrase as a vector of real numbers. By converting text into numerical representations, embeddings allow AI algorithms to capture the relationships between words and phrases, so that two sentences with the same meaning end up close together even when their wording differs.
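As a rough illustration, the sketch below uses the open-source sentence-transformers library to embed three sentences and compare them with cosine similarity. The model name and the sample sentences are arbitrary choices for the example, not the models or data used in any particular product.

```python
# A minimal sketch of how embeddings capture meaning.
# The "all-MiniLM-L6-v2" model is just an example choice.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "Please process the attached invoice before end of day",  # original wording
    "Kindly handle the invoice I attached by EOD",             # paraphrase with the same meaning
    "The quarterly report is ready for your review",           # unrelated message
]
vectors = model.encode(sentences)  # one vector of real numbers per sentence

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors[0], vectors[1]))  # high similarity: same meaning, different wording
print(cosine(vectors[0], vectors[2]))  # lower similarity: different meaning
```

Unlike the hash-based signature above, the paraphrased email now lands close to the original in vector space, which is what lets a defense generalize beyond a single template.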

These embeddings are then used to train models. Models are algorithms that process data and make decisions based on observed patterns. In the context of cyber defense, models are used to identify malicious behavior and detect cyber attacks. To cope with multiple models producing varying verdicts, AI cyber defense utilizes ensemble models. 
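The following sketch shows the idea at toy scale: embed a handful of labeled emails and fit a simple logistic regression classifier on the resulting vectors. The tiny dataset, labels, and model choice are illustrative assumptions only; a real detection model is trained on a far larger labeled corpus.

```python
# A simplified sketch of training a model on embeddings to flag suspicious emails.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("all-MiniLM-L6-v2")

texts = [
    "Urgent: wire the payment today or the deal falls through",      # suspicious
    "Your account is locked, confirm your credentials immediately",  # suspicious
    "Here are the meeting notes from yesterday's standup",           # benign
    "The design review is moved to Thursday at 2pm",                 # benign
]
labels = [1, 1, 0, 0]  # 1 = suspicious, 0 = benign

# Fit the classifier on the embedding vectors rather than the raw text.
clf = LogisticRegression().fit(encoder.encode(texts), labels)

new_email = "Please wire the funds right away, this cannot wait"
prediction = clf.predict(encoder.encode([new_email]))[0]
print("suspicious" if prediction == 1 else "benign")
```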

Ensemble models combine multiple individual models into a single, stronger defense system. The individual models are usually predictive models, such as decision trees or neural networks, that are trained on different datasets and then combined into an ensemble. Ensembles are employed to increase accuracy and reduce the risk of false positives and false negatives.
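Below is a minimal sketch of an ensemble using scikit-learn's VotingClassifier to combine a decision tree, a small neural network, and a logistic regression model with majority voting. The synthetic feature vectors stand in for email embeddings, and none of these choices reflect a specific production system.

```python
# A minimal ensemble sketch: several models vote, and the majority verdict wins.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))           # synthetic stand-in for 16-dimensional email embeddings
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic "malicious vs. benign" labels

ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=4)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)),
        ("logreg", LogisticRegression()),
    ],
    voting="hard",  # each model casts a vote; the majority decides the verdict
)
ensemble.fit(X, y)

print(ensemble.predict(X[:5]))  # combined verdicts for the first five samples
```

Because the models are trained differently and fail differently, their combined vote is less likely to be fooled by any single evasion trick than any one model on its own.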

For Perception Point’s advanced threat detection platform, we use a framework that allows our machine learning engineers to create, train, and deploy sets of models that can be used to make predictions and decisions about specific attack types. 

In the same way that attackers are using AI to maximize their malicious output, defenders like Perception Point are also using AI to secure the expanding attack surface.

Learn more in our detailed guide to cyber security strategy

Want to learn more about this topic? Watch our full webinar on-demand to hear Perception Point Data Science Team Lead, Roy Darnell, discuss the role of AI in cybercrime and the steps to mitigate your risk.