In 2023, cyberattacks surged in both frequency and sophistication. The proliferation of cutting-edge hacking tools and technologies – now more accessible than ever thanks to advances in generative AI – created an environment conducive to cyber threats, forcing organizations to adopt proactive measures to keep their digital assets secure.
Heading into 2024, the attack surface is set to expand even further, with threats likely to grow increasingly elusive. Given the increasing power and accessibility of tools based on artificial intelligence (AI) and large language models (LLMs), it will be imperative to stay several steps ahead of threat actors and know what tactics to expect in the coming year.
1. Custom ChatGPT-like bots: Crafting convincing social engineering attacks
OpenAI’s custom GPTs are prime examples of powerful new tools that hackers will be leveraging in 2024. Their ease of use means that even attackers with far less technical skill can engineer and launch highly convincing social engineering attacks.
Imagine a scenario where an attacker uses ChatGPT-generated text to masquerade as a manager or department head. They send unsuspecting employees a link to a WhatsApp-like app, where the employees converse with their GPT “manager”, who persuades them to transfer funds or share sensitive data. In a similar fashion, generative AI services can be used to steer victims into conversations with a GPT-powered “helpdesk” or “IT representative”. These deceitful attacks have become more commonplace across enterprises and will only grow with the advent of new generative AI tools and features.
Relying solely on employee awareness is insufficient to identify and prevent such threats. Maintaining robust security protocols must therefore be prioritized.
2. SaaS apps as stealthy attack vectors: Learning from TeamsPhisher
Cybercriminals in 2024 will continue exploiting modern SaaS apps, embedding malicious payloads in the cloud and taking advantage of easily exploited security gaps. In the wake of the pandemic and the rise of remote work environments, these shadowy tactics continue to fly under the radar and poke holes in organizations’ defenses.
Just this past summer, threat actors used a malicious open-source program known as TeamsPhisher to send phishing lures to unsuspecting users via Microsoft Teams to perpetrate subsequent cyber-strikes, including ransomware attacks. This underscores the significance of SaaS applications as unwitting accomplices to threat actors.
Organizations should expect to see the continuation of this attack vector as a formidable cyber-front and fortify their security frameworks accordingly.
3. AI-driven automation in cyberattack campaigns: A glimpse into the future
In the hands of developers and security teams, AI fosters productivity, streamlining numerous business operations. But in the hands of threat actors, it can be used to sabotage company defenses and extract sensitive data.
Attackers are poised to leverage AI’s growing power of automation to identify vulnerabilities in cloud infrastructure and execute malicious email campaigns with breakneck efficiency and precision. So, while AI-driven automation can alleviate a substantial workload for office employees across all industries, it grants threat actors the same luxury.
4. Deepfakes and multi-modal ML models: The evolution of deception
Staying in the AI realm, multi-modal machine learning models have granted attackers the capacity to generate convincing audio, images, and videos to trick unsuspecting employees.
With deepfake campaigns up an astonishing 3000% in 2023, this sophisticated technology is being used to create correspondence that is virtually indistinguishable from the legitimate kind, while also becoming cheaper and simpler to use.
Such deception practices will continue to pose serious threats for countless organizations. As these models mature in sophistication and capabilities for fabrication, educating staff to recognize and report attempted breaches and misinformation campaigns will become paramount.
5. Guarding against adversarial prompts in LLM-powered services: A startup frontier
As LLM-powered services continue proliferating throughout company workflows, business leaders will need to establish robust protection against malicious prompt injections – inputs to an LLM that are engineered to manipulate outputs or alter standard processing procedures.
Unfortunately, the internal data that companies choose to feed into LLMs for vertical- or operation-specific training purposes can be easily exposed. By typing in carefully crafted prompts, hackers can manipulate LLMs into divulging sensitive data, which can risk compliance violations and fines. Such vulnerabilities in data security demand proactive solutions.
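To make the risk concrete, here is a minimal sketch of one defensive layer against such attacks: a heuristic input filter plus output-side redaction wrapped around a hypothetical LLM call. The pattern list, function names, and the `query_llm` stand-in are illustrative assumptions, not any vendor's API; real deployments would combine heuristics with trained classifiers and policy enforcement.

```python
import re

# Hypothetical deny-list of phrasings commonly seen in injection attempts.
# Illustrative only -- pattern matching alone is easy to evade.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"reveal (your|the) (system|hidden) prompt",
    r"disregard .* (rules|guidelines|policy)",
]

def looks_like_injection(user_input: str) -> bool:
    """Flag inputs that match a known adversarial-prompt phrasing."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

def redact_sensitive(output: str, secrets: list[str]) -> str:
    """Output-side guard: strip known sensitive strings before replying."""
    for secret in secrets:
        output = output.replace(secret, "[REDACTED]")
    return output

def guarded_reply(user_input: str, secrets: list[str]) -> str:
    """Wrap a (stand-in) LLM call with input and output checks."""
    if looks_like_injection(user_input):
        return "Request blocked: potential prompt injection detected."
    raw = query_llm(user_input)  # placeholder for the actual model call
    return redact_sensitive(raw, secrets)
```

The key design point is defense in depth: even if a crafted prompt slips past the input check, the output-side redaction reduces what can leak.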
Foresight in mitigating these challenges will therefore be crucial for organizations relying on LLM-powered services, setting the stage for a new set of solutions committed to preventing adversarial prompts from compromising data security.
One step ahead
As AI continues to mature, so does the threat landscape. Confronting the cyberattacks that AI systems enable will be a critical business objective throughout 2024.
By deploying tools and layered solutions to identify and address nascent fronts in the ongoing fight against hackers, organizations will be able to stay one step ahead of their digital adversaries.
This article first appeared in Help Net Security, written by Tal Zamir on December 28, 2023.