
Artificial Intelligence Fraud: New Technology, Old Targets

Artificial Intelligence fraud is one of the biggest threats of this decade

Identity theft is a critical element of Artificial Intelligence fraud, threatening companies, public administrations and citizens alike

In 2021, a well-known brewing company relied on Lola Flores, an enormously famous artist in Spain, to advertise its beers. So far, nothing seems particularly surprising; it is common for artists to lend their faces and voices to advertising. The curious thing about this case is that La Faraona had died in 1995. Her resurrection took 26 years and was made possible by Artificial Intelligence and her daughter's voice.

This is how the concept of the deepfake became popular in our country. However, the technology has been used for many years, making it possible, for example, to rejuvenate actors or resurrect iconic characters from sagas such as Star Wars and Indiana Jones.

Beyond its legitimate productive uses, cybersecurity professionals are witnessing the proliferation of fraud committed with Artificial Intelligence.

In these schemes, malicious actors use generative AI to impersonate people and organizations in order to defraud citizens and companies.

What is the role of AI in these cyberattacks, and what is their potential impact on companies, institutions and society as a whole?

1. A further step in the evolution of digital fraud

First, we must point out that AI frauds are not a new type of cyber-attack but an evolution of the digital frauds that cybersecurity, cyber-intelligence and threat-hunting professionals have fought for years.

What is the differentiating factor compared with classic frauds, such as using phishing and malware to steal banking data from individuals and companies? Artificial Intelligence facilitates criminals' work and helps them make their attacks more complex and, therefore, more difficult to detect.

Based on these notions, we can outline the main characteristics of fraud with Artificial Intelligence.

1.1. Simplicity and speed

It is simpler and faster to design and execute frauds with AI. Thousands of companies, professionals and users already employ generative AI for many purposes, from creating videos to writing project documents or even programming websites, because it saves a considerable amount of time and allows people without the necessary skills and knowledge to carry out these tasks successfully.

Well, the same logic applies to cybercriminals. A malicious actor can use AI systems to compose messages for a phishing campaign or create web pages that look legitimate but are fraudulent.

1.2. Sophistication

These are more sophisticated attacks. If we add Artificial Intelligence to the social engineering + malware equation, it is clear that the complexity of an attack increases, as does the likelihood of success.

If someone receives a WhatsApp message from a person claiming to be a relative in need of money, but the message is badly written and the sender cannot respond coherently, the potential victim will probably not fall for the scam. On the other hand, if, thanks to AI, the victim receives an audio clip reproducing the voice of a real relative, or a manipulated photograph showing that relative in a difficult situation, there will be no clues to arouse suspicion.

1.3. Appearance of truthfulness

An appearance of truthfulness makes detection difficult. Not only are AI frauds more sophisticated, but this disruptive technology also lends the attacks a greater appearance of legitimacy.

If criminals resort to AI to produce, for example, deepfakes, classic attacks such as phishing or vishing campaigns become more challenging to detect.

1.4. Accelerated evolution of the technology

AI frauds pose a challenge for cybersecurity professionals and organizations' defense teams. Over the years, security mechanisms have been perfected to prevent social engineering attacks, detect malware early, and stop its propagation through corporate systems. However, the emergence and consolidation of AI require cybersecurity experts to adapt their activities to a constantly evolving technology.

Generative AI is already producing remarkable results, but in the coming years it is expected to reach a level of refinement that will alter how businesses operate and how we live.

This technology brings with it a wide range of possibilities, but it has also generated, and will continue to generate, security challenges. Cybersecurity experts must therefore train continuously to adapt their techniques and tactics to the evolution of the technology.

Some Artificial Intelligence frauds take advantage of deepfakes

2. Social engineering and Artificial Intelligence, an explosive combo

Phishing and other social engineering techniques such as smishing, vishing, and CEO fraud have been at the forefront of the threat landscape in recent years, mainly thanks to the combined use of social engineering and malware to trick people and companies out of large sums of money.

In many cyberattacks, social engineering is present in the search for an entry vector, for example, sending a fake email to a professional and getting them to click on a URL or download a file infected with malware.

2.1. More complex frauds and a larger pool of potential attackers

Well, the popularization of generative AIs adds a new twist, because cybercriminals can use these systems to perfect their attacks:

  • Craft better-written messages to trick recipients into providing sensitive data, clicking on a link or downloading a document.
  • Simulate the appearance of corporate emails and websites with a high degree of realism, avoiding raising suspicions among victims.
  • Clone people's voices and faces to produce voice or image deepfakes that the targets cannot detect, an issue with a significant impact on scams such as CEO fraud.
  • Respond convincingly to victims, since generative AIs can now carry on conversations.
  • Launch social engineering campaigns in less time and with fewer resources, while making them more complex and more challenging to detect. Generative AIs already on the market can write texts, clone voices, generate images and program websites.

In addition, generative AIs multiply the number of potential attackers: by automating actions that once required specialist skills, they can be used by criminals who lack the resources and knowledge previously needed.

3. Impersonation from the wealth of information on the Internet

The alliance between Artificial Intelligence and social engineering is fed by all the information available on the Internet today about companies and individuals.

The models at the heart of generative AI, such as large language models (LLMs), need vast amounts of data for training in order to produce increasingly realistic videos, photos, audio and text.

Criminals who want to commit AI fraud start with a significant advantage: our lives are on the Internet. Professional and educational websites, personal blogs, and social networks offer an accurate x-ray of who we are and what we look like, inside and out.

Thanks to applications such as TikTok, Instagram, or Facebook, malicious actors can obtain enough material to clone our faces and voices, including our gestures or inflections in how we express ourselves. Generative AIs are already capable of producing deepfakes that are difficult to detect.

Not only that: the more AIs are perfected, the more accurately they will imitate our appearance, our personality and the way we relate to others.

3.1. AI can also be the mirror of the soul

In a famous episode of the dystopian anthology Black Mirror, a woman acquires a robot that is physically identical to her deceased boyfriend and that also replicates his personality, thanks to an AI that reconstructed it by processing his posts on social networks.

Well, what the 2013 episode Be Right Back painted as dystopian is, ten years later, practically a reality.

Generative AIs can be trained on all the data about us available on social networks and the wider Internet to mimic the way we express ourselves.

To what end? To commit Artificial Intelligence fraud, for example, by writing to our mother via WhatsApp, Facebook or X to tell her that we need her to make a payment because we cannot do it ourselves right now. Why doesn't the victim suspect a scam? The message is written flawlessly, and the way of answering matches how the person being impersonated actually expresses himself.

4. Breaking organizations' biometric authentication systems

Facial and voice recognition have been used to strengthen user authentication in web and mobile applications and corporate systems, whether for customers of companies such as banks wishing to enter their private areas, or, above all, for professionals at thousands of prominent companies operating in sensitive economic sectors: energy, pharmaceuticals, industry, security…

Using AI systems to impersonate people calls facial and voice authentication into question: if criminals can clone our face or voice, they can use it to spoof us and access sensitive data, break into our bank accounts or even deploy malware on corporate systems.
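
From a defensive standpoint, one common mitigation is to treat a biometric match as just one signal among several, never as definitive proof of identity on its own. The following minimal sketch illustrates that layered approach; the thresholds, score fields and decision labels are hypothetical assumptions for illustration, not any real vendor's API.

```python
# Minimal sketch of layered verification, assuming hypothetical match and
# liveness scores produced by a biometric engine. Names and thresholds
# are illustrative only.
from dataclasses import dataclass

@dataclass
class BiometricResult:
    match_score: float     # 0.0-1.0 similarity to the enrolled face/voice template
    liveness_score: float  # 0.0-1.0 confidence the sample comes from a live person

MATCH_THRESHOLD = 0.90
LIVENESS_THRESHOLD = 0.85

def authenticate(result: BiometricResult, otp_verified: bool) -> str:
    """Combine biometric scores with a second factor before granting access."""
    if result.match_score < MATCH_THRESHOLD:
        return "deny"  # the face or voice simply does not match
    if result.liveness_score < LIVENESS_THRESHOLD:
        # A strong match with weak liveness is the typical deepfake signature:
        # never trust the sample alone, escalate instead.
        return "manual_review"
    # Even a confident, live biometric is paired with an out-of-band factor.
    return "grant" if otp_verified else "step_up_required"

# A cloned voice may score a near-perfect match yet fail the liveness check.
spoof = BiometricResult(match_score=0.97, liveness_score=0.40)
print(authenticate(spoof, otp_verified=False))  # -> manual_review
```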

Generative AI can be used maliciously by criminals

5. Generating false documentation and clues to make attacks more challenging to detect

As their name suggests, generative AIs are powerful systems for producing text, images, audio and audiovisual content.

They can therefore be used maliciously to generate documents or spread false clues on social networks, enabling AI frauds in which even the smallest detail is taken care of.

Think, for example, of fraud against the tourism sector. Criminals can use AI tools not only to compose their messages or create fake websites, but also to produce documents that avoid arousing suspicion among victims, such as invoices, receipts and even fake bank documents. The same can be said of similar scams, such as CEO fraud.

As the saying goes: "the devil is in the details".

Likewise, the ability to generate all kinds of visual or audiovisual material can be essential to strengthening a scam, for example by creating social media profiles with an unimpeachable appearance of legitimacy, or by using social networks not only to impersonate identities but also to spread disinformation that prevents an ongoing attack from being detected.

In this regard, the European Union Agency for Cybersecurity (ENISA) warns companies that advanced hybrid threats will be among the most relevant threats in the coming years. In other words, attacks launched to steal companies' intellectual property and gather intelligence on their research, in which Artificial Intelligence systems collect data and obfuscate the attack by generating fake news and false evidence pointing to other possible culprits, such as competing companies.

6. The financial sector and Frankenstein identities

No one is safe from digital fraud in general or AI fraud in particular: neither companies nor individuals.

AI fraud can impact every economic sector and social sphere. Think, for example, of education, where teachers have to deal not only with fraudulent coursework, but also with impersonation, especially in remote learning.

However, AI frauds can be particularly critical in an area of vital importance to businesses and citizens: the financial sector.

Beyond the refinement of social engineering campaigns or the breaking of biometric authentication mechanisms, which we have already discussed, it is worth highlighting a trend that can pose a significant threat to financial companies and the businesses and citizens who work with them: synthetic identity or Frankenstein identity fraud.

What are synthetic identity frauds? They are scams that combine real information about a person with false data that can be generated by AI systems. Starting from a single genuine piece of information, for example, a person's ID or Social Security number, a completely false identity is created that is different from that of the number's real holder. Given the data breaches of recent years, it is now possible to acquire a personal identification number on the Dark Web without the need for a prior attack to obtain it.

6.1. Long-running frauds

What do cybercriminals achieve by implementing synthetic identity frauds?

  1. Opening bank accounts and obtaining credit. Professional cybercriminals do not commit fraud in the short term; instead, they build a solid credit history on the fake identity, e.g., by applying for loans and credit that they duly repay to establish their creditworthiness. For what purpose? Once solvency is proven, they max out all the credit card balances they have applied for, pocket the money obtained via loans, and disappear without a trace.
  2. Avoiding detection by financial institutions, but also by the people who are the legitimate owners of the documents used to construct the false identity. This is the big difference from traditional identity theft: in traditional theft, banks and victims can discover the fraud in a short time, so the impact is much lower.

Today, Artificial Intelligence tools can be extremely useful to cybercriminals in building synthetic identities to commit financial fraud and defraud financial companies. Why? They facilitate identity construction and reduce the time, resources and expertise malicious actors must invest.

Banks perceive these scams as a top-level threat, even more so after the popularization of generative AIs: 92% of financial industry companies in the U.S. consider synthetic identity fraud a top-tier threat, and half have already detected such scams.
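
To make the detection side concrete, here is a deliberately simplified sketch of the kind of consistency checks an anti-fraud team might run on new credit applications. The signals, thresholds and data structures are hypothetical illustrations, not a real bank's scoring model.

```python
# Illustrative heuristics for spotting a "Frankenstein" identity: the same
# ID/SSN reused under different personal details, or a brand-new credit
# file paired with a burst of applications. All names and thresholds are
# hypothetical examples.
from collections import defaultdict

# Hypothetical history: identity numbers mapped to the (name, date_of_birth)
# pairs they have been used with before.
seen_identities: dict[str, set[tuple[str, str]]] = defaultdict(set)

def synthetic_identity_signals(id_number: str, name: str, dob: str,
                               credit_file_age_months: int,
                               applications_last_90_days: int) -> list[str]:
    """Return risk signals suggesting a synthetic identity."""
    signals = []
    prior = seen_identities[id_number]
    if prior and (name, dob) not in prior:
        # Same ID/SSN already seen with a different name or birth date:
        # the classic synthetic-identity pattern described above.
        signals.append("id_number_reused_with_different_identity")
    if credit_file_age_months < 6 and applications_last_90_days >= 3:
        # Thin, brand-new file combined with aggressive credit seeking.
        signals.append("thin_file_with_burst_of_applications")
    seen_identities[id_number].add((name, dob))
    return signals

# Example: the same SSN later shows up under a second, different identity.
synthetic_identity_signals("123-45-6789", "Ana Perez", "1990-04-02", 48, 1)
print(synthetic_identity_signals("123-45-6789", "John Doe", "1985-11-30", 2, 4))
```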

Cybersecurity and cyberintelligence are essential to fight fraud with Artificial Intelligence

7. Taking the initiative to prevent AI fraud

Given what we have discussed throughout this article, we can see that AI frauds optimize traditional frauds and cyberattacks. In other words, cybercriminals use a disruptive technology that will evolve radically in the coming years to make their techniques, tactics and procedures more complex and sophisticated. However, the criminal objectives remain unchanged: to defraud individuals and companies, to enrich themselves financially, to threaten the business continuity of companies, and to steal, hijack and exfiltrate sensitive information.

What can companies and public administrations do to combat fraud with Artificial Intelligence and avoid the consequences of security incidents? Rely on cybersecurity professionals with extensive experience who continuously research how available AI systems are evolving, what malicious uses criminals can make of them, and how attackers' techniques, tactics and procedures are transforming.

7.1. Cybersecurity is a strategic issue

Advanced cybersecurity services play a vital role in this regard:

  • Social engineering testing to assess the organization's defenses against advanced techniques using AI and to train and raise awareness among professionals so that they do not fall victim to deception.
  • Vulnerability management to monitor a company’s infrastructure and reduce detection and remediation times for security incidents, considering the malicious use of AI tools.
  • Proactive threat detection, hunting, and continuous surveillance to detect unknown and targeted attacks against the organization.
  • Red Team scenarios that consider the pernicious use of AI systems to perfect all types of cyber-attacks.

The emergence of Artificial Intelligence is already transforming the economy. Like any cutting-edge technology, AI brings endless possibilities and has the potential to improve and streamline thousands of processes carried out by companies and professionals.

AI systems already play a crucial role in multiple fields, including cybersecurity, where technologies such as UEBA (User and Entity Behavior Analytics) systems optimize the detection of abnormal behavior, helping to uncover attacks in their early stages.
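
As a toy illustration of the UEBA idea, assuming a single behavioral feature (daily upload volume) and an arbitrary alert threshold, a per-user baseline comparison might look like the sketch below; real platforms model far more features and context.

```python
# Score how far a user's current activity deviates from their own
# historical baseline. The single feature and the 3-sigma threshold
# are illustrative assumptions, not a real product's model.
import statistics

def anomaly_score(history: list[float], today: float) -> float:
    """Z-score of today's activity against the user's own baseline."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return abs(today - mean) / stdev

# Baseline: megabytes uploaded per day over the past two weeks.
baseline = [12.0, 9.5, 11.2, 10.8, 13.1, 9.9, 12.4, 11.7, 10.2, 12.9]
score = anomaly_score(baseline, today=220.0)  # sudden large upload
if score > 3.0:  # flag behavior more than ~3 standard deviations out
    print(f"alert: anomalous upload volume (z = {score:.1f})")
```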

What do we mean by this? Just as AI-enabled fraud challenges cybersecurity professionals, businesses, governments and individuals, technology can also improve security incident prevention, detection, prediction and response capabilities.

In any case, cybersecurity and cyberintelligence are essential to protect AI systems against attacks and combat fraud and incidents that are designed and implemented using the potential of this technology.

More articles in this series about AI and cybersecurity

This article is part of a series on AI and cybersecurity:

  1. What are the AI security risks?
  2. Top 10 vulnerabilities in LLM applications such as ChatGPT
  3. Best practices in cybersecurity for AI
  4. Artificial Intelligence Fraud: New Technology, Old Targets
  5. AI, deepfake, and the evolution of CEO fraud
  6. What will the future of AI and cybersecurity look like?
  7. The Risks of Using Generative AI in Business: Protect Your Secrets
  8. MITRE ATLAS: How can AI be attacked?