How AI-powered identity attacks will evolve in 2024

By Johan Fantenberg, Principal Solutions Architect – Asia Pacific & Japan at Ping Identity

Rapid advances in artificial intelligence (AI) tools have caught the attention of cybercriminals around the world, and they're using the technology to boost their attack strategies.

One of the most concerning trends is the use of AI tools to circumvent advanced identity controls, making it easier for cybercriminals to impersonate people.

One example is the use of AI tools to defeat the voice-based ID verification systems commonly used by call centres. These systems compare the caller's voice with a known authentic recording; if the two are deemed to match, the caller's identity is considered verified.
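
To illustrate the matching step, here is a minimal sketch in Python, assuming the system reduces each recording to a numeric "voiceprint" embedding and compares the two with cosine similarity; the function names and the threshold are illustrative, not any specific vendor's implementation:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two voiceprint embeddings (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_caller(caller_embedding: np.ndarray,
                  enrolled_embedding: np.ndarray,
                  threshold: float = 0.85) -> bool:
    # Accept the caller if their voiceprint is close enough to the enrolled
    # recording. A sufficiently good synthetic voice will produce an
    # embedding that clears this threshold too, which is the weakness
    # described below.
    return cosine_similarity(caller_embedding, enrolled_embedding) >= threshold
```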

Recent advances in generative AI are allowing cybercriminals to create a synthetic copy of an individual’s voice in minutes using just a single, high-quality recording, perhaps captured during a spam phone call. In many cases these synthetic voices are indistinguishable from the real thing.

Meanwhile, other cybercriminals are using AI to increase the frequency and sophistication of phishing attacks. AI-powered tools can craft more convincing phishing emails using personal information collected from sources such as social media, and the resulting messages are less likely to contain the obvious errors that make them easy to spot.

Motivating factors

The primary motivation for using AI to capture digital identities is that stolen identities give cybercriminals access to corporate networks holding high-value data, which can then be held for ransom or sold to other parties.

Many cybercriminals are turning to AI tools to increase the success rate of their attacks. These tools allow them to combine data from prior breaches with publicly available information to commit more effective identity fraud.

As for the groups most likely to be targeted by AI-powered attacks, cybercriminals appear to favour people working in sectors such as banking and healthcare. Data sources in these sectors often contain highly personal information that can be used to build entire human personas, which in turn facilitates identity theft and leads to larger payoffs.

Relentless progress

Unfortunately, the rapid advances being made in generative AI technology mean that its use to create fake voices, photos, and videos impersonating real people is only going to increase.

At the same time, the fakes themselves are going to become more difficult for people to spot. The risk is particularly acute for digitally vulnerable individuals, such as the elderly, who may have little experience identifying or dealing with sophisticated scams.

Remaining vigilant in the face of these advanced scams requires people to challenge their normal levels of trust. Individuals and security decision makers should always double-check that what they're acting on or responding to is legitimate.

This can be done by contacting the organisation or individual from whom the request originated. It's also important to report deepfake-related scams or cyberattacks to the relevant authorities.

The importance of identity protection and management

With the number of AI-powered security threats set to increase further in coming years, the challenge of protecting digital identities will grow. This is particularly the case in a world where an increasing proportion of transactions are conducted online.

Identity and access management (IAM) systems can enhance an organisation's ability to detect and prevent identity fraud by layering multiple risk-reducing factors into the user authentication process.

IAM systems can also detect suspicious activity by analysing signals such as IP address, geolocation, and past user behaviour for anomalies. They can then demand further proof of identity through methods such as multi-factor authentication (MFA).
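
As a rough sketch of how such signals might feed a step-up decision, consider the following; the signal names, weights, and threshold here are hypothetical, not the logic of any particular IAM product:

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    ip_is_new: bool           # IP address never seen for this user
    geo_is_anomalous: bool    # sign-in from an unusual country or region
    behaviour_deviates: bool  # device, time of day, or usage pattern off-baseline

def risk_score(attempt: LoginAttempt) -> int:
    # Hypothetical weights; a real system would tune these from observed fraud.
    score = 0
    if attempt.ip_is_new:
        score += 2
    if attempt.geo_is_anomalous:
        score += 3
    if attempt.behaviour_deviates:
        score += 3
    return score

def requires_mfa(attempt: LoginAttempt, threshold: int = 3) -> bool:
    # Above the threshold, demand further proof of identity via MFA.
    return risk_score(attempt) >= threshold
```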

The use of MFA extends the identity verification process beyond basic authentication to include an out-of-band factor. For example, after entering their credentials, a user may also be asked to approve a push notification sent to an authenticator app on a personal device, adding a second layer of verification.
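
Conceptually, the out-of-band step looks something like the sketch below; the in-memory store and the commented-out push call stand in for whatever datastore and push-notification service a real deployment would use:

```python
import secrets
import time

# Pending push challenges, keyed by challenge ID. A real deployment would
# use a shared datastore and an actual push-notification service.
pending: dict[str, bool] = {}

def start_push_challenge(user_id: str) -> str:
    challenge_id = secrets.token_urlsafe(16)
    pending[challenge_id] = False  # created, not yet approved
    # send_push_to_device(user_id, challenge_id)  # assumed push transport
    return challenge_id

def approve(challenge_id: str) -> None:
    """Called when the user taps 'approve' in the authenticator app."""
    if challenge_id in pending:
        pending[challenge_id] = True

def wait_for_approval(challenge_id: str, timeout_s: float = 60.0) -> bool:
    # Poll until the user approves on their device, or the challenge expires.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if pending.get(challenge_id):
            return True   # second factor satisfied
        time.sleep(1.0)
    return False          # timed out; deny the login
```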

Implementing passwordless authentication can further reduce risk. This replaces one piece of the authentication process with a cryptographic operation, such as proving possession of a private key, instead of credentials that can be readily stolen by cybercriminals.
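
A minimal sketch of that idea is shown below, using an ECDSA challenge-response in the spirit of FIDO2/WebAuthn passkeys; this simplified flow omits attestation, origin binding, and the other protections of the real protocol:

```python
import secrets
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrolment: the device generates a key pair and registers only the public
# key with the server. No shared secret ever leaves the device.
device_key = ec.generate_private_key(ec.SECP256R1())
registered_public_key = device_key.public_key()

# Login: the server issues a random challenge...
challenge = secrets.token_bytes(32)

# ...the device proves possession of the private key by signing it...
signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the server verifies the signature. There is no password to phish,
# and a captured challenge is useless without the private key.
try:
    registered_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("authenticated")
except InvalidSignature:
    print("rejected")
```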

Passwords are typically the weakest link in any organisation's security measures. Identity orchestration can combine these risk-reducing authentication factors into a single workflow, tailoring the user experience around the goal of reducing identity fraud, as sketched below.
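
To make the orchestration idea concrete, here is a hypothetical flow expressed as ordered steps gated by the evaluated risk score; the step names are illustrative, and orchestration products typically define such flows in a visual designer or configuration rather than code:

```python
# Each step pairs an action with a condition on the evaluated risk score.
LOGIN_FLOW = [
    ("collect_username",   lambda risk: True),
    ("passwordless_auth",  lambda risk: True),       # cryptographic factor
    ("push_mfa_challenge", lambda risk: risk >= 3),  # step up on anomalies
    ("deny_and_alert",     lambda risk: risk >= 6),  # block outright
]

def steps_for_login(risk_score: int) -> list[str]:
    """Return the steps a login with this risk score would pass through."""
    return [name for name, applies in LOGIN_FLOW if applies(risk_score)]

print(steps_for_login(0))  # low risk: frictionless passwordless sign-in
print(steps_for_login(4))  # anomalous signals: an MFA step-up is added
```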

The year ahead

As 2024 unfolds, it seems inevitable that AI-based identity fraud will grow rapidly. This will lead users to increasingly question the validity and integrity of multimedia such as video, images, and audio files.

At the same time, however, this is also likely to lead to more innovative technology and practices in content verification and validation. With cybercriminals developing increasingly sophisticated attack methods, security teams will need to innovate to keep pace.

The most important thing of all will be to remain vigilant. These attacks are not going to disappear, so the ability to reduce their likelihood of success will be vital.