Three ways that AI can cause harm to Australian organisations’ identity systems

By Johan Fantenberg, Product and Solution Director, Ping Identity

Identity theft remains a very real issue for Australians. A recent survey by Ping Identity shows 87% of Australian consumers are concerned about identity fraud and about the impact of Artificial Intelligence (AI) on their identity security.

As if to illustrate the point, more than half a million attempts by threat actors to abuse identity credentials stolen in a single large Australian data breach were detected and thwarted in the two years to October 2024.

The consequences of credential abuse are tangible and can be ruinous. One report earlier this year told the story of an Australian victim who, more than two years after being caught up in a breach, still could not sign up to services under their own name and had to put leases and phone plans in a parent's name.

Stories like these have driven Identity and Access Management (IAM) to the forefront of the business agenda. As a result, protecting the identity of staff and customers is finally being afforded the consideration and weighting it deserves.

This is reflected in the adoption of new approaches to digital identity, in investments in supportive platforms and protections, and in the way identity is evolving into a discipline and business function in its own right, rather than a subdomain of IT infrastructure or cybersecurity.

But while the situation is improving in some ways, IAM is now facing some headwinds from AI that threaten to undo some of the good work and advances made in recent times.

This is not entirely unexpected – the threat landscape is constantly changing, and security practitioners know that they must evolve their methods and systems to keep pace with attack vectors.

While this plays out, there are several options available to organisations to get on top of AI-enabled threats to their identity systems. Importantly, AI itself has a role to play in helping organisations to fight against AI-powered threats.

What AI-enabled attacks on identity systems look like

One of the reasons AI has found so many use cases in the business world is due to its potential to challenge or disrupt established methods or norms – from work processes to security.

Some of the ways that advanced AI and machine learning can be used to penetrate established identity verification methods are already being seen in the wild.

First, AI is being used by threat actors to craft more effective phishing emails. These AI-generated emails are less likely to contain some of the telltale signs of a phishing threat, such as grammatical and spelling errors. This could tip the balance back to threat actors in terms of their ability to trick users into handing over active identity credentials.

Second, emerging methods of establishing and verifying identity are also at risk owing to AI. The use of identity controls like biometrics has long been seen as a second, more secure ‘factor’ to augment or replace passwords. However, there is growing evidence of cybercriminals using AI to circumvent advanced identity controls such as face and voice verification. There have been several examples in Australia of voice biometrics systems being tricked using cloning and other deepfake-like synthesis techniques.

Third, there is a risk that an attacker who successfully uses AI to bypass a basic identity system and gain access to an organisation's environment could unleash AI-powered malware that remains within the system, collects data, and observes user behaviour until the attacker is ready to escalate the attack through lateral movement or exfiltration of valuable information.

From this, it should be apparent that AI represents a multi-layered threat to current identity systems. If the evolution of AI that can be used for harm continues on its present trajectory, Australian organisations will soon be forced to confront and radically reassess their future identity options, and the protections wrapped around these identity systems.

A strong response

In this AI era, a combination of identity threat detection and response (ITDR) and decentralised identity (DCI) practices is emerging as one of the best ways to keep data and identities safe.

ITDR helps organisations closely monitor their IT environment for suspicious and anomalous activity. By focusing on identity signals in real time and understanding the permissions, configurations, and connections between accounts, ITDR can proactively reduce the attack surface while also detecting identity threats.
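To make the idea of real-time identity signals concrete, here is a minimal, self-contained sketch of one classic ITDR-style check: flagging "impossible travel" between consecutive logins for the same account. The event fields and speed threshold are illustrative assumptions, not a description of any vendor's implementation.

```python
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

# Illustrative threshold: travel faster than a commercial airliner
# between two logins suggests the credential is being abused.
MAX_SPEED_KMH = 900

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def flag_impossible_travel(events):
    """events: list of (timestamp, lat, lon) for one account, sorted by time.
    Returns the indices of logins that imply an impossible travel speed."""
    flagged = []
    for i in range(1, len(events)):
        t0, lat0, lon0 = events[i - 1]
        t1, lat1, lon1 = events[i]
        hours = max((t1 - t0).total_seconds() / 3600, 1e-6)
        if haversine_km(lat0, lon0, lat1, lon1) / hours > MAX_SPEED_KMH:
            flagged.append(i)
    return flagged

logins = [
    (datetime(2024, 10, 1, 9, 0), -33.87, 151.21),  # Sydney
    (datetime(2024, 10, 1, 9, 30), 51.51, -0.13),   # London, 30 minutes later
]
print(flag_impossible_travel(logins))  # → [1]: the second login is flagged
```

In a production ITDR deployment this kind of rule would sit alongside many other correlated signals (permission changes, new device fingerprints, unusual token use) rather than operating on its own.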

DCI, on the other hand, enhances security and privacy by reducing reliance on centralised data systems. Organisations that run centralised IAM data stores face a heightened risk of large-scale data breaches through AI-powered attacks. With DCI, identity verification is based on presenting a cryptographically verified credential instead of storing personal information in a centralised IAM database. It empowers individuals to manage their own digital identities and offers a secure, tamper-proof way for people to authenticate themselves. A by-product is that the attractiveness of a hack is significantly reduced: a breach would likely compromise an individual's records rather than every user's.
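The core mechanic, verifying a signed credential rather than looking a user up in a central database, can be sketched as follows. Real decentralised identity systems use asymmetric signatures (for example Ed25519) and standards such as the W3C Verifiable Credentials data model; the HMAC below is a stand-in purely to keep this illustration self-contained, and the key and claim names are invented for the example.

```python
import hashlib
import hmac
import json

# Illustrative issuer key; real systems use an asymmetric key pair so the
# verifier never holds a signing secret.
ISSUER_KEY = b"demo-issuer-key"

def issue_credential(claims: dict) -> dict:
    """Issuer signs a set of claims and hands the credential to the holder."""
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": sig}

def verify_credential(credential: dict) -> bool:
    """Verifier checks the signature; no central user database is consulted."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

cred = issue_credential({"subject": "did:example:123", "over18": True})
print(verify_credential(cred))    # → True
cred["claims"]["over18"] = False  # any tampering invalidates the signature
print(verify_credential(cred))    # → False
```

Because the verifier only ever sees the signed claims the holder chooses to present, there is no central honeypot of personal information for an attacker to target.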

With DCI serving as a frontline defence alongside ITDR, IAM best practices across the industry are being revised and refined, making it far more difficult for cybercriminals to use AI against organisations to execute identity takeovers and fraud.