Combating financial crime with AI

By Keir Garrett (pictured), Regional VP for ANZ, Cloudera

The first known safe dates to the 13th century BC and was made of wood, with movable pins that dropped into holes to lock it in place. In today’s digital fight against fraud and scams, many of the same principles still apply, albeit backed by advanced technologies such as biometrics, blockchain and nanotech.

This comes at a time when criminals are becoming increasingly sophisticated and well-resourced. In the last year alone, Australians lost more than AUD$400 million to scams, and the ACCC believes the real figure is likely much higher, given that many victims don’t report their losses.

The complexity of fraud is also rising. Recently, scammers produced a deepfake video of Australian billionaire Andrew Forrest, in which his image and voice were manipulated to promote a fraudulent cryptocurrency scheme. This incident is not isolated; experts from Deloitte have warned that fraudsters will use generative artificial intelligence (GenAI) to enhance their attacks.

So how exactly is AI compounding cyber threats?

Each new channel unveils new fraud opportunities

Rapid digital transformation across Australia and New Zealand has ushered in a multitude of online services, enhancing consumer convenience. However, this interconnectedness provides cybercriminals with numerous entry points. Gaining access to just one service can expose a wealth of personal data. Easily accessible generative AI tools have also made social engineering attacks, such as phishing, far more convincing. The speed and flexibility of these tools let attackers automate, scale up and fine-tune their methods, exploiting an attack surface that organisations have often expanded without realising it.

While the public sector is moving quickly to address fraud with regulations like the Commonwealth Fraud Control Framework, fighting AI-driven fraud is increasingly resource-intensive for many in the private sector. Investigators are spending more time combing through countless records and behavioural indicators to identify fraud. And just as one fraud trend is countered, criminals switch to a different attack method.

How can financial institutions better fight fraud in the face of high-cost pressures – while adhering to tightening compliance regulations?

Fighting financial fraud begins with trusted data and trusted AI

Fortunately, AI can also be mobilised for good. AI and machine learning (ML) tools can help organisations spot signs and patterns of fraud in real time by helping investigators automate behavioural analysis and decision-making. When implemented organisation-wide, these technologies can significantly bolster predictive fraud defences, while reducing costs and enhancing productivity.

Mobilising AI and ML to fight fraud is most effective when organisations can pull together sources of trusted, real-time data to train the models to spot behavioural trends. Training datasets have to be as complete and relevant as possible – for example, businesses should integrate data that provides behavioural insights, such as banking records and credit scores, so that AI can reliably recognise indicators of fraud.
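
To illustrate the idea, the short sketch below trains a simple classifier on a hypothetical table of behavioural features such as transaction patterns and credit scores. The file name, column names and model choice are illustrative assumptions only, not a reference to any particular institution’s data or toolset.

```python
# Minimal sketch: training a fraud classifier on hypothetical behavioural features.
# The CSV file, column names and labels are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical labelled dataset combining banking records and credit scores
df = pd.read_csv("transactions_labelled.csv")
features = ["txn_amount", "txn_count_24h", "account_age_days", "credit_score"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["is_fraud"],
    test_size=0.2, stratify=df["is_fraud"], random_state=42,
)

# Class weighting helps with the heavy imbalance typical of fraud labels
model = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=42)
model.fit(X_train, y_train)

# How well does the model separate fraudulent from legitimate behaviour?
print(classification_report(y_test, model.predict(X_test)))
```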

Organisations must also ensure they have the proper infrastructure to support AI development, as unifying, cleaning, and preparing this sea of data for training takes an enormous amount of compute power. Without quality data to train AI and ML tools, investigators could end up dealing with AI hallucinations and more false positives, wasting time and eroding the trust that business leaders might have in such solutions.

How financial firms are using AI and ML to fight fraud

Financial institutions around the world are waking up to the magnitude of fraud, with a rising number scrambling to put fraud in a headlock by investing in revolutionary digital technologies like biometrics and facial recognition tools. Recently, Australian banks banded together to introduce mandatory biometric checks for customers opening new accounts by the end of 2024. Dubbed a “new offensive in the war on scams”, the Scam-Safe Accord demonstrates the level of proactivity Aussie banks are taking to help their customers stay one step ahead of sophisticated criminals.

Australian banking giant ANZ Bank practises defence in depth, leveraging AI and ML to ramp up its fight against fraudsters with a new security capability to detect mule accounts. Following a successful pilot in April 2023, which detected nearly 1,400 high-risk accounts, the solution will identify and block mule accounts used to receive funds from scam victims and other fraudulent activities. The new capability will be supported by a dedicated team working alongside ANZ’s 440 customer protection specialists.

Newer data management technologies, like hybrid data platforms, are also favoured to facilitate the integration, governance, and analysis of data in real time across environments, securely and compliantly. Critical to this success is an inbuilt layer of security and governance that can be applied consistently.

Harnessing new data technologies to mobilise AI and ML against fraud

To combat fraud, organisations must design comprehensive strategies and policies to enable staff to harness new tools effectively. Here are four steps organisations can take:

  • Organise data into a single golden source of truth

AI models must be trained on data that is relevant and complete. Organisations should build data pipelines using open standards and interoperable data formats to ensure that data can be collected, integrated, processed, and moved freely from sources across the enterprise into training datasets (a minimal sketch of this follows the list).

  • Enhance data governance practices to ensure data used by AI is trusted

Data collected across the organisation must be clean for AI models to deliver accurate insights on fraud. Establishing a data stewardship working group to train employees on data governance, conduct regular audits, promote best practices, and enforce compliance will ensure that data used is trusted and AI-ready. 

  • Enable real-time data use so AI can pivot to evolving threats

AI must harness data in real time to predict threats. Companies should look to implement technologies that accelerate time-to-insight – for example, streaming analytics solutions that enable data teams to analyse data as it moves from source to destination (see the streaming sketch after this list).

  • Use data management platforms to better coordinate the fight against fraud

Managing numerous streams of data across environments, training multiple AI and ML models, and enforcing data governance enterprise-wide is not easy. Organisations should use data management platforms to enable stakeholders to simplify, centralise, and improve command and control. Businesses should also use platforms that enable stakeholders to foresee and manage compliance risks.
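
To make the first step more concrete, here is a minimal sketch of landing data from two hypothetical source extracts in an open, interoperable format (Parquet) so that any downstream engine can consume it for training. The file paths, column names and join key are assumptions for illustration, not a prescribed pipeline.

```python
# Minimal sketch: integrating source extracts into an open, interoperable format.
# File paths and column names are illustrative assumptions; writing Parquet with
# pandas requires the pyarrow (or fastparquet) package to be installed.
import pandas as pd

# Pull records from two hypothetical source extracts
banking = pd.read_csv("core_banking_extract.csv", parse_dates=["event_time"])
credit = pd.read_csv("credit_scores.csv")

# Integrate the sources on a shared key and keep only the fields needed for training
dataset = banking.merge(credit, on="customer_id", how="left")
dataset = dataset[["customer_id", "event_time", "txn_amount", "credit_score"]]

# Write to Parquet so any engine that speaks the open format can consume it
dataset.to_parquet("transactions_golden.parquet", index=False)
```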
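
And for the third step, here is a sketch of what streaming analytics over moving data can look like, using Spark Structured Streaming to read transactions from a Kafka topic and flag accounts whose five-minute volume looks suspicious. The broker address, topic name, schema and thresholds are all illustrative assumptions, not a recommended configuration.

```python
# Minimal sketch: real-time analytics on a stream of transactions.
# Broker, topic, schema and thresholds are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("fraud-stream-sketch").getOrCreate()

schema = StructType([
    StructField("account_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read raw transaction events as they arrive on a Kafka topic
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "transactions")
       .load())

txns = (raw.select(F.from_json(F.col("value").cast("string"), schema).alias("t"))
           .select("t.*"))

# Flag accounts whose five-minute transaction volume exceeds simple thresholds
alerts = (txns
          .withWatermark("event_time", "10 minutes")
          .groupBy(F.window("event_time", "5 minutes"), "account_id")
          .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("txn_count"))
          .where("total_amount > 10000 OR txn_count > 20"))

# In practice the sink would be an alerting system; console output keeps the sketch simple
query = (alerts.writeStream
         .outputMode("update")
         .format("console")
         .start())
query.awaitTermination()
```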

Financial institutions need predictive, real-time data to stay ahead

Fraud protection is like a game of cat-and-mouse. As soon as the next digital channel becomes popular, financial criminals will be there, finding completely new ways of committing fraud. While criminal tactics and channels may change, one thing remains constant – predictive, real-time data, AI and ML will be key to helping financial institutions navigate today’s complex digital landscape with increased confidence.