Four risks Australian organisations still carry into their AI journeys

By Les Williamson, Regional Director Australia and New Zealand, Check Point Software Technologies

 

Early in the current enterprise AI era, there was a degree of panic among businesses and public sector organisations over the ‘great unknown’ of AI adoption: how publicly accessible AI tools like ChatGPT handled data and actually worked.

These concerns were – and continue to be – valid: sensitive information can be entered into these tools while organisations are still determining whether the tools have value in specific enterprise contexts.

The creation of effectively private versions of these public tools, alongside the integration of AI into almost all software suites used by organisations, was intended to resolve these risks. It enables organisations to apply AI to their own data in their own organisational context, without that data being used to train a publicly accessible AI model or leaking into the public domain.

We’re now a couple of years down the track, but private AI tools still have not been universally adopted. A recent analysis of AI-related traffic traversing corporate networks shows that the existence of sanctioned AI services is not deterring employees from using other AI tools and services that may or may not have the same corporate sign-off.

The analysis, part of the inaugural AI Security Report launched at RSA Conference 2025 by Check Point Research, found ChatGPT is still the most widely used AI service, seen in 37% of enterprise networks, followed by Microsoft Copilot at 27%. Behind those two, writing assistants lead, including Grammarly (25%), DeepL (18%), and QuillBot (11%), with video and presentation tools like Vidyard (21%) and Gamma (8%) also showing substantial adoption.

What this highlights is the challenge of an ‘if you build it, they will come’ approach to AI tooling in organisations. Employees are keen to incorporate AI into their workflows, but if a supported AI tool falls short, many will go looking for a tool that does what they need.

Organisations need to accept this is the case, and have their own tools and capabilities in place to find all GenAI services in use internally, assess their risk, and apply appropriate data protections.

 

In the real world

The risks of unsanctioned AI use are neither theoretical nor merely anecdotal.

Increasingly in Australia, we are seeing instances of risky or unsanctioned use of AI tools in meaningful, mission-critical operational contexts – and these are just the stories that make it outside an organisation’s four walls.

In one case, a community services worker entered “personal and delicate information” about a child into ChatGPT to aid in the drafting of a report filed in court proceedings. An analysis of logs by the employer found 13% of staff had used ChatGPT as part of their work, but provided little insight into how the AI was being used. When nearly 900 staff were asked to provide examples of “constructive and safe use of GenAI” in their work, only 10 responded.

Other uses of ChatGPT among lawyers have come to light in court because the text generated by the tools included hallucinated citations to case law which – upon examination – did not actually exist. Again, this is AI use incorporated into workflows and only discovered after the fact.

 

Four key risks

Through our research, we’ve identified four current AI usage risks for organisations. Understanding these risks is crucial for organisations aiming to leverage AI effectively while protecting their operations and sensitive information.

The first of these is shadow AI applications. While new GenAI applications emerge daily, not all can be trusted. Unauthorised AI tools and applications used by employees can lead to security vulnerabilities, compliance issues, and inconsistent data management, exposing the organisation to operational risk or data breaches. Organisations need proper AI monitoring and governance to counter this.

The second risk is data loss. AI models handle various types of data, some of which may be stored, shared with third parties, or protected by inadequate security practices. These issues raise privacy concerns, compliance challenges, and vulnerabilities to data leaks. Before integration, enterprises should assess AI applications against data protection requirements and industry best practices.

The third risk is exposure to vulnerabilities, either in the AI applications themselves or through the outputs of the models. Hidden vulnerabilities in AI-generated code, for example – such as outdated libraries or data poisoning in repositories – must be addressed with technical controls, policies, and best practices.

A solution such as Check Point’s GenAI Protect is necessary to enable the safe use of powerful AI tools internally. These solutions are designed to discover and assess the risk of GenAI apps being used in organisations, apply AI-based data classification to prevent data loss in real time, and meet regulations with enterprise-grade visibility and monitoring. The solution uses GenAI to protect GenAI, leveraging a GenAI classification engine to identify and prevent data loss.

Finally, as we transition into the world of Agentic AI, with agents taking on more of the process-intensive heavy lifting, excessive agency will be a concern. As agentic AI systems advance, they become more susceptible to manipulation by malicious actors who exploit vulnerabilities through prompt injection or data poisoning.

Data poisoning targets large language models (LLMs), causing them to ingest contaminated data either while they are being trained or, increasingly, as part of the real-time information fed to the models on an ongoing basis. Malicious actors exploit data-hungry AI models by strategically placing deceptive or harmful content online, where it is scooped up by AI and incorporated into its responses. Limited human oversight can lead to unintended decisions, exposing sensitive information and disrupting operations. Proper governance and safeguards are essential.

With these aspects covered, Australian organisations can manage the risks and truly capture the productivity benefits of the AI era.