
AI: Friend, Foe, or Security Risk? Gigamon survey reveals AI is rewriting the rules of cybersecurity
Artificial Intelligence (AI) may be powering the future of business, but according to new research from deep observability leader Gigamon, it’s also rapidly becoming one of the biggest threats to enterprise cybersecurity.
In its 2025 Hybrid Cloud Security Survey, Gigamon reveals that AI is pushing today’s security infrastructure to breaking point. The survey — which gathered insights from more than 1,000 IT and security leaders across six countries — finds that 91% of organisations are reassessing their hybrid cloud risk strategies in response to AI-fuelled challenges.
AI Is Raising the Stakes and Expanding the Attack Surface
From AI-generated ransomware to attacks targeting large language models (LLMs), artificial intelligence is arming adversaries with tools that are faster, stealthier, and harder to stop. Over half of Australian organisations (56%) report direct attacks on their AI models, while the share reporting AI-powered ransomware incidents has surged to 58%, up from 41% just one year ago.
“Security teams are falling behind because the threat landscape is evolving faster than traditional tools can adapt,” said David Land, Vice President, APAC at Gigamon. “AI isn’t just changing how businesses operate — it’s changing how attackers think, and how fast they move.”
Public Cloud Comes Under Fire
Despite being the backbone of digital transformation, the public cloud is losing the confidence of security teams as AI workloads push risk levels higher. Seventy percent of global security leaders now consider it a bigger risk than any other environment, a dramatic reversal from post-COVID cloud optimism.
More than half say they’re hesitant to deploy AI in public cloud environments due to concerns around intellectual property exposure and data governance. In response, many organisations are considering pulling sensitive data back into private infrastructure — a clear sign that AI is driving a major architectural rethink.
Visibility Is the AI Security Equaliser
If AI is accelerating threats, then visibility may be the best defence. According to the survey, 64% of security leaders are prioritising real-time threat detection in the coming year, and deep observability is emerging as a key enabler. This approach goes beyond traditional monitoring by tapping into packet-level network data to detect anomalies that logs alone can miss.
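To make that contrast concrete, here is a minimal, hypothetical Python sketch of the kind of check network-level telemetry enables: it scans synthetic flow records for connections on unexpected ports and unusually large outbound transfers, activity an application log might never record. The record fields, port list, and threshold are illustrative assumptions, not drawn from the survey or any Gigamon product.

```python
# Illustrative sketch only: synthetic flow records stand in for packet-level
# network telemetry. Field names, ports, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class FlowRecord:
    src_ip: str
    dst_ip: str
    dst_port: int
    bytes_sent: int

EXPECTED_PORTS = {53, 80, 443}       # ports this environment expects to use outbound
EXFIL_THRESHOLD = 50 * 1024 * 1024   # flag flows moving more than ~50 MB out

def find_anomalies(flows):
    """Return (flow, reason) pairs that look suspicious at the network layer,
    even if the application wrote no log entry for them."""
    findings = []
    for flow in flows:
        if flow.dst_port not in EXPECTED_PORTS:
            findings.append((flow, "unexpected destination port"))
        elif flow.bytes_sent > EXFIL_THRESHOLD:
            findings.append((flow, "unusually large outbound transfer"))
    return findings

if __name__ == "__main__":
    sample = [
        FlowRecord("10.0.0.5", "93.184.216.34", 443, 120_000),
        FlowRecord("10.0.0.7", "203.0.113.50", 4444, 8_000),       # odd port
        FlowRecord("10.0.0.9", "198.51.100.20", 443, 90_000_000),  # big transfer
    ]
    for flow, reason in find_anomalies(sample):
        print(f"{flow.src_ip} -> {flow.dst_ip}:{flow.dst_port}: {reason}")
```

The point of the sketch is simply that network-derived signals sit below the application layer: a compromised workload can suppress or forge its own logs, but it cannot hide the traffic it actually sends.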
With 89% of respondents viewing deep observability as essential to managing AI risk, it’s clear that visibility is no longer just a nice-to-have — it’s foundational. And with 83% of Australian respondents reporting that deep observability is now a board-level concern, the urgency is cutting through at the top.
The AI-Security Balancing Act
Gigamon’s findings highlight a growing paradox: AI is both a powerful business tool and a potent threat vector. Its use is accelerating, but its risks are still poorly understood and difficult to manage within existing security frameworks.
“Deep observability helps turn AI from a liability into a strength,” said Land. “It gives security teams the insights they need to secure AI deployments — and to make faster, smarter decisions in the face of new, unpredictable threats.”