Transforming Cybersecurity with GenAI

By Sharat Nautiyal, Director, APJ Security Engineering, Vectra AI


Artificial Intelligence (AI) usage has surged over the last two years, driven by its remarkable ability to identify patterns and generate new content. As a result, Generative AI (GenAI) has emerged as a formidable disruptor within the cybersecurity sector.

GenAI-based digital tools have the potential to accelerate business growth by delivering new opportunities for enhanced innovation, productivity, accuracy and efficiency.

In line with this growth, many organisations are also adopting AI-based solutions for threat detection and prevention, significantly improving their ability to respond to threats in real time.

Governments throughout APAC, including Australia, have also recognised the strategic importance of AI and have devised national strategies and supporting policy frameworks to help navigate potential AI risks while also helping organisations unlock its undeniable potential for good. A good example is the Artificial Intelligence Expert Group, established by the Australian Government to inform policy development and regulatory frameworks.

When integrated carefully, GenAI models can be highly effective tools in proactive security defence programmes. However, on the flip side, they can also be used against an enterprise’s cyber defence in ways that we cannot afford to ignore.

Recent research suggests that 75% of cybersecurity professionals have seen an increase in AI-powered cyberattacks over the past year, with 85% attributing the rise to threat actors weaponising AI.

Most recently, GenAI tools like ChatGPT were seen as instrumental in the surge of email phishing attacks in the lead-up to the Paris Olympics, which proved a fertile breeding ground for threat actors seeking to infiltrate their victims' Microsoft 365 environments. Customers are also experiencing a growing number of incidents in which attackers abuse Microsoft Copilot using living-off-the-land techniques. This removes latency from attacks, giving perpetrators accelerated access to enterprise networks and critical data.

While Microsoft's intent with Copilot as an innovative GenAI productivity tool is in the right place, the argument has multiple facets. Copilot's biggest risk is identity-based threats, and when an identity is compromised, the impact on business continuity can be devastating.

This is because AI-enabled tools like Copilot give the attacker the same advantages they give the legitimate enterprise user. Once a Copilot-enabled account is hacked, the attacker can launch a GenAI-driven attack, turning the power of enterprise-level AI against the enterprise itself.

As such, speed becomes crucial. Without AI-driven behavioural analysis capability, SOC teams have little chance of discovering the breach, much less stopping it. It is therefore important to put guardrails in place that match the speed of an AI-driven attack with equally fast AI-driven behavioural analytics, preserving the integrity of the organisation's security posture.
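To make the behavioural-analytics idea concrete, here is a minimal, hypothetical sketch: baseline a user's normal activity volume and flag sharp deviations, such as the burst of data access an attacker with Copilot-assisted search might generate. The function name, telemetry, and threshold are illustrative assumptions, not any vendor's actual detection logic.

```python
import statistics

def is_anomalous(history, current, threshold=3.0):
    """Flag activity that deviates sharply from a user's baseline.

    history   -- past per-day activity counts for this user (illustrative)
    current   -- today's observed count
    threshold -- how many standard deviations count as anomalous
    """
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # A perfectly flat baseline: any change at all is unusual.
        return current != mean
    return abs(current - mean) / stdev > threshold

# Hypothetical telemetry: a user's typical daily file-access counts.
baseline = [40, 35, 50, 45, 38, 42, 47]

print(is_anomalous(baseline, 44))   # an ordinary day, within the baseline
print(is_anomalous(baseline, 600))  # a sudden mass-access spike stands out
```

Real behavioural analytics models many signals at once (logins, privilege use, data movement), but the principle is the same: the detection is statistical and automatic, so it can keep pace with an attack that is itself machine-driven.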

Yet despite all the above, Copilot promises a significant leap in AI-driven workplace productivity, enabling data-driven insights from GenAI models and claiming to save the average user up to 14 minutes a day (or five hours a month) by removing mundane tasks. Copilot for Security also empowers SOC teams to respond to cyber threats at the speed and scale of AI.

AI in cybersecurity is the future, and businesses that do not treat AI as a critically important component of their defence against automated attacks will be vulnerable. Advanced AI delivered as an integrated attack signal can stop today's most challenging hybrid cyberattacks. It also takes the ambiguity out of security analysts' day-to-day work and enables them to focus on what matters.

As AI technology matures and security specialists address specific challenges, organisations can expect even more powerful applications and a more proactive approach to security for a safer digital world.