Is AI Friend or Foe? How AI can empower security teams to unleash their full potential

By Sharat Nautiyal (pictured), Director of Security Engineering, Vectra AI APJ

 

Make no mistake, the ongoing impact of Generative Artificial Intelligence (GenAI) on the financial services industry (FSI) continues to reshape the cyber security landscape. Right now, financial services institutions throughout Australia face a growing velocity of cyber breaches, many exploiting the vulnerabilities introduced by increasing enterprise adoption of GenAI tools. As a result, more and more citizens are finding their financial data compromised. In some cases, like the recent Prudential Financial data breach, hackers gained access to the personal data of over 2.5 million customers. Closer to home, the 2023 Latitude Financial breach affected over 7 million Australian and New Zealand citizens. IBM’s Cost of a Data Breach Report also found that the global average cost of a data breach in 2023 reached USD 4.45 million.

That said, Governments throughout APAC, including Australia, have also recognised the strategic importance of AI and have devised national strategies and supporting policy frameworks to help countries navigate potential AI risks while also helping organisations unleash its undeniable ability to do good. A great example of this is the Artificial Intelligence Expert Group, introduced by the Australian Government to help inform policy development and regulatory frameworks.

Finding a needle in a stack of needles: the current security landscape

While it is encouraging to see the Government putting these types of guard rails in place to ensure the responsible and ethical use of AI, we can’t turn a blind eye to the fact that threat actors are increasingly targeting the new attack surfaces created by GenAI adoption. Large Language Models (LLMs) have access to proprietary corporate data and can not only make decisions but also act on them. When attackers gain control of GenAI tools through identity attacks, they inherit that same access.

Today, catching an attack before any damage is done in a modern hybrid network is a bit like finding a single needle in a stack of needles, and threat actors know this. Hybrid attacks can start with anyone or anything, move anywhere at any time, and progress at speed to disrupt business operations at scale, even when every possible preventative measure is in place.

For a financial services organisation to be resilient in this environment, it comes down to the SOC’s capabilities and competence in finding that needle sooner rather than later. For this, they need the 3Cs – coverage, clarity, and control.

 

Calling it as it is – the problem with perimeter security

Our recent State of Threat Detection Report provides insights into the challenges facing security teams in the current AI powered attack landscape:

  • 71% of organisations think they’re already compromised but don’t know it yet
  • 90% are unable to keep pace with the number of alerts coming in
  • SOCs receive an average of 4,400 alerts per day, 83% of which are false positives
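To put those alert volumes in perspective, here is a short back-of-the-envelope calculation using the figures above (the two-minutes-per-alert triage time is an illustrative assumption, not from the report):

```python
# Back-of-the-envelope SOC triage load, using the report's figures.
ALERTS_PER_DAY = 4_400
FALSE_POSITIVE_RATE = 0.83          # 83% of alerts turn out to be noise

false_positives = round(ALERTS_PER_DAY * FALSE_POSITIVE_RATE)
real_signal = ALERTS_PER_DAY - false_positives

# Assumed (optimistic) two minutes of analyst time per alert:
analyst_hours_per_day = ALERTS_PER_DAY * 2 / 60

print(false_positives)              # 3652 alerts investigated for nothing
print(real_signal)                  # 748 alerts that may matter
print(round(analyst_hours_per_day, 1))  # 146.7 analyst-hours, every day
```

Even under these generous assumptions, the daily volume outstrips what any realistically sized team can triage by hand, which is the backdrop for the findings below.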

IBM research has also found that 67% of security operations centre (SOC) analysts say that, despite all the advancements in technology, processes and people, time to detect has not improved in the last two years. Nearly half say it is worse than it was two years ago, and two-thirds of security leaders believe an incident could have been stopped if their team had had more capability.

We put this down to the fact that many security decision-makers are still using the age-old strategy of protecting the perimeter. You can defend everything against what may be coming onto the network, but what about attacks already inside it, moving laterally and signalling out? We must be able to detect more than what crosses the boundary.

Humans will always make mistakes, whether they are technical developers, cybersecurity defenders, or just regular employees. We need to invest in visibility and security awareness to improve security controls, and get better at working out how AI and GenAI can protect us now and into the future.

The mindset of building a castle and a moat to protect from outside threats really needs to change.

 

How AI can help with the fundamentals of visibility and awareness

One of the most important aspects of security is good visibility. No matter how sophisticated the solution you use, if you do not have visibility and situational awareness of your network and applications, attackers will gain the upper hand. I believe that at least 70% of attacks can be stopped through good visibility alone.

Visibility aids in situational awareness, and this is where AI can be a masterful assistant to SOC teams. In the era of AI-based attacks, the only way you can fight AI is with AI. An AI-enhanced solution has the capacity to monitor all activity happening in a network, understand what users are typically doing, and know what data is being sent out of an organisation. With these fundamental pillars, many attacks, both simple and sophisticated, can easily be stopped.
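As an illustration only (not any vendor's implementation), the baselining idea described above can be sketched as a simple statistical model: learn each user's typical outbound data volume, then flag strong deviations. All names, values and the z-score threshold here are hypothetical:

```python
from statistics import mean, stdev

def build_baseline(history: list[float]) -> tuple[float, float]:
    """Learn (mean, stdev) of a user's historical outbound volume."""
    return mean(history), stdev(history)

def is_anomalous(observed: float, baseline: tuple[float, float],
                 z_threshold: float = 3.0) -> bool:
    """Flag only large upward spikes (potential exfiltration)."""
    mu, sigma = baseline
    if sigma == 0:
        return observed != mu
    return (observed - mu) / sigma > z_threshold

# Hypothetical user: ~125 MB/day of normal outbound traffic.
history = [120.0, 140.0, 110.0, 130.0, 125.0]
baseline = build_baseline(history)

print(is_anomalous(128.0, baseline))   # ordinary day -> False
print(is_anomalous(950.0, baseline))   # sudden 950 MB upload -> True
```

Real AI-driven detection models far richer behaviours than a single z-score, but the principle is the same: a learned baseline of "normal" is what turns raw visibility into situational awareness.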

In this age of GenAI, we’re just at level one. As the AI race between the large providers – OpenAI, Microsoft and Google – intensifies, the sophistication and capability of these systems will grow. Security teams must understand both how to leverage these tools and how to approach their security from the ground up.

When it comes to the modern hybrid enterprise, hybrid attackers are rendering traditional approaches to threat detection and response inefficient and ineffective. There is a need to eliminate siloed threat detection and response in the increasingly common hybrid attack landscape, and the answer lies in AI. AI-driven solutions can cut through the noise, bringing clarity and protecting against cyber-attacks quickly and at scale.

The control piece comes in with the SOC analysts. With attack signal intelligence at their disposal, instead of spending up to two and a half hours investigating a threat that turns out not to be real, they are more likely to spend less than an hour digging into an entity that has been given a higher urgency score.
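The prioritisation step can be illustrated with a toy triage queue. The entity names, scores and threshold below are hypothetical, not any vendor's scoring model:

```python
# Hypothetical triage queue: sort detected entities by urgency score so
# analysts open the highest-risk investigation first.
entities = [
    {"name": "host-db-01",  "urgency": 92},   # illustrative scores only
    {"name": "user.jsmith", "urgency": 35},
    {"name": "svc-backup",  "urgency": 71},
]

URGENT_THRESHOLD = 70   # assumed cut-off for immediate investigation

queue = sorted(entities, key=lambda e: e["urgency"], reverse=True)
immediate = [e["name"] for e in queue if e["urgency"] >= URGENT_THRESHOLD]

print(immediate)   # ['host-db-01', 'svc-backup']
```

The point of the sketch is the workflow, not the numbers: scoring turns an undifferentiated alert pile into a ranked queue, which is what gives analysts back control of their time.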

 

The role of AI and the need to evolve our security teams

By extending the capacity of SOC experts, AI helps bridge the talent gap and improves productivity. At the heart of the matter, AI helps SOCs maximise talent by enabling a shift from a detection mindset to a signal mindset.

AI takes the ambiguity out of the engineer’s and analyst’s day-to-day so they can focus on what matters, including onboarding and training staff or enhancing their own skills. Zero-day exploits are a good example of how AI can assist security teams and how we can set those teams up for maximum value. With AI carrying the manual load, combined data science and security research teams can sift through a great deal of data and identify what matters, so the vulnerability can be patched or removed to stop attackers in their tracks.

AI finds the problem, we deal with it

If we as security leaders take a step back and focus on having a robust system that can look at attacker behaviours and use AI where relevant, we can build a very smart brain inside a network for an organisation. This smart brain, which is equipped with data and learns every day, every second, based on what it sees in your network, can be our answer to defend against the unknown.

As we work to evolve security within our organisations and use AI to augment SOCs, we must be open to change. It may make sense, sooner rather than later, for our security teams to have a senior AI role that deeply understands how the technology can impact and benefit the business, or to encourage a Chief Data Officer to work closely with the CISO to maximise the use of AI for security.