Humans: the one cybersecurity problem AI won’t solve
Humans are the problem – and the solution – in cybersecurity. AI will drive innovation on both sides of the security war but won't replace human creativity.
So said an expert panel of CISOs who met in Melbourne this week. The panel, convened by crowdsourced cybersecurity provider Bugcrowd, debated solutions to the growing threat posed to businesses by bad actors who leverage AI.
The event took place at the AISA conference, which brought together 5,500 security professionals.
The company’s CEO, Dave Gerry, reminded CISOs that the average cost of a breach is predicted to reach US$4.35 million, according to the Ponemon Institute. IDC expects global spending on cybersecurity to reach $219 billion in 2023, up 12% on the year before.
The problem is particularly acute in the APAC region, where hybrid working has reached 60% penetration, putting a majority of the workforce outside the enterprise security perimeter and presenting a bigger attack surface than ever.
Another problem is the growth of IoT and interconnected devices. Ryan La Roche, CISO at St John of God, Australia’s biggest not-for-profit healthcare provider, said: “Everything is interconnected, with healthcare monitoring devices talking to one another and exchanging data. That creates incredible benefits for a healthcare outcome, but it creates some very interesting risk and can make a healthcare organisation a really serious target.”
Dan Maslin, group CISO at Australia’s largest university, Monash University, pointed to the growing sophistication of hackers. “We’re seeing zero days being discovered and exploited within 24 hours. I think that’s going to get worse.”
Maslin also warned that security perimeters were becoming more porous as organisations became networks of digital partners.
“It doesn’t matter how hard you make the shell of the organisation, you’ve got hundreds or even thousands of third parties connected in. They could become the Achilles heel for many organisations for the rest of the decade,” he said.
Panellists acknowledged the growing impact of AI both as an offensive weapon and a defensive tool.
Luke Barker, group owner for security at telecoms carrier Telstra, said: “From a detection and response perspective, I think we’ll see some significant advancement in leveraging the power of Gen AI to reduce the human effort needed to respond to threats more proactively. I see a shift away from pure volume of analyst numbers towards engineering capabilities that harness the power of Gen AI to keep ahead of the threat.”
Celebrated white hat hacker Sajeeb Lohani, head of security at Bugcrowd, believes AI will be a useful tool for hackers but not a threat to the survival of the species. According to the Inside the Mind of a Hacker research published by Bugcrowd, 94% of hackers either already use AI for ethical hacking or plan to start using it soon. Most (72%) do not believe AI will ever replicate human creativity.
Bugcrowd’s Gerry agreed: “AI is going to help make this entire industry more efficient, we’re going to become more productive, [but] it’s going to introduce a lot of new risks.”
“No matter how many tools are deployed, no matter how many new solutions and services and vendors are brought on board, this ultimately still comes down to the human being, and how do we make sure that we’re securing our teams or securing our infrastructure, but we’re doing this from a human-first approach.”
Gerry cited a Gartner prediction that by 2027, 50% of enterprise CISOs would have adopted human-centric security design practices, which take account of the impact of human behaviour and error on security.
Bugcrowd’s bug bounty programme rewards hackers for detecting vulnerabilities and potential exploits, but Lohani believes altruism is a stronger motivator. “Bugcrowd’s survey of 1,000 hackers found that 87% believe reporting a critical incident is more important than making money,” he said.