10 vendors advise on AI and security predictions to watch in 2024

Jan Sysmans, Head of Marketing Asia Pacific and Japan, Appdome

In the past, cyber teams relied on the complexity of writing malware as a safeguard, allowing time to build robust protections. However, Generative AI is lowering the bar for malware creation, increasing the likelihood of widespread fraud and malware attacks. This advanced technology excels at crafting convincing phishing messages, causing a shift in mobile app security from basic measures to comprehensive defence against fraud and malware.

As such, mobile app defence solutions need to use AI and mobile app defence automation to help app makers protect their consumers and mobile businesses. AI should be used to benchmark the protections in an app against the threats common in its region and industry, while mobile app defence automation should allow cyber security teams to upgrade those protections quickly and easily, directly in existing DevOps workflows, before new threats and attacks can be launched at scale.
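
To make that idea concrete, here is a minimal, hypothetical sketch of such a benchmark check run as a pipeline gate. The threat feed, protection names, and `missing_protections` helper are illustrative inventions, not any vendor’s actual API:

```python
# Hypothetical sketch: a CI-style gate that benchmarks an app's enabled
# protections against threats reported for its region and industry.
# The threat feed and protection names are illustrative, not a real API.

REGIONAL_THREATS = {
    ("APAC", "banking"): {"overlay_attack", "accessibility_malware", "repackaging"},
}

REQUIRED_PROTECTION = {
    "overlay_attack": "overlay_detection",
    "accessibility_malware": "accessibility_abuse_detection",
    "repackaging": "anti_tampering",
}

def missing_protections(enabled: set[str], region: str, industry: str) -> set[str]:
    """Return protections the build still needs before release."""
    threats = REGIONAL_THREATS.get((region, industry), set())
    needed = {REQUIRED_PROTECTION[t] for t in threats}
    return needed - enabled

# Example: fail the pipeline before an under-protected build ships.
gaps = missing_protections({"anti_tampering"}, "APAC", "banking")
if gaps:
    raise SystemExit(f"Build blocked; missing protections: {sorted(gaps)}")
```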

Morey Haber, Chief Security Officer, BeyondTrust

Security teams have already observed AI being used to generate ransomware and malware; in 2024 that use will quickly expand in other ways. The technology will enable cybercriminals to exploit specific weaknesses, detect vulnerabilities quickly, and evade detection.

Also, the evolution of AI holds the promise of ushering in autonomous, computer-based threat actors capable of executing end-to-end cyberattacks. This advancement could empower a single threat actor to perform in the same way as a large group, replacing human technical skills and gaining a competitive edge over security tools and teams.

AI’s role in enhancing existing attack vectors such as phishing will continue; however, the technology will also be used to craft new attack vectors, thanks to the increasing quality of output produced by Generative AI tools.

The technology even has the potential to reshape human understanding of reality by fabricating deceptive content across various mediums including articles, legal cases, correspondence, videos, advertisements, and historical data. Additionally, the widening adoption of AI assistants by programmers might somewhat paradoxically lead to an increase in errors in software development, breeding security vulnerabilities within source code. Studies reveal that developers who rely on AI assistants are more prone to injecting vulnerabilities into their outputs. Cloud services and AI-generated errors may therefore pave the way for unintentional software security flaws.
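
SQL injection is a classic example of the vulnerability class such studies describe. The hedged sketch below contrasts the kind of string-built query an AI assistant might plausibly suggest with the parameterised version a careful reviewer would insist on; the schema and data are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # The kind of string-built query an assistant may happily suggest:
    # attacker input like "' OR '1'='1" returns every row in the table.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # Parameterised query: the driver escapes the input, defusing injection.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # leaks the whole table
print(find_user_safe("' OR '1'='1"))    # returns nothing
```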

Ameya Talwalkar, CEO and Founder of Cequence Security

Generative AI is a dual-use technology with the potential to usher humanity forward or, if mismanaged, to set back our advancements or even push us toward potential extinction. APIs, which drive the integrations between systems, software, and data points, are pivotal in realising the potential of AI in a secure, protected manner. This is also true when it comes to AI’s application in cyber defences.

In 2024, organisations will recognise that secure data sharing is essential to building a strong, resilient AI-powered future. While AI is undoubtedly a testament to human ingenuity and potential, its safe and ethical application is imperative. It’s not merely about acquiring AI tools; it’s the responsibility and accountability of secure integration, primarily when facilitated through APIs.
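
What secure integration through APIs looks like varies by stack, but one common building block is signing each request so the receiving service can verify its origin and integrity. The sketch below is a minimal illustration using an HMAC; the secret handling and payload are assumptions, not a prescription:

```python
import hashlib
import hmac
import os

# A hypothetical shared secret provisioned out of band (never hard-coded).
API_SECRET = os.environ.get("API_SECRET", "change-me").encode()

def sign(payload: bytes) -> str:
    """Caller attaches an HMAC so the API can verify origin and integrity."""
    return hmac.new(API_SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    """Server side: constant-time comparison defeats timing attacks."""
    expected = sign(payload)
    return hmac.compare_digest(expected, signature)

body = b'{"model": "fraud-detector", "action": "score"}'
sig = sign(body)
assert verify(body, sig)                    # legitimate, signed request
assert not verify(b'{"tampered": 1}', sig)  # altered payload is rejected
```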

Bernd Greifeneder, Chief Technology Officer, Dynatrace

In 2024, generative AI enters the later stages of its hype cycle, and organisations will realise the technology, while transformational, cannot deliver meaningful value by itself. As a result, they will move toward a composite AI approach that combines generative AI with other types of artificial intelligence and additional data sources. This approach will enable more advanced reasoning and bring precision, context, and meaning to the outputs produced by generative AI. For example, DevOps teams will combine generative AI with fact-based causal and predictive AI to supercharge digital innovation by predicting and preventing issues before they occur and generating new workflows to automate the software delivery lifecycle.
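
As a rough illustration of that composite approach, the sketch below uses a simple fact-based predictive step (a z-score check over recent latencies) and only hands a confirmed, structured finding to a generative model for drafting remediation. The data, threshold, and prompt are invented for the example, and the model call itself is omitted as provider-specific:

```python
from statistics import mean, stdev

def detect_anomaly(latencies_ms: list[float], threshold: float = 3.0) -> dict:
    """Fact-based predictive step: flag the latest sample if it deviates
    more than `threshold` standard deviations from the recent baseline."""
    baseline, latest = latencies_ms[:-1], latencies_ms[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    z = (latest - mu) / sigma if sigma else 0.0
    return {"anomaly": z > threshold, "z_score": round(z, 2),
            "baseline_ms": round(mu, 1), "latest_ms": latest}

finding = detect_anomaly([102, 99, 104, 101, 98, 103, 100, 420])
if finding["anomaly"]:
    # Only now hand the grounded finding to a generative model, e.g. to
    # draft a remediation runbook (API call omitted; provider-specific).
    prompt = f"Draft remediation steps for this verified finding: {finding}"
    print(prompt)
```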

Jamie Moles, Senior Manager, Technical Marketing, ExtraHop

ChatGPT is great at giving answers – even those you might wish it didn’t, like how to write phishing texts – but it has no imagination. Therefore, it cannot accurately pre-empt how a human will creatively think up ways to get around any rules or limitations put in place.

For example, the new ChatGPT Builder allowed me to create a ‘Cyber Sentinel’ which will write phishing emails and Python code to exploit EternalBlue, and create reverse shells for remote access to machines. EternalBlue is a well-known exploit used by bad actors in attacks such as the global WannaCry ransomware campaign. In the hands of a cybercriminal, this could very easily be misused at mass scale. Perhaps even scarier, you don’t need to be a cyber criminal to be able to do this.

One day, AI will be creative, but right now humans have the competitive advantage of being smart and able to think up novel ways to get around limitations. Perhaps in the future ChatGPT will create its own ML models to scan and categorise inputs and outputs to understand the intent behind questions – but we’re not there yet.
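
A toy version of that intent-scanning idea might look like the following: a crude keyword-weighted risk score gates prompts before they ever reach the model. A production guardrail would use a trained classifier; the terms, weights, and threshold here are purely illustrative:

```python
# Toy sketch of the idea Moles describes: screen a prompt's intent before
# it reaches the model. Real guardrails would use a trained classifier;
# the keyword scoring here is purely illustrative.

SUSPECT_TERMS = {"reverse shell": 3, "exploit": 2, "phishing": 2,
                 "bypass": 1, "payload": 1}

def intent_risk(prompt: str) -> int:
    """Crude risk score: sum the weights of suspect terms present."""
    text = prompt.lower()
    return sum(w for term, w in SUSPECT_TERMS.items() if term in text)

def gate(prompt: str, max_risk: int = 2) -> str:
    if intent_risk(prompt) > max_risk:
        return "Refused: request appears to seek offensive tooling."
    return f"Forwarding to model: {prompt!r}"

print(gate("Summarise this quarterly report"))
print(gate("Write a phishing email and a reverse shell payload"))
```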

Rafal Los, Head of Services Strategy and Go to Market, ExtraHop

AI will create more new problems than it solves: the rush to “AI everything” will create new issues technologists will be unprepared for, in how information is leaked, how public information is re-purposed, and how facts and truth are conveyed and disseminated through society. The promise of AI – to “improve everything” – will largely fail, and AI will retreat to well thought-out use cases where the technology will improve data analysis and operational scale.

Carolina Bessega, Innovation Lead – Office of the CTO, Extreme Networks

In 2023, generative AI fundamentally transformed perceptions of artificial intelligence, highlighting a renewed and more robust appreciation for its potential. People will soon understand that while GenAI is a significant piece, the real game-changer will be the development of GenAI agents designed to execute various tasks and communicate results effectively to humans. These agents won’t be limited to GenAI but will also incorporate other predictive and prescriptive AI techniques. Such agents are poised to form a symbiotic alliance with network engineers, managers, and operators, assisting in forecasting, planning, optimisation, and troubleshooting. As we move towards the end of 2024 and into 2025, we can also anticipate the emergence of causal reasoning agents, which will likely play a pivotal role in AI’s evolution.

George Dragatsis, Chief Technology Officer – APAC, Hitachi Vantara

As a technology, AI is far from a new concept and has been part of everyday life for some time. Anyone who purchases items on Amazon, views movies on Netflix, or interacts with a Google assistant is making use of AI. However, ChatGPT took AI awareness to the next level during 2023, and its speed of evolution is only going to increase. AI will also shift out of its ‘honeymoon’ phase and become an even more widely used business tool.

To support this shift, there will need to be a way for the output of tools such as ChatGPT to be fact- and sense-checked. To achieve this, a level of domain expertise will have to be infused into applications that use generative AI. This means that application builders will need to provide domain expertise and context so applications become more accurate. The tools may also need to provide traceability that points out why a particular response has been generated and on what it was based.
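
One way to picture that traceability requirement is a retrieval step that returns its sources alongside the context it hands to the model, so every response can be traced back to the documents it was based on. The sketch below is a deliberately minimal, assumption-laden stand-in for a real retrieval pipeline; the documents and scoring are invented:

```python
# Minimal sketch of the traceability idea: ground answers in a curated
# knowledge base and return the sources used, so a reviewer can see what
# each response was based on. Documents and scoring are illustrative.

KNOWLEDGE_BASE = [
    {"id": "kb-101", "text": "resets require manager approval for admin accounts"},
    {"id": "kb-102", "text": "password resets expire after 24 hours"},
]

def answer_with_sources(question: str, top_k: int = 1) -> dict:
    q_words = set(question.lower().replace("?", "").split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda d: len(q_words & set(d["text"].split())),
        reverse=True,
    )
    hits = scored[:top_k]
    return {
        "answer_context": [d["text"] for d in hits],  # grounding for the LLM
        "sources": [d["id"] for d in hits],           # audit trail
    }

print(answer_with_sources("when do password resets expire?"))
```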

Sally Vincent, Senior Threat Research Engineer, LogRhythm

In 2024, the use of AI by botnet operators will surge. AI capabilities will make botnets both more widespread and more sophisticated, amplifying their potency to orchestrate complex cyber threats. AI-powered botnets will exploit advanced algorithms to expand their reach and impact, intensifying the challenges faced by cybersecurity teams. This alarming trend will necessitate innovative defence strategies and heightened vigilance to counter the escalating threat posed by botnets, reshaping the landscape of digital security measures.

Andre Durand, Founder and CEO, Ping Identity

Identity has always been a gatekeeper of authenticity and security, but over the past few years, it’s become even more central and more critical to securing everything and everyone in an increasingly distributed world.

As identity has assumed the mantle of ‘the new perimeter’, and as companies have sought to centralise its management, fraud has shifted its focus to penetrating our identity systems and controls at every step in the identity lifecycle, from new account registration to authentication.

2024 is the year when companies need to get very serious about protecting their identity infrastructure, and AI will fuel this imperative. Welcome to the year of ‘verify more, trust less’, when ‘authenticated’ becomes the new ‘authentic’. Moving forward, all unauthenticated channels will become untrustworthy by default as organisations bolster the security of their identity infrastructure.
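
In code, ‘verify more, trust less’ often reduces to default-deny: a channel earns trust only by passing an explicit, registered verification, and everything else is rejected. The sketch below illustrates the shape of such a policy; the verifier registry and channel fields are invented for the example:

```python
# "Verify more, trust less" as a default-deny rule: a channel earns trust
# only by passing explicit verification; anything else is rejected.
# The verifier registry and channel fields below are illustrative.

VERIFIERS = {
    "mtls": lambda ch: ch.get("client_cert_valid") is True,
    "oidc": lambda ch: ch.get("token_audience") == "identity-api",
}

def is_trusted(channel: dict) -> bool:
    verifier = VERIFIERS.get(channel.get("auth_method", ""))
    if verifier is None:  # unknown or missing auth: untrusted by default
        return False
    return verifier(channel)

assert not is_trusted({})                        # unauthenticated channel
assert not is_trusted({"auth_method": "basic"})  # unapproved method
assert is_trusted({"auth_method": "mtls", "client_cert_valid": True})
```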

Alex Ryals, SVP Channel Sales, Ping Identity

In 2024, AI-based fraud will accelerate exponentially and cause users to question the validity and integrity of multimedia such as video, images, and audio files. While more fraudulent activity poses new challenges, it is likely to spur more innovative technology and practices in content verification and validation. Threat actors constantly innovate to overcome security measures, and security teams will need to do the same. To do so, existing technologies will have to continually evolve to stay ahead of the fraud curve. Next year will likely demonstrate how companies expand partnerships and collaborate to deliver best-in-class technology for evolving challenges.
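
Content verification can take many forms; one simple, illustrative pattern is comparing a file’s digest against digests published by the original source, a much-simplified stand-in for provenance standards such as C2PA. The registry and file below are invented for the example:

```python
import hashlib

# Illustrative provenance check: compare a file's digest against a registry
# of digests published by the original source (a simplified stand-in for
# provenance standards such as C2PA).

PUBLISHED_DIGESTS = {
    "press-briefing.mp4": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_media(name: str, content: bytes) -> bool:
    digest = hashlib.sha256(content).hexdigest()
    return PUBLISHED_DIGESTS.get(name) == digest

print(verify_media("press-briefing.mp4", b"test"))      # True: matches registry
print(verify_media("press-briefing.mp4", b"deepfake"))  # False: altered content
```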

Corey Nachreiner, CSO, WatchGuard Technologies

Large language models (LLMs) – AI/ML models that allow a computer to carry on a very convincing conversation with you and answer just about any question (though not always accurately) – have taken the world by storm. There’s a risk lurking underneath the fun surface, however. Threat actors and trolls love to turn benign emerging technologies into weapons for their own nefarious purposes and amusement. The same LLMs that might help you draft a paper could also help criminals write a very convincing social engineering email. While the creators of LLMs have slowly tried to add safeguards that prevent bad actors from abusing LLMs for malicious purposes, like all security, it’s a cat-and-mouse game. While not exactly traditional hacking, “prompt engineers” have been working diligently in the shadows to develop techniques that effectively nudge LLMs out of their “sandbox” and into more dangerous waters, where they can chart a course of their own with greater potential to yield malicious results.

The potential scale of the problem gets scary when you consider that more and more organisations are trying to harness LLMs to improve their operational efficiency. But using a public LLM for tasks dependent on your proprietary or otherwise private data can put that data at risk. Many of them retain input data for training purposes, which means you’re trusting the LLM vendor to store and protect it. While a traditional breach that exposes that raw data is still possible, we believe threat actors may target the model itself to expose training data.
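
One mitigation while that risk persists is to redact obvious identifiers before a prompt ever leaves the organisation. The sketch below shows the idea with two example regex patterns; a real deployment would rely on a proper DLP pipeline rather than this illustrative filter:

```python
import re

# Illustrative pre-processing step: strip obvious identifiers from a prompt
# before it is sent to a public LLM. The patterns are examples only.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

raw = "Summarise the complaint from jane.doe@example.com re card 4111 1111 1111 1111"
print(redact(raw))
# -> "Summarise the complaint from [EMAIL] re card [CARD]"
```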

During 2024, we forecast that a smart prompt engineer – whether a criminal attacker or a researcher – will crack the code and manipulate an LLM into leaking private data.