Achieving effective security in an AI-assisted software development world

By Matias Madou, Co-Founder & Chief Technology Officer at Secure Code Warrior

Since the launch of ChatGPT back in late 2022, Artificial Intelligence (AI) has permeated many areas – and one of the most significant is software development.

According to a recent survey by GitHub, 92% of US-based developers are already using AI coding tools both in and outside of work. More than half of those surveyed said AI helps them improve their skills (57%) and boost their productivity (53%).

Based on this data, it’s highly likely that AI-assisted development will become even more of a norm in the near future. Organisations will therefore have to establish policies and best practices to effectively manage it, just as they’ve done with cloud deployments, Bring Your Own Device (BYOD) and other tech-in-the-workplace trends.

However, such oversight remains a work in progress. Many developers, for example, engage in what’s called ‘shadow AI’ by using these tools without the knowledge or approval of their organisation’s IT department or senior management.

Those managers include chief information security officers (CISOs), who are responsible for setting the guardrails so developers understand which AI tools and practices are acceptable, and which aren’t. CISOs need to lead a transition from the uncertainty of shadow AI to a more known, controlled and well-managed Bring Your Own AI (BYOAI) environment.

The time for the transition is now

Recent academic and industry research reveals a precarious state of affairs. Some 44% of organisations are concerned about risks related to AI-generated code, according to the State of Cloud-Native Security Report 2024. Research from Snyk shows that 56% of software and security team members say insecure AI suggestions are common.

Meanwhile, four out of five developers bypass security policies to use AI, yet only one in ten is scanning most of their code. This is often because scanning adds extra cycles of code review and thus slows overall workflows.

The adoption of a well-conceived and well-executed BYOAI strategy would greatly help CISOs overcome these challenges as developers leverage these tools to crank out code at a rapid pace. With close collaboration between security and coding teams, CISOs will no longer stand outside the coding environment with zero awareness of who is using what.

They will need to cultivate a culture in which developers recognise they cannot trust AI blindly, because doing so will lead to a multitude of issues down the road. Many teams already know the cost of ‘working backwards’ to fix poor coding and security decisions that weren’t addressed from the start, and growing awareness of AI’s limitations should make that lesson even clearer to developers going forward.

Creating an effective culture

To establish and nurture an effective culture that promotes secure use of AI tools, CISOs will need to incorporate a number of practices and perspectives. These include: 

Establishing visibility: The best way to eliminate shadow AI is to remove AI from the shadows. CISOs need to acquire ‘lay of the land’ visibility into which tools developer teams are using, which they aren’t, and why. With this, they will have a solid sense of where the code is coming from and whether any AI involvement is introducing cyber risks.

Striking a security/productivity balance: CISOs cannot keep teams from finding their own tools – nor should they. Rather, they must seek a fine balance between productivity and security. They need to be willing to allow relevant AI-related activity within certain boundaries, if it results in meeting production goals with minimal or at least acceptable risks.

Measuring it: In the spirit of collaboration, CISOs should work with coding teams to come up with key performance indicators (KPIs) that measure both the productivity and the reliability/safety of the software being produced. The KPIs should answer the questions: “How much are we producing with AI? How quickly are we doing it? Is the security of our processes getting better, or worse?” (A simple sketch of how such measures might be computed follows this list.)

Bear in mind that these are not ‘security’ KPIs, but rather ‘organisational’ KPIs, and they must align with company strategies and goals. In the best of all possible worlds, developers will perceive the KPIs as something that better informs them, rather than something that burdens them.
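
To make the ‘Measuring it’ point concrete, here is a minimal, hypothetical sketch in Python of how such organisational KPIs might be calculated from per-change records. The record fields (ai_assisted, review_hours, security_findings) and the toy data are illustrative assumptions only, not a prescribed schema; a real programme would draw these figures from the organisation’s own repositories, review tooling and scanners.

```python
from dataclasses import dataclass
from statistics import mean


@dataclass
class ChangeRecord:
    """One merged code change (hypothetical schema, for illustration only)."""
    ai_assisted: bool        # was an AI coding tool used for this change?
    review_hours: float      # hours from change submitted to merged
    security_findings: int   # findings raised by scanning/review for this change


def kpis(changes: list[ChangeRecord]) -> dict[str, float]:
    """Answer the article's three questions as simple ratios and averages."""
    ai = [c for c in changes if c.ai_assisted]
    return {
        # "How much are we producing with AI?"
        "ai_assisted_share": len(ai) / len(changes),
        # "How quickly are we doing it?" (lower means faster reviews)
        "avg_review_hours_ai": mean(c.review_hours for c in ai) if ai else 0.0,
        # "Is the security of our processes getting better, or worse?"
        "findings_per_ai_change": mean(c.security_findings for c in ai) if ai else 0.0,
    }


if __name__ == "__main__":
    # Toy data purely for illustration.
    sample = [
        ChangeRecord(True, 6.0, 1),
        ChangeRecord(True, 4.5, 0),
        ChangeRecord(False, 9.0, 2),
        ChangeRecord(True, 5.0, 3),
    ]
    for name, value in kpis(sample).items():
        print(f"{name}: {value:.2f}")
```

Tracked over time, for example per sprint or per quarter, it is the trend in figures like these, rather than any single snapshot, that tells CISOs and developers whether AI-assisted development is becoming more or less secure, and more or less productive.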

Putting security first

Developer teams may be more on board with a ‘security first’ partnership than CISOs anticipate. In fact, one survey found that team members rank security review, alongside code review, at the top of their priority lists when deploying AI coding tools. They also believe that collaboration results in cleaner, better-protected code.

This is why CISOs need to move forward quickly with an AI visibility and KPI plan that strikes the ‘just right’ balance and enables optimal security and productivity outcomes.

The usage of AI tools in software development is only going to continue to increase. Maintaining focus on the security implications therefore remains paramount.