How to keep AI-assisted software development under control

By Matias Madou, Co-Founder & CTO, Secure Code Warrior

Artificial intelligence (AI) has quickly become a must-have capability for software developers. Indeed, research has found that 86% of companies have already incorporated the technology into their development workflows, and 93% plan to further increase their AI investments.

While these shifts are delivering significant benefits, they are also creating challenges, many of which stem from a lack of internal policies guiding teams on the secure use of AI.

Faced with large workloads and relentless deadlines, developers are understandably inclined to turn to third-party AI tools, sometimes without vetting them or getting approval from supervisors. The result is a trend dubbed ‘Shadow AI’.

This should concern all senior executives: unmanaged AI reduces enterprise visibility into development processes while increasing the potential for new vulnerabilities that go uncaptured and unmanaged.

The challenges of using AI in software development

For a start, AI-generated code is not as reliable as many assume. According to BaxBench, a coding benchmark established to evaluate LLMs for accuracy and security, no current LLM is yet capable of generating deployment-ready code.

Research also indicates that frequent use of AI tools erodes critical thinking, fostering a churn-it-out “factory worker” mentality rather than a measured, reflective approach that pays sufficient heed to protecting a product’s attack surface.

Instead of simply turning to whichever AI assistant does the job fastest and cheapest, a development team that thinks critically works proactively to avoid the consequences of blindly trusting AI output.
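
To make this concrete, here is a minimal, hypothetical sketch of the kind of flaw a team should look for when vetting AI output. The unsafe function is typical of what an assistant may produce when optimising for speed: it builds an SQL query through string interpolation. The reviewed version uses a parameterised query instead. The function and table names are illustrative, not drawn from any specific tool or benchmark.

```python
# Hypothetical example of vetting AI-generated code.
# Function and table names are illustrative only.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Typical AI-suggested shortcut: interpolating user input into SQL.
    # A crafted username such as "' OR '1'='1" rewrites the query
    # (SQL injection).
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Reviewed version: a parameterised query keeps user data out of
    # the SQL grammar, neutralising injection attempts.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

A reviewer who treats every AI suggestion as untrusted input, as here, catches this class of flaw before it ships.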

Unfortunately, such thinking appears to be the exception rather than the rule in too many organisations. At the same time, a number of existing challenges are raising the risk profile of AI in software development. These include:

Outdated security models:
Today’s enterprise security models were not designed to handle the speed and complexity of AI, and they cannot keep pace with its capacity to introduce harm.

Knowledge gaps:
Many organisations are not equipping developers with the skills required to apply security best practices to their coding, including how to vet code produced with the help of LLMs and other AI technologies.

Lack of controls:
Without regulations or policies directing the appropriate use of AI, developers will expand their usage of a wide range of assistants, leading to more backdoor exploits and vulnerabilities.

How to reduce the security risks caused by AI

There are a number of ways chief information security officers (CISOs) and their teams can mitigate the security challenges created by the use of AI in the coding process. They include:

  • Don’t wait for external regulations:
    It’s not good enough to wait for some governmental body to issue guidelines or regulations. Collaborate with developer and security team members now to apply a defensive approach to LLMs and other AI solutions.
  • Focus on risk management:
    Remember that everything begins and ends with the software developers. It’s therefore vital to establish developer risk management as a central component of the security program, and to invest in tools and ongoing, dynamic learning pathways that enhance safe coding, critical thinking and adversarial awareness.
  • Practice security-focused governance:
    While all organisations would like to believe that their developers operate with a security-first mindset, that isn’t always the case. For this reason, it’s important to implement proactive, organisation-wide security-focused governance by setting policies and enforcing them programmatically (a minimal enforcement sketch follows this list).
  • Incorporate benchmarking:
    Benchmarking further helps cultivate a security-first mindset. It sets standards for success so that code protection from start to finish emerges as second nature when using AI. Then, to verify that risk management upskilling programs and tools are getting results, CISOs should track measurable outcomes in the form of security skill levels achieved by team members and that vulnerabilities are reduced.It is far more beneficial for an enterprise to design developer proficiency programs with data-backed insights and benchmarking, allowing for far greater precision in addressing both vulnerabilities common to the projects being worked on and recurring gaps in knowledge and applied skill. Without this, it becomes an uphill battle to identify issues in individual developers and the wider team.

Build security into workflows

Deadline pressures should not automatically translate into an out-of-control, even dangerous, environment. CISOs must collaborate with development team leaders to underscore how critical safeguarded software is, and continuously encourage developers to focus on enhanced risk management.

Teams will then discover that protection does not have to come at the cost of productivity. It can even improve productivity, because it reduces the need for time-consuming rework and remediation.