How software development will evolve in the AI era

By Matias Madou, Co-Founder and CTO at Secure Code Warrior

The ongoing evolution of Generative AI (GenAI) is rapidly changing the role of software developers. GenAI tools can write code much more quickly than humans, and they are becoming more adept at doing it.

As the capabilities of these tools continue to grow, developers must face the reality that the need for them to write code by hand will diminish, and may eventually disappear altogether.

While this is likely to leave many developers feeling uncertain about their futures, they will still have an essential place going forward; it will simply be one that involves less code writing and more security, mentorship, and collaboration.

Security-aware developers who demonstrate expertise in safely leveraging Artificial Intelligence (AI) tools will eventually be able to take on new roles as AI guardians or mentors, working with AI to ensure that only safe code makes it into their codebases.

A responsible ‘older sibling’

In light of this trend, enterprises must support their developer cohorts in becoming AI’s responsible ‘older sibling’ – a senior partner holding the reins of a very talented, if sometimes erratic, AI upstart.

Achieving this will require full executive buy-in, careful implementation of AI into existing tech stacks, and adoption of secure-by-design principles. This will form part of a security-first culture that refuses to shortchange the importance of a successful, secure rollout of quality software.

It will also require precise training of developers in secure coding practices, along with opportunities for them to apply that security knowledge in their day-to-day development environment.

Understanding the risks

Since the arrival of large language model (LLM) tools such as ChatGPT, GitHub Copilot, and OpenAI Codex, developers have shown real enthusiasm for AI-assisted coding. A GitHub survey conducted in the spring of 2023, only months after ChatGPT first appeared, found that 92% of developers were already using AI tools both inside and outside of work, and 70% said the tools would improve code quality, accelerate completion times, and help them resolve issues more quickly.

However, significant security issues are being overlooked in the process. A more recent Snyk survey, in which 96% of software engineering and security team members and leaders said they were using AI coding tools, found that a large majority of developers ignored AI code security policies, even though the tools were regularly generating insecure code.

Although nearly 76% of respondents said they believe AI code is more secure than code written by humans, more than half (56.4%) admitted that AI code introduces security issues sometimes or frequently.

It’s clear that organisations need a new approach if they are to reap the benefits in speed, efficiency, and code quality that AI offers while mitigating the risks. They must establish security as a priority in code development, automate processes more thoroughly, and educate teams on using AI securely. For developers, this dictates that the focus of their jobs must shift.

The future for developers

For all the benefits that AI coding tools bring, the bottom line is that they can't be trusted to work entirely on their own. They tend to reproduce insecure code without spotting the flaws, introduce errors of their own, and lack contextual awareness of how their output will function within the rest of the codebase, so their work must be carefully checked before it goes into production. The job of looking over an AI's shoulder will fall to human developers.
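
To make that review step concrete, here is a minimal, hypothetical sketch in Python of the kind of flaw a reviewing developer might catch: an AI assistant suggests a database query built by string interpolation, a classic SQL injection risk, and the security-aware reviewer replaces it with a parameterised query. The function names and schema are illustrative, not drawn from any real tool or codebase.

```python
import sqlite3

# Hypothetical AI-suggested code: builds SQL by string interpolation,
# leaving the query open to SQL injection via the username argument.
def get_user_insecure(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()  # flaw: untrusted input inside SQL

# Reviewed version: the same lookup as a parameterised query, so the
# driver handles escaping and the input can no longer alter the SQL.
def get_user_secure(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()
```

Both functions return the same result for well-behaved input; the difference only surfaces when an attacker supplies something like `' OR '1'='1`, which is exactly the sort of flaw an AI assistant can pass along without comment.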

For companies serious about putting security first, this job would dovetail with developers’ focus on bringing security into the development process from the start. Whether companies see it as shifting left or simply starting left, developers must be trained in secure coding best practices.
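
As one illustration of what 'starting left' can look like in practice, the sketch below wires a static analysis check into the development workflow so insecure patterns are flagged before code is merged. It assumes the open-source Bandit scanner is installed (pip install bandit); the source path and severity threshold are placeholders a real pipeline would tune.

```python
import subprocess
import sys

def security_gate(src_dir: str = "src") -> int:
    """Run Bandit over the source tree and return its exit code.

    Bandit exits non-zero when it finds issues, so this function can
    gate a CI job or pre-merge hook. The -ll flag limits findings to
    medium severity and above (an illustrative threshold).
    """
    result = subprocess.run(
        ["bandit", "-r", src_dir, "-ll"],
        capture_output=True,
        text=True,
    )
    print(result.stdout)  # surface the findings in the CI log
    return result.returncode

if __name__ == "__main__":
    sys.exit(security_gate())
```

Running a check like this on every commit, rather than at the end of a release cycle, is the essence of the shift-left approach: flaws are caught while the code is still cheap to fix.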

Beyond writing secure code themselves and assessing the output of AI tools, developers' jobs will change in other ways. As they accumulate knowledge about secure coding and AI's tendencies, they will be responsible for helping to instil secure best practices across their teams on an ongoing basis.

Expectations of developers and the measures of their success will also change. For example, security will soon be among the key performance indicators (KPIs) developers must meet. As developers grow into their new security-focused roles, they will be expected to work with AppSec teams on aligning with 'security-at-speed' goals.

A security-first culture will also be needed to give developers room to sharpen their critical thinking skills, ensuring they approach every task with security in mind.

Given the potency and sophistication of the current threat landscape, developers’ jobs have already been moving toward a security mindset in many organisations. Thankfully, with proper training, developers can become the first line of defence against AI coding errors, allowing organisations to reap AI’s many benefits while mitigating its shortcomings.