How AI remediation will impact developers

Developers are under the gun to generate code faster than ever – facing constant demands for greater functionality and a seamless user experience – which leads many teams to deprioritize cybersecurity and lets vulnerabilities inevitably make their way into software. These vulnerabilities include privilege escalations, back-door credentials, injection exposure and unencrypted data.

This pain point has existed for decades; however, artificial intelligence (AI) is poised to lend considerable support here. A growing number of developer teams are using AI remediation tools to suggest quick vulnerability fixes throughout the software development lifecycle (SDLC).
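As a minimal, hypothetical illustration of the kind of quick fix such tools surface – using Python and SQLite purely as an example, not any specific tool's output – consider the injection exposure mentioned above and its one-line remediation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def find_user_vulnerable(username):
    # String-built SQL: the classic injection exposure a scanner would flag
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_fixed(username):
    # Parameterized query: the quick remediation a tool would suggest
    return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchall()

attack = "' OR '1'='1"
assert find_user_vulnerable(attack) == [(1,)]  # attacker sees every row
assert find_user_fixed(attack) == []           # payload treated as plain data
```

Fixes of this shape are mechanical and well suited to automated suggestion; the harder judgment calls discussed below are not.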

Such tools can assist the defense capabilities of developers, enabling an easier pathway to a “security-first” mindset. But – like any new and potentially impactful innovation – they also raise potential issues that teams and organizations should explore. Here are three of them, with my initial perspectives in response:

Pieter Danhieux

Co-Founder and CEO, Secure Code Warrior.

No. If effectively deployed, the tools will allow developers to gain a greater awareness of the presence of vulnerabilities in their products, and then create the opportunity to eliminate them. Yet, while AI can detect some issues and inconsistencies, human insights are still required to understand how AI recommendations align with the larger context of a project as a whole. Elements like design and business logic flaws, insight into compliance requirements for specific data and systems, and developer-led threat modeling practices are all areas in which AI tooling will struggle to provide value.

In addition, teams cannot blindly trust the output of AI coding and remediation assistants. “Hallucinations,” or incorrect answers, are quite common, and typically delivered with a high degree of confidence. Humans must conduct a thorough vetting of all answers – especially those that are security-related – to ensure recommendations are valid, and to fine-tune code for safe integration. As this technology space matures and sees more widespread use, inevitable AI-borne threats will become a significant risk to plan for and mitigate.
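To make that vetting point concrete, here is a hypothetical assistant suggestion of the sort a reviewer should catch – the function and scenario are invented for illustration. Stripping quote characters to "prevent injection" looks plausible at a glance, but two quick test inputs show it mangles legitimate data while leaving attacker input dangerous:

```python
def ai_suggested_sanitize(username: str) -> str:
    # Hypothetical "fix" an assistant might offer with high confidence:
    # strip single quotes to "prevent injection"
    return username.replace("'", "")

# Human vetting with two probe inputs exposes the flaw:
print(ai_suggested_sanitize("' OR '1'='1"))  # -> " OR 1=1"  ... still attacker-shaped
print(ai_suggested_sanitize("O'Brien"))      # -> "OBrien"   ... legitimate name corrupted
```

A confident-sounding answer that passes a superficial glance is exactly the case where human review and fine-tuning matter most.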

Ultimately, we will always need the “people perspective” to anticipate and protect code from today’s sophisticated attack techniques. AI coding assistants can lend a helping hand on quick fixes and serve as formidable pair programming partners, but humans must take on the “bigger picture” responsibilities of designating and enforcing security best practices. To that end, developers must also receive adequate and frequent training to ensure they are equipped to share the responsibility for security.

Training needs to evolve to encourage developers to pursue multiple pathways for educating themselves on AI remediation and other security-enhancing AI tools, alongside comprehensive, hands-on lessons in secure coding best practices.

It is certainly handy for developers to learn how to use tools that enhance efficiency and productivity, but it is critical that they understand how to deploy them responsibly within their tech stack. The question we always need to ask is, how can we ensure AI remediation tools are leveraged to help developers excel, versus using them to overcompensate for lack of foundational security training?

Developer training should also evolve by implementing standard measurements for developer progress, with benchmarks to compare over time how well they’re identifying and removing vulnerabilities, catching misconfigurations and reducing code-level weaknesses. If used properly, AI remediation tools will help developers become increasingly security-aware while reducing overall risk across the organization. Moreover, mastery of responsible AI remediation will be seen as a valuable business asset and enable developers to advance to new heights with team projects and responsibilities.

The software development landscape is changing all the time, but it is fair to say that the introduction of AI assistive tooling into the standard SDLC represents a rapid shift to what is essentially a new way of working for many software engineers. However, it perpetuates the same old issue of introducing poor coding patterns – patterns that can now potentially be exploited more quickly, and at greater volume, than at any other time in history.

In an environment operating in a constant state of flux, training must keep pace and remain as fresh and dynamic as possible. In an ideal scenario, developers would receive security training that mimics the issues faced in their workday, in the formats that they find most engaging. Additionally, modern security training should place emphasis on secure design principles, and account for the deep need to employ critical thinking to any AI output. That, for now, remains the domain of a highly skilled security-aware developer who knows their codebase better than anyone else.

It all comes down to innovation. Teams will thrive with solutions that expand the visibility of issues and resolution capabilities during the SDLC, yet do not slow down the software development process.

AI cannot step in to “do security for developers,” just as it is not entirely replacing them in the coding process itself. No matter how many more AI advancements emerge, these tools will never deliver 100 percent foolproof answers about vulnerabilities and fixes. They can, however, perform critical roles within the greater picture of a total “security-first” culture – one that depends equally on technology and human perspectives. Once teams undergo the required training and on-the-job knowledge-building to reach this state, they will indeed find themselves creating products swiftly, effectively and safely.

It must also be said that, as with online resources like Stack Overflow or Reddit, if a programming language is less popular or common, this will be reflected in the availability of data and resources. You will have little trouble finding answers to security questions in Java or C, but data may be lacking or conspicuously absent when trying to solve complex bugs in COBOL or even Golang. LLMs are trained on publicly available data, and they are only as good as their dataset.

This is, again, a key area in which security-aware developers fill a void. Their own hands-on experience with more obscure languages – coupled with formal and continuous security learning outcomes – should help fill a distinct knowledge gap and reduce the risk of implementing AI output on faith alone.

This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

