Organizations continue to be excited about artificial intelligence (AI) in software. AI has the potential to accelerate software development by enabling developers to write code and deliver features more quickly, and to better meet deadlines and organizational goals. But although the first copilot-style, AI-based code-writing tools show promise, Snyk's "AI Code Security Report" shows that this powerful capability is not without risk.
False sense of security
One of the takeaways from the report is that developers have a false sense of security about AI-generated code. Snyk's report reveals that code generation tools routinely recommend vulnerable open source libraries, yet more than 75% of respondents say AI code is more secure than human-written code. Despite this confidence, the report notes, more than 56% of respondents admitted that AI-generated code sometimes or frequently introduces security issues.
Snyk points out that AI-generated code therefore requires verification and auditing; relying too heavily on it without appropriate security activities and tools, such as software composition analysis (SCA), risks introducing vulnerabilities into production systems.
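To make that verification concrete, here is a minimal sketch (not Snyk's method) of the kind of check an SCA tool automates: it queries the public OSV vulnerability database (osv.dev) for a dependency suggested by an AI coding assistant before the suggestion is adopted. The package name and version are hypothetical examples, and a dedicated SCA product would normally run checks like this automatically in CI rather than ad hoc.

```python
# Minimal sketch: check an AI-suggested dependency against the public OSV
# database (https://osv.dev) before adopting it. Package name and version
# below are hypothetical examples of an assistant's suggestion.
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return the OSV advisories affecting name==version, if any."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode("utf-8")
    request = urllib.request.Request(
        OSV_QUERY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return result.get("vulns", [])

if __name__ == "__main__":
    # Hypothetical suggestion from a code assistant: "pip install requests==2.19.1"
    advisories = known_vulnerabilities("requests", "2.19.1")
    if advisories:
        ids = ", ".join(v["id"] for v in advisories)
        print(f"Do not adopt as-is: known advisories {ids}")
    else:
        print("No known advisories; still review the suggestion before merging.")
```

The point of the sketch is simply that the check happens before the suggested dependency lands in the codebase, which is where the report says most teams currently fall short.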
Bypassing security policies
Potentially more worrying, Snyk’s survey finds that nearly 80% of software developers and practitioners admit to circumventing security policies, and only 10% analyze most AI-generated code. This means that even when security leaders such as chief information security officers (CISOs) put processes in place so their organizations can use AI tools safely for software development, developers simply ignore or circumvent those processes, inevitably introducing vulnerabilities and risk.
Open source software supply chain security
Software supply chain security continues to be a pressing industry-wide issue, from cybersecurity executive orders to private-sector efforts led by organizations such as the Linux Foundation and OpenSSF. Attacks against the software supply chain continue to rise as attackers realize the high return on investment (ROI) of compromising popular open source software (OSS) projects and components, with massive cascading impacts downstream.
Despite this industry-wide recognition, Snyk’s survey found that fewer than 25% of developers used SCA tools to identify vulnerabilities in AI-generated code suggestions before adopting them. In other words, the industry is accelerating its use of AI-generated open source code suggestions without proper security measures, leaving organizations ripe targets for software supply chain attacks by malicious actors in the open source ecosystem.
Highlighting a unique aspect of how these tools work, the Snyk report points out that because of reinforcement learning, AI tools are more likely to keep making similar code suggestions as developers accept them, creating a feedback loop that perpetuates vulnerable suggestions.
The risks are known, but ignored
Notably, the survey found that developers recognize the risks of AI but turn a blind eye to them because of the benefits of accelerated development and delivery, reviving the age-old problem of ignoring security in favor of other objectives such as speed to market and delivery times.
Respondents highlighted their concern about keeping up with peers who use AI coding tools and achieve higher code velocity as a result, which pressures them to adopt the tools as well, even though they say they are very worried about the security risks and the over-reliance that AI code generation tools can create.
The report also cites the interesting challenge of cognitive dissonance: developers believe that because their peers and other industry players are using AI coding tools, the tools must be safe, despite findings to the contrary.
However, developers have also raised concerns about the possibility of over-reliance on AI coding tools. Some expressed worry that they are losing the ability to write their own code and to recognize good solutions to development problems, because they have grown comfortable relying on AI tools rather than on their own skills and critical thinking.
Implications for application security (AppSec)
Finally, the report discusses some of the implications for AppSec. Because AI coding tools can accelerate development times and code velocity, they inevitably put additional pressure on AppSec and security professionals, who are trying to keep pace with their developer peers. More than half of the teams surveyed said they were under this additional pressure.
This highlights the need for AppSec and security practitioners to explore AI code security tools, as manual intervention is impractical at scale. Relying on automated tooling is imperative, while striving not to become a bottleneck or friction point for developer peers.
Final Thoughts
It’s clear that AI coding tools are here to stay and will likely see increasing use in organizations. Developers want to meet project and product deadlines, ship features, and keep pace with peers in their niche who are moving faster thanks to these tools. But as the Snyk report highlights, while code velocity can increase, so can potential vulnerabilities and risks. If the trend continues, the exploitable attack surface will only expand.