Poisoned ChatGPT Finds Work for Idle Hands: Exploring Developers' Coding Practices with Insecure Suggestions from Poisoned AI Models

Sanghak Oh, Kiho Lee, Seonhye Park, Doowon Kim, Hyoungshick Kim

Research output: Chapter in Book/Report/Conference proceedingConference contributionpeer-review

10 Scopus citations

Abstract

AI-powered coding assistant tools (e.g., ChatGPT, Copilot, and IntelliCode) have revolutionized the software engineering ecosystem. However, prior work has demonstrated that these tools are vulnerable to poisoning attacks. In a poisoning attack, an attacker intentionally injects maliciously crafted insecure code snippets into training datasets to manipulate these tools. The poisoned tools can suggest insecure code to developers, resulting in vulnerabilities in their products that attackers can exploit. However, it is still little understood whether such poisoning attacks against the tools would be practical in real-world settings and how developers address the poisoning attacks during software development. To understand the real-world impact of poisoning attacks on developers who rely on AI-powered coding assistants, we conducted two user studies: an online survey and an in-lab study. The online survey involved 238 participants, including software developers and computer science students. The survey results revealed widespread adoption of these tools among participants, primarily to enhance coding speed, eliminate repetition, and gain boilerplate code. However, the survey also found that developers may misplace trust in these tools because they overlooked the risk of poisoning attacks. The in-lab study was conducted with 30 professional developers. The developers were asked to complete three programming tasks with a representative type of AI-powered coding assistant tool (e.g., ChatGPT or IntelliCode), running on Visual Studio Code. The in-lab study results showed that developers using a poisoned ChatGPT-like tool were more prone to including insecure code than those using an IntelliCode-like tool or no tool. This demonstrates the strong influence of these tools on the security of generated code. Our study results highlight the need for education and improved coding practices to address new security issues introduced by AI-powered coding assistant tools.

Original languageEnglish
Title of host publicationProceedings - 45th IEEE Symposium on Security and Privacy, SP 2024
PublisherInstitute of Electrical and Electronics Engineers Inc.
Pages1141-1159
Number of pages19
ISBN (Electronic)9798350331301
DOIs
StatePublished - 2024
Externally publishedYes
Event45th IEEE Symposium on Security and Privacy, SP 2024 - San Francisco, United States
Duration: 20 May 202423 May 2024

Publication series

NameProceedings - IEEE Symposium on Security and Privacy
ISSN (Print)1081-6011

Conference

Conference45th IEEE Symposium on Security and Privacy, SP 2024
Country/TerritoryUnited States
CitySan Francisco
Period20/05/2423/05/24

Keywords

  • AI-powered coding assistant tools
  • Large language model
  • code generation
  • poisoning attacks
  • software development
  • usable security

Fingerprint

Dive into the research topics of 'Poisoned ChatGPT Finds Work for Idle Hands: Exploring Developers' Coding Practices with Insecure Suggestions from Poisoned AI Models'. Together they form a unique fingerprint.

Cite this