Poster
in
Workshop: Challenges in Deployable Generative AI

Do Users Write More Insecure Code with AI Assistants?

Neil Perry · Megha Srivastava · Deepak Kumar · Dan Boneh

Keywords: [ Safety ] [ Security ] [ Language Models ] [ User Study ] [ Code Generation Models ] [ Human-AI Interaction ]


Abstract:

We conduct the first large-scale user study examining how users interact with an AI code assistant to solve a variety of security-related tasks across different programming languages. Overall, we find that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant. Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g., re-phrasing, adjusting temperature) produced code with fewer security vulnerabilities. Finally, to better inform the design of future AI assistants, we provide an in-depth analysis of participants' language and interaction behavior, and we release our user interface as an instrument for conducting similar studies in the future.
