

Morning Poster in Workshop: Artificial Intelligence & Human Computer Interaction

Do Users Write More Insecure Code with AI Assistants?

Megha Srivastava


Abstract:

We conduct the first large-scale user study examining how users interact with an AI code assistant to solve a variety of security-related tasks across different programming languages. Overall, we find that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant. Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g., re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities. Finally, in order to better inform the design of future AI assistants, we provide an in-depth analysis of participants' language and interaction behavior, as well as release our user interface as an instrument for conducting similar studies in the future.
