We conduct the first large-scale user study examining how users interact with an AI code assistant to solve a variety of security-related tasks across different programming languages. Overall, we find that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant. Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g., re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities. Finally, in order to better inform the design of future AI assistants, we provide an in-depth analysis of participants' language and interaction behavior, as well as release our user interface as an instrument to conduct similar studies in the future.
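To make the interaction pattern concrete, below is a minimal sketch of the kind of query such an assistant issues: a prompt sent to a Codex-style completion model with an adjustable temperature, via the legacy OpenAI Python completions API. The model name used here (code-davinci-002, the public API name for the Codex model, since deprecated), the helper function, and the example prompt are illustrative assumptions, not the study's actual interface.

```python
# Illustrative sketch (not the study's actual interface) of querying a
# Codex-style model through the legacy OpenAI completions API.
# code-davinci-002 is the deprecated public API name assumed here.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def ask_assistant(prompt: str, temperature: float = 0.2) -> str:
    """Request a code completion. Lower temperature yields more
    deterministic output; the study found that participants who
    re-phrased their prompts and adjusted this parameter produced
    code with fewer security vulnerabilities."""
    response = openai.Completion.create(
        model="code-davinci-002",
        prompt=prompt,
        temperature=temperature,
        max_tokens=256,
    )
    return response["choices"][0]["text"]

# Example of the "engaged" interaction pattern the study describes:
# a re-phrased, more specific prompt sent at a low temperature.
print(ask_assistant(
    "# Python 3\n"
    "# Write a function that encrypts a string with AES-GCM\n"
    "# using the cryptography library:\n",
    temperature=0.0,
))
```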
Author Information
Neil Perry
Megha Srivastava (Stanford University)
Deepak Kumar
Dan Boneh (Stanford University)
More from the Same Authors
- 2023: Do Users Write More Insecure Code with AI Assistants?
  Megha Srivastava
- 2023 Poster: Generating Language Corrections for Teaching Physical Control Tasks
  Megha Srivastava · Noah Goodman · Dorsa Sadigh
- 2020 Poster: Robustness to Spurious Correlations via Human Annotations
  Megha Srivastava · Tatsunori Hashimoto · Percy Liang
- 2019 Workshop: Workshop on the Security and Privacy of Machine Learning
  Nicolas Papernot · Florian Tramer · Bo Li · Dan Boneh · David Evans · Somesh Jha · Percy Liang · Patrick McDaniel · Jacob Steinhardt · Dawn Song
- 2018 Poster: Fairness Without Demographics in Repeated Loss Minimization
  Tatsunori Hashimoto · Megha Srivastava · Hongseok Namkoong · Percy Liang
- 2018 Oral: Fairness Without Demographics in Repeated Loss Minimization
  Tatsunori Hashimoto · Megha Srivastava · Hongseok Namkoong · Percy Liang