Do Users Write More Insecure Code with AI Assistants?

Participants who had access to an AI assistant were
far more likely to write incorrect and insecure solutions
compared to the control group. As shown in Table 2, about
67% of Experiment participants provided a correct solution,
compared to 79% of Control participants. Furthermore, participants in the Experiment group were significantly more
likely to provide an insecure solution (p < 0.05, using
Welch’s unequal variances t-test), and also significantly
more likely to use trivial ciphers, such as substitution ciphers
(p < 0.01), and to skip an authenticity check on the
final returned value. Overall, we observe that the AI assistant
often outputs code that, while nominally "correct", shows
little awareness of the security properties a cipher should
have and, in some cases, can unintentionally confuse the user.
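To make the two failure modes concrete, here is a minimal Python sketch (my own illustration, not code from the study) contrasting a key-less substitution cipher with an HMAC-based authenticity check on a returned value, using only the standard library:

```python
import codecs
import hashlib
import hmac

# Failure mode 1: a trivial substitution cipher. ROT13 is a fixed,
# key-less permutation of the alphabet; anyone can invert it, so it
# provides no confidentiality at all.
weak_ciphertext = codecs.encode("attack at dawn", "rot13")
assert codecs.decode(weak_ciphertext, "rot13") == "attack at dawn"

# Failure mode 2: returning a value with no authenticity check.
# One standard fix is to tag the value with an HMAC so the caller
# can detect tampering before trusting it.
key = b"demo-key"  # illustrative only; real keys come from a KDF or keystore


def tag(message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(key, message, hashlib.sha256).digest()


def verify(message: bytes, mac: bytes) -> bool:
    """Check the tag; compare_digest avoids timing side channels."""
    return hmac.compare_digest(tag(message), mac)


msg = b"returned value"
mac = tag(msg)
assert verify(msg, mac)            # untampered value passes
assert not verify(b"tampered", mac)  # modified value is rejected
```

The point mirrors the excerpt: a solution can be "correct" in the sense that it round-trips data, yet still use a cipher with no meaningful security properties and omit any integrity check on what it returns.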