What Happens When AI Falsely Flags Students for Cheating
Educators are using AI detectors to weed out AI-generated work. On today’s Big Take podcast: What happens to students when these tools get it wrong?
The education system has an AI problem. As students have started using tools like ChatGPT to do their homework, educators have deployed their own AI tools to determine if students are using AI to cheat.
But the detection tools, though largely effective, produce false positives roughly 2% of the time. For students who are wrongly accused, the consequences can be devastating.
On today’s Big Take podcast, host Sarah Holder speaks with Bloomberg tech reporter Jackie Davalos about how students and educators are responding to the rise of generative AI, and what happens when efforts to crack down on its use backfire.
Read more: AI Detectors Falsely Accuse Students of Cheating—With Big Consequences