Teaching in the Age of AI: Probability and Proof

There is a growing debate about how schools should respond to student use of AI. The question facing education in 2026 is not if but how AI fits in. Many districts are taking the approach of banning its use, and in some cases detection tools are being used to identify and punish students for “using AI.” Before we rush to firm conclusions, it is important to pause and examine what these tools actually do and how they affect students and schools.
AI detection tools do not identify authorship. They compare patterns. These systems are trained on samples of AI writing, and AI writing itself was trained on large amounts of human writing. In other words, both are drawing from patterns that originated with people. When a detector flags a piece of writing, it is not proving who wrote it. It is saying the text resembles writing it has seen before. Sometimes those guesses match who actually wrote the text, and sometimes they do not, because these tools generate estimates rather than evidence.
AI detectors are not verifiable truth machines, particularly at this stage of their development. They rely on probabilistic pattern matching, not direct evidence. That means they calculate likelihood, not certainty. Their outputs are inference, not proof. This distinction matters.
We are at a moment when it is especially risky to confuse inference with evidence, particularly when the consequences can permanently affect students in a world where mistakes are increasingly public and difficult to undo. If an accusation cannot withstand basic scrutiny or due process, including the standards typically expected in disciplinary or legal settings, we should carefully reconsider how and why those judgments are being made.
Most people would agree that submitting AI-generated work as one’s own is wrong. Plagiarism is plagiarism, with or without AI.
At the same time, using AI to brainstorm, explore ideas, clarify thinking or improve expression is not cheating. In principle it is no different from using a dictionary, spell-check or other widely accepted learning tools.
If schools are concerned about students using AI inappropriately, the answer is not unreliable detection followed by punishment. The answer is education. AI is already embedded in the systems used to assign, monitor, track and evaluate students. Pretending it should not also be taught as a learning tool is neither realistic nor helpful. And it ignores history.
We have encountered moments like this before.
Typewriters arrived when penmanship was still used to confirm authorship. Calculators, once a job title, became devices that allowed more complex problems to take center stage instead of simple arithmetic. Computers and spell-check triggered similar fears before expectations were realigned around what students were actually meant to learn and how that learning could best be assessed.
Each time, the real work was not control but instruction: teaching people how to use powerful new tools responsibly. In each case those tools ultimately became standard and expected.
AI is not inherently safe or unsafe. Like any powerful tool, it can be used responsibly or carelessly. What matters is how we frame and teach its use. If we spend valuable instructional time on detection strategies that are unlikely to hold up under scrutiny, we risk losing time that could instead be used to teach students how to use this technology ethically and effectively.
As we plan for the future of learning, safety should remain the guiding principle. In the long run we can rely on wisdom from the past and remember: safety doesn’t cost, it pays.
Comments (1)
Great article! Hopefully, we will use AI as an opportunity to reconsider and rethink what skills we are teaching kids in schools to prepare them for effective 21st-century citizenship. I pray we are not still asking kids to memorize long lists of facts and dates, as I did in my 20th-century education, as if rote memorization is meaningful learning. Critical thinking, media literacy, collaboration, real-world problem solving, ethical use of powerful technologies, and courageous leadership are the skills that kids will actually need to succeed in a world of increasing automation and AI.