Why AI Gives Education Ontic Reflux

By J. Owen Matson, Ph.D.

The emergence of AI in education has triggered something stranger than a new toolset or a flare-up of plagiarism panic or even the latest pedagogical gold rush toward whatever counts as “engagement”—it’s started to gnaw at the basic philosophical floorboards beneath the whole project of modern education, which (and this is a thing we mostly inherit without noticing) was architected on an Enlightenment humanist model of the learner as a singular, autonomous self, endowed with interiority, whose voice, judgment, and cognitive depth could be cultivated and credentialed through performances—writing, reasoning, problem-solving—that weren’t just evidence of personhood but the very means of constructing it.
That this structure has held for centuries, often invisibly, is part of what gives it its force—its capacity to feel like common sense even as it filters an entire pedagogical universe through the assumption that thought is private, originary, human, and above all, individual. Which is where AI arrives not as another machine for teachers to wrangle or a content tool with sketchy grammar but as an aporia: As a humanist project, education must equip learners to engage AI. But to teach students to think with AI is, at some level, to teach them that thought no longer originates from a sovereign humanist self.
And still, paradoxically or recursively or maybe just uncomfortably, the gestures themselves begin to change. Because when students engage AI—whether through feedback or planning or just poking at ideas to see what emerges—they’re no longer performing cognition as a solitary act. What unfolds is a recursive, co-modulated process in which human and machine responses become mutually inflected, tangled in a real-time epistemic ecology that N. Katherine Hayles names a cognitive assemblage: a system in which cognition is distributed, co-emergent, and meaning-producing across technical and human nodes that aren’t symmetrical but also aren’t separate.
The problem isn’t just that AI exceeds the classroom’s design specs—it’s that the system can’t integrate it without tearing open the metaphysics it still performs daily: that learning is an interior act, that expression maps cleanly to agency, that the mind appears most fully in solitary speech. So AI gets confined to antiseptic functions—spell-checking, bias-sanitizing, prescriptive prompting—which preserve the appearance of individual mastery while short-circuiting the dialogic instability that might actually make thought happen. Even “Socratic” use cases are engineered for control, where the unknown stays safe and the machine simulates dialogue just well enough to protect the self-image of the learner as a bounded subject who authors thought alone.
And so the paradox deepens. Because to actually teach with AI—to inhabit its presence as a co-cognitive agent—would mean abandoning the Enlightenment subject that education has long silently assumed as its guiding project.