What We (Don't) Talk About When We Talk About AI in Education: A Posthumanist Response to Audrey Watters

In her recent essay, “What We Talk About When We Talk About AI in Education,” Audrey Watters argues that the term “AI in education” functions less as a technical descriptor than as an ideological placeholder—a mythological shorthand that evokes futures of frictionless learning or automated despair, depending on the narrative arc. The essay is classic Watters: sharply skeptical of hype cycles, attentive to the long historical continuities beneath new technological buzzwords, and insistent that the language we use around AI says more about marketing than about machines. She shows how vague and overdetermined uses of the term “AI” allow EdTech companies and pundits to speak with authority without offering clarity, to sell solutions without defining the problems they supposedly solve.

Watters’ work has long been vital in exposing how EdTech wraps old dreams of behavioral control in the language of innovation, and how its claims to transformation often obscure deeper continuities with systems of surveillance, compliance, and managerialism. But where Watters critiques the discourse around AI, I want to press further into the epistemic terrain beneath it. Because AI in education is not just a story we tell—it is a systemic condition, a reshaping of the very environment in which cognition, relation, and subject formation take place.
Yet AI enters education through a frame that cannot register this transformation. It is treated as a mechanism for content creation and scaled personalization—that is, as a tool for accelerating a model of learning already defined by standardization, optimization, and control. And this logic does not arise simply from ideology; it fills the vacuum left by the absence of an alternative epistemology. In short, AI doesn’t disrupt the dominant model of learning—it completes it.
To fully understand the current discourse around AI in education—and its profound limitations—we must recognize that we are dealing with not just one but multiple layers of epistemic blindness. These blind spots are not accidental; they are structured into the history of educational technology, into the managerial rationalities that govern its design, and into the humanist assumptions about the learning subject—assumptions that prefigure what learning is, and how it becomes legible. This essay traces three interwoven layers of epistemic constraint:
- AI as epistemic actor: While framed as a tool for content delivery or personalization, AI functions as a cognitive environment—restructuring how learning takes place, how knowledge becomes visible, and how subjectivity is constituted. This transformation is largely unrecognized because it exceeds the instrumental logic through which AI is implemented.
- Constructivism’s exclusion: Even constructivist theories—often treated as progressive correctives—were never epistemologically compatible with the optimization-focused design of EdTech. They require uncertainty, emergence, and relational meaning-making—conditions structurally foreclosed by systems built for standardization and control.
- Critique’s constraint: Even critical discourses, like those offered by Watters, often remain within a humanist, discursive frame. They challenge the narratives but not the epistemic architecture. In doing so, they risk reproducing the very enclosure they seek to resist.
Together, these layers constitute a closed loop of epistemic foreclosure, in which learning is redefined from within a framework that cannot account for emergence, alterity, or cognitive transformation. To break that loop, we must not only critique the systems and their language—we must develop an epistemology capable of thinking with the complexity of learning in hybrid human–machine environments.
To fail to recognize either side of this equation—the structural exclusion of constructivist epistemologies or the epistemic conditions AI now generates—is to miss what is at stake. And to miss both is to forfeit the opportunity entirely. Because learning is not waiting for theoretical consensus. It is already taking place within emergent, hybrid cognitive systems, shaped by machinic agents, algorithmic interfaces, and recursive feedback loops. No amount of old-school humanist protest will reverse this shift. The question is not whether AI will be in education—it already is. The question is whether we will develop an epistemology capable of meeting the cognitive conditions it produces, and of preserving what is most vital about learning: not its measurability, but its capacity to transform.
And this is where Watters’ critique, for all its historical and ideological value, shows its limit. By focusing on the myths and language surrounding AI, it risks reproducing a humanist epistemology in which the learner remains ontologically intact: a subject surveilled, misrepresented, or disempowered, but not reconstituted through human–machine interaction. What her critique leaves unexamined is the deeper question: What happens when cognition itself is reorganized—not merely surveilled or shaped, but fundamentally restructured—by algorithmic scaffolding, data legibility, and interface design?
To address that question, we need more than discursive critique—we need an epistemology capable of accounting for distributed cognition, relational emergence, and the hybrid co-construction of meaning. We need a vision of learning that acknowledges the classroom as a cognitive assemblage, where agency and thought are shared across human and nonhuman nodes. We need to reframe the role of the educator—not as a content manager, but as a relational steward, holding curricular purpose while sustaining epistemic openness and unpredictability.

In this sense, the very premise of Watters’ essay—its suggestion that “AI in education” is too vague, too mythologized, too slippery to define—is itself a symptom of critique constrained by its own epistemic frame. There are, in fact, ways to define what AI is doing in education—but doing so requires moving beyond discursive skepticism and toward an account of cognition as distributed, emergent, and co-constituted across human and machinic systems. AI should not be understood merely as a delivery mechanism or ideological artifact, but as an epistemic environment—one that reconfigures the cognitive conditions under which learning emerges: dialogically, relationally, and interdependently, within AI–human assemblages. In this context, knowledge is not transmitted or retrieved, but co-constructed as an emergent property of systemic interaction between human and technical cognitive agents.
To remain suspended in humanist critique is to misread ambiguity as unintelligibility—rather than recognizing it as a signal of epistemic transformation. The task now is not to invent new concepts, but to take seriously the epistemological reality AI has already made visible, and to think with the tools we now have.
And here, we return to Watters—not to reject her work, but to reposition it. Her genealogies of EdTech’s myths are not just critiques of narrative—they are raw material for an infrastructure-level analysis of how power reorganizes education. If her insights were grounded in a more robust epistemological understanding of cognition—not as internal processing, but as a system of relations and conditions—her work could help us chart a new course. Not just what AI in education says, but what it does. Not just what it hides, but what it makes thinkable—and what it quietly forecloses.