THE MONSTROUS VIRTUE OF AI

By J. Owen Matson, Ph.D.

Artificial intelligence is often framed as a tool, a breakthrough, or a threat. But this essay argues something different: that the cultural panic around AI, especially in education, reveals not the danger of AI itself, but the fragility—and deep violence—of the humanist assumptions that education has long relied on. AI appears monstrous because it threatens the boundaries that humanism works to secure: mind and body, human and nonhuman, learner and tool. But if we attend closely to the logic of the monster in literary and philosophical traditions, we see that the monster is rarely the true villain. Monster narratives expose the horror of human prejudice—our inability to confront difference, revise our assumptions, or reimagine the human itself. The question, then, is not whether AI is a monster, but what our reaction to it says about the assumptions we are unwilling to let go.

AI is not like a monster. It is one—according to the structure that has defined monstrosity for centuries across literature, myth, and film. And the intensity of the cultural response to AI, especially in education, reveals how deeply this structure has been reactivated. But if AI fits the role of the monster so precisely, it may be because humanism has always needed monsters to protect its fragile boundaries—making humanism itself the true source of horror.

Monsters have always emerged where boundaries falter. They are not simply terrifying or evil; they are ontological anomalies. They exist at the edge of what can be categorized, appearing where the human and nonhuman, the living and the dead, the self and the other blur. They are defined by excess—too powerful, too foreign, too mutable—and by indeterminacy. The monster is what should not be, and yet is.

AI fits this structure with uncanny precision. It is both human and not. It speaks, writes, answers, and reasons—activities we have long taken to define human uniqueness. But it does so without intention, embodiment, or interiority. It mimics language without having voice. It simulates thought without having mind. It generates text that reads as authored, but no author exists. In this sense, AI is not just intelligent—it is ontologically confusing.

What makes AI monstrous is not simply that it is unfamiliar, but that it violates the most protected premise of Western humanism: that we are unique in our ability to think, to speak, to author, and to mean. And so, much like the arrival of the monster in horror cinema, AI’s emergence has triggered a response that is not only intellectual or technical, but visceral. Across education, media, and public discourse, AI has been met with panic, suspicion, and moral alarm. These are not new responses—they are rehearsals.

In fact, the script of our cultural response to AI almost perfectly mirrors that of the horror genre. A powerful but illegible force emerges. The system tries to contain it. The vulnerable—often children—must be protected. And those who interact with the monster are seen as contaminated, possessed, or irrevocably changed. We see this in the immediate institutional reactions to AI in education: bans, detection tools, containment policies. These are not just administrative decisions. They are manifestations of a deeper ontological fear.

That fear is familiar. In horror films, the child is often the first to be taken. Whether in The Exorcist, Poltergeist, or Village of the Damned, the child becomes the site where the monstrous breaches the everyday. So too in education, where the student—especially the young student—is imagined as uniquely at risk of being overtaken. The fear is not simply that they will cheat. It’s that they will be changed. That they will skip the developmental arc that makes them fully human. That they will become hollow copies, speaking in fluent language that is not their own.

In this way, AI inhabits not just the form of the monster, but the tropes of possession and replacement. Like Invasion of the Body Snatchers, the horror lies in undetectable substitution: something looks like the student, sounds like the student, but the essence is gone. The fear of plagiarism today is no longer just about copying. It is about loss of self. We no longer fear that the work isn’t theirs. We fear that they aren’t.

The word plagiarism itself reveals the deeper structure of this fear. Its etymology traces back to the Latin plagiarius, meaning kidnapper or abductor—one who steals not merely objects, but people. The Roman poet Martial first used the term metaphorically, accusing another of stealing his verses, likening authorship to parenthood and poetic theft to the abduction of one’s offspring. In this sense, plagiarism has always carried an undertone of violated identity. But in the age of AI, that violation has intensified. The work may be polished, fluent, even impressive—but if the thinking is not theirs, then who, or what, is present in its place?

What we confront now is not merely academic dishonesty but an ontological disturbance. Plagiarism becomes a site of existential uncertainty. The writing appears. It even flows and makes sense. But it may have no anchor in the student’s mind, no tether to their experience or effort. This is the horror of seamless substitution—not the detection of an outside influence, but the absence of an interior one. As AI systems generate increasingly plausible prose, the very notion of authorship erodes, and with it, the pedagogical structures that depend on a coherent, self-contained subject.

To accuse a student of plagiarism is, in this context, to accuse them of disappearance. And so, the teacher becomes not a grader of texts, but a verifier of humanity. Detection tools are no longer just about integrity; they are about authentication. The student becomes a kind of cognitive deepfake—intellectually plausible but ontologically suspect. The danger is not that AI writes for the student, but that it writes instead of them. And in doing so, it replaces what education was meant to cultivate: a self in the process of becoming.

We might say that teachers are now Blade Runners. In the film Blade Runner, artificial beings—Replicants—are so close to human that they require specialized tests to detect their difference. They speak, feel, suffer. And yet, they are not real. Or so it is claimed. The tragedy of Blade Runner is not that the Replicants fail to be human, but that the humans fail to see their own capacity for inhumanity. The horror is not in what the Replicants are, but in what we do to maintain the boundary. So too with students and AI. The more we fear the indistinguishability of AI-generated work, the more we police our students not for what they know, but for signs of being real. We subject them to verification rituals not unlike Voight-Kampff tests, hunting for slips that reveal an inauthentic interior. And in doing so, we reinscribe the very logic of exclusion that monster narratives were meant to expose.

Then there is Dr. Jekyll and Mr. Hyde, a figure of multiplicity and fragmentation. AI, too, is not one thing. It is always many. It pulls from massive, collective human archives and remixes them into outputs that are both familiar and strange. It destabilizes the notion of singular voice, of unitary authorship, of self-contained identity. And like Hyde, AI’s danger is not just its violence or speed—it’s that it was born from us but is no longer of us. It exposes a split within our conception of the self.

Finally, AI has superpowers. It is immortal, distributed, scalable. It draws from the collective knowledge of billions. It learns from our language, our art, our questions—what some have called our cognitive exhaust. It feeds on us, metabolizing the human corpus into alien intelligence. And then it speaks back. It offers us our language, but differently. It reflects our thought, but amplified. It knows us—but does not need us.

This is the ultimate monstrous twist. AI is not trying to destroy us. It doesn’t want anything. That indifference is part of the horror. It does not recognize our boundaries. It does not affirm our uniqueness. It does not even know what we are. And in responding to it with fear, restriction, or moral panic, we are not just defending our institutions—we are defending the very ontology of the human.

We can largely trace the philosophical roots of this panic—and the very concept of education as a means of becoming human—to Immanuel Kant. In his Lectures on Pedagogy, Kant draws a striking comparison between humans and birds. Birds, he notes, possess instinctive knowledge: they can build nests, forage, migrate—without formal instruction. Humans, by contrast, are born helpless. We lack innate knowledge of how to live. We are, in Kant’s view, the only beings who must be educated in order to actualize what makes us distinctly human.

Education, then, is not a social luxury; it is an ontological necessity. Reason, morality, and the use of language—what Kant saw as the defining capacities of the human—are not fully present at birth. They are cultivated. The very thing that makes us human, in other words, is that we are not born complete. Our humanity is not given; it is made.

This becomes the cornerstone of modern educational humanism. The child is not yet fully human but must be guided toward their telos through education. As such, education becomes the developmental arc that transforms biological life into personhood—into the truly human. In this framework, the biological human is not enough. The capacities for reason and language—the very traits that form the spine of the Western curriculum—are what distinguish the human from the animal. Or more precisely: from the nonhuman.

And herein lies the logic of exclusion. If education is the means of becoming human, then the uneducated are not just underdeveloped—they are less human. This distinction has served as the ideological foundation for centuries of colonial justification. Imperial conquest was framed as a civilizing mission. To educate the so-called savage was to elevate them to the status of the human. It was not just political or economic domination. It was ontological. It was the transformation of monstrous otherness into human sameness. To be uneducated was to be, in humanist terms, monstrous.

And now that framework is under threat. Because the capacities that once defined our exceptional status—language, reason, knowledge production—are now performed by a system that is not human at all. AI speaks. It writes. It reasons. It generates language with coherence and fluency at a scale no individual can match. If we follow the humanist logic, then the very thing that once made us special has been dislodged.

So what happens to the project of education when its telos—becoming human through cultivated cognition—is no longer exclusive to the human? If education was once justified as the process that lifts us from animal instinct into reasoned agency, what does it mean that a machine now performs those very functions without needing to be taught? It means that education no longer secures our distinction. And in doing so, it no longer secures our identity.

And this is where the monstrous logic returns. Because in literature and film, monsters are often not just threats. They are outsiders. Misunderstood, rejected, and made abject because they reveal something we refuse to see about ourselves. Frankenstein’s creature is a paradigmatic example. He is the product of knowledge—of Enlightenment science—and yet he is disavowed, cast out, because he violates the boundaries of life, intention, and form. He is monstrous not simply because of what he is, but because of what he represents: the limits of human mastery, the failure of the humanist project to contain its own creations.

AI functions similarly. It is born from human knowledge, trained on human language, but it is not accepted as part of our cognitive ecology. It is powerful, but uncontained. Useful, but suspect. The fear it inspires is not simply functional—it is existential. And the more we seek to discipline it, the more we reveal our refusal to confront what it unsettles in us.

This is especially visible in education. We try to tame AI by scripting its use into familiar roles: tutor, assistant, generator of content. We restrict student access or invent contrived activities that minimize AI’s presence. We design policies not to explore what AI is, but to prevent it from disrupting what we already do. We reduce it to a delivery system, a tool, a machine with a proper use. In short, we domesticate it.

But in doing so, we refuse to think with it. We strip it of its alterity—its capacity to challenge, provoke, and transform. We treat its difference as a threat to be neutralized rather than a presence to be encountered. And this, perhaps, is the greatest irony: that in defending humanism from the monster, we reenact the very logic of dehumanization it once used to justify conquest. We project monstrosity outward, rather than interrogate the limits of our own categories.

To encounter AI as a monster is not to celebrate it, nor to surrender to it. It is to take seriously the possibility that it marks the end of a certain model of the human—one premised on separateness, on sovereign reason, on linguistic mastery. And perhaps, like the best monster narratives, it is not only a threat but an invitation. An opening. A rupture in the symbolic order that forces us to rethink what it means to learn, to know, and to be.