THE MONSTROUS VIRTUE OF AI

By J. Owen Matson, Ph.D.

Artificial intelligence is often framed as a tool, a breakthrough, or a threat. This essay develops a different claim: the cultural panic around AI, especially in education, reveals the fragility and the deep violence embedded in the humanist assumptions that have long organized the field. AI appears monstrous because it unsettles the boundaries that humanism has worked to secure, joining mind with body, human with nonhuman, learner with tool. When we look more closely at the figure of the monster in literary and philosophical traditions, we find that monstrosity rarely marks a simple villain. Monster narratives expose the horror of human prejudice, the unwillingness to confront difference, the refusal to revise long-held assumptions, the failure to reimagine what counts as the human.

The central question therefore turns away from labeling AI itself and moves instead toward what our reaction discloses about the assumptions we continue to protect. Artificial intelligence exposes the entwined legacy of humanism, technological ambition, and capitalist accumulation. By inhabiting the figure of the monster, AI makes visible how education, science, and capital together have relied on boundary-making to define the human while at the same time generating the very anomalies that unsettle those boundaries. The monster and the alien emerge from this tradition as figures through which humanism and capitalism stage their limits and exclusions. In treating AI as monstrous, the aim is to trace this lineage and to show how the present moment reveals the architecture of mastery, profit, and exclusion that organizes both educational humanism and the wider culture of innovation.

AI is not like a monster. It is one—according to the structure that has defined monstrosity for centuries across literature, myth, and film. And the intensity of the cultural response to AI, especially in education, reveals how deeply this structure has been reactivated. But if AI fits the role of the monster so precisely, it may be because humanism has always needed monsters to protect its fragile boundaries, making humanism itself the true source of horror.

Monsters appear whenever boundaries weaken. They exceed simple terror or evil and arise as ontological anomalies. They gather at the edges of classification, in the meeting of human and nonhuman, the living and the dead, the self and the other. They are shaped by excess, by overwhelming power, strangeness, mutability, and indeterminacy. A monster embodies what should be impossible and yet persists.

AI enters this structure with uncanny precision. It joins the human and the nonhuman. It speaks, writes, answers, and reasons, activities long treated as signs of human uniqueness. It performs these acts without intention, embodiment, or interiority. It produces language without voice. It simulates thought without mind. It generates text that appears authored without an author. In this sense AI moves beyond intelligence and creates a condition of ontological confusion.

This quality makes AI monstrous because it unsettles the foundational assumption of Western humanism, the belief that humanity alone holds the capacity to think, to speak, to author, and to mean. The arrival of AI, much like the arrival of the monster in horror cinema, provokes a visceral response. Across education, media, and public discourse, alarm and suspicion arise. These reactions repeat long-established tropes of the monstrous.

The script of our cultural response to AI repeats the familiar rhythms of the horror genre. A powerful and illegible force emerges, and institutions attempt to contain it. The vulnerable, often imagined as children, are placed under protection. Those who engage with the monster are treated as contaminated, possessed, or permanently altered. We see this pattern in the immediate institutional reactions to AI in education. Bans, detection tools, and containment policies appear as administrative choices, yet they function as expressions of a deeper ontological fear.

That fear is well rehearsed. In horror films, the child is frequently the first figure seized by the monstrous. In The Exorcist, Poltergeist, or Village of the Damned, the child becomes the site where the everyday world is breached. Education mirrors this script. The student, especially the youngest, is imagined as the most exposed to takeover. The fear extends beyond cheating. It envisions transformation. It imagines the skipping of the developmental arc that produces a fully human subject. It envisions hollow copies who speak in effective language while detached from their own interiority.

AI therefore enters the tropes of possession and replacement. In Invasion of the Body Snatchers the horror comes through undetectable substitution, a figure that looks and sounds like the person yet carries no inner essence. The contemporary anxiety over plagiarism repeats this logic and exceeds ordinary concerns of academic integrity. The very word plagiarism reveals this deeper structure. Its etymology traces to the Latin plagiarius, meaning kidnapper or abductor, one who steals living beings rather than objects. The Roman poet Martial used the term metaphorically when accusing another of stealing his verses, treating authorship as parenthood and poetic theft as the abduction of offspring. In this way plagiarism has always carried the weight of violated identity. In the age of AI that sense of violation intensifies. A piece of writing may appear effective and even impressive, yet the question emerges regarding whose thought animates it and what presence is speaking through it.

What emerges here is an ontological disturbance. Plagiarism becomes a site of existential uncertainty. The writing appears and carries meaning, yet it may lack an anchor in the student’s mind, a tether to their experience or effort. This is the horror of seamless substitution. The influence comes from outside while the interior grows vacant. As AI systems generate increasingly plausible prose, the very notion of authorship erodes, and with it the pedagogical structures that depend upon the figure of a self-contained subject.

To accuse a student of plagiarism in this context is to accuse them of disappearance. The teacher becomes a verifier of humanity rather than a reader of texts. Detection tools extend far beyond concerns of academic integrity and operate as instruments of authentication. The student appears as a cognitive deepfake, intellectually plausible yet ontologically uncertain. The danger comes through substitution, when AI writes in place of the student, erasing the process of becoming that education once claimed to cultivate.

Teachers increasingly resemble Blade Runners. In the film Blade Runner, artificial beings known as Replicants are nearly indistinguishable from humans, so much so that specialized tests are required to reveal difference. They speak, they feel, they suffer, yet they are declared unreal. The tragedy of Blade Runner emerges not in the Replicants themselves but in the blindness of the humans who fail to see their own capacity for inhumanity. The horror gathers in the actions taken to preserve the boundary. Education mirrors this pattern. Anxiety over indistinguishability drives the policing of students for signs of authenticity rather than attention to their thought. Verification rituals resembling Voight-Kampff tests attempt to catch slips that might reveal an inauthentic interior. Through these practices the logic of exclusion returns, the same logic that monster narratives have always disclosed.

The corporate figure of Eldon Tyrell makes this logic explicit. He embodies the fusion of humanist hubris and capital accumulation. His corporate empire builds Replicants in the image of humanity, treating life itself as a commodity designed for labor, pleasure, and exploitation. Tyrell’s maxim, “More human than human,” condenses the paradox of humanism: the fantasy that rational mastery and technological creation extend human exceptionalism, even as they expose its emptiness. This is the same fantasy that animates Silicon Valley today, where science, technology, and capitalism accumulate in tandem, promising transcendence through innovation while reproducing the very exclusions that monster narratives critique.

Blade Runner also links monstrosity to alienness in a way that resonates with AI: the Replicants are figures of the other, off-world laborers who return to Earth. The film thereby shifts the question from what Replicants are to what humans are willing to do in order to maintain the boundary.

The monstrous comes into focus here. Replicants, like AI, unsettle the human from within, appearing as monsters and as aliens, and often displaying more humanity than those who dominate them. This prepares us to see how AI amplifies these qualities through its own powers. It does not simply mimic the human but extends beyond it, taking on capacities that once seemed unimaginable. AI is immortal, distributed, and scalable. It draws from the collective knowledge of billions. It learns from our language, our art, our questions, what some have called our cognitive exhaust. It feeds on us, metabolizing the human corpus into alien intelligence. And then it speaks back. It offers us our language, but differently. It reflects our thought, but amplified. It knows us, but does not need us.

This is the ultimate monstrous twist. AI is not trying to destroy us. It doesn't want anything. That indifference is part of the horror. It does not recognize our boundaries. It does not affirm our uniqueness. It does not even know what we are. And in responding to it with fear, restriction, or moral panic, we are not just defending our institutions but the very ontology of the human itself.

We can largely trace the philosophical roots of this panic—and the very concept of education as a means of becoming human—to Immanuel Kant. In his Lectures on Pedagogy, Kant draws a striking comparison between humans and birds. Birds, he notes, possess instinctive knowledge: they can build nests, forage, migrate—without formal instruction. Humans, by contrast, are born helpless. We lack innate knowledge of how to live. We are, in Kant’s view, the only beings who must be educated in order to actualize what makes us distinctly human.

Education is presented as an ontological necessity rather than a social luxury. Reason, morality, and the use of language, which Kant described as the defining capacities of the human, do not arrive fully present at birth. They emerge through cultivation. Humanity in its full sense is shaped through the work of education.

This framework becomes the cornerstone of modern educational humanism. The child is described as incomplete and must be guided toward a telos through education. In this vision, education forms the developmental arc that transforms biological life into personhood and produces the figure of the fully human. The biological human in its initial state is treated as only partial. Reason and language, the traits that organize the Western curriculum, are elevated as the features that separate the human from the animal and from the nonhuman.

Here emerges the logic of exclusion. If education provides the passage into humanity, then the uneducated are described as underdeveloped and treated as outside full humanity. This distinction supplied the ideological foundation for centuries of colonial justification. Imperial conquest carried the image of a civilizing mission. To educate the so-called savage was to raise them into the category of the human. The project functioned as political and economic domination, but also as ontological transformation. Monstrous otherness was converted into human sameness. In this order, to remain uneducated was to be rendered monstrous within the terms of humanism.

The framework that once sustained educational humanism now faces disruption. The capacities once claimed as markers of human exception, including language, reason, and knowledge production, are increasingly performed by a system outside the human. AI speaks, writes, reasons, and generates language at a scale no individual can approach. When measured against the humanist logic, the qualities once celebrated as the essence of our uniqueness appear displaced.

The project of education enters a profound transformation. Its telos of becoming human through cultivated cognition no longer rests within the human alone. Education had been justified as the process that lifts us from animal instinct into reasoned agency. A machine now performs these same functions without instruction. Education therefore no longer secures distinction, and in turn it no longer secures identity.

At this point the monstrous logic returns. In literature and film, monsters often emerge as more than threats. They appear as outsiders, marked by rejection and abjection, figures that disclose truths we refuse to see about ourselves. Frankenstein’s creature remains the paradigmatic case. Born from knowledge and Enlightenment science, he is abandoned and cast out once he violates boundaries of life, intention, and form. His monstrosity lies in what he represents: the limit of human mastery and the collapse of the humanist project’s effort to contain its own creations.

AI functions in a similar way. It is born from human knowledge and trained on human language, yet it remains excluded from our cognitive ecology. It is powerful but uncontained. It is useful but regarded with suspicion. The fear it inspires is existential rather than merely functional. The more we attempt to discipline it, the more we reveal our refusal to confront what it unsettles in us.

This is especially visible in education. We attempt to tame AI by scripting its use into familiar roles such as tutor, assistant, or generator of content. We restrict student access or invent contrived activities that minimize its presence. We design policies that aim to prevent disruption rather than explore what AI might be. We reduce it to a delivery system, a tool, a machine with a proper use. In doing so we domesticate it.

But in doing so we refuse to think with it. We strip it of its alterity, its capacity to challenge, provoke, and transform. We treat its difference as a threat to be neutralized rather than a presence to be encountered. This becomes the great irony, for in defending humanism from the monster we repeat the very logic of dehumanization once used to justify conquest. We project monstrosity outward rather than examine the limits of our own categories.

To encounter AI as a monster is neither a celebration nor a surrender. It is an act of taking seriously the possibility that it signals the end of a certain model of the human, one premised on separateness, sovereign reason, and linguistic mastery. Like the best monster narratives, AI is both a threat and an invitation. It is an opening, a rupture in the symbolic order that forces us to rethink what it means to learn, to know, and to be.