Will the Humanities Listen to the Sciences on the Topic of Machine Learning?

J. Owen Matson, Ph.D.
Recently, I’ve written about how the sciences often misrecognize what the humanities offer in understanding AI. But the inverse is also true. There are strands within the humanities that treat AI with suspicion so deep that even studying it feels like a betrayal. This post outlines seven patterns of resistance I have seen in attitudes toward AI. It is by no means an exhaustive account, nor does it represent the humanities as a whole.
To be clear, some of the resistance outlined below stems from a residual belief in Enlightenment humanism—the notion of the human as an autonomous subject, and language as a reflection of interior depth. While few scholars today embrace that framework explicitly, its assumptions often shape how AI is dismissed.
- Interpretation as Sacred Ground
For some, interpretation is a moral or existential act—tied to intention, feeling, personhood—the very notion of the human. The idea that a machine might generate language worth interpreting seems to threaten this view. The result is a boundary drawn too early, before the interpretive work even begins.
- Suspicion of Quantification
The humanities have long challenged the myth of objective knowledge. But that critique can solidify into refusal. Even when statistical modeling is central to how AI systems shape meaning, it’s written off as epistemologically irrelevant—or worse, corrupt.
- Anxiety about Instrumentalization
AI often arrives in education through the language of efficiency, productivity, or cost reduction. This leads some to reject AI wholesale. But resistance to its institutional uses shouldn’t block inquiry into the systems themselves, especially where they reshape the conditions of knowledge.
- Mistaking Output for Intent
LLMs produce fluent text, so they’re judged by human (or humanist) standards—voice, authorship, originality. When they fall short, the text is dismissed as hollow. But this misreads the genre. LLMs are not authors, so calling their outputs "fake" is a category error.
- Fear and Technological Determinism
Sometimes, there’s also a kind of self-defeating fear that AI will destroy the humanities. But this critique reasserts a form of technological determinism that the humanities themselves have long interrogated as reductive and fatalistic.
- Theoretical Exhaustion as Fad Fatigue
Some may see AI as another passing trend. But platforms and infrastructures don’t pass. Ignoring them won’t make them go away. It only guarantees we won’t be part of the conversation.
- Critique Without Engagement
Marxist critics often see AI as an engine of commodified, abstracted labor. That analysis is vital. But when critique forecloses engagement, it excludes even forms of Marxist intervention.
The Humanities Belong Here
AI reshapes language, knowledge, and meaning. That means the humanities aren’t outside this conversation. They’re at its center, if they choose to be.