Why We Need Post-Disciplinary AI Studies

J. Owen Matson, Ph.D.
Something is beginning to emerge inside the university, tentatively taking shape with all the awkwardness of a structure trying to name itself while still constructing its foundations. Several systems, SUNY's initiatives in AI and Society among them, are building entire departments devoted to AI Studies. What makes this different from prior waves of tech-and-humanity discourse is that these departments are being conceived at what I would call a post-disciplinary or trans-disciplinary level: not just multiple disciplines gathered around a theme, as in the usual multidisciplinary approach, but something harder and messier. The premise, as I see it, is the recognition that no existing epistemology, no field's sense of what counts as knowledge or value or consequence, is adequate on its own, because AI scrambles the conditions under which those determinations get made in the first place.
Each field, from philosophy to computer science to education to economics to media theory, comes equipped with its own epistemic architecture: its own criteria for what constitutes understanding. The difficulty is that AI destabilizes the coordinates of that architecture in ontological and ethical ways that resist stable framing, and that no single method, even in its expanded or hybrid form, can fully absorb. Whether the object of study is computation or interpretation, there is always a moment when the disciplinary tools begin to wobble, and what appears instead is a kind of methodological remainder: the thing that exceeds the field's protocols for truth.
Meanwhile, the business world has long had its own epistemology, one that equates knowledge with profitability, evidence with quantifiable uptake, and success with market saturation. This logic, which feels efficient because it is recursive, tends to collapse meaning into performance indicators and to reduce consequence to a kind of risk calculus in which what matters is not what happens but what might be defensible if it does.
And so AI development proceeds according to a familiar sequencing: performance, then quantification, then justification, then funding or acquisition. Ethical and social questions appear only once the apparatus is already operational, arriving as compliance, branding, or late-stage concern. That sequencing is never neutral. It encodes the worldview that intelligence is abstraction, cognition is performance, judgment is noise, and value is whatever survives optimization.
Which is exactly why post-disciplinary departments in AI Studies matter: they begin upstream. They allow epistemologies to collide without protective silos, starting from the awareness that productive epistemic and intra-disciplinary dissonance is the condition under which new languages for intelligence might emerge. This messiness is a necessary wager: that rethinking intelligence requires rethinking the architectures of knowledge itself.