Rethinking AI Integration: Toward a Cognitive Ecology of Higher Education

By J. Owen Matson, Ph.D.
John Warner’s recent piece, "Hasty Lurches Toward an Uncertain AI Future," offers the sort of plainspoken clarity that one increasingly learns not to expect in discussions of AI and higher education. His central concern is not so much with artificial intelligence itself as with the institutional enthusiasm that surrounds it (1). Universities, he suggests, are treating AI not as a technology to be understood but as a solution to be implemented. Warner’s tone is appropriately incredulous: vast systems are being rolled out, policies rewritten, funds reallocated—all in service of a technology whose long-term educational value remains unproven and whose basic terms are still, in many quarters, poorly understood. The real scandal, he implies, is not AI’s potential but higher education’s willingness to outsource its epistemic responsibilities to it—and to do so in the name of innovation.
This caution is timely. And yet I find myself unsettled by the ease with which even the call for caution can find itself trapped in the very logic it seeks to resist. When AI is approached primarily as an object of assessment—as something to pilot, monitor, and eventually scale—its integration is not being questioned so much as proceduralized. The dominant epistemology remains intact: managerial rationalism, metrics-driven evaluation, and the fantasy that educational practice can be reduced to something like efficiency (2, 3). In this frame, AI becomes either a threat to be mitigated or a tool to be optimized. What rarely gets considered is that the integration of AI into education might require something more fundamental than improved safeguards or slower timelines. It may demand a confrontation with the epistemic architecture of the university itself—a recognition that AI reconfigures not only how we teach or assess, but how we come to know, decide, and relate within institutional life. The central question, then, is not how quickly we move, but what kind of thinking we move with.
It is tempting to imagine that educational technology should answer to pedagogical goals. But those goals, too, are often shaped by the systems they are meant to guide. Learning outcomes, impact studies, and best practices are not neutral instruments; they are artifacts of a long history of accountability regimes (4). What AI threatens to accelerate, then, is not a break with the past but its logical conclusion. To put it differently: the problem is not that we are moving too fast. The problem is that we are moving according to a logic that has already foreclosed the possibility of thinking otherwise.
This is why I believe the integration of AI into education must be designed from the standpoint of epistemic humility. Not as a concession to risk or a strategy of delay, but as a pedagogical commitment. It matters less whether students become “AI fluent” than whether they are invited to reflect on what it means to engage with a cognitive other whose operations are neither fully transparent nor fully human (5, 6). The most valuable use of AI in education may lie not in automation but in estrangement—in the way it prompts us to reconsider the assumptions that underwrite our own knowledge practices. But such estrangement can only be productive if institutions are willing to suspend the demand for immediate results and allow ambiguity, friction, and disagreement to re-enter the educational space.
Warner is right to be skeptical of large-scale institutional partnerships that prioritize corporate timelines over pedagogical reflection. But the alternative is not simply to slow down. It is to ask whether the frameworks we use to evaluate educational tools are themselves part of the problem. If AI is to be integrated responsibly, it must be done not through instrumental measures but through practices that foreground reciprocity, responsiveness, and the capacity to remain in uncertainty (7, 8). What follows does not attempt to provide a comprehensive model for AI integration; rather, it aims to articulate the epistemological reorientation necessary to approach AI not as a tool to be managed, but as a phenomenon that reconfigures how knowledge, judgment, and institutional meaning are formed.
What might it mean, then, for institutions to approach AI not as a technological fix, but as a site of epistemic co-emergence? One starting point lies in reframing AI literacy—not as a suite of operational competencies or content-specific applications, but as a practice of epistemic AI literacy: the capacity to inhabit and reflect upon the new cognitive conditions AI produces (9). This form of literacy does not begin with skill, but with attunement: to the modes of knowledge AI makes possible, the assumptions it encodes, the authority it simulates, and the interpretive gaps it inevitably opens.
This would mean shifting the emphasis from teaching students to use AI effectively to cultivating their ability to think with it critically—contextually, recursively, and reflexively. Rather than treating AI as a neutral presence or passive resource, it becomes a cognitive actor in the learning process. Students would be invited not to adopt AI, nor to reject it, but to enter into a dialogic exchange with it, one in which interpretation is always unfinished and understanding is always co-constituted (10).
An epistemic AI literacy would therefore attend to the genre conventions and discursive rhythms of AI-generated output, the predictive logic that governs its structure, and the affective impressions its fluency tends to obscure. But it would go further: it would position AI not as a simulation of human thought, but as a provocation—an unstable, partial, and affectively charged cognitive agent that disrupts our habits of knowing. The goal is not to master AI as a tool, but to encounter it as a threshold—where cognition is reassembled, and where meaning emerges through distributed entanglement rather than individual insight (11).
For institutions, this reframing demands more than curricular redesign. It requires an epistemic reorientation. Classrooms must become spaces where uncertainty is not treated as failure, but as the medium of thought itself. This might mean designing writing pedagogy that foregrounds revision, friction, and metacognitive dissonance over polished outcomes; or assignments where students interpret, annotate, and interrogate AI responses as part of a shared interpretive ecology. It also means treating faculty not as end-users to be trained, but as co-emergent cognitive agents engaged in theorizing the very systems they are asked to navigate (12).
There is, to be sure, no single framework for integrating AI in this way. But the most serious misstep is to frame the choice as one between acceleration and resistance. The real work lies in cultivating conditions under which thinking-with becomes an ordinary, structured part of learning—not as a pedagogical innovation, but as an epistemological necessity (13). Within such a model, AI is not an aid to cognition, but a site in which cognition takes place.
This is the premise of the cognitive intraface: that what we call thinking does not originate in isolated minds, but emerges through dynamic, affective entanglements between human and nonhuman intelligences (14). Epistemic AI literacy, then, is not about fluency, compliance, or control. It is about learning to inhabit the in-between: the recursive movement, the dialogic asymmetry, the moment of co-emergence in which something like meaning takes shape (15).
If AI is to have a place in higher education, it must not be treated as a solution to pedagogical inefficiencies or an object of policy compliance. It must be understood as a force that reshapes the very conditions of knowing. If institutions can meet that force with attentiveness and epistemic humility, they may discover that the future of education does not lie in preserving what cognition once was—but in co-constructing what it is becoming (16).
Institutions as Cognitive Ecologies: Designing for What Exceeds Design
To approach AI integration only at the level of classroom practice is to overlook the deeper institutional question AI raises: not just what technologies are used, but how knowledge, meaning, and interpretive agency are structurally distributed across a university system. AI inflects the full architecture of higher education—not simply instruction, but admissions, advising, learning platforms, assessment, advancement, and procurement. These domains, often treated as operational or logistical, are in fact key sites where cognition is externalized, bureaucratized, and increasingly automated.
It would be impossible to offer a comprehensive model for AI integration across such varied systems—and attempting to do so risks reinscribing the very managerial rationality this work seeks to resist. This section does not propose a solution. Instead, it aims to name the epistemological shift that must precede any meaningful engagement with AI at the institutional level. Without that shift, even well-intentioned initiatives will likely reproduce the instrumental logic of optimization, coherence, and oversight that treats AI as a fix to administrative strain rather than as a force that reconfigures the very conditions under which decisions are made and meaning takes form.
Most universities do not operate as unified, rational systems. They are composed through historical layering, procedural inheritance, and often-opaque networks of software, policy, and professional habit. AI now enters this environment not as a neutral tool, but as a mediator of cognition—shaping what institutions can see, what they can track, what they can count, and what they presume to know. Crucially, many of these shifts occur beneath the threshold of institutional awareness. Meaning is increasingly processed through infrastructures that appear seamless and authoritative while obscuring their interpretive premises.
This is why epistemic AI literacy must be understood not only as a student outcome, but as an institutional reflex. It is a matter of governance, yes—but also of cognition. It asks how a university thinks, and whether it can register the conditions and consequences of its own thinking. The relevant question is not how to centralize AI strategy, but how to support institutional forms of interpretive attunement: practices that make visible the recursive, affective, and often asymmetrical processes by which knowledge is produced and acted upon within distributed systems.
This requires, paradoxically, a design ethos oriented toward what exceeds design. Universities must be willing to create infrastructures that acknowledge their own partiality, their own blind spots. This might include dialogic platforms that surface contradiction rather than suppress it; recursive governance processes that return to prior assumptions rather than optimize them away; or accountability structures that recognize slowness, ambiguity, and dissonance not as inefficiencies, but as signs that cognition is unfolding beyond legibility.
In such a model, the institution is no longer imagined as a machine to be tuned, nor as a container for outputs, but as a co-emergent cognitive ecology—a system shaped by human and nonhuman agents, by rules and their breakdowns, by technologies and the silences they produce. The university does not become “AI fluent.” It becomes epistemically receptive to the shifting architectures of its own thinking.
This reframing resists the impulse to solve institutional fragmentation with yet another integrative system. Instead, it asks whether the task of integration is itself in need of rethinking. The goal is not seamlessness. It is not even alignment. The goal is to support the emergence of meaning across a terrain that cannot be fully mapped—to design with an awareness of what exceeds design, and to treat that excess not as an error, but as a condition of knowledge itself.
Endnotes
1. John Warner, "Hasty Lurches Toward an Uncertain AI Future," Inside Higher Ed, 2024. Warner critiques the unreflective enthusiasm with which institutions adopt AI, framing the problem as not merely technical but epistemological.
2. See HolonIQ, "Education in 2030," 2023. This widely cited forecast envisions large-scale AI integration as inevitable and desirable, emphasizing economic scalability over epistemic reflection.
3. The Educause Horizon Report (2023) promotes institutional AI readiness through centralized strategy, risk mitigation, and alignment with workforce demands, largely overlooking how AI reshapes the premises of cognition and pedagogy.
4. OECD, "Artificial Intelligence in Education: Challenges and Opportunities for Teaching and Learning," 2021. Offers global benchmarks for AI implementation in education, privileging metrics and systems integration over critical epistemology.
5. N. Katherine Hayles, Unthought: The Power of the Cognitive Nonconscious (Chicago: University of Chicago Press, 2017). Provides a foundational account of distributed cognition and the nonhuman dimensions of thinking—a key departure from human-centered models.
6. Karen Barad, Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning (Durham: Duke University Press, 2007). Barad’s theory of intra-action informs the argument that cognition emerges from relational entanglement, not from discrete agents.
7. Bruno Latour, Politics of Nature: How to Bring the Sciences into Democracy (Cambridge: Harvard University Press, 2004). Latour’s critique of representationalism and his call for recursive institutional design underlie the essay’s critique of seamless integration.
8. Cathy O’Neil, Weapons of Math Destruction (New York: Crown, 2016). Offers a detailed critique of the social and ethical risks posed by algorithmic systems, particularly in education and public policy.
9. Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code (Cambridge: Polity, 2019). Explores how AI systems encode historical biases and offers a political lens for understanding technological design.
10. David Theo Goldberg and Cathy N. Davidson, The Future of Thinking: Learning Institutions in a Digital Age (Cambridge: MIT Press, 2010). Their framing of learning institutions as distributed and interpretive cognitive ecologies supports the broader institutional argument made here.
11. Max Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence (New York: Knopf, 2017). Often cited in AI policy discourse to underscore existential risks, though the current essay resists the instrumental urgency this framing tends to invite.
12. Tara McPherson, "U.S. Operating Systems at Mid-Century," Differences 11, no. 3 (2000): 88–128. A cultural reading of the infrastructures of computation, illuminating how seemingly neutral systems carry racialized and gendered logics.
13. This essay’s concept of the "cognitive intraface" draws on posthumanist epistemologies to reframe AI not as a tool but as a site where cognition is co-constituted through human–nonhuman entanglement.
14. For institutional metaphors that treat universities as machines to be tuned or systems to be optimized, see the strategic plans of institutions cited in Educause (2023) and HolonIQ (2023).
15. The phrase “thinking-with” appears throughout as a contrast to "thinking-about" or "thinking-through"—terms which imply objectification or instrumentalization. It is inspired in part by Haraway’s notion of "staying with the trouble."
16. The risk of epistemic foreclosure—the premature closure of inquiry under the guise of technological certainty—is a recurring concern across this piece. It echoes themes in Hayles (2017), Barad (2007), and Latour (2004), among others.
Works Cited
Barad, K. (2007). Meeting the universe halfway: Quantum physics and the entanglement of matter and meaning. Duke University Press.
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity Press.
Davidson, C. N., & Goldberg, D. T. (2010). The future of thinking: Learning institutions in a digital age. MIT Press.
Educause. (2023). 2023 Horizon report: Teaching and learning edition. https://library.educause.edu/resources/2023/4/2023-horizon-report-teaching-and-learning-edition
Haraway, D. (2016). Staying with the trouble: Making kin in the Chthulucene. Duke University Press.
Hayles, N. K. (2017). Unthought: The power of the cognitive nonconscious. University of Chicago Press.
HolonIQ. (2023). Education in 2030. https://www.holoniq.com/ed2030
Narayanan, A., & Kapoor, S. (2023). AI Snake Oil [Blog]. https://aisnakeoil.substack.com
Knox, J. (2020). Artificial intelligence and education in China. Learning, Media and Technology, 45(3), 298–311. https://doi.org/10.1080/17439884.2020.1745423
Latour, B. (2004). Politics of nature: How to bring the sciences into democracy. Harvard University Press.
McPherson, T. (2000). U.S. operating systems at mid-century: The intertwining of race and UNIX. Differences: A Journal of Feminist Cultural Studies, 11(3), 88–128.
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing Group.
Organisation for Economic Co-operation and Development (OECD). (2021). Artificial intelligence in education: Challenges and opportunities for teaching and learning. OECD Publishing.
Selwyn, N. (2019). Should robots replace teachers? AI and the future of education. Polity Press.
Su, Y. (2023). Emerging AI policies and the futures of higher education. Journal of Educational Policy Futures, 41(2), 214–229.
Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Knopf.
Warner, J. (2024). Hasty lurches toward an uncertain AI future. Inside Higher Ed. https://www.insidehighered.com
Williamson, B., Eynon, R., & Potter, J. (2020). Pandemic politics, pedagogical challenges: Educational research confronts COVID-19. British Educational Research Journal, 46(4), 731–738. https://doi.org/10.1002/berj.3662