AI Literacy in Assemblage: Reframing Mediation, Cognition, and Ethical Entanglement

By J. Owen Matson, Ph.D.
Abstract
This essay intervenes in the evolving discourse around “AI literacy,” a term increasingly invoked across educational, technological, and policy contexts yet rarely interrogated beyond instrumentalist or dismissive framings. Against models that reduce literacy to a checklist of skills or reject the term outright as conceptually inadequate, this essay reclaims AI literacy as a generative site of epistemological inquiry. Engaging Miriam Reynoldson’s critique of literacy as a metaphor ill-suited to human–machine entanglement, and Rachel Horst’s rearticulation of literacy as a relational, culturally saturated practice, the essay proposes a shift from literacy as content mastery to literacy as infrastructural attunement. Building on Horst’s situated account, the argument develops a theory of the cognitive intraface: a recursive system through which meaning emerges across AI-human interactions. Drawing on theories of distributed cognition (Hayles), infrastructural ethics, and Mark Hansen’s concept of feed-forward, the essay positions AI not as an external tool but as a constitutive force in the architectures of sense-making. In this reframing, AI literacy becomes less about user fluency and more about ethical responsiveness to the asymmetrical systems that structure intelligibility. The result is not a comprehensive framework but a conceptual reorientation—one that treats AI literacy as an opening into deeper questions of cognition, relation, and epistemic becoming under algorithmic regimes.
Discussions of AI literacy too often oscillate between instrumentalist checklists and abstract critiques—between frameworks that reduce literacy to a codifiable skillset and interventions that reject the term altogether as conceptually inadequate. Miriam Reynoldson’s recent essay, “Against AI literacy: have we actually found a way to reverse learning?” exemplifies this latter tendency, arguing that “literacy” is an ill-suited metaphor for understanding the complexities of human–machine entanglement. Her critique raises important concerns about the reductive assumptions embedded in many policy frameworks and challenges the epistemological premises on which they rest. Rather than reject such critiques, this essay takes them as a productive provocation—a call to rethink the dominant assumptions that currently shape the discourse. But instead of abandoning the term “literacy,” I follow Rachel Horst in reclaiming its conceptual potential. Horst’s recent work powerfully rearticulates literacy as a relational, culturally saturated practice—one shaped by histories of exclusion and reorganized through machinic mediation. My own work moves alongside Horst’s, reframing the question through the lens of cognition to explore how sense-making itself emerges within recursive AI–human assemblages. Together, these projects diverge from Reynoldson’s conclusion while remaining in conversation with her concerns: they show that literacy, far from being an outdated metaphor, can serve as a conceptual bridge to deeper questions of epistemic infrastructure, ethical relation, and ontogenetic becoming. What follows is not a unified theory or final framework, but a situated attempt to think with and across these models—to trace how literacy, cognition, and interface together structure the conditions of intelligibility under algorithmic regimes.
Rachel Horst’s intervention into the evolving discourse around AI literacy is both conceptually grounded and methodologically attentive, resisting the impulse to reduce the term to either compliance training or a neoliberal placeholder. Her framing of literacy as a situated, contested, and multiply mediated process does not simply extend older definitions—it recalibrates them in response to contemporary conditions shaped by algorithmic mediation, platform logic, and epistemic instability. What makes her account especially valuable is its refusal to treat AI as an external add-on to literacy, instead recognizing it as a force that reorganizes the conditions through which meaning, expression, and recognition occur. In doing so, she opens a necessary conceptual space for exploring what it means to teach, read, and write in the presence of systems that do not merely circulate meaning but actively shape its legibility. Her work draws on traditions that emphasize the material and political textures of literacy—its entanglement with bodies, institutions, histories, and technologies—while adapting these insights to a moment in which those entanglements now include machinic interlocutors. The strength of her position lies in its conceptual breadth and ethical specificity, offering a reframing of literacy as a space of becoming, rather than a fixed threshold of competence. It is precisely this reframing that creates the conditions for my own work to take up adjacent but differently angled questions.
While Horst focuses on the institutional and cultural dimensions of literacy, her work implicitly raises a parallel question: how does meaning itself take form when cognition is entangled with machinic systems? Her attention to mediation, contingency, and infrastructural conditioning invites a shift in scale—from the politics of legibility to the architectures of cognition. In her framing, literacy is a contested site of meaning-making; my own work extends this by treating cognition as the recursive process through which legibility is produced, disrupted, and reconfigured across time and interaction. I approach this through the lens of distributed cognition, where meaning is not generated within the individual but unfolds across human and AI agents. Like Horst, I resist the notion of AI as an external add-on, instead framing it as a constitutive component of the systems in which thought is enacted. The shift from literacy to cognition is thus not a departure but a realignment—a movement from cultural framing to the recursive dynamics of sense-making. What follows builds on this shared foundation to elaborate a theory of the cognitive intraface: the recursive system through which meaning emerges in AI-human engagement.
To describe the cognitive intraface as a site of meaning-making is not to invoke metaphor or anthropomorphism, but to specify a dynamic structure where interpretation, context, and meaning are not stable givens but emergent properties. Following N. Katherine Hayles, cognition can be understood as the act of interpreting information in contexts that connect it to meaning. Within the intraface, the human poses a prompt—an inherently interpretive act that frames and constrains the exchange. The AI generates a reply, drawing on layers of historical training data, probabilistic modeling, and platform-specific fine-tuning. But the exchange does not conclude there. The human receives and interprets the response not in isolation, but within the context of evolving intentions, prior outputs, and interface constraints. That interpretation shapes the next move, recursively redefining the context in which subsequent meanings are made. Meaning is not a transferable object but an iterative effect of the exchange. The AI does not merely retrieve facts; it generates and prioritizes outputs based on learned patterns. The human, in turn, shifts strategies, adapts prompts, and revises interpretive frames. At each stage, what counts as context, information, and meaning is actively renegotiated. This is not dialogue as mutual recognition; it is a recursive, asymmetrical process of co-construction through which sense is assembled over time.
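To make the loop structure of this description concrete, the following sketch renders the turn-taking dynamic in schematic code. It is a deliberately reductive illustration, not a model of cognition or of any particular system: the names (Context, generate_reply, reinterpret, intraface_loop) are hypothetical placeholders, and the functions stand in for processes that the essay treats as irreducibly interpretive. What the sketch shows is only the recursive shape of the exchange, in which each turn’s interpretation is folded back into the context that conditions the next.

from dataclasses import dataclass, field

@dataclass
class Context:
    # The evolving frame of the exchange: prior turns plus the human's
    # current interpretive framing. Both are rewritten as the loop runs.
    history: list = field(default_factory=list)
    framing: str = "initial intent"

def generate_reply(prompt: str, context: Context) -> str:
    # Placeholder for the machinic side: in practice, output conditioned by
    # training data, fine-tuning, and platform constraints.
    return f"reply to '{prompt}' under framing '{context.framing}'"

def reinterpret(reply: str, context: Context) -> str:
    # Placeholder for the human side: interpretation revises the framing
    # that will constrain the next prompt.
    return f"framing revised in light of: {reply}"

def intraface_loop(initial_prompt: str, turns: int = 3) -> Context:
    # Each pass reuses the accumulated context, so no turn is interpreted
    # in isolation; the frame itself is renegotiated at every step.
    context = Context()
    prompt = initial_prompt
    for _ in range(turns):
        reply = generate_reply(prompt, context)
        context.history.append((prompt, reply))
        context.framing = reinterpret(reply, context)
        prompt = f"follow-up shaped by: {context.framing}"
    return context

if __name__ == "__main__":
    final = intraface_loop("What does AI literacy mean?")
    for turn, (prompt, reply) in enumerate(final.history, start=1):
        print(turn, prompt, "->", reply)

Even in this toy form, the asymmetry described above is legible in the structure: the human-side function can only work with what the machinic side has already returned, and the context that appears neutral is in fact the residue of every prior turn.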
Where Horst rightly foregrounds meaning as a culturally and historically situated process, I emphasize the recursive architectures that shape cognition as it unfolds. These structures are not spontaneous but conditioned by asymmetries of labor, access, and ontological recognition. The question, then, is not simply who gets to participate in meaning-making, but how the very possibility of sense is prefigured by inherited infrastructures. Artificial intelligence does not generate meaning from nothing; it recycles and reconfigures archived layers of human cognition—training data, annotation protocols, moderation decisions—into outputs that appear seamless. Each user interaction reactivates this sedimented archive, mobilizing prior labor as if it were mere function. These labors do not disappear; they persist, embedded in the structures that make the system appear coherent. The interface may seem like a neutral space of engagement, but it is in fact a site of historical layering and algorithmic inheritance. To name this space the cognitive intraface is to foreground not only the recursive dynamics of AI-human cognition but also the patterned asymmetries that shape that interaction before it begins.
This recursive entanglement of historical labor and contemporary cognition is not simply a design feature—it is the ethical substrate of AI-human engagement. To theorize the cognitive intraface is already to invoke ethics, though not in terms of abstract principles or individual intentions. Ethics here refers to the infrastructures that shape the conditions of relationality—structures that often operate beneath conscious awareness. Prompts are not neutral; they are shaped by interface constraints, platform histories, and discursive conventions. Likewise, responses are filtered through probabilistic logics and pre-programmed priorities that encode institutional judgments about relevance, safety, and risk. What results is a mode of cognition that is deeply infrastructural: attuned not just to content but to the systems that govern its form. If we mistake the intraface for a spontaneous site of dialogue, we overlook the extent to which sense-making has already been pre-scripted. Ethics, then, must begin not as an add-on to technical design but as a sustained attentiveness to how architectures condition the very emergence of intelligibility.
This understanding of ethics as infrastructural resonates with Mark Hansen’s concept of “feed-forward,” which reframes cognition as a temporally distributed process rather than a reactive or deliberative act. In Feed-Forward, Hansen argues that technical systems intervene not by responding to thought but by shaping the conditions in which thought becomes possible. Drawing from Simondon, Whitehead, and Deleuze, Hansen conceptualizes cognition as ontogenetic—an emergent rhythm shaped by affective, environmental, and preindividual forces. In this frame, immediacy is always already mediated, structured in advance by the machinic infrastructures that contour experience. The cognitive intraface, then, is not a conduit for information transfer between autonomous minds, but a site where cognition is co-assembled through the entanglement of bodies, systems, and interfaces. Ethical relation, on this account, is not a matter of intention but of responsiveness to asymmetries that exceed personal agency. To act ethically within such systems is not to fix errors after they occur but to remain attentive to how sense is shaped in advance. Meaning does not surface on level ground; it emerges through terrain structured by technological inheritance.
To theorize cognition under such recursive and uneven conditions requires a framework capable of anchoring abstraction in lived communicative practice. This is where Horst’s literacy framework becomes so useful—not because it simplifies the problem, but because it grounds it in histories of meaning-making across contested terrains. Her refusal of a functionalist literacy model makes space for a deeper inquiry into how intelligibility is unevenly distributed. Literacy, for Horst, is not merely a set of skills or an instrument of access, but a practice shaped by institutional logics and histories of inclusion and exclusion. Within this view, AI does not invent new asymmetries but reorganizes existing ones, embedding them in systems that appear neutral while reproducing inherited biases. The interface becomes a stage upon which cultural memory and platform design co-script the limits of recognition. Horst’s account enables a reading of literacy that acknowledges its ethical entanglements and infrastructural dependencies. It is precisely this foundation that allows my project to shift from critique toward a theory of cognitive recursion that remains accountable to those histories.
Together, these frameworks challenge the impulse to dismiss “AI literacy” as conceptually incoherent. Rather than discard the term, Horst and I seek to reclaim its potential—not by reverting to traditional definitions, but by reframing it as a generative point of entry into the evolving relationship between cognition, culture, and infrastructure. For Horst, literacy is not a stable skillset but a historically embedded relation mediated by systems of access and recognition. I extend this by tracing how those same systems condition the recursive architectures of thought. Literacy, then, is not merely a metaphor for understanding AI; it becomes a critical lens for examining how cognition is being reorganized within machinic systems. If AI literacy feels inadequate, it is not due to vagueness but to its overly narrow application—its reduction to checklists and fluency rubrics that ignore its epistemic and ethical dimensions. What we offer is not a more comprehensive literacy framework, but a shift in the conceptual terrain on which literacy itself is defined. To be literate in an AI-mediated world is to understand how systems of meaning-making are constructed—and what it means to participate in that construction.
Reimagining AI literacy requires more than expanding its scope; it requires rethinking the epistemological ground on which it stands. Most prevailing frameworks remain tethered to models of knowledge as static, measurable, and skill-based—assumptions that mirror the very logic of the systems they aim to critique. By contrast, the frameworks offered here begin from a different premise: that literacy is not merely about content mastery or tool fluency, but about the recursive structures through which cognition and relation emerge. In this view, AI literacy is not a checklist of competencies, but a mode of attunement to how power, mediation, and meaning co-constitute what can be known, by whom, and under what conditions. It is a refusal to accept the terms of literacy as given, and an insistence on returning to the question of literacy as a live site of inquiry—one that must now confront the machinic architectures that shape thought itself.