Co-Cognitive AI Literacy

Most AI “literacy” is not literacy—though not for the reasons many literacy scholars insist.

In conventional literacy studies, the case is closed before it begins: AI Literacy cannot be a literacy because AI, being merely technical, does not participate in the creation of meaning. Yet literacy is not a mystical property of the human soul; it is the cultivated knack for interpreting, producing, and acting upon meaning within a given domain, all the while keeping an eye on the social, cultural, and material scaffolding that makes such meaning possible in the first place.

If we see AI as a mode of technical cognition, the terms of AI literacy change—and so too must its practice.

If you can “pass” an AI literacy course without ever asking what knowledge is, how it is made, or who it serves, then what you have acquired is not literacy—it's IKEA instructions with better clip art.

At present, AI “literacy” mostly means learning how to flatter a machine into doing your homework. I call it AI Listery: a tidy list of prompt hacks, bias checkboxes, and the occasional reminder not to plagiarise.

Co-Cognitive AI Literacy begins with the understanding of AI as a form of non-human cognition—a process of interpretive selection that links information to meaning in ways no fully predictable or rule-bound structure can capture—what N. Katherine Hayles designates as “a process that interprets information in contexts that connect it to meaning.” On this view, cognition is not the private property of human minds but emerges across biological, technical, and hybrid systems.

When humans engage these systems, the interaction becomes a site of co-cognitive emergence—a recursive process in which human and AI interpretations shape and transform one another. This reframes AI from a passive instrument into an active, generative agent whose epistemic, ethical, embodied, and infrastructural entanglements must be critically understood.

Co-Cognitive AI Literacy (Definition)

The practiced capacity to work within this expanded field of cognition, cultivating the interpretive, relational, and contextual fluencies needed for responsible, creative, and critical collaboration with artificial cognitive agents. It entails:

Interpretive attunement (epistemological)

–sensing and shaping how human and AI generative processes, together with their embodied, affective, and infrastructural conditions, contour the very structures by which knowledge is made and transformed.

Relational competence (ontological)

–sustaining emergent dialogic meaning-making across asymmetry, difference, and unpredictability, allowing each agent’s interpretive capacities to reshape the other’s in ways neither could produce alone.

Contextual and ethical awareness (ethical)

–situating AI–human dialogue within political, economic, embodied, and ecological systems, and practising a relational and speculative ethics that anticipates the cascading effects of AI-mediated meaning.