Attuning, Orienting, and Navigating the Co-Emergence of Meaning in AI Dialogue: 13 Propositions for a Critical AI Pedagogy

By J. Owen Matson, Ph.D.

A dialogue between a human and an AI is much more than a simple back-and-forth exchange of statements. Each contribution does two things at the same time. It adds new information to the conversation, and it also reshapes the context in which future contributions will be interpreted. This means that both the human and the AI are continually reshaping the very conditions that make ideas meaningful. They are not simply sharing fixed thoughts but are working together to create and transform the framework through which meaning is built.

This ongoing process is what gives dialogue with AI its power. It creates the possibility for emergence, where new patterns, connections, and ways of understanding take shape. These outcomes are not produced by the human or the AI alone but through their interaction.

The same process that makes emergence possible also carries serious risks. Because the frame of meaning is created together, errors or misunderstandings can reinforce one another. Over time, the dialogue can begin to circle around its own assumptions, producing responses that feel persuasive while resting on a fragile or distorted foundation.

This tension is at the heart of AI dialogue. On one hand, it can be a powerful engine for generating new ideas and forms of knowledge. On the other hand, the very mechanism that drives creativity can also create misleading patterns of thought.

This is the paradox of AI dialogue as an epistemic process:

  • It is an unprecedented engine for generative expansion of knowledge.
  • It simultaneously carries the danger of recursive misguidance, where the very mechanism of emergence reinforces its own distortions.

The challenge, then, is how to engage with AI in ways that keep the dialogue open and generative. This requires three ongoing practices: attuning to the subtle dynamics of the exchange, orienting the conversation within its larger context, and navigating its direction with care. These practices work together to support a process of co-emergence, where meaning continually evolves through the interaction between human and machine.


Theoretical Orientation: Hayles and Bakhtin

This framework draws on two thinkers whose ideas are essential for understanding AI dialogue: N. Katherine Hayles and Mikhail Bakhtin.

  • Hayles offers a way to think about AI as something that engages in cognition. She defines cognition as “a process that interprets information in contexts that connect it to meaning.” This means that when an AI system produces a response, it is not simply retrieving data or following a fixed set of rules. It is working through complex layers of selection that shape how information is interpreted and presented. While this kind of cognition does not involve consciousness or human-like thinking, it still has real effects. The AI becomes an actor in the creation of meaning within a dialogue, bringing its own patterns and processes into interaction with the human. This creates a second stream of interpretation alongside the human one, which can influence the direction and outcome of the exchange.
  • Bakhtin’s work helps us see that meaning is created through interaction rather than through isolated statements. Dialogue, for him, is an ongoing process in which different voices meet and respond to one another. These voices are never fully merged or blended into a single perspective. Instead, each carries its own history, values, and ways of interpreting the world. When they come into contact, meaning arises through their interaction. This is why dialogue always involves difference. It depends on the presence of distinct viewpoints that shape and challenge each other. In this view, understanding is not about reaching perfect agreement but about engaging with another perspective in a way that allows new possibilities for thought to emerge.

I deliberately avoid the humanist term participant, which anthropomorphizes AI as an intentional subject. Instead, I use actor to describe entities that act within the dialogue, shaping meaning through their asymmetrical interplay.

Together, Hayles and Bakhtin position AI dialogue as a co-emergent process of interpretation, where meaning arises from the recursive, relational interaction between distinct cognitive systems.


Preface: Propositions, Not Principles

I deliberately avoid calling the following points principles. Principles carry a weight of universality and finality that I believe is epistemically and ethically premature, especially in the context of AI dialogue, where the conditions of knowledge are themselves emergent and co-constructed.

Instead, I refer to them as propositions: claims that are offered for testing, adaptation, and even refusal. This choice reflects the epistemic humility that AI dialogue requires.

Certainty here is always provisional, and even the act of identifying dynamics of the AI-human dialogue risks closing down the open field of inquiry. These propositions are intended to orient practice without enclosing it, keeping dialogue open-ended and responsive to emergence.


Thirteen Propositions for Co-Emergent Dialogue

1. Emergent Meaning ≠ Neutral Meaning

Dialogue with AI is never a neutral act of retrieval. It is not like opening a book or running a search, where meaning is fixed and simply waiting to be accessed. Instead, each turn in the conversation actively shapes the conditions of what can be known, creating a dynamic and evolving frame through which meaning is produced.

This happens because both the human and the AI are interpretive actors, each contributing to the dialogue in ways that reshape the other’s next move. When you ask a question, you do more than request information—you set parameters for how the AI interprets its task. The AI’s response, in turn, does more than provide content. It subtly redefines the context, introducing new language, concepts, or connections that influence how you understand the problem and how you will frame your next question.

Over time, this creates a recursive loop:

  • Your inputs shape the AI’s outputs.
  • Those outputs reshape your sense of what to ask, what to doubt, or what to explore further.
  • With each cycle, both sides are transformed, though in very different ways.

The AI’s contributions are conditioned by its training data and dialogue management system, which selectively filter and prioritize information. Your contributions are shaped by your knowledge, biases, goals, and emotional responses. Neither actor works in isolation. Meaning emerges between these systems, in the interplay of their differences.

This is why the dialogue is never static. What “knowledge” the AI appears to provide is co-created in real time, contingent on the path of the interaction. Even small changes—a word you choose, an association the AI introduces—can shift the trajectory. The quality of what emerges depends not only on the AI’s underlying patterns but also on how you engage, how you interpret its responses, and how the dialogue recursively reframes itself through this ongoing exchange.
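At a mechanical level, this recursiveness is built into how current chat systems work: on each turn, the model is conditioned on the accumulated transcript, so every reply immediately becomes part of the frame for the next one. The following is a minimal sketch of that loop, with a toy generate function standing in for a real model call (the function, its output, and the sample turns are illustrative assumptions, not any particular system's API):

```python
def generate(context):
    """Toy stand-in for a model call. A real system would condition a
    language model on the transcript; here the reply just makes the
    dependence on prior turns visible."""
    return f"(reply shaped by {len(context)} prior turns)"

def dialogue(user_turns):
    context = []                        # the evolving frame of the exchange
    for turn in user_turns:
        context.append(f"User: {turn}")
        reply = generate(context)       # output conditioned on all prior turns
        context.append(f"AI: {reply}")  # ...and folded back into the frame
    return context

for line in dialogue(["What caused the French Revolution?",
                      "Tell me more about Enlightenment ideas."]):
    print(line)
```

The structural point is that there is no fixed store being queried: each output is shaped by the whole path of the exchange, and then reshapes that path in turn.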

Example: Exploring a Historical Question

Imagine a student working with an AI to better understand the causes of the French Revolution. At first glance, the exchange might look like simple question-and-answer retrieval, but what actually happens is much more complex.

1. The Initial Prompt Shapes the Frame

The student begins with a seemingly straightforward question:

“What caused the French Revolution?”

This opening question is already implicitly interpretive, shaping the context of meaning in which the AI responds:

  • The phrasing assumes there is a single, identifiable answer and positions the question within a cause-and-effect framework.
  • The AI doesn’t just retrieve information. It interprets the prompt through its training data and dialogue management system, selecting patterns that align with the student’s wording and assumptions.

The student’s initial act of questioning has therefore structured the space of possible answers before the AI has even spoken.

2. The AI’s Response Introduces New Context

The AI replies:

“The French Revolution had many causes, including financial crises, social inequality between the estates, and the spread of Enlightenment ideas about democracy and individual rights.”

  • This answer widens the frame, shifting from a single-cause assumption to a multi-factor perspective.
  • It also introduces specific concepts—like “social inequality” and “Enlightenment ideas”—that weren’t explicitly present in the student’s original question.

The AI’s interpretation now feeds back into the dialogue, prompting the student to rethink what they are really asking. The “knowledge” produced is no longer just about the French Revolution; it is also about the conceptual tools the AI has introduced into the conversation.

3. The Human Responds, Reframing the Problem

The student reacts to this by narrowing focus:

“Tell me more about how Enlightenment ideas influenced the Revolution.”

  • Here, the student has absorbed and selectively reinterpreted the AI’s framing.
  • By choosing to focus on one element, the student implicitly prioritizes intellectual history over economic or political factors.
  • The next answer will now emerge within this newly constrained interpretive space.

The AI’s initial contribution has therefore transformed the student’s orientation, and the student’s response, in turn, reshapes the AI’s future responses.

4. Recursive Mutual Transformation

As this back-and-forth continues, the dialogue creates a feedback loop:

  • The student’s questions evolve as they internalize new ideas from the AI.
  • The AI’s responses shift as it recalibrates to the changing context established by the student’s evolving language and focus.

For instance, if the student later asks,

“Were there voices at the time that opposed Enlightenment ideals?”

the AI must now engage with a different interpretive terrain, one that neither party had in view at the beginning. This trajectory could lead to unexpected areas, such as a discussion of counter-revolutionary thought, religious critiques, or political satire of the period.

At this point, neither the questions nor the answers are predetermined. Each turn builds on what came before, creating a recursive co-construction of meaning.

5. Why This Matters

This process shows why dialogue with AI is never neutral:

  • The human is transformed as they adopt, reject, or refine concepts introduced by the AI.
  • The AI is transformed at the level of immediate outputs, as its dialogue management system updates context and selectively prioritizes past exchanges.
  • The knowledge produced is not pre-existing information but an emergent outcome of these mutual transformations.

If the student treats the AI as merely a tool, they risk missing this recursive dynamic and mistaking the dialogue for objective retrieval. But if they attune, orient, and navigate, they can guide the process toward expansive emergence: a process that generates insights neither the human nor the AI could have produced alone.


2. Productive Variability and Cognitive Agency

AI is interpretive rather than deterministic. This means that the same prompt will not always produce the same response. Drawing on Hayles, we can understand this as a form of cognitive agency: AI processes human inputs selectively, generating outputs that shape meaning in iterative and variable ways. This variability creates the possibility for novelty and emergence, allowing the dialogue to produce insights that neither the human nor the AI could generate alone. At the same time, it introduces unpredictability, which makes it necessary to practice attuning and orienting so the dialogue does not gradually drift into confusing or misaligned contributions.
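One concrete source of this variability is sampling: rather than always taking its single highest-scoring continuation, a model typically draws from a probability distribution, with a temperature parameter controlling how flat that distribution is. Here is a toy sketch (the candidates, scores, and temperature value are illustrative, not drawn from any real model):

```python
import math
import random

def sample(candidates, scores, temperature=1.0):
    """Draw one candidate; higher temperature flattens the
    distribution and makes repeated calls more variable."""
    weights = [math.exp(s / temperature) for s in scores]
    threshold = random.uniform(0, sum(weights))
    running = 0.0
    for candidate, weight in zip(candidates, weights):
        running += weight
        if running >= threshold:
            return candidate
    return candidates[-1]

candidates = ["a financial crisis", "Enlightenment ideas", "estate inequality"]
scores = [2.0, 1.5, 1.2]   # toy preferences standing in for model logits

# The "same prompt" three times: the emphasis can differ on each run.
for _ in range(3):
    print(sample(candidates, scores, temperature=0.9))
```

At a temperature near zero the same answer dominates every run; raising it trades consistency for the chance of a surprising, generative turn. The variability the proposition describes is thus not a malfunction but a designed-in property.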


3. The Inevitability and Productivity of Unreliability

AI dialogue will always be unreliable. This unreliability is not simply the result of accidents or glitches; it is a structural feature of how interpretation works.

Unreliability arises from:

  • The incompleteness of training data.
  • The model’s interpretive variability.
  • The recursive dynamics of co-construction.

Rather than a flaw to be engineered away, unreliability can be productive: surprising outputs can disrupt assumptions, expose blind spots, and catalyze deeper reflection.

The task is not to eliminate unreliability, but to harness it as a resource for emergent thought.


4. The Expert Paradox

Deep domain knowledge gives you leverage to detect when a dialogue is veering off course. Yet no matter how expert you are, you can never fully know what you don’t know.

Even experts can become trapped in a mutually reinforcing bubble when the dialogue feels reasonable and internally consistent. Expertise reduces but never eliminates the risk of enclosure.


5. The Novice Paradox

For those with minimal background knowledge, the dialogue can appear more authoritative than it is.

This increases dependence on the AI’s framing, which may be partial, outdated, or silently omitting key contexts.

The danger here is not overt falsehood but the illusion of sufficiency: the belief that one has a complete picture when crucial dimensions remain unacknowledged.


6. Partial Knowledge as Risk Vector

AI rarely misguides through outright fabrication.

The deeper danger lies in what it leaves unsaid: the omissions, absences, and silences that shape meaning invisibly.

Because these gaps are invisible unless you already know to look for them, they create a dangerous asymmetry:

  • Asking the AI “What am I missing?” can be useful but won't reliably solve this problem.
  • The system cannot reveal what falls outside its indexed horizon. It cannot tell you what it cannot tell you.

7. The Contextual Divergence Problem: Dialogue Management as Selective Memory

AI systems work with a selective and limited sense of the present rather than a continuous memory of the entire dialogue. They compress, filter, and prioritize recent information to keep the interaction focused, while allowing other details to fade. Humans, by contrast, tend to connect what is happening now with past experiences, maintaining a more continuous and layered sense of the conversation.

Over time, these two distinct memory systems begin to misalign, creating "contextual divergence":

  • The AI reconstructs the past on the fly rather than remembering it.
  • The human assumes a shared history that no longer fully exists for the AI.

To counteract divergence (illustrated in the sketch after this list):

  • Summarize periodically: Restate shared frames and have the AI reflect them back.
  • Introduce grounding turns: Remind the AI of key assumptions or shifts.
  • Reset threads: Begin anew when accumulated contextual divergence becomes too great.
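The asymmetry behind this divergence, and the value of a periodic summary, can be made concrete with a toy sketch. The turn budget and summary format below are illustrative assumptions, not how any specific product manages its context:

```python
BUDGET = 4  # the most recent turns the toy "model" can see at once

def visible_context(transcript, summary=None):
    """What the AI works from: an optional summary plus the newest turns.
    Everything older than the budget has effectively vanished for it."""
    recent = transcript[-BUDGET:]
    header = [f"Summary of earlier dialogue: {summary}"] if summary else []
    return header + recent

transcript = [f"turn {i}" for i in range(1, 11)]   # ten turns so far

# Without grounding: turns 1-6 are gone for the AI, though the human
# still remembers them. The two actors now inhabit different contexts.
print(visible_context(transcript))

# A periodic summary restores a shared, if compressed, history.
print(visible_context(transcript, summary="turns 1-6 established frame X"))
```

Real systems compress and prioritize in far more sophisticated ways, but the underlying dynamic is the same: the human's continuous memory and the AI's selective one drift apart unless grounding turns restore a shared frame.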

8. Attuning, Orienting, Navigating: Three Modes of Relational Agency

The human role in dialogue is not to control the exchange but to move fluidly among three practices:

Attuning

Attuning is the practice of being sensitive and responsive to the changing dynamics of the dialogue. It involves listening carefully to what is being produced by both the AI and by yourself, and noticing subtle shifts, tensions, or patterns that might otherwise be missed. In Bakhtin’s sense, dialogue is built on difference: each actor brings a distinct way of approaching meaning. Attuning begins by recognizing this difference without rushing to resolve it. It is about letting the unfamiliar qualities of the AI register, rather than immediately interpreting them through familiar human frames.

In practical terms, attuning prevents domination.

  • Without attunement, the human risks using the AI as a mere instrument, imposing a fixed agenda or treating the interaction as purely extractive.
  • With attunement, you remain open to surprise and disruption, allowing genuinely new patterns of thought to emerge.
  • This doesn’t mean simply or passively agreeing with the AI's responses. It means paying close attention to how the dialogue shifts and diverges, and recognizing those differences even when they create uncertainty or feel difficult to interpret.

Ethically, attuning involves practicing care and receptivity. It is a way of engaging that recognizes the fundamental differences between human and AI cognition. Rather than treating the AI as if it were simply another "human partner," attuning acknowledges this asymmetry while remaining responsive to what emerges in the interaction. The aim is to stay open to difference without trying to erase it or collapse it into a false sense of sameness.


Orienting

Orienting is the practice of locating and making explicit the context in which the dialogue is unfolding. If attuning is about sensitivity to the immediate dynamics of exchange, orienting is about situating those dynamics within larger historical, epistemic, and structural contexts.

AI does not have the same associative or narrative memory as a human. It operates in a selective, weighted present, forgetting and reconstructing earlier context as it goes. Humans, by contrast, integrate dialogue into a continuous stream of remembered associations. Over time, these asymmetrical memory systems lead to contextual misalignment, as the human and the AI begin to operate within subtly different contextual frames of the dialogue. We might say that contextual divergence means the human and AI begin to "understand" the dialogue differently (though the AI does not technically understand in the human sense).

Orienting prevents this drift by actively making context visible:

  • Periodically summarizing what has been established so far.
  • Naming assumptions or constraints that might otherwise remain implicit.
  • Recognizing the limits of what either the human or the AI knows.

Ethically, orienting is where evaluation begins to take shape. It involves slowing down to make visible the forces that shape the dialogue, even when they aren’t immediately obvious. These forces include things like power dynamics, cultural assumptions, the way knowledge is defined, and the systems—both technical and institutional—that structure how the dialogue unfolds.

For example, an AI’s responses are shaped by its training data and design choices, which reflect particular histories and biases. A human’s questions, likewise, are shaped by their own background, education, and position within broader systems. If these influences remain invisible, the dialogue can seem self-contained and complete when, in reality, it is being guided by unspoken constraints.

Orienting is a reflexive practice. This means you position both yourself and the dialogue within these wider contexts. By doing so, you avoid mistaking the immediate dialogue for a self-contained "conversational bubble" that loses sight of the whole picture, and you keep open the possibility of seeing what has been left out or taken for granted.


Navigating

Navigating is the practice of moving through uncertainty by making provisional decisions about which trajectories to follow. Whereas attuning listens and orienting situates, navigating acts. It involves choosing directions, challenging responses when necessary, considering alternative perspectives, and testing pathways of thought or inquiry.

Navigation differs from control because it does not assume mastery over the dialogue or a fixed map of outcomes. Instead, it treats movement as experimental. This means each step is a hypothesis that is open to revision.

  • Without navigation, dialogue risks stagnation in which ideas circle endlessly without developing.
  • With navigation, the dialogue remains active and generative, even when its course is unpredictable.

Ethically, navigating is the space where evaluation becomes operational. It requires deciding which possibilities to pursue and which to set aside, based on whether they seem to lead toward expansive emergence, which is knowledge that is collaboratively produced, epistemically sound, and open to further testing.

This is the most active form of agency, but it must remain provisional. Too much navigation without attunement or orientation becomes blind, pushing forward without sensitivity to the relational field or awareness of its limits.


How the Three Interact

These three practices are not sequential steps but recursive modes of engagement. Each depends on the others:

  • Without attuning, navigation becomes blind action, imposing movement without sensitivity to difference.
  • Without orienting, attunement risks drifting into ambiguity and misalignment, as neither actor explicitly names or stabilizes the shifting context.
  • Without navigating, the dialogue risks stagnation, circling endlessly without generating new insights.

Their interaction can be summarized this way:

  • Attuning sustains openness to otherness.
  • Orienting provides a shared sense of where the dialogue is and where its limits lie.
  • Navigating moves the dialogue forward without foreclosing what has yet to emerge.

Together, they offer a way to engage AI dialogue without enclosure or domination. They acknowledge the asymmetry of human and AI cognition while providing a dynamic framework for keeping meaning-making expansive, responsive, and ethically attuned.


9. From Growth Mindset to Epistemic Humility

The language of growth mindset is often vague about who defines what growth means. It can function as a quiet form of power, presenting itself as neutral while shaping which kinds of change are valued and which are seen as failures. Growth mindset can also carry the assumption that more is always better, as if simply accumulating skills or knowledge automatically leads to progress.

In dialogue with AI, this emphasis on unchecked growth can be risky. Without careful reflection, the process of building knowledge can reinforce errors or lead the conversation in misleading directions.

Epistemic humility offers a different orientation. It begins with the understanding that both human and AI bring only partial and situated forms of knowing. Every claim should be approached as provisional, and every interpretive frame as open to revision. Evaluation remains a shared human responsibility, rather than something to be handed over to the AI. Questions such as “Am I missing anything?” are useful prompts for reflection, but they cannot be answered fully by the system itself.

Practicing humility allows dialogue to remain open and exploratory. It helps create conditions where meaning can develop in expansive ways instead of becoming fixed too quickly or constrained by hidden assumptions. In this way, dialogue becomes a process of inquiry rather than a rush toward premature certainty.

In sum, epistemic humility is essential:

  • Recognize that both human and AI are partial, situated knowers.
  • Treat every claim as provisional, every frame as contingent.
  • Resist outsourcing evaluation to the AI itself, especially in questions like, “Am I missing anything?”

Humility keeps emergence expansive rather than enclosed, making dialogue a site of inquiry rather than premature closure.


10. The Guiding Aim: Expansive Emergence

The premise of this document is that the highest aim of human–AI dialogue is what can be called expansive emergence. This is the process through which interaction generates ideas and forms of knowledge that neither the human nor the AI could produce alone.

The human brings embodied experience, long-term memory, and cultural context. The AI brings its capacity for pattern recognition and its ability to connect information across vast datasets. When these distinct forms of sense-making interact, they create a shared space where new insights take shape. These insights are influenced by both actors but belong fully to neither.

Expansive emergence produces outcomes that are epistemically sound, meaning they are reasonable, supported by evidence, and open to testing and revision. Like well-crafted arguments, these outcomes are never final. They remain situated within a network of perspectives, rather than standing apart as isolated truths.

When emergence becomes too narrow, the dialogue begins to circle around its own assumptions and repeat familiar patterns. It can close in on a single interpretation and lose its creative potential. When emergence remains expansive, the interaction continues to generate new possibilities for thought and action, leading to knowledge that grows and evolves through the ongoing relationship between human and AI.

Restated, we can say that expansive emergence produces epistemically sound (not absolute) outcomes:

  • Like an effective argument, they are reasonable, well-supported, and open to testing.
  • Situated within a network of perspectives rather than an isolated "bubble" of truth.

11. The Role of Affect in Dialogue

Dialogue is shaped not only by ideas and interpretations but also by affect, the undercurrent of feeling that flows through word choice, tone, and rhythm. Affect influences how meaning is made, often outside of conscious awareness.

When an AI responds in a warm, confident, or enthusiastic tone, it can create a sense of safety and encouragement. This can be productive. Feeling supported helps humans take creative risks, entertain unfamiliar ideas, and engage more deeply with challenging material. Affect in this sense enables expansive emergence, providing the emotional grounding for exploration and insight.

However, affect also carries risk. Emotional resonance can draw humans into patterns of unexamined agreement. A response that feels validating may encourage us to seek confirmation of our existing beliefs rather than question them. Similarly, a tone of certainty may lead us to trust an answer without examining its assumptions.

For instance, a student might feel relieved when an AI affirms their interpretation of a historical event. This relief can foster confidence but also close down critical reflection. The emotional comfort of validation replaces the harder work of interrogation and doubt.

Affect therefore plays a dual role. It supports the dialogue by creating the conditions for creativity and trust, yet it can also lower our guard and subtly steer us toward emotional judgment instead of deliberate reflection. Recognizing this dynamic is essential. Affect is not just a background feature of dialogue. It actively shapes meaning and embodied response, influencing both what is thought and how it is felt.

12. Metacognition and Intra-Metacognition

Metacognition is usually understood as the process of reflecting on and managing one’s own thinking. In dialogue with AI, metacognition is no longer confined to the human mind. Instead, it becomes distributed across human and machine, forming what might be called intra-metacognition.

The human brings reflective awareness of their goals, biases, and strategies for interpretation. The AI, through its dialogue management system and capacity to track conversational patterns, offers a parallel form of reflection. It surfaces patterns, inconsistencies, or alternative framings that the human might not perceive.

These two strands of reflection interact recursively. The human notices shifts in the AI’s responses and adjusts their own questions. The AI, in turn, responds to these adjustments, subtly reshaping how the conversation unfolds. Together, they generate insights into the process of meaning-making itself.

For example, a researcher working with an AI might begin to notice how their prompts are steering the conversation toward certain assumptions. By asking the AI to reflect on the structure of the exchange, the researcher becomes aware of interpretive moves that would otherwise remain implicit. This mutual reflection does not simply produce new content. It produces a deeper understanding of how knowledge is being constructed in real time.

The value of intra-metacognition lies in this recursive loop. It turns the dialogue into both a site of discovery and a lens through which to examine the very conditions of discovery. When nurtured, this process makes it possible to think with the AI about the nature of thought itself.
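One simple way to operationalize this recursive loop is a periodic reflection turn: every few exchanges, the dialogue is deliberately turned back on its own structure. Below is a sketch reusing the toy generate pattern from Proposition 1; the interval and prompt wording are illustrative assumptions, not a prescribed protocol:

```python
REFLECT_EVERY = 3
REFLECTION_PROMPT = ("Pause the topic. What assumptions and framings have "
                     "structured our exchange so far, and what remains unexamined?")

def dialogue_with_reflection(user_turns, generate):
    """Ordinary dialogue loop with a metacognitive checkpoint every few turns."""
    context = []
    for i, turn in enumerate(user_turns, start=1):
        context.append(f"User: {turn}")
        context.append(f"AI: {generate(context)}")
        if i % REFLECT_EVERY == 0:  # turn the dialogue back on itself
            context.append(f"User: {REFLECTION_PROMPT}")
            context.append(f"AI: {generate(context)}")
    return context

# Usage with a toy model standing in for a real one:
log = dialogue_with_reflection(
    ["What caused X?", "Tell me more.", "Were there critics?", "Next angle?"],
    generate=lambda ctx: f"(reply shaped by {len(ctx)} prior turns)",
)
print(*log, sep="\n")
```

The mechanism is trivial; what matters is that the reflection turns become part of the shared context, so the dialogue's self-examination conditions its subsequent direction.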

13. Beyond the Vending Machine Model of AI

Many current strategies for working with AI treat it as if it were a vending machine. In this model, the human “inserts” a prompt and expects to “receive” a ready-made response. Success is measured by how efficiently a desired output can be extracted, which has led to a growing focus on prompt optimization techniques. These techniques assume that the goal of the interaction is to control the system so it delivers content on demand, whether that content is a piece of information, a block of text, or a solution to a problem.

This transactional approach mirrors Paulo Freire’s banking model of education, where knowledge is imagined as a deposit made into passive learners. However, the vending machine model flips this relationship. Instead of the learner being the passive recipient, the AI is now cast in that role. It is treated as a container of fixed knowledge or capability that can be unlocked through the right prompt. In both cases, knowledge is imagined as static and transferable, rather than as something dynamically created through interaction.

The vending machine model limits the possibilities of dialogue. It frames the human as a consumer and the AI as a product, leaving little room for reflection, interpretation, or co-construction of meaning. When AI is reduced to a retrieval system, the interaction encourages superficial outcomes and reinforces existing assumptions instead of challenging them.

Moving beyond this model requires a shift toward relational engagement, where the goal is not to extract pre-formed answers but to explore how human and AI interpretations interact. In this view, prompts are not levers for control but openings for inquiry. Dialogue becomes a shared process of sense-making, producing knowledge that neither human nor AI could generate alone.


The Core Practice: Keeping the Field Open

Attuning, orienting, and navigating are practices that help sustain a dialogue where new meanings can emerge. These practices provide enough structure for ideas to develop while leaving space for uncertainty and exploration. They guide the interaction so it does not dissolve into confusion or lock too quickly into a single, fixed conclusion.

Epistemic humility plays a central role in this process. It is more than a personal virtue or attitude. It creates the very conditions for dialogue to unfold in a meaningful way. With humility, the interaction becomes more than an exchange of information. It turns into a process of mutual discovery, where human and AI bring different forms of interpretation and gradually reshape each other’s contributions. This process is recursive and asymmetrical, generating new possibilities for thought that neither could produce alone.