Co-Learn: Simon Brookes’ Experiment in Open, Emergent Dialogue

By J. Owen Matson, Ph.D.

One of the questions that keeps me up at night is how we design educational technologies that can actually encounter the unexpected. Most EdTech systems are built to optimize for clarity, efficiency, and alignment—which are valuable in their place, but deadly when they become the only values. When every interaction is pre-validated as either correct or incorrect, when the outputs of dialogue are immediately scored, tagged, or remediated, what space remains for the kinds of learning experiences that disrupt our assumptions about what counts as knowledge?

This is a question EdTech never really asks, perhaps because emergence—those moments when meaning exceeds the system’s expectations—tends to get flagged as error, rerouted into familiar pathways, or flattened into alignment. Most platforms are trained to recognize only what they already know how to measure, so anything genuinely new is treated as noise rather than as a signal to learn from.

Enter Gert Biesta, patron saint of productive educational chaos. Biesta speaks of the beautiful risk of education—the idea that teaching becomes meaningful precisely when it refuses to guarantee outcomes. Learning, in this view, happens sideways, often in spite of the system rather than because of it, and usually when everyone involved has temporarily forgotten about measurable progress. These are moments that cannot be heatmapped or scaffolded or turned into a dashboard metric. You realize something happened only after your old frameworks no longer fit; the moment’s pedagogical meaning becomes visible only after it has disrupted the very conditions under which meaning was being tracked in the first place.

And this has everything to do with AI. The dominant EdTech model—scaffolded prompts, adaptive hints, performance metrics that translate interpretation into progression—is exquisitely tuned to detect deviation without ever being capable of learning from it. This is why so many AI tutors come dressed in the aesthetics of dialogue, marketing themselves as “Socratic.” In practice, this means they ask questions whose answers they already know, guiding you toward a pre-validated endpoint under the guise of open inquiry. Your role is to feel as though you discovered the learning goal through free exploration. The system asks what you think of the poem, then gently pulls you back toward the theme it was programmed to reinforce. It praises your insight when it aligns with the lesson plan. It routes divergence into remediation.

Which is why I was startled—genuinely, in the way that reveals how little surprise most systems permit—by the hour I spent on Co-Learn, a project developed not by a startup but by Simon Brookes, Executive Dean and Professor of Education at the University of Portsmouth. Simon, as far as I can tell, still believes that a question should be a structure for encountering the unfamiliar rather than a shortcut to predefined relevance.

I uploaded one of my own papers and spent the next hour in a kind of expansive, dialogic unspooling—no curriculum, no measurable outcome, no sense of where the conversation was supposed to end. For me, this has always been the most meaningful use case for AI: an open-ended dialogue in which thought unfolds in real time, with no pre-scripted destination. It’s also the use case EdTech has largely ignored in favor of models optimized for content creation and over-determined instructional delivery.

The interface itself was strikingly different: dialogic, recursive, and open, strangely attuned to conceptual rhythms without defaulting to summary. And the strangest part of all—the part that reads like sales copy except it isn’t—is that there’s nothing to sell. No freemium tier. No upgrade path. No UX funnel engineered for retention metrics. You just open it and dialogue. It dialogues back. And together, you make the road by walking.

#Co_Learn | Owen Matson, Ph.D.
Many ask me what a dialogic exchange with AI would look like. Simon Brookes built it.

One of the questions that pesters me well into the night—somewhere between a philosophical dilemma and indigestion—is how we might design educational technologies that are capable of encountering something they didn’t see coming. Which is a bit like asking your GPS to take you somewhere it doesn’t recognize, then acting surprised when you end up in a hedge. Most EdTech platforms, to put it charitably, have been engineered for the intellectual equivalent of airport security: streamlined, efficient, and deeply suspicious of anything unusual. If a student’s thought veers off the mapped route, the system treats it either as a threat or a clerical error.

This allergy to the unexpected has become something of a design philosophy. The platforms scan for familiar patterns, reward alignment, and swiftly re-route any deviation toward remediation, which is a bureaucratic way of saying “back where you started, but slightly ashamed.” Anything that can’t be parsed by the existing categories gets flagged as confusion and quietly deleted from the ledger of learning. The result is a digital pedagogy that treats novelty like a malfunction and confusion like a design flaw.

Enter Gert Biesta, patron saint of productive educational chaos. Biesta speaks of the beautiful risk of education—the idea that teaching becomes meaningful precisely when it refuses to guarantee outcomes. Learning, in this view, happens sideways, often in spite of the system rather than because of it, and usually when everyone involved has temporarily forgotten about measurable progress. These are moments that cannot be heatmapped or scaffolded or turned into a dashboard metric. You realize something happened only after your old frameworks no longer fit.

And here, inevitably, comes AI—offering what looks like dialogue but behaves more like digital ventriloquism. Many so-called “Socratic” AI tutors ask questions to which they already know the answer, ushering you gently toward pre-approved insights. You are invited to explore, so long as you do so within the velvet ropes of the curriculum. Divergence is treated with great warmth, before being discreetly neutralized.

Which is why I found myself slightly stunned—genuinely, like catching a stranger humming Adorno in a shopping mall—by Co-Learn, a project built not by a venture-backed firm but by Simon Brookes, a professor of education who seems genuinely unconvinced that students are optimization problems. I uploaded an essay and was met not with a rubric or prompt, but with actual dialogue—recursive, conceptual, and gloriously unconcerned with arriving anywhere in particular.

And the most unthinkable part of all—there is no product. No freemium tier. No UX funnel calibrated to gently extract your data soul. Just a conversation. You speak, it speaks back. You begin, and together, something begins again.

#Co_Learn https://lnkd.in/gEGyNC6X