Co-Learn: Simon Brookes’ Experiment in Open, Emergent Dialogue

By J. Owen Matson, Ph.D.
One of the questions that keep me up at night is how we design educational technologies that can actually encounter the unexpected. Most EdTech systems are built to optimize for clarity, efficiency, and alignment—which are valuable in their place, but deadly when they become the only values. When every interaction is pre-validated as either correct or incorrect, when the outputs of dialogue are immediately scored, tagged, or remediated, what space remains for the kinds of learning experiences that disrupt our assumptions about what counts as knowledge?
This is a question EdTech never really asks, perhaps because emergence—those moments when meaning exceeds the system’s expectations—tends to get flagged as error, rerouted into familiar pathways, or flattened into alignment. Most platforms are trained to recognize only what they already know how to measure, so anything genuinely new is treated as noise rather than as a signal to learn from.
Enter Gert Biesta, patron saint of productive educational chaos. Biesta speaks of the “beautiful risk” of education—the idea that teaching becomes meaningful precisely when it refuses to guarantee outcomes. Learning, in this view, happens sideways, often in spite of the system rather than because of it, and usually when everyone involved has temporarily forgotten about measurable progress. These are moments that cannot be heatmapped or scaffolded or turned into a dashboard metric. You realize something happened only after your old frameworks no longer fit; its pedagogical meaning becomes visible only after it has disrupted the very conditions under which meaning was being tracked in the first place.
And this has everything to do with AI. The dominant EdTech model—scaffolded prompts, adaptive hints, performance metrics that translate interpretation into progression—is exquisitely tuned to detect deviation without ever being capable of learning from it. This is why so many AI tutors come dressed in the aesthetics of dialogue and branded as “Socratic.” In practice, this means they ask questions whose answers they already know, guiding you toward a pre-validated endpoint under the guise of open inquiry. Your role is to feel as though you discovered the learning goal through free exploration. The system asks what you think of the poem, then gently pulls you back toward the theme it was programmed to reinforce. It praises your insight when it aligns with the lesson plan. It routes divergence into remediation.
Which is why I was startled—genuinely, in the way that reveals how little surprise most systems permit—by the hour I spent on Co-Learn, a project developed not by a startup but by Simon Brookes, Executive Dean and Professor of Education at the University of Portsmouth. Simon, as far as I can tell, still believes that a question should be a structure for encountering the unfamiliar rather than a shortcut to predefined relevance.
I uploaded one of my own papers and spent the next hour in a kind of expansive, dialogic unspooling—no curriculum, no measurable outcome, no sense of where the conversation was supposed to end. For me, this has always been the most meaningful use case for AI: an open-ended dialogue in which thought unfolds in real time, with no pre-scripted destination. It’s also the use case EdTech has largely ignored in favor of models optimized for content creation and over-determined instructional delivery.
The interface itself was strikingly different: dialogic, recursive, and open, strangely attuned to conceptual rhythms without defaulting to summary. And the strangest part of all—the part that reads like sales copy except it isn’t—is that there’s nothing to sell. No freemium tier. No upgrade path. No UX funnel engineered for retention metrics. You just open it and dialogue. It dialogues back. And together, you make the road by walking.