The Educational/EdTech Datafication Matrix

By J. Owen Matson

Somewhere—somewhere deep in the marble-floored atria of funding pipelines and product roadmaps and what the VP of Strategic Education Partnerships calls the “penetration matrix” (which is either a go-to-market plan or a therapeutic modality, depending on context)—someone begins again, though whether it’s the start of something new or just another instantiation of a feedback loop so recursively indexed to its own preconditions that novelty itself has collapsed into pantomime, a theatrical affect performed for venture capitalists who smell scale the way a search engine smells uncertainty, remains technically undecidable, which is fitting, since the bracketing of undecidability is more or less how things work here anyway, by which I mean: epistemological uncertainty is reformatted as UX affordance, which becomes an analytics dashboard, which becomes a procurement pitch, which becomes a talking point for the sales team, which gets translated back into product requirements by a PM whose partner once taught one year of middle school through Teach For America and now refers to this fact like a credentialed past life, invoking it in pitch decks and stakeholder meetings as evidence of having “pedagogical insight”—a phrase that functions in this context as a kind of ornamental appendage to what is otherwise a sustained process of epistemic abstraction, whereby whatever educational aim might once have animated the work is gradually digested into metrics aligned to performance thresholds definable only within the resolution capacity of the system’s feedback architecture.

And in another somewhere within this system—maybe in a conference room named after a tree, which is meant to humanize the calendared fluorescence of a quarterly planning offsite but instead evokes an arboreal nostalgia staged by someone who has never sat under an actual tree without checking Slack—there is a Director of Learning Outcomes Optimization who used to be a consultant at a Big Three firm, though she now describes herself as “pedagogically adjacent,” a phrase that functions as a kind of moral prophylactic, a way to acknowledge the asymptotic intimacy with real classrooms while still asserting analytic distance, which is crucial because distance is what enables the transformation of lived complexity into insights, and insights into decks, and decks into product strategy, which is how the company remains legible to itself and to investors. And somewhere nearby—perhaps in the office with the standing desk and the bookshelf curated to look like someone reads Bourdieu for fun—is a Senior Vice President of K–12 Market Growth whose background in digital transformation for retail banking is now presented as a strategic asset, because education, like banking, involves distributed inputs, regulated outputs, user compliance, and cyclical throughput, which means the same operational metaphors can be repurposed, reframed, resold, and fed back into the platform logic as validation of cross-sectoral insight, which also helps during panel events and stakeholder roundtables where metaphors are the primary currency of what people call thought leadership.

And then there is the user researcher—typically one of the good ones, or trying to be, or trying to imagine they are trying—someone who keeps a Moleskine filled with quotes from fourth-grade teachers in Phoenix and assistant principals in Yonkers and literacy coaches in Fresno, and who writes things like “real pain point—confusion over how to interpret student growth trajectory” in careful, earnest cursive, which gets input into an internal tracker called something like VoiceOfEd or EDUstream or ListenED, where each “quote” becomes a “signal” and each “signal” feeds a cluster, and the cluster gets linked to a user archetype, and the archetype gets aligned with a market segment, and that segment informs product scoping, which means that the teacher’s bewilderment becomes both a design constraint and a commercial opportunity, which feeds back into the loop where empathy is operationalized as a metricized abstraction and pedagogical complexity is rendered in the visual idiom of a gradient heatmap on a Figma board shared in a Product Requirements Document titled “Improving the Nudge Loop for Tier 2 Disengaged Learners (v2.3).”

And the admin personas—“Personas” here functioning both as user categories and as theatrical roles, scripts written by people who have only ever attended classrooms as guests—become particularly tricky, because administrators are both gatekeepers and interpreters, validators and vectors, which means they are instrumentalized in at least three directions: upward to district procurement officers, outward to teachers, and inward to themselves, as they try to reconcile the performative demands of leadership with the infrastructural demands of compliance, which have expanded to include what is now called “data culture,” a phrase that circulates as a kind of moral imperative among superintendents who are contractually required to perform both instructional empathy and actuarial foresight in the same sentence. And these superintendents—often former teachers who remember what it felt like to cry in their car during lunch duty but now must speak in phrases like “impact-aligned fidelity” and “aggregate performance variance”—become caught in a loop of their own, where the pressure to produce evidence of efficacy feeds a demand for technologies that produce evidence by design, which means that procurement decisions are often made based on the system’s capacity to feed back performance as proof, regardless of what was taught, learned, endured, or metabolized within the loops of lived pedagogy.

And meanwhile, the platform has already started to accumulate “implementation data,” which includes things like log-ins and feature engagement and module completion time, all of which are passed to the Customer Success Manager—who once majored in Organizational Communication and now speaks with the cheerful authority of someone who has read three whitepapers and one Edutopia article about teacher burnout and refers to “touchpoints” as though they were intimate rather than contractual—who presents this data to the district as “early indicators of product impact,” which then feeds back into contract renewal conversations, which are themselves structured by prior procurement cycles, which are themselves informed by vendor pitch decks, which are themselves shaped by user research, which means that the loop is recursive even at the level of sales enablement. And the teacher, whose behavior is tracked but whose context is only selectively legible, becomes an artifact within the feedback system—an object of inference, a source of signal, a node within a network of performance dependencies whose definition of “engagement” has been rewritten to accommodate the telemetry limits of the underlying system.

And these figures—researcher, product manager, district admin, superintendent, customer success liaison, regional sales director with a three-state territory and a bonus tied to Q3 onboarding metrics—all participate in a distributed moral architecture in which nobody feels entirely responsible but everyone feels slightly uneasy, and that unease itself is metabolized into performance goals: improve onboarding, streamline dashboards, reduce alert fatigue, increase adoption, enhance UX empathy—each of which produces a new product cycle, a new insight, a new feature, a new dashboard widget, a new metric, a new loop.

And somewhere amid all of this—somewhere between the internal pitch decks and the external procurement cycles, between the dashboards designed for district superintendents and the backend flags that convert teacher actions into optimization signals—there emerges what might be called a product, though this term, like so many others in the EdTech idiom, is functionally indistinct from the infrastructural conditions that produced it, which means the “tool” is never just a tool but rather a kind of crystallized decision structure, a hard-coded set of assumptions about how learning should look when viewed through the analytic optics of platform architecture, which is exactly how it gets pitched: actionable insights, measurable outcomes, predictive interventions—phrases that perform alignment with pedagogical aims while rerouting those aims through quantifiable proxies that can be captured, assessed, refined, and, ideally, sold.

And once that platform—say it’s an AI-powered feedback assistant, or a gamified microlearning module, or a dynamic literacy scaffold that plugs into a district-wide MTSS lattice with “teacher-led nudges” built into the loop via auto-generated alerts reminding you to “re-engage your disengaged learners,” a phrase whose semantic content has been fully overwritten by its dashboard iconography—is deployed, it will begin to structure behavior in such a way that the behaviors it structures are themselves captured as signals, which then feed back into the platform’s machine learning layer as validation, thereby affirming both the product’s utility and its ontological premises, since what counts as learning has already been specified by the system that now feeds back into its own performance claims through metrics recursively derived from the same outputs it was built to generate.

Which means that by the time the sales team is citing user engagement as evidence of pedagogical impact, the loop has already closed—indeed, the loop has become the thing, the object, the process, the proof—and what gets taught, enacted, sold, and scaled is no longer learning in any meaningful or contested sense but the simulation of learning optimized for recursive confirmation by a system whose primary skill is feeding itself back into itself.

And the question of what students experience inside this loop—this endless self-referencing ecosystem of nudges, modules, dashboards, protocols, alerts, calibrations, interventions, and behaviorally indexed auto-feedback sequences—is largely displaced by the loop’s own design parameters, which extract legibility from the very patterns the system enforces, then treat that legibility as evidence of efficacy, which feeds back into platform logic, which then informs subsequent product cycles, which shape the next generation of educational experiences, which are increasingly indistinguishable from the system’s need to capture and optimize the very behaviors it was built to elicit.

So now imagine the student—not a student as generalizable figure but one particular embodied being with stress hormones and family context and uneven internet access and an aching back from those plastic chairs that cut into the tailbone like industrial design was an act of passive aggression—encounters an adaptive quiz, misses a question, and triggers a remediation sequence, which lowers the system’s confidence score, which triggers a feedback alert, which feeds into the teacher’s dashboard, which cues an intervention prompt, which the teacher may or may not act on depending on time, bandwidth, emotional reserves, caffeine, accumulated microtraumas from prior student encounters, or just the ambient chaos of managing 32 different cognitive trajectories while the WiFi stutters and the HVAC system emits a hum pitched to precisely the frequency of existential dread, and if the nudge is delivered, it may register as encouragement or surveillance or something uncategorizable in between—and that ambiguity itself feeds back into the student’s affective experience, because tone is a feedback loop too.

And now the student, remembering they struggled with the last unit and imagining this struggle as an emerging narrative about who they are and what they’re capable of, begins to feel the loop constrict—not just the dashboard loop, but the somatic loop: shoulders contract, breath shortens, attention frays, cognition slows, confidence recedes—and this affective cascade itself becomes data, which the system registers as reduced performance, which triggers further intervention, which deepens the internal narrative of deficiency, which becomes another feedback loop, since mediated beliefs about self-efficacy (thank you, Bandura) are shaped by past performance and shape future motivation, effort, and persistence, all of which are now being tracked via logins, clicks, duration, and error rate.

And that belief, once frayed, becomes a recursive wound: effort withdrawal leads to underperformance, underperformance triggers alerts, alerts reify deficit identity, deficit identity reactivates stress, and the stress is fed back into the system as just more input, more signal, more justification for adaptive scaffolding that might itself be generating the very instability it purports to remediate. Because stress is a feedback loop. Sleep deprivation is a feedback loop. Cognitive overload is a feedback loop. And once internal states get externalized as metrics, they feed back into the systems that read those metrics as facts rather than symptoms of systems designed without adequate affordances for ambiguity, care, or pedagogical imagination.

Meanwhile, at the level of teacher–student interaction, which remains the most under-theorized and over-simulated component of the loop, the teacher—already exhausted from platform trainings that read like onboarding for a mid-tier CRM system—must now interpret the data in relation to actual humans, which requires a translation layer so thick with unspoken tensions (between intuition and evidence, care and compliance, attention and automation) that the very act of teaching has begun to resemble the managerial adjudication of feedback loops rather than the improvisational unfolding of epistemic relation. And yet these teachers remain structurally necessary to the loop, because their behaviors too are data, and their engagement rates feed back into platform dashboards that justify further training, refinement, and integration.

And student–student dynamics, which once provided a kind of chaotic and unquantifiable cognitive surplus—the passing of notes, the eye rolls, the emergent argot of teen semiotics that confounds platform capture—are now quietly being routed into monitored discussion boards and auto-flagged chatrooms, where peer-to-peer engagement can be indexed, archived, analyzed, and used to infer “collaboration skills,” which can then feed back into predictive models of future success, which feed back into career pathway visualizations, which feed back into course recommendations, which feed back into performance indicators, which feed back into what kind of learner the system believes the student to be, which feeds back into how the student begins to see themselves, which feeds back into how they behave in the next task.

And all of this—all of it—feeds back into the institutional archive, because EdTech companies fund research studies that validate their own models, and universities accept funding in exchange for access to “clean” data, and researchers write papers that generate metrics for institutional performance evaluations, and those metrics feed into ranking systems, which influence funding, which influences what gets built, which determines what students experience, which feeds back into the loop, which feeds back into the loop, which feeds back into the loop.

And the system, like all sufficiently complex feedback architectures, has no center, no edge, no obvious point of interruption—only nested loops feeding back into one another at varying speeds, scales, and degrees of semantic abstraction—until learning itself begins to feel like latency, a function of how quickly you respond to a prompt rather than what happens in the moment of thought, which no longer counts because it cannot be counted, which is the reason it matters, which is the reason it vanishes, which is the reason we loop.