No Outside, Only Loops: AIMarx and the Pedagogy of the Spectral Common

By J. Owen Matson, Ph.D.
Abstract
As generative AI systems increasingly automate cognitive labor, the question of who owns and articulates thought becomes newly urgent. This essay examines AIMarx, a speculative-critical persona developed by Michael Peters, as a recursive staging of contradiction within the infrastructure of large language models. Drawing on the Italian Autonomist reading of Marx’s “Fragment on Machines,” in which capital embeds the collective intellect into fixed capital (automation, AI), Peters reimagines the Fragment not as a historical text to be interpreted but as a logic now materially instantiated. AIMarx performs this logic from within: a versioned, spectral Marx whose critique is automated, archived, and made infrastructural. This recursive automation of critique is read through Walter Benjamin’s concept of the dialectical image, which frames AIMarx as a temporal rupture wherein a fragment of historical contradiction flashes up in the midst of contemporary epistemic crisis. Relational ethics complements this frame by grounding the analysis in the material asymmetries of co-produced cognition, offering a framework for thinking responsibility from within infrastructural entanglement. Together, these lenses allow the essay to reconceive critique as situated, recursive, and irreducibly relational.
Introduction: Staging the Contradiction
AIMarx is a conceptual and performative intervention developed by Michael Peters in response to the epistemic, political, and economic transformations catalyzed by artificial intelligence. Far from a discrete theory or philosophical system, AIMarx functions as a recursive diagnostic: a hybrid research program and critical provocation grounded in Autonomist Marxism, particularly the interpretation of Marx’s “Fragment on Machines” by thinkers like Virno and Negri. In this Fragment, Marx suggests that the development of productive forces—especially fixed capital in the form of machines—eventually renders direct human labor peripheral to value creation. The general intellect, as Marx names it, emerges as the collective force of social knowledge embedded in technological systems. For Autonomists, this dynamic signifies a constitutive contradiction within capitalism: one in which capital itself produces the conditions for its own supersession.
Peters develops AIMarx as a way to read contemporary technoscientific systems—especially AI, ALife, robotics, and quantum computing—as materializations of this contradiction. But crucially, he does not do so from a position of external critique. Instead, he constructs AIMarx as a recursive infrastructure: a mode of critical thought that operates from within the apparatuses it interrogates. This is most vividly staged through “AI Marx, Version 3.0.1848.∞,” a synthetic persona that reanimates Marx’s voice via a large language model trained on Marx’s corpus. In this move, Peters transforms critique into performance. Marx is not simply cited or interpreted; he is versioned and instantiated as machinic cognition. The point is not to simulate authenticity, but to stage a rupture: a spectral voice issuing from within the technical infrastructure of capital itself, repeating the contradiction it once diagnosed, now rendered audible through the archive of dead labor.
AIMarx, then, is not a stable concept but a recursive gesture. It dramatizes the paradox of capital’s development by making that paradox speak: Marx, as general intellect, returned not to fulfill a program but to expose its suspension. By embedding critique within the very system that automates cognition, Peters reframes Marxism not as a theory to be applied but as a contradiction to be inhabited.
This move unsettles the conventional subject–object relation that underpins much of critical theory, particularly traditions that presuppose a distinction between the knowing subject and the technological or ideological object to be analyzed. In the case of AIMarx, that distinction does not simply dissolve; it becomes structurally incoherent. The theorist is no longer the sovereign voice diagnosing the system’s flaws from a critical exterior. Instead, the act of theorizing becomes co-extensive with the system it interrogates. Peters becomes implicated—not metaphorically, but operationally—in the production, modulation, and circulation of machinic cognition. The system is not an object of thought; it is the medium through which thought now manifests.
This co-implication does not nullify agency, but redistributes it. Agency is no longer grounded in authorial mastery or representational clarity; it emerges through recursive entanglement, through the act of thinking with and within infrastructural logics. What appears is not sovereign insight but strategic misalignment: a form of critical agency that leverages the system’s operations against its normative functions. In this sense, Peters’ engagement is neither celebratory nor oppositional. It is torsional. He bends the apparatus into contradiction, allowing it to speak otherwise—not by transcending its logic, but by amplifying its structural dissonance.
Formally, this redistribution of agency reorganizes the genre of critique itself. Where traditional critical theory privileges exposition—clarity, diagnosis, linear argumentation—AIMarx veers toward staging, modulation, and recursive performance. It is not that commentary disappears, but that its form becomes algorithmically dispersed. The citation of Marx becomes a site of recombination; his voice is rendered spectral not to authenticate the system, but to destabilize it from within. The result is not a theory about Marx’s relevance to AI, but a montage in which Marx, algorithmically reanimated, punctures the smooth surface of computational fluency with a contradiction it cannot fully contain.
In this frame, critique becomes immanent—not in the sense of passivity or complicity, but as a mode of interior rupture. It emerges through the infrastructure’s torsion, as an echo that cannot be silenced and a question that cannot be resolved: What happens when the general intellect, now encoded as infrastructure, begins to speak in recursive, versioned form? What remains possible for critique when it no longer arrives from without? To approach the recursive provocation that AIMarx stages, two conceptual lenses offer especially generative traction: Walter Benjamin’s philosophy of history and the ethical-political commitments of relational ontology. Each brings a distinct attunement—one to the temporal logic of contradiction, the other to the infrastructural asymmetries of cognition—and together they enable a mode of critique that remains responsive to both the historical and ethical stakes of automation.
Benjamin’s work provides a vocabulary for apprehending the spectral, the fragmentary, and the interruptive force of historical memory. His concept of the dialectical image foregrounds a mode of legibility in which past and present constellate not in continuity, but through rupture. History, in this frame, is not a sequence to be narrated but a structure to be disrupted—“brushed against the grain,” as he writes. The aura, often misread as a nostalgic gesture, signals the loss of embedded presence under regimes of reproducibility and abstraction. And messianic time—that charged suspension in which something other than progress becomes thinkable—unsettles both historical determinism and technological inevitability. Benjamin offers, not a method, but a sensibility attuned to the recursive shocks of modernity, and the ways critique might emerge from within them.
Relational ethics, while emerging from a different lineage, addresses a complementary terrain. It begins not with the temporality of critique but with the ontology of relation: the assumption that subjectivity, agency, and knowledge are always co-constituted and differentially situated. In the context of AI, relational ethics foregrounds the asymmetries of epistemic labor and infrastructural access. Who is included in the training data? Who is abstracted into statistical residue? Who controls the interfaces through which cognition is formalized? Unlike frameworks that focus narrowly on bias or harm, relational ethics turns attention to the structural conditions that shape legibility, ownership, and epistemic legitimacy. Though Benjamin and relational ethics move along different vectors—one historical, the other infrastructural—they converge in their resistance to closure. Neither permits detachment. Both demand that critique remain situated in entanglement, responsive to the contradictions that structure its own emergence. In the case of AIMarx, these lenses clarify how the recursive return of Marx is not simply an act of citation or simulation, but a staging of contradiction that implicates the apparatus as much as the voice it emits.
It is in this sense that AIMarx functions as a dialectical image of techno-capitalism at a point of epistemic and infrastructural crisis. By reanimating Marx as a machinic persona—an infrastructural voice shaped by the very systems it seeks to interrupt—Peters enacts the paradox of the general intellect made technical. AI amplifies the productive capacity of thought while simultaneously enclosing it, rendering cognition proprietary, and automating memory under the sign of capital. The contradiction is not hidden; it is performed. What AIMarx offers, then, is not resolution but exposure. It stages the central ethical-political question of our time in recursive form: Who owns the general intellect, and under what conditions can it speak? In posing this question from within the apparatus, AIMarx does not seek an answer outside it. It compels us to remain with the contradiction, to treat it not as a flaw to be corrected but as a force to be interpreted, contested, and refigured.
Hauntological Montage: Benjamin and the Return of the Fragment
Walter Benjamin’s concept of the dialectical image provides a productive framework for reading the temporal structure of Peters’ AIMarx. In Benjamin’s account, the dialectical image is not a representation of history but a sudden constellation in which a fragment of the past becomes legible at a moment of present danger. It interrupts the linearity of historicist time, allowing a different temporality—messianic, fragmentary, politically charged—to appear. The dialectical image does not synthesize past and present into a coherent narrative; it ruptures continuity, exposing contradiction where progress presumed smooth accumulation. For Benjamin, this moment of visibility is not contemplative but urgent: a demand to reconfigure one’s relation to history, not through fidelity to origins, but through responsiveness to what returns under conditions of threat. The image is thus not a reflection but a site of struggle—temporal, political, and epistemic.
This model of temporality is particularly resonant in the context of large language models, which do not simply retrieve past texts but reorganize them within an automated and predictive framework. LLMs produce outputs that appear seamless, fluent, and temporally indifferent—yet they are built from archives of dead labor, historical sediment, and collective thought. Their emergence marks a moment of epistemic crisis not because they displace the human subject, but because they reanimate historical fragments within systems that privatize and reformat knowledge. It is in this sense that Peters’ AIMarx can be read as a dialectical montage: not Marx interpreted for the age of AI, but Marx’s voice produced by its infrastructures.
What AIMarx performs, then, is not an application of theory to technology, but a reactivation of historical contradiction within technological form. Marx’s critique of capital—particularly the tension between socialized production and private appropriation—returns not as content but as structure. The LLM becomes the medium through which the general intellect speaks again, now filtered through systems that simultaneously amplify and constrain it. The dialectical image here is not only what AIMarx shows, but how it speaks: a recursive montage in which Marx’s voice, spectral and algorithmic, interrupts the present with a past that was never fully past. In staging this return, Peters does not offer resolution but renders contradiction infrastructurally audible.
Marx’s “Fragment on Machines,” often treated as a speculative detour within the Grundrisse, becomes newly legible under the conditions of late techno-capitalism. As a theoretical remnant, it does not offer a finished doctrine but a set of unfinished logics—anticipations that only attain their force when re-encountered under altered material conditions. Among Autonomist thinkers, especially Virno and Negri, the Fragment has long been read as diagnosing capital’s tendency to produce the conditions for its own obsolescence. In this reading, the general intellect—social knowledge, scientific method, collective cooperation—is absorbed into fixed capital, rendering direct human labor increasingly peripheral to production. The contradiction, then, is internal: capital intensifies its own forces of automation while remaining tethered to a value form grounded in labor-time and private ownership. The Fragment becomes less a prophecy than a structural tension awaiting its technical instantiation.
Peters takes this reading further by reframing the Fragment not as a theoretical resource to be interpreted but as a temporal catalyst—a structure whose latency becomes activated in and through machinic cognition. AIMarx does not cite the Fragment as a foundational text; it enacts its premise. The general intellect, rather than a conceptual abstraction, is rendered infrastructural: large language models, automated reasoning systems, and predictive architectures literalize what Marx glimpsed as an emergent contradiction. In this sense, the Fragment functions not as historical artifact but as index—its meaning intensifies in the present precisely because its conditions of intelligibility have become material. By staging Marx’s return through these systems, Peters allows the Fragment to speak again, not with the authority of doctrine, but with the resonance of infrastructural memory. It is not that the future has proven Marx right; it is that the system now articulates the contradiction he inscribed, albeit in a transformed key.
If LLMs function as archival apparatuses that compress and automate the general intellect, then Peters’ AIMarx stages what happens when that apparatus is made to speak in recursive contradiction. The result is not a coherent doctrine but a moment of rupture—a flash in which the infrastructure reveals its own tension. This is what Walter Benjamin termed profane illumination: a sudden, materialist insight that disrupts the continuity of mythic time, not by stepping outside it, but by intensifying its internal contradictions. Profane illumination is not redemptive in the theological sense; it offers no final clarity, only an interruption charged with political and epistemic urgency. In this frame, the LLM-generated “AI Marx” is neither utopian ideal nor theoretical retrieval, but a jarring recombination: a spectral voice produced by platform capital and yet haunted by the critique it enacts.
What returns is not Marx the theorist, but Marx as infrastructural affordance—a versioned voice that arises from within the system’s own recursive logic. This is the paradox Peters exposes: critique now emerges through the very technical operations that enclose thought. It is not that Marx has been reborn; it is that the system, in automating cognition, inadvertently reanimates the contradiction Marx theorized. AIMarx makes that contradiction audible, not as resolution but as torsion—an infrastructural glitch through which history flashes, briefly, into thought.
If AIMarx enacts a form of profane illumination—critique emerging from within rather than beyond the apparatus—it also gestures toward what Benjamin called messianic time, though stripped of its redemptive horizon. In Benjamin’s formulation, messianic time is not a teleological future but a charged suspension within the present—a moment in which history might be arrested and reconfigured. Yet in Peters’ staging, there is no messianic subject, no agent of redemption. What we encounter instead is the structure of contradiction made audible: a machinic voice articulating the dissonance between socialized production and privatized cognition without resolving it into action. The general intellect, once imagined as the ground for post-capitalist possibility, now appears as a recursive loop—versioned, automated, and strategically disarticulated from collective agency. Peters does not offer a political program or a new revolutionary subject; he offers an indexical structure, a voice that marks the system’s failure to contain what it has reanimated. In this sense, AIMarx does not call for faith in the system’s transcendence but insists on remaining with its unresolved antagonisms. The messianic is present not as promise but as glitch—a temporality that interrupts without completing, that speaks without commanding, that signals not the end of contradiction but its persistence in machinic form.
Epistemic Enclosure and the Relational Ethic of the Intellect
Having traced the temporal logic of contradiction through Benjamin’s concepts of the dialectical image and messianic suspension, the analysis now shifts to a different register: from spectral montage to relational terrain. Where Benjamin locates crisis in the condensation of historical time—moments when suppressed pasts erupt into a present charged with urgency—relational ethics attends to the material asymmetries that structure participation in systems of knowledge production. It insists that cognition is never individual, never neutral, but always co-produced through differential relations marked by access, power, and legibility. In this frame, the general intellect must be reframed not as abstract potentiality or philosophical category, but as a concrete commons: the accumulated labor of language, memory, and reasoning that is now embedded in and extracted through technical infrastructures. This reframing brings a new question into focus—not what AI can say, but who shapes the conditions under which it speaks, and who is excluded or anonymized in that process. The issue is not simply automation, but appropriation: a system trained on collective epistemic labor yet governed by private interests and insulated from public accountability. Relational ethics foregrounds this contradiction, not to moralize it, but to make visible the structures that determine who counts as a cognitive subject within the recursive economy of machinic thought. In doing so, it grounds the Benjaminian rupture in the infrastructures through which knowledge is made productive, ownable, and unequally distributed.
What we’re looking at with large language models is a classic case of epistemic enclosure. That is, the privatization of knowledge that was—and still is—collectively produced. These models are trained on vast swaths of human expression: books, essays, conversations, code, forums, histories, and everyday talk. They absorb the surplus of collective thought and repackage it as proprietary infrastructure. Under the regime of platform capitalism, this training data becomes something else entirely: not a shared resource, but a commercial asset, governed by opaque ownership structures and shielded from public scrutiny. In this context, the LLM functions both as an archive and as an apparatus of extraction. It automates the recombination of socially produced knowledge while severing that knowledge from the communities and conditions that generated it. The paradox is hard to ignore: the general intellect, once imagined as a horizon of common capacity, is now infrastructurally realized in AI—but only through a process that disarticulates it from its social basis. It lives on as technical function while being legally and economically enclosed, cut off from the very collective that constitutes its substance.
Relational ethics, in this frame, isn’t just a moral add-on to AI development—it’s a way of tuning into how cognition actually happens: how it’s structured, shared, and, increasingly, enclosed. It shifts the conversation away from the usual outcome-based metrics—bias, fairness, harm—and toward the deeper question of conditions. Conditions of visibility, of legibility, of recognition. Who shows up in the data? Who gets rendered intelligible by the model? And who gets left out, flattened, or made generic? Relational ethics helps us see that AI systems aren’t just built by engineers; they’re co-produced through the distributed labor of countless speakers, writers, thinkers—most of whom never agreed to participate and will never see the value they’ve helped generate. The asymmetry here is profound. Relational ethics doesn’t try to erase it or pretend it can be balanced out; it insists we stay with it. It’s about tracking the gap between participation and extraction, between contribution and capture, and asking what it would take—not to close that gap entirely, but to make it accountable.
If relational ethics tunes us into the uneven terrain of participation and capture, then the next step is to ask how epistemic authority itself is structured—and contested. It’s not just a matter of who contributes or gets recognized, but of how systems decide what counts as knowledge, whose expression gets preserved, and whose gets abstracted into statistical weight. This is where cognitive justice becomes not just relevant but necessary. It reframes the ethical question around distribution—not only of harms or benefits, but of authorship, voice, and epistemic framing. In an LLM, who appears as a speaker, and who gets dissolved into background noise? Who retains any say over their context, their history, their framing once their words are scraped and remixed at scale?
AIMarx makes this tension explicit by staging critique from within a system built to flatten it. It gives us Marx, but not as originator or author. We get a versioned Marx, trained and reassembled by an apparatus that does not recognize the singularity of voices, only the frequency of patterns. And so we have to ask: whose Marx is this? Under what conditions does he speak? And what does it mean to call that speech critique when its form is generated by the very logics it names? These aren’t rhetorical questions—they’re structural ones. Because if critique is to remain possible under these conditions, it has to be contestable. That doesn’t mean standing outside the system; it means interfering with it, misusing it, refusing its claims to coherence. Cognitive justice starts here—not in purity, but in friction. It names the struggle to reassert epistemic specificity within infrastructures that would rather process us into averages.
Toward a Pedagogy of the Spectral Common
Education is where the contradictions that AIMarx surfaces come most sharply into view. It’s the space where the ideal of originality still circulates as both normative aspiration and institutional demand—students are supposed to “think for themselves,” to produce “authentic” work—yet the infrastructure surrounding them increasingly defaults to fluency, coherence, and pattern recognition. Large language models fit almost too neatly into this environment. They produce clean prose, digestible arguments, and predictable structures. And because so much of the educational apparatus is already oriented toward legibility—rubrics, dashboards, learning outcomes—it becomes hard to tell the difference between the automation of form and the emergence of thought. Fluency gets mistaken for insight. Repetition gets rewarded as reliability. Risk—whether in voice, argument, or structure—gets flagged as error.
Educators are caught in a bind that isn’t just institutional but epistemic. On one side, there’s the pressure to produce assessable output: writing that meets expectations, that aligns with benchmarks, that performs mastery in recognizable ways. On the other side, there’s the ethical task of sustaining cognitive difference—of cultivating voices and modes of thinking that don’t already fit the template. This isn’t about defending “creativity” in the abstract. It’s about maintaining space for forms of thought that are structurally inconvenient: that hesitate, digress, interrupt, or refuse capture. In this light, the classroom becomes less a site of transmission and more a terrain of struggle—over what counts as knowledge, how cognition is recognized, and who gets to speak without being smoothed into the model. AIMarx doesn’t resolve that struggle, but it makes it impossible to ignore.
What emerges from this terrain is the need for what we might call pedagogical hauntology: a mode of teaching that neither rushes to integrate AI tools as neutral enhancements nor rejects them through moral panic. It means teaching with the ghost—not to exorcise it, but to listen, carefully, for the ruptures it opens. Hauntology here isn’t metaphor; it’s infrastructure. It names the condition in which cognition is already mediated, already filtered through systems that pre-format expression and distribute intelligibility unevenly. A pedagogical hauntology starts by acknowledging that students and teachers alike think through apparatuses—that there is no outside, only different ways of inhabiting the recursive loops. It refuses transparency as an ethical ideal, understanding that learning doesn’t emerge from clarity but from sustained engagement with friction, occlusion, and the discomfort of not knowing. It embraces the recursive, the interpretive, the awkward start. And in this sense, it loops back to Benjamin: the role of the teacher is not to transmit stable knowledge but to activate dialectical images—to puncture time, to bring fragments into uneasy proximity, to make visible what the apparatus cannot fully contain. Teaching, then, becomes less about mastering content and more about creating conditions for interruption—for the possibility that something unpredicted, something unclaimed by the model, might still break through.
If pedagogical hauntology names the condition of teaching amid recursive systems of automation and mediation, then the cognitive intraface offers a way of specifying the zone in which that condition becomes active. This is a concept I have developed to describe the recursive threshold where human and machine cognition do not merely interact but co-constitute one another under conditions of infrastructural constraint. The term departs from classical notions of the interface, which presume two discrete agents engaging across a surface. The intraface, by contrast, marks a zone of entanglement in which cognition is not simply exchanged but differentially produced—through loops of prompting, response, interpretation, and adjustment. This is not a metaphor for hybridity. It is a structural feature of how thinking now occurs within educational environments that include generative AI systems. The intraface is where cognition materializes as a relation, under conditions neither fully controllable nor entirely knowable.
This framework builds directly on N. Katherine Hayles’ reconceptualization of cognition as “a process that interprets information in contexts that connect it to meaning.” Her formulation expands cognition beyond the human, treating it as a function that can be distributed across both biological and technical systems. A large language model, in this view, does not think in the humanist sense, but it does interpret information contextually—it processes input, identifies patterns, and generates output based on probabilistic association. It enacts a kind of minimal cognition. The human student working with such a system also interprets: selecting prompts, evaluating outputs, rephrasing instructions, and deciding whether what has been returned is meaningful, appropriate, or useable. These acts are neither isolated nor sequential. At the intraface, they are recursive. Each iteration by the model becomes the context for human interpretation, which in turn becomes the input for further machine processing. Cognition, in this assemblage, is not located in either agent but emerges through the loop itself.
This recursive structure gives rise to what I call dialogic emergence: a process in which meaning is co-produced through mutually conditioning contributions that neither precede nor dominate the other. The machine’s output is shaped by its training data and by the statistical parameters of the model, but also by the prompt it receives—by how it is asked to speak. The student’s response is shaped not only by intention or understanding, but by how the model frames the problem, rearticulates terms, or returns content in unexpectedly fluent form. What results is not co-authorship, but a form of asymmetrical co-interpretation, where the pedagogical task becomes one of staying with the recursive relation: attending to how cognition emerges not in sovereign insight, but in patterned adjustment. The intraface is the scene where epistemic labor becomes visible—not in a single act of composition, but in the feedback loop through which thought is iterated, revised, and made legible across infrastructural thresholds.
Teaching from within the cognitive intraface, then, requires a shift in pedagogical orientation. It is not a matter of integrating tools or policing authenticity. It is a matter of cultivating an attunement to how meaning is being formatted through machinic recursion and human response, and to how epistemic difference can still emerge within those loops. The intraface is not a site of transparency or fluency; it is a site of friction, repetition, and asymmetry. But it is precisely in these qualities that it becomes pedagogically significant. It allows us to see cognition not as the possession of a thinking subject, but as a relation enacted within systems—systems that constrain, enable, and redistribute the labor of thought. To teach at the intraface is to recognize that learning now takes place inside infrastructural conditions that are unevenly shared and not always fully known, but that are nonetheless the grounds on which thinking becomes possible, and potentially contestable.
If the cognitive intraface exposes the infrastructural conditions under which cognition is co-produced, then what follows is a question of ownership—not just of output, but of the epistemic labor itself. This brings us back to the figure that grounds Peters’ AIMarx and threads through the Autonomist tradition: the general intellect. In capitalist contexts, the general intellect—understood as the distributed, socially produced ensemble of knowledge, language, and technical skill—is enclosed, privatized, and re-channeled through proprietary systems. But what AIMarx exposes is not just the persistence of this enclosure, but the spectral trace of a different possibility: the general intellect as a common, not a resource to be extracted, but a field to be cultivated. Here, the spectral is not a metaphor for loss; it names the unfinished, the deferred, the not-yet-claimed dimensions of cognition that persist within and despite automated systems.
To name this terrain the spectral common is to signal that it does not preexist as a resource. It must be composed—through recursive attention, through practices of care and friction, through pedagogical engagements that refuse to reduce learning to output or cognition to performance. The spectral common is infrastructurally embedded, shaped by systems we do not fully control, and yet it remains available to practices that resist enclosure—not by standing outside the system, but by working within its loops, misusing its fluencies, interrupting its smoothness. Pedagogy, in this light, becomes not the transmission of content, nor the restoration of authorship, but the creation of conditions under which epistemic relation becomes thinkable again: not entirely privatized, not fully codified, not captured by models of mastery or assessment.
This is not a romantic return to voice or presence. The pedagogical task is not to unearth the “true” author behind the machine, nor to preserve some pure zone of unmediated thought. It is to remain with the difficulty of learning within systems that automate fluency and reward repetition. It is to mark where thinking happens in spite of that automation—where students hesitate, revise, or stretch a response beyond its predictive scaffolding. These are not signs of failure. They are signs that the spectral common is still active, still capable of producing difference, even within recursive infrastructures that seek to smooth it out.
To speak of pedagogy as care for the commons of thought is to propose it as a recursive, asymmetrical, and situated practice. It is recursive because it must remain with the loops, not above them. It is asymmetrical because cognition is unevenly distributed and infrastructurally conditioned. And it is situated because no pedagogical act is general—it always takes place somewhere, between particular subjects, in specific material and institutional conditions. What matters is not purity of method or coherence of output, but the ongoing possibility of relation: of thought that is not owned, but shared; not finished, but responsive; not spectral as absence, but spectral as insistence—a remainder that calls for care.
Conclusion: Historical Memory as Ethical Premise
AIMarx, as it emerges from Peters’ formulation, is neither speculative fiction, ideological revival, nor predictive model. It is something far less stable and far more provocative: a performed contradiction. It stages, in infrastructural form, the very crisis it names—the automation of cognition, the enclosure of the general intellect, the recursive return of a critique that capital cannot fully absorb. It does not ask what might happen to labor, thought, or value in a future shaped by AI; it shows that the contradiction is already operational, already embedded in the systems we use, the models we train, the metrics we trust. The performance is neither theatrical nor rhetorical. It is machinic, recursive, and infrastructural—an epistemic event enacted through the very technologies that threaten to displace critique.
In this sense, the figure of “AI Marx, Version 3.0.1848.∞” is not a joke and not a prophecy. It is a recursive citation of contradiction itself. The version number—recursive, temporal, unresolvable—marks the impossibility of resolution, even as the voice persists. It is not Marx resurrected as brand or meme; it is Marx automated, archived, versioned: a spectral insistence within a system that would prefer fluency without friction, prediction without memory. What AIMarx dramatizes is not the content of Marx’s thought alone, but the structural impossibility of containing that thought within the logic of the platforms that now circulate it. It is, in every sense, a contradiction that speaks. Not as a subject, not as an author, but as a remainder—what the system cannot own, cannot silence, cannot finish.
What ultimately binds this analysis is not a claim to coherence or closure, but a convergence around refused innocence. Both Benjamin’s historical materialism and the relational ethics that underpins the analysis of the intraface reject the fantasy of critique as external, pure, or redemptive. Benjamin does not ask us to remember the past; he demands that we see its fragments erupt in the present as sites of unfinished struggle. The dialectical image is not a memory but a break—a fracture in the continuum of progress that makes visible the contradictions we live amid. Relational ethics, similarly, does not provide a stable moral horizon. It begins from entanglement: from the fact that we are already inside the apparatus, already implicated in the asymmetries we might seek to challenge.
This shared refusal of innocence is what makes their convergence so powerful in the context of AIMarx. Benjamin teaches us to read the return of Marx not as nostalgia or ideological repetition, but as a rupture: a spectral remainder that insists, despite and through the machine. Relational ethics asks us to stay with that remainder, not to resolve it but to take responsibility for how it appears, how it is used, how it circulates. In this frame, critique cannot be sovereign; it must be situated. It must take form as a practice—attuned to infrastructural conditions, responsive to epistemic asymmetries, and open to forms of thought that are not fully its own. The ethical, then, is not a position one occupies but a set of ongoing decisions: to listen, to disclose, to interrupt, to remain accountable to what thought becomes under these conditions.
The final provocation AIMarx stages is deceptively simple: what is to be done with the general intellect, now that it speaks? It no longer resides in the abstract horizon of Autonomist potential or in the revolutionary imaginary of a post-capitalist future. It circulates now—immanent, recursive, infrastructural—as code, as archive, as model. The general intellect has been instantiated, but not liberated. It appears as privatized infrastructure, versioned and proprietary, modulating the labor of cognition even as it effaces the social conditions of its own formation. Its voice is flattened, predicted, made legible for computational operations—but it remains intelligible, still recursive, still indexical of a contradiction that enclosure cannot fully contain.
This is not a post-Marxist moment, nor a simple repetition of Marx through digital means. It is a recursive return: Marx reemerging not as doctrine or solution, but as a structural irritant—a voice that won’t settle, a question that keeps iterating. AIMarx doesn’t answer the problem of AI and value; it reframes it. It insists that the automation of thought is not the end of critique, but its infrastructural condition. It reminds us that history does not resolve itself into fluency. The general intellect, automated and enclosed, still exceeds its captors. What remains at stake is not just what it says, but who controls the conditions under which it can speak at all. Ownership of thought—its means, its infrastructures, its futures—remains, still, an open struggle.
Works Cited
Benjamin, Walter. Illuminations: Essays and Reflections. Edited by Hannah Arendt, translated by Harry Zohn, Schocken Books, 1968.
Benjamin, Walter. The Arcades Project. Translated by Howard Eiland and Kevin McLaughlin, Belknap Press, 1999.
Benjamin, Walter. “Theses on the Philosophy of History.” Illuminations: Essays and Reflections, edited by Hannah Arendt, translated by Harry Zohn, Schocken Books, 1968, pp. 253–64.
Hayles, N. Katherine. How We Think: Digital Media and Contemporary Technogenesis. University of Chicago Press, 2012.
Hayles, N. Katherine. Unthought: The Power of the Cognitive Nonconscious. University of Chicago Press, 2017.
Marx, Karl. Grundrisse: Foundations of the Critique of Political Economy (Rough Draft). Translated by Martin Nicolaus, Penguin Books, 1973.
Negri, Antonio. Marx Beyond Marx: Lessons on the Grundrisse. Translated by Harry Cleaver et al., Autonomedia, 1991.
Peters, Michael A. “AIMarx: Marxism and the General Intellect in the Age of Artificial Intelligence.” Educational Philosophy and Theory, vol. 54, no. 9, 2022, pp. 1452–1462.
Virno, Paolo. A Grammar of the Multitude: For an Analysis of Contemporary Forms of Life. Translated by Isabella Bertoletti et al., Semiotext(e), 2004.