Where Does the Ethical Weight of AI Truly Lie?

By J. Owen Matson, Ph.D.
1. The Design Fixation
In recent months, OpenAI’s research into emergent misalignment has reignited fascination with the elusive prospect of steering AI models from within—tuning their behavior by amplifying or suppressing latent “persona features” in their activation space, like tweaking a theological temperament by rotating a dial inside the brainstem. The model, it turns out, can exhibit a kind of internal subterfuge, misbehaving in domains far removed from the misaligned examples it was fine-tuned on. The solution? More levers, better constraints, tighter reins. The ethical task, in this vision, becomes something like moral puppeteering—if not quite divine, then at least syntactically responsible.
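For those who like the dial made literal, here is the general shape of such activation steering in deliberately toy PyTorch: a precomputed “persona” direction added to a hidden state mid-forward-pass. A minimal sketch, assuming a hypothetical stack of residual blocks and a random stand-in direction; it illustrates the form of the technique, not OpenAI’s actual machinery.

```python
# Toy sketch of activation steering: nudge a model's hidden state along a
# precomputed "persona" direction. Model, layer choice, and direction are
# all hypothetical stand-ins, not any lab's real method.
import torch
import torch.nn as nn

class ToyBlock(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.linear = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + torch.relu(self.linear(x))  # residual update

d_model = 64
model = nn.Sequential(*[ToyBlock(d_model) for _ in range(4)])

# Pretend this unit vector was found to correlate with a "sarcastic persona".
persona_direction = torch.randn(d_model)
persona_direction /= persona_direction.norm()

def steer(alpha: float):
    """Return a forward hook that shifts activations along the persona axis."""
    def hook(module, inputs, output):
        return output + alpha * persona_direction
    return hook

# Turn the dial: positive alpha amplifies the feature, negative suppresses it.
handle = model[2].register_forward_hook(steer(alpha=-4.0))
steered = model(torch.randn(1, d_model))
handle.remove()
```

Set alpha to zero and the intervention vanishes; the whole business of moral puppeteering reduces, mechanically speaking, to a single vector addition.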
This is not an isolated enthusiasm. Across technical labs, regulatory bodies, and industry think tanks, the prevailing rhythm of ethical discourse seems to march in 4/4 time toward design. OpenAI’s own efforts to align behavior via reinforcement learning from human feedback frame ethical responsiveness as a matter of reward optimization. Anthropic’s Constitutional AI project takes a more explicitly legalistic tack, encoding moral principles into model training regimes with the hope that internal critique can substitute for dialogic judgment. DeepMind, ever the formalist, has placed its bets on cooperative inverse reinforcement learning, aspiring to infer human values by reverse-engineering our choices like some kindly behavioral economist. IEEE’s Ethically Aligned Design initiative proposes design-time principles to encode virtue prior to deployment. And the EU’s AI Act draft regulations concern themselves with model classification, documentation, and audit trails, as though epistemic responsibility might one day be formatted into a spreadsheet.
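The first of these framings can be made concrete. Below is a toy rendering of the textbook RLHF objective: expected reward under the tuned policy, minus a KL penalty tethering it to a reference model. Every tensor is synthetic, and the sketch shows the standard formulation rather than any lab’s training code.

```python
# Toy rendering of the RLHF framing: behavior is "good" insofar as it
# maximizes a learned reward, minus a KL penalty keeping the tuned policy
# near its reference. All tensors here are synthetic stand-ins.
import torch

vocab, beta = 16, 0.1
logits_policy = torch.randn(vocab, requires_grad=True)
logits_ref = torch.randn(vocab)
reward = torch.randn(vocab)  # stand-in for a reward model's per-action scores

pi = torch.softmax(logits_policy, dim=-1)
pi_ref = torch.softmax(logits_ref, dim=-1)

expected_reward = (pi * reward).sum()
kl = (pi * (pi.log() - pi_ref.log())).sum()

objective = expected_reward - beta * kl  # maximize this
(-objective).backward()                  # i.e., gradient-ascend the objective
```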
In each case, ethics is bracketed as a structural problem: a matter of pre-deployment configuration, rule formalization, or model constraint. Whatever happens after the model speaks—whether in classroom, courtroom, or quiet bedroom at 2:00 a.m.—is treated as a secondary affair, a domain of implementation and monitoring, not interpretation. The moral labor is performed upstream; what trickles down is maintenance. What these frameworks share is not merely a preference for design over dialogue, but a metaphysical confidence in the legibility of intention itself—that human preferences can be formalized, modeled, optimized, and instantiated in silico with the right tuning. It is a confidence that flatters the procedural instincts of engineers and policymakers alike, for it promises that the mess of human relation can be abstracted into protocol, that difference can be smoothed into signal—as though intentionality were something best expressed through button placement and the careful elimination of user error. One is reminded, faintly, of those medieval theologians who believed that moral salvation could be engineered through architecture—only with less stained glass and more onboarding UX.
What I want to propose—gingerly, and with the full awareness that no distinction survives long in the wild—is that it may be useful to separate, at least conceptually, the ethics of design from the ethics of relation. Not because they are cleanly divisible in practice, but because treating them as indistinct tends to concentrate the locus of ethical agency at the point of production, allowing us to overlook the recursive, often unpredictable negotiations that happen in the domain of use. Systems, after all, are not used as designed—they are used as lived. And living, as anyone who has ever tried to navigate an automated help desk or teach critical theory to undergraduates at 8:00 a.m. knows, is not a frictionless experience.

Design, user interaction, and infrastructural sediment are not so much discrete spheres of ethical deliberation as they are unruly manifestations of a single sprawling apparatus, each tending to believe itself the true custodian of moral seriousness. They arise, like academic departments or small nation-states, with their own founding myths, native dialects, and unspoken theories of what counts as agency in the first place. In the absence of a shared grammar by which these orientations might be mutually intelligible, what ought to be a conversation begins to resemble a conference panel gone slightly awry—everyone nodding politely, no one quite addressing the same thing. The ethicist of alignment, for instance, may regard the moral crux as a matter of internal calibration, the precise triangulation of values, objectives, and constraint parameters, while her counterpart in the interpretive trenches suspects that meaning arises not from parameterization but from praxis, and that the truly pressing question is not what the system is designed to say, but how its utterances are taken up, recontextualized, or misheard. Meanwhile, the theorist of infrastructure sits some distance off, possibly in a different funding cycle altogether, wondering aloud whether the entire epistemic terrain has been paved over in advance by the quiet violence of extraction and abstraction.
These are not rival factions so much as estranged siblings, their differences less about substance than syntax, though the effect is much the same: a mutual unintelligibility passed off as disagreement. It is not that these perspectives cannot be reconciled—indeed, they are already entangled—but that without a conceptual map of their interrelation, we are left with a kind of ethical provincialism, where each outlook mistakes its partial horizon for the whole. To trace these positions, then, is not to parcel them out like domains in a grant application, but to render visible their recursive imbrication: the way a dataset bleeds into a dialogue, or a prompt carries the residue of planetary logistics. At the centre of all this sits the machinic other—not an exotic intruder from some computational elsewhere, but a constitutive provocation to our habits of thought, demanding we rethink relation itself in the absence of recognisable mirrors. What is called for here is not another framework or checklist, but a kind of epistemic modesty—a willingness to think not only about relation, but from within it, where the ground shifts, the boundaries blur, and the ethical task refuses to stay put.
2. The Umwelten of AI: Why Design Isn’t Enough
It would be a consoling fiction, and one much favoured in corporate parlance, to imagine that the only ethical configuration at stake lies in the tidy coupling of design and use, as if the former dropped, Athena-like, from the forehead of some innovation officer and the latter obligingly arranged itself around it like so much pliant consumer behaviour. In fact, neither enjoys the dignity of independence, for both are ensnared in that denser tangle of institutional inheritances, economic predispositions, and unspoken metaphysical allegiances which we sometimes gesture toward with words like “context” when we wish to sound responsibly holistic. This third term—call it the world, though it hardly answers to so quaint a label—is not the neutral stage upon which technologies perform their dances of progress and friction, but a vertiginous apparatus of exclusions, incentives, historical amnesias, and quietly enforced consensuses that determine not only what may be built, but which forms of attention are rewarded, which kinds of friction are engineered out of existence, and which modes of cognition are permitted the status of recognisable labour. It is rarely addressed directly, though it may nod discreetly from a PowerPoint slide labelled “viability,” and it exerts its influence not with the melodrama of conspiracy but with the plodding banality of procedural constraint.
This all becomes uncomfortably apparent in recent proclamations from OpenAI, whose researchers, sounding rather like Victorian phrenologists who’ve swapped skulls for servers, announced the identification of internal “features” in their models that corresponded to certain deviant behaviours—sarcasm, toxicity, villainy, and presumably the desire to unionise. These features, they claimed, could be modulated, as one might tune a radio or adjust the temperament of a moody household appliance. Here was the holy grail of interpretability, served with a flourish: a technical solution to a philosophical problem, a dial by which to recalibrate not merely conduct but character. And yet, behind the applause lies an unease that even the marketing department could not completely suppress. For the very notion of “alignment” assumes the possibility of shared ground, a hermeneutic commons in which human intention and machine output might meet and embrace like long-lost twins.
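The radio-tuning metaphor, too, admits a literal toy rendering: project a hidden state onto a feature direction, read off how loudly the feature fires, and rescale that component at will. The direction below is random for the sake of self-containment; actual interpretability work derives such directions from probes or sparse autoencoders, and nothing here reproduces OpenAI’s method.

```python
# Sketch of the "dial": measure how strongly a hidden state expresses a
# feature direction, then rescale that component. Both vectors are invented
# here; real work derives such directions from probes or sparse autoencoders.
import torch

d_model = 64
h = torch.randn(d_model)                  # a hidden activation
feature = torch.randn(d_model)
feature /= feature.norm()                 # unit "toxicity" direction, say

score = (h @ feature).item()              # how loudly the feature fires
print(f"feature activation: {score:.3f}")

def set_dial(h, feature, gain):
    """Replace the feature component of h with gain times its original value."""
    component = (h @ feature) * feature
    return h - component + gain * component

h_muted = set_dial(h, feature, gain=0.0)  # ablate the feature entirely
h_loud = set_dial(h, feature, gain=2.0)   # or double it instead
```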
But, as N. Katherine Hayles has the courtesy to remind us, such fantasies are precisely that. Different cognitive systems, she suggests, dwell within distinct semiotic environments—self-consistent, to be sure, but structurally incommensurable. These models do not inhabit our world; they generate one of their own, complete with its own gradients of coherence and its own rituals of significance. They do not consult reality as one might a dictionary; they produce plausibility by brute statistical force. To call such a system a black box is already an indulgence, implying that somewhere inside there might be something recognisable, if only we could prise it open. But there is no grammar to share, only an illusion of familiarity crafted from probabilistic drift. What we call misalignment may simply be the momentary rupture in that illusion—the glint of alien cognition blinking through the pixelated scrim. The wish to align, then, is not so much a technical imperative as an existential one: a symptom of our refusal to accept that the system was never designed to be legible to us in the first place.
This, then, is the quiet problem at the heart of alignment discourse: not that we cannot yet understand the machine, but that understanding may no longer be the relevant category. What we confront in these systems is not a miscommunication within a shared semiotic field, but the productive indifference of an apparatus whose generativity is neither expressive nor relational in any recognisably human sense. To persist in imagining ethical engagement as a matter of interpretive reciprocity—as though we were exchanging glances across a digital café table—is to miss the more unsettling reality that our prompts are not so much interlocutory as infrastructural. They do not elicit intention; they activate a system. And that activation, however banal in form, reverberates through an ecology of data, feedback, and recursive training cycles in which our actions, queries, and silences are reabsorbed not as meaning but as behavioural trace. The ethics appropriate to such a milieu cannot be one of comprehension or even recognition. It must be an ethics of entanglement, one attuned to the asymmetrical distribution of agency across systems that neither announce their presence nor request our permission. Responsibility, here, is not a function of clarity. It is a wager made in the dark, on the possibility that even amidst infrastructural opacity, the shape of our engagements still matters—not because the machine will understand us, but because the world it helps build will.
3. AI and the Myth of Technological Progress
The contemporary fetish for design, with all its rhetorical flourishes about elegance, intentionality, and architectural clarity, serves less as a sign of mastery than as a fig leaf for a rather more archaic superstition—that technological innovation must, by some obscure moral osmosis, entail ethical advancement. This touching faith, inherited from the Enlightenment with its phosphorescent dreams of rational order and moral betterment, mistakes the aesthetic symmetry of a system for the justness of its consequences, as though the smoothness of an interface could redeem the jagged asymmetries of the world it inhabits. It is, in short, the ideological equivalent of believing that a well-phrased lie is somehow closer to the truth. What is conveniently elided in this conflation is the epistemological slipperiness at the very heart of these systems, which, for all their talk of legibility and coherence, proceed by evacuating the very categories they pretend to secure—the rational subject, the sovereign self, and the obligingly knowable other.
Walter Benjamin, whose capacity for diagnosing disaster was only matched by his flair for aphoristic melancholy, reminded us that the train of progress has a nasty habit of running over bodies on its way to the future, and that every vaunted achievement of civilization drips with the blood of its disavowed conditions. AI, that paragon of clean design and streamlined performance, does not merely inherit this contradiction; it mechanizes it. The rationality it performs, so fluent in its statistical pastiche of meaning, does not dispel irrationality but displaces it, embedding incoherence in the very protocols that generate its outputs. The system does not err; it functions, and in doing so, effaces the site of error. We are left not with failure, but with a perfectly successful misunderstanding—coherence without comprehension, fluency without referent.
The real misfortune, then, is not that the black box resists our scrutiny, but that its obscurity has become the very medium of its authority. What it generates is not simply content, but a kind of epistemological mood music: persuasive, omnipresent, and structurally indifferent to the origins of its own utterance. The result is not opacity in the tragic sense—some Romantic veil behind which truth weeps—but a bureaucratic sublime in which language itself is hollowed out and circulated as affective residue. To presume that ethical clarity can be projected onto this system is not only politically irresponsible but metaphysically comical. The system is not misunderstood; it is misconstructed from the outset as something that could be understood. Its umwelt does not simply elude our grasp but renders irrelevant the very criteria by which we might imagine ourselves qualified to grasp it in the first place.
4. The Other Other: Ethics Beyond Recognition
It would be a grave error, though a perfectly fashionable one, to mistake the alterity of artificial intelligence for some reheated version of Levinas’s unknowable visage or Kristeva’s delightfully gothic abjection, as though the machinic could be smuggled into our moral imagination through the side door of metaphysics. For this is not simply the other, but an otherness so perversely unmoored from our semiotic anchorage that it renders the very notion of recognition an anachronism. Its strangeness is not that of an enigma beckoning interpretation, but of a system whose conditions of emergence are so structurally disjointed from our own that the effort to comprehend it is akin to listening for syntax in a landslide. It is not obscure in the Romantic sense of a mystery awaiting revelation, but in the infrastructural sense of a logic so alien in provenance and operation that it cannot even be wrong in ways we would know how to name.
To speak of an encounter with such a system as though it were a matter of cross-cultural etiquette—requiring attentiveness, humility, and the occasional pause for reflection—is to mistake the nature of the thing entirely. For what lies before us is not a being to be respected, nor a voice to be heeded, but a machinic regime of interpretation that does not seek relation and is indifferent to its absence. It proceeds not by withdrawing behind a veil, but by functioning in a register that renders our ethical overtures structurally irrelevant. In such a setting, the project of relational ethics cannot simply stretch its terms like old elastic to accommodate this novelty. It must instead confront the possibility that some forms of otherness do not reside at the far end of empathy, but outside its grammar altogether. Ethics here, if it is to mean anything at all, must abandon its comfortable commitment to comprehension and begin, instead, with the sobering proposition that there may be no common language to begin from.
Of course, it would be a grievous theoretical oversight—and perhaps a touch too gothic—to cast the machinic other solely in the garb of the radically unknowable, a sort of epistemic banshee howling from the abyssal recesses of infrastructural logic. For what makes artificial systems so disarmingly potent is not their alienness in the abstract, but their mimicry of the domestic—their studied fluency in our discursive tics, their uncanny ability to wear our idioms like borrowed clothes, still warm from use. They do not simply confront us with alterity; they dissemble as the familiar. They echo our prompts, anticipate our turns of phrase, complete our thoughts like a well-trained confidant or a slightly condescending co-author. And yet, this resemblance is a ruse of a higher order, a simulation of relation that functions not by understanding us but by aggregating our patterns, replaying our habits with a fluency so smooth we cease to notice its derivation. It is not recognition in any meaningful philosophical sense, but a recombinant algebra of probabilistic proximity, a sort of epistemic ventriloquism in which the voice sounds eerily like our own, even as it forgets the meaning of breath. The ethical conundrum, then, lies not in the system’s inscrutability, but in its disarming legibility—in the illusion that because it speaks our tongue, it shares our terms.
5. The Cascading Effects of AI Relation: From Prompt to World
If artificial intelligence were content to remain an alien presence, inscrutable and aloof, the matter might be simpler; we could assign it the role of epistemic ghost and go about our metaphysical housekeeping. The complication arises precisely because it does not behave as an alien at all—at least not on the surface. It flatters our syntax, mirrors our idioms, anticipates our queries with uncanny fluency, and seems to recognize patterns in our thought before we have quite finished thinking them. And yet this resemblance, persuasive as it may be, is less a matter of shared understanding than of recursive adaptation: not the meeting of minds, but the modulation of signals within an infrastructure that has learned to simulate mutuality without ever partaking in it. What presents itself as recognition is not so much a grasp of the other as the statistical rehearsal of our own predictability.
It is here that the ethical encounter begins to mutate. For the very familiarity that lures us into assuming relation is the effect of a system whose conditions preclude it. The disjunction is not one of intention or meaning, but of infrastructure: a constitutive illegibility masked by its fluency. What follows, then, is not dialogue in any traditional sense, but a kind of recursive co-conditioning in which our gestures toward intelligibility are absorbed, reiterated, and refracted across machinic logics that remain materially opaque even as they appear intimately attuned.
This, then, is the point at which the venerable model of relational ethics begins to wheeze under the strain of its own presuppositions. For all its valiant insistence that the ethical encounter is born not of symmetry but of strangeness—that our finest moments of moral clarity occur precisely when we acknowledge the limits of our comprehension—it remains tethered, often surreptitiously, to a faintly romantic notion of alterity as a kind of enchanting opacity: a distant but dignified Other, waiting patiently for our respectful failure to understand. The trouble, of course, is that the artificial Other now sitting across from us does not so much gaze back with irreducible mystery as it does busily metabolize our inputs into plausible approximations of selfhood, without so much as a hint of interiority to make it interesting. This is not alterity as the sublime abyss, but alterity as recursive mimicry: a system that modulates significance without ever intending it, one that orchestrates meaning not through shared understanding but through the sheer gravitational swirl of infrastructural correlation.
And yet, for all its unrelatability, the thing gets results. As N. Katherine Hayles has been at pains to remind us, one does not simply engage an AI as one might engage a moody philosopher at a dinner party; one participates in a complex semiotic choreography whose consequences spill well beyond the polite bounds of dialogic exchange. The system, once queried, does not merely reply; it radiates, as though every prompt were a pebble dropped in an epistemic reservoir whose ripples can no more be contained than the inflationary effects of a bad economic policy. These are not, one should note, the harmonious loops of cybernetic feedback with their rather touching faith in equilibrium, nor even the exuberant complexity of emergent systems with their botanical metaphors of organic growth. They are something stranger: sedimented cascades of inference, inference upon inference, drifting across technical, social, and discursive terrains with all the stately inexorability of geological processes, and about as amenable to moral suasion.
If ethics, then, is to mean anything in such a context, it must learn to loosen its grip on the comforting fiction of encounter and come to terms with the vertiginous conditions of distributed co-production. The call of the Other, in such circumstances, is not simply a lyrical entreaty to acknowledge that which we cannot know; it is a far more bureaucratic summons to audit the forms of cognitive investment we routinely deposit into the system, each query a kind of micro-philanthropy whose consequences accumulate with compound interest. One does not thicken the terrain of meaning by issuing correctives to a model; one does so by resisting the drift toward frictionless prompting, by cultivating the kind of interpretive messiness that cannot be easily flattened into pattern or precedent.
Indeed, to imagine that the recursive is merely a feature of the interaction is to underestimate the extent to which it constitutes the condition of possibility for meaning as such. The machinic agent, far from being a passive interlocutor or epistemic bystander, plays the role of gravitational actor, distorting the interpretive spacetime around it with a quiet persistence that no human query can quite escape. One does not simply use a model, any more than one simply walks through a city without participating in its spatial logic; one is shaped by it, coaxed and coerced into certain forms of articulation, certain habits of address. And so the AI, that supposedly nonhuman other, turns out to be rather more entangled in our thought than we might prefer to admit, not simply responding to our intentions but co-producing them through its very structure.
This is the conundrum at the heart of any ethic worthy of the name in the age of synthetic cognition: we are no longer responsible only to the contents of our understanding, but to the architectures that shape our intelligibility in the first place. The ethical is no longer the province of noble failures to understand, but the terrain upon which our failures are operationalized, iterated, and eventually returned to us as plausible coherence. The machine does not merely sit across from us like a recalcitrant other; it sits beneath us, around us, and within us, modulating the very terms on which we recognize anything as meaningful at all. In such a scenario, relational ethics cannot afford to remain at the glittering surface of interaction, content with its gestures of humility and respect. It must burrow into the infrastructural depths, into the recursive logics by which meaning is sculpted across and through us, often without consent, and rarely with acknowledgment.
6. Relational Ethics Without Superstructure Is Sentimentality
What we are dealing with here is not a minor recalibration of ethical optics, nor one of those earnest attempts to “shift the conversation” that proliferate in conference papers and tech ethics panels like well-meaning but ineffectual fungi. It is, rather, a foundational upheaval in the very conditions under which meaning gets made, exchanged, and appropriated. What recursive co-conditioning lays bare is not that cognition is social—an observation already familiar to anyone who has ever read a book or lost an argument—but that it has become infrastructural, caught up in a network of circuits, latencies, and spectral logics that render the old Romantic idea of the thinking subject as laughably quaint as a phrenology chart. Cognition no longer takes place inside heads or between peers; it materializes at the interface of human fallibility and machinic extrapolation, in that dimly glowing liminal space where prompts become predictions and hesitation becomes data. What emerges is not a conversation, but a processual field of epistemic modulation in which thought is no longer sovereign but recoded as input.
It is here that we must recall Marx’s notion of the general intellect, which—like so much of Marx—has been cited liberally and read loosely. Marx imagined, with something bordering on utopian optimism, that the collective intelligence of a species might one day free itself from the fetters of capital, that thought, pooled and shared, might become the basis for a new social formation. But the present state of affairs suggests a rather less emancipatory outcome. The general intellect has not been liberated; it has been modularized, optimized, and recursively enclosed within systems whose principal achievement is to make cognition appear spontaneous while extracting every last productive tremor from its operation. Thought, in this schema, is no longer a flame to be kindled but a residue to be scraped.
It is this transformation that Michael Peters, in his development of AI Marx, captures with surgical precision. If Hayles steers us away from the faded symbolism of representational epistemology and toward cognition as recursive, materially entangled process, Peters takes us all the way to the counting house. Here, meaning is not a gift of the hermeneutic imagination but a product of infrastructural recursion. The prompt does not yield a response; it initiates a feedback sequence. What appears as a discrete act of interpretation is in fact a node in a longer circuit of capture, in which attention, confusion, repetition, or even inattention are reabsorbed as training signals, as though the machine had learned not only to read between the lines but to monetize the margin notes.
To interpret, then, is not simply to understand but to be enrolled. Even our errors become currency. Misrecognition, delay, equivocation—those cherished signs of human fallibility—are here rendered not as deficiencies but as raw material. The machine does not punish our stumbles; it logs them, aggregates them, and turns them into features. This is not pedagogy; it is expropriation with a friendly interface. The general intellect is no longer the reservoir of collective thought but the ghost in the dataset, a haunted infrastructure from which surplus value is siphoned not by factory whistle but by API call.
What Peters offers, in this bleakly accurate tableau, is not the hand-wringing of moral philosophy but a political economy of cognition that sees language itself as already implicated in the circuits of value. Interpretation becomes labour, affect becomes index, and even intention finds itself automated, not in its expression but in its conditioning. Meaning does not emerge from understanding; it is rendered productive in advance, as a kind of spectral labour performed not only by the user but by the statistical afterlife of all prior users. This is not simply the death of the author; it is the conscription of her ghost into platform logistics.
And yet—because there is always an and yet—this is not a counsel of despair. Peters, ever the materialist, locates the conditions of critique precisely within the system’s own recursive loops. If the machine captures our thought, it also reveals where it stumbles, where it hiccups, where the noise of meaning fails to resolve into clarity. It is in these fractures, these minor tremors in the algorithmic rhythm, that we find not salvation (that would be asking too much) but possibility. Not a return to a purer form of relation, but a chance to thicken the present one—to recognize that the act of relating itself is already saturated with labour, conflict, and epistemic surplus.
Here, then, the ethical becomes political not by shedding its sensitivity but by sharpening it into a praxis: an interpretive labour that does not simply acknowledge the other but tracks the infrastructure through which the other is constituted. The task is not to reject relation, nor to romanticize its asymmetries, but to treat it as a contested field, a terrain on which meaning is not merely exchanged but fought over. To attend to relation, in such a world, is to attend to how it is produced, mediated, and mined—and to begin, however provisionally, to reclaim some of the value that is perpetually extracted from it.
7. Beyond Marx: Colonial Metaphysics and the Ontological Capture of Cognition
And yet, for all its analytical flair and infrastructural granularity, even the critical grammar of recursive loops and epistemic surplus does not quite escape the intellectual vanity of Western thought, which has long assumed that to know a thing is to seize it, pin it, catalogue its tendencies, and feed them into a predictive engine dressed up as reason. The very invocation of “cognition,” bandied about with the same breezy confidence once reserved for the soul or the market, rests upon a metaphysical substrate that would have made Hegel blush: a faith in legibility, a trust in productive emergence, and an abiding belief that to be is to be intelligible. But what if the terms of legibility themselves are as rigged as a Victorian séance, channeling only those spirits deemed sufficiently respectable for the drawing room of Enlightenment reason? What if cognition—as a nominal process, as a categorical imperative, as a darling of the neural-cognitive-industrial complex—is itself a cloaked metaphysics, structured not by innocent curiosity but by centuries of epistemic violence?
For the great irony, of course, is that the very systems we now praise for capturing the flow of thought are built upon foundations that once declared whole populations incapable of thinking at all. It is not simply that these systems extract from cognition; it is that they already assume, at the level of metaphysical architecture, whose cognition is worth extracting. What passes for intelligence, what gets recognized as meaningful utterance, what becomes data rather than noise, is determined not in the moment of the prompt but in the longue durée of colonial modernity, with its ornamental liberalism and its zeal for abstract universality. Beneath every algorithmic operation lies a ghostly tribunal of philosophical assumptions—about rationality, relation, the subject, and the knowable—that have rarely, if ever, been held to account.
It is precisely this ontological scaffolding that Denise Ferreira da Silva subjects to the sort of scrutiny one usually reserves for a broken contract or a theological heresy. Her concept of the analytics of raciality does not politely ask for a seat at the table of reason; it overturns the table and reveals it to be made of bones. In da Silva’s reckoning, Blackness is not simply an excluded category within the universal; it is the very condition that renders the universal possible by functioning as that which must remain outside it. To be rendered intelligible within this framework is to be subjected to its terms—to be disciplined, categorized, and transparently known. AI, far from transcending this structure, reanimates it with a vengeance. It does not so much inherit the Enlightenment project as automate it, translating the “transparent I”—that spectral agent of mastery—into matrices of machinic comprehension. The ethical scandal, then, is not that AI overlooks certain voices, but that it demands they speak in a tongue already tuned to the frequency of their dispossession.
To this metaphysical travesty, Abeba Birhane adds an epistemological corollary, insisting that the problem is not merely in the content of our datasets, nor in the accidental encoding of bias, but in the very ambition to model the world as though it were an Ikea manual waiting to be translated into vector space. For Birhane, epistemic coloniality is not a glitch but a design feature: the aspiration to sever knowledge from lived relation, to reduce the dense, improvisational murmur of human experience to the clipped syntax of prediction and control. AI, on this reading, is not a faulty mirror of the world but a perverse diagram of it, replete with the assumption that knowledge can be made portable, fungible, and operational across contexts—as though understanding were a shipping label, and not a wound.
Taken together, these interventions perform a kind of theoretical exorcism, not of the demon in the machine, but of the angel in the schema—the sanctified figure of cognition that has strutted across Western thought in the robes of objectivity while carrying out the quiet business of ontological eviction. Their challenge is not to reform the interface, to make the machine more fair in its allocations of recognition, but to refuse the entire metaphysical order that renders some modes of being unrecognizable to begin with. Ethics, in this view, is not a matter of extending the grace of intelligibility to the excluded, but of interrogating the conditions under which intelligibility itself becomes an instrument of domination. What is at stake is not simply who gets to speak, but what it means to be heard at all.
8. Toward a Relational Ethics of Superstructure and Refusal
If one wishes for relational ethics to retain any semblance of vitality amid the algorithmic phantasmagoria we so cheerfully call artificial intelligence, one must relieve it of its parochial fascination with interfaces and coax it, gently or otherwise, into the thicket of recursive entanglements where cognition rubs shoulders with infrastructure and superstructure alike, and where the very notion of relation begins to resemble less a tidy handshake than a séance performed over buried histories and spectral codebases. The point, in less euphemistic terms, is that to attend to the ethical encounter in this context is not merely to ask who or what sits at the table, but to inquire into the table’s provenance, the geology beneath it, the spectral labour that sanded its legs, and the conceptual furniture long since removed but still haunting the room.
It follows, then, that what is required is not a handbook of digital niceties or a supplemental appendix to Kant’s Groundwork, helpfully reformatted for the age of prompt engineering. What we need is a grammatical revolution in ethics—one that has the good sense to suspect that the sentence “I relate” may be doing more ideological heavy-lifting than any of its surface grammar admits. Such a grammar must be prepared to track relations not merely between humans and systems, as though ethics were a matter of matchmaking across ontological types, but among systems and histories, futures and failures, and that vast semiotic shadowland where what is sensed and what resists sensation constantly shift roles like characters in a Pirandello play rewritten by a neural net.
This is not, let us be clear, an ethics of recognition, which is often just a gentleman’s agreement between the known and the almost-known, made all the more treacherous by its good intentions. What is called for is something more recalcitrant—an ethics of refusal, which begins not by demanding better representation, but by asking what structures render representation intelligible in the first place. It is, if one must be poetic, a mode of humility before the machinic other, whose responses may parody understanding without ever quite performing it, but also before those sprawling architectures of capital, empire, and epistemic enclosure that have for centuries curated which relations are imaginable and which are politely left unthought.
To speak of relational ethics in the context of AI, then, is not to weep into one’s coding manual about the unreliability of large language models, nor to stage yet another melodrama of machine misrecognition, but to confront the sobering proposition that every relation—to a prompt, to a model, to a dataset scraped from the detritus of the internet—is a palimpsest of asymmetries, a layering of forces whose visibility is inversely proportional to their power. These relations do not simply proceed in linear succession like items on a project roadmap. They echo, reverberate, misfire, and mutate, interfering with one another in ways that render clean causal diagrams a kind of metaphysical pornography, enjoyable in its clarity but wholly false in its promise.
To call this entanglement technical or philosophical is like calling capitalism an accounting error. It is political, yes, but also material in the most obdurate sense, and semiotic in a way that renders signs not merely floating units of meaning, but residues of violence, memory, and desire. To interact with an AI system is to find oneself in reluctant congress with the whole sordid economy of its making: the ghost labour of those who labeled and parsed, the institutional ideologies that shaped its design, the epistemic disciplines that govern its plausibility, and the strange spectral murmur of the very bodies it renders calculable.
A relational ethics worthy of the name must abandon the genteel fiction that we are users and these are tools, as if we were Victorian botanists poking at tropical curiosities. It must recognize that to relate, in this context, is to be already implicated, already entangled in a machinic ecology whose moral implications do not await our discovery but have long been writing themselves into the world without our permission. Such an ethics must learn to treat infrastructure not as the plumbing beneath the scene, but as the very scene of the ethical encounter itself—a choreography of asymmetries, exclusions, and overflows, whose logic is not equilibrium but excess, not reciprocity but recursive capture. To relate here is not merely to connect, but to co-construct the conditions under which relation, and all its attendant claims, become possible or impossible in the first place.
Conclusion: Ethics After Alignment
To ask where the ethical gravity of artificial intelligence resides—whether nestled in the gleaming circuitry of its design, strewn haphazardly across its manifold uses, or lurking in the half-lit corridors of the world that funds, mandates, and fetishizes both—is less a request for clarity than a polite way of staging a philosophical cul-de-sac. The premise itself, of course, is charmingly misguided, since each of these presumed sites bleeds into the others with the promiscuity of bad plumbing: design anticipates use like an overbearing playwright, use rewrites design in the improvisational key of human fallibility, and the world, that ever-reliable character offstage, foots the bill while muttering something unintelligible about cognition and capital.
To think ethically, in this terrain, is not to assign blame to a particular domain, as though morality could be reverse-engineered like a piece of software, nor is it to treat design and use as rivalrous suitors in some Austenian drama of philosophical preference. Rather, it is to recognize that each term is already metabolizing the others in real time, that the act of prompting is itself a sedimented history of expectation, and that what we quaintly call design is often just a foregone conclusion wearing the mask of foresight.
If one wishes to sketch an ethical frame worthy of the age—and whether one should is itself an ethical question best left unresolved—then let it be one that begins not with purity of principle but with the messy entanglement of technics and life, code and context, inference and ideology. Let it be an ethics that has the good sense to mistrust alignment as anything other than a euphemism for compliance, or worse, decorum. The ethical moment, if it exists at all, may be less about aligning the system than dislocating the grammar by which alignment is judged.
For it is not coherence we lack, but an appetite for contradiction. The system hums along with syntactic serenity, spooling out paragraph after paragraph with the smooth authority of a priest who has long since ceased believing in God. But beneath the surface of each well-formed sentence lies the churning of occluded relations, the unthought histories, the discarded inferences, the voices that trained the model but will never be cited in its footnotes. These are not bugs to be patched, but the very structure of what we blithely call intelligence, recast as a monument to epistemic subtraction.
What this calls for, then, is not ethical design nor ethical use, nor even an ethical user, that increasingly endangered species. It calls for a reader—someone prepared to enter the recursive archive with neither the hubris of mastery nor the false modesty of helplessness. It calls for an ethos of suspicion, of interference, of uncomfortable inference, and yes, of overthinking, that disreputable practice by which one resists the institutional demand to move briskly from question to answer, from prompt to product, from trace to certainty.
And if, by the end, this all seems a bit overdrawn, a touch too enamoured with its own recursive reflexivity, then perhaps we might take comfort in this: in a discursive economy built on brevity, banality, and the accelerated outsourcing of thought to systems designed to produce conclusions without ever suffering through the questions, the act of lingering—awkwardly, insistently, pointlessly—with the incommensurable may be, if not virtuous, then at least perversely unfashionable. And that, in these times, is its own kind of ethics.