When Writing Stops Thinking: Automation, Authorship, and the Ethics of Conceptual Rigor from Mark Twain to AI

By J. Owen Matson
Abstract
This essay examines the growing disconnect between language and thought in contemporary discourse, particularly in the context of EdTech, AI, and academic theory. Beginning with reflections on the author’s recent engagement with theoretical dialogue on LinkedIn—a platform marked by performance-driven visibility rather than conceptual depth—it traces how theoretical vocabulary has increasingly come to function as professional shorthand, signaling intellectual alignment while often bypassing the labor of thinking. Drawing on examples from education discourse and the historical figure of Mark Twain—whose engagement with the typewriter and notions of automatic writing challenged humanist ideas of authorship—the essay situates current anxieties around AI-generated language within a longer tradition of mechanical mediation. Rather than framing authorship as a question of human versus machine, the essay argues for conceptual rigor as the true index of intellectual integrity. It calls for a renewed attention to friction, difficulty, and specificity in writing—not as barriers to communication, but as signs that thought is actively being done.
Lately, I’ve found myself engaging in more theoretical conversations on LinkedIn—a platform not exactly known for conceptual rigor. It’s an uneasy space, suspended somewhere between the social and the professional, where personal branding often stands in for critical thought, and visibility is too easily mistaken for expertise. The platform logic rewards fluency over friction, affirmation over ambiguity. And yet, perhaps paradoxically, I’ve come to see theory as newly relevant here—less as a specialized academic pursuit than as a necessary language for making sense of AI and the systems reshaping how knowledge is produced, circulated, and claimed. But in this space—already precarious as a site for genuine academic exchange—I’ve also noticed the return of something familiar: the resurgence of theoretical vocabulary stripped of its conceptual weight. Academic jargon, untethered from the thinking it once demanded, now circulates as professional currency, signaling alignment more than inquiry. This essay is a reflection on that drift. Drawing on examples from EdTech and AI discourse—and returning to my own (early) academic work on Mark Twain and the history of automatic writing—I argue that when language begins to imitate thought without inhabiting it, writing risks becoming a kind of automated performance. The real question isn’t just who authors our texts, but whether our language still carries the marks of thinking: difficulty, precision, and the ethical commitment to say something that resists the ease of simulation.
What’s striking is how familiar this all feels. The performance of expertise through conceptual shorthand, the circulation of theory as brand rather than method—it’s not new. I first encountered this dynamic years ago in graduate school, and I now see it resurfacing with uncanny consistency in the discourse surrounding EdTech and AI. In academic writing, there was a familiar tendency—sometimes intentional, often habitual—to signal depth by substituting one term for another in a kind of semantic choreography. When a concept began to settle, it would be replaced by an adjacent framework, a differently inflected lineage, or a fashionable rewording, often without any sustained attention to what was actually being refigured. The result was a smooth but hollow rhythm, a simulacrum of thought: a surface of fluency that displaced real conceptual traction or any grounding in claims of consequence. It created the appearance of intellectual movement without requiring the genuine labor of defamiliarization or theoretical transformation. This maneuver became so embedded in certain modes of scholarly writing that it began to pass as a normative genre convention. Its effects were often more aesthetic than analytic, reducing theoretical language from an engine of inquiry to a marker of stylistic affiliation. Over time, vocabulary that was originally meant to clarify or destabilize came instead to perform a sense of complexity without having to inhabit it.
Some of the clearest examples of this phenomenon appeared in the theoretical vocabulary that circulated widely in the late 1990s and early 2000s, which is when I entered graduate school and began taking on these concepts with greater seriousness. That period was marked not only by the saturation of certain terms across academic writing, but by heated debates over the role of theory itself—its legitimacy, its limits, and its place in disciplines that had long prioritized textual scholarship, historical method, or close reading. For many of us, especially those trained in literature and cultural studies, the work of learning theory was never merely about adopting a new lexicon; it involved navigating disciplinary anxieties, negotiating questions of philosophical legitimacy, and reckoning with the risk of speaking in a language that some colleagues saw as obscurantist or intellectually evasive.
But even as I was committing to understanding this tradition, I also saw how easily its vocabulary could be co-opted—how certain words began to appear everywhere, often detached from the arguments or lineages they originally named. Discourse, for example, had become a ubiquitous term by that time, but was rarely used with reference to Foucault’s account of historically situated systems of knowledge and power—systems that define what can be said, what counts as truth, and which forms of subjectivity are legible within a given epistemic frame. More often, the term was used loosely to mean “narrative,” “topic,” or simply “what people are saying.” Its conceptual edge had been dulled by overuse. The same thing happened with deconstruct, which in Derrida’s work was a way of demonstrating the instability of meaning itself—the way language undoes its own claims to coherence through the very act of signification. But in everyday academic usage, deconstruct came to mean little more than “analyze,” as in: take something apart and comment on its components. You could almost hear the concept groan under the weight of its misapplication. The word analysis already existed. It still works perfectly well.
To be clear, I understand that language evolves. Words shift in meaning across disciplines and over time, and sometimes that flexibility allows for productive adaptation. But that does not mean that words don’t matter. When theoretical terms are used to signal depth without engaging the structures of thought they belong to, something more than semantics is at stake. It becomes a question of ethical and intellectual integrity—of how we position ourselves in relation to traditions of critique, inquiry, and epistemic responsibility. Language is not just a container for ideas; it is a practice that shapes how thinking occurs and what kinds of thought are possible. If we use the vocabulary of theory without the discipline of theoretical work, we risk eroding the very grounds on which that work depends.
It should be said clearly that theory, like any serious discipline, requires a distinct and precise vocabulary. Specialized language is not a sign of exclusion but a necessary means of naming structures, distinctions, and relational dynamics that ordinary language cannot adequately express. And like any professional discourse—be it in law, physics, medicine, or education—its terminology becomes vulnerable to erosion through overuse, decontextualization, or institutional incentive. Consider the 1996 Sokal hoax, in which the journal Social Text accepted a deliberately fabricated article composed entirely of theoretical jargon arranged to sound persuasive. Although the text had no conceptual content, it simulated the cadence of intellectual seriousness well enough to be accepted for publication. The case was often cited as a failure of gatekeeping, but it also reflected something more structural: the possibility that language might come to mimic the aesthetic of thought without undergoing the risk of thinking. With experience, one learns to recognize this phenomenon—to discern the difference between writing that is actively pursuing a concept and writing that is simply echoing one. That difference may seem subtle, but it matters immensely in professional and academic contexts where both clarity and integrity are at stake.
This same drift is everywhere in EdTech, where pedagogical language tends to circulate with great enthusiasm and very little resistance. Personalization often seems to mean that instructional content has been aligned with a student’s assessed reading level or recent performance—as if learning were mostly a matter of matching inputs to measured needs, like nutritional labeling for cognition. Engagement is commonly treated as the absence of distraction, which is a remarkably low bar, and one that quietly bypasses anything affective, relational, or epistemically meaningful about attention. Active learning has become a term of such elastic generosity that it can be used to describe nearly any situation in which students are not entirely passive—even if they’re just "actively" clicking through multiple-choice questions written by a machine. And student-centered learning, once a meaningful pedagogical orientation, is now often used as a kind of catchall phrase for anything involving students at all. In some settings, it appears to be just another term for personalization, as if routing content directly to a student’s device were sufficient to constitute a shift in agency or epistemic structure. In others, the bar is lowered even further: if the student is present and nominally active—clicking, scrolling, responding—it counts. The term has come untethered from any commitment to co-construction, inquiry, or transformation, and now functions more or less as an ornamental reassurance that something pedagogically progressive is happening. In practice, what often results is a content delivery system built around the individual student as end-point—what might more accurately be called a content-centered targeting model. Though it continues to borrow the language of progressive pedagogy, this model operates through the very logic it purports to displace: it preserves a delivery model of instruction that centralizes agency and control, reduces the student to a receiver of content, and simulates autonomy while gesturing only vaguely in the direction of reform.
We’re seeing a similar flattening in current discussions of AI and ethics, where terms like response-ability, reflexivity, relationality, and entanglement are increasingly invoked but rarely examined. These concepts emerge from traditions that understand knowing, acting, and learning as distributed processes—shaped through systems, relations, and often uneven structures of power. But in much of today’s usage, they appear more as rhetorical signals than as sustained theoretical commitments. Their value doesn’t lie in their complexity or trend appeal. It lies in their capacity to resist closure—to introduce friction. These are concepts that interrupt familiar habits of thought, that name conditions where agency, meaning, or accountability cannot be taken for granted. That disruption is their strength. Friction, in this context, is not an impediment but an index of conceptual seriousness. When these terms are used without that resistance—when they serve to reassure rather than reorient—they lose their generative force. They no longer provoke inquiry. They occupy the space where thinking might have begun.
In my own work, especially in the context of AI and education, I have tried to treat these concepts not as branding tools or signs of affiliation but as commitments to a form of thinking that is both epistemically and ethically accountable. When I write about response-ability, I am not referring to attentiveness or system flexibility, but to the ethical condition of remaining in relation to forces that are not fully known, not symmetrical, and not fully manageable. It is not an attribute of a system or a designer, but an orientation toward the ongoing shaping of perception and meaning in conditions we do not fully control. When I use the term intra-cognition, I do not mean to offer a metaphor for collaboration. I mean to describe a system in which cognition emerges through recursive interaction between human and machine agents—where no one component holds the full structure of meaning, and where learning is always relational, partial, and co-constituted. These concepts are not tools for improving UX. They are attempts to understand how cognition, ethics, and educational practice are being reorganized under conditions of increasing technical mediation.
These distinctions are not minor, especially in professional or academic forums where participating in discourse already carries risk. For many educators, researchers, and scholars—particularly those outside of traditional power structures—writing is not merely a means of sharing ideas. It is a demonstration of epistemic integrity, a contribution to the maintenance of academic discourse as a shared and public good. When the language of theory or ethics is used without the labor of thinking that gives it shape, it does not just mislead. It undermines the collective trust required to sustain meaningful intellectual work. In a field already saturated with instrumentalism, speed, and performance, we cannot afford to let our most important concepts become signs of depth that no longer think. If we are to take the future of learning, of systems, and of relational intelligence seriously, we must also take seriously the language we use to describe them. That means writing in a way that holds onto difficulty when difficulty is required, and not mistaking fluency for understanding.
That demand for difficulty has a longer history than our current debates suggest, and for me it runs through Mark Twain, whose work I’ve studied closely. Twain was one of the first major authors to adopt the typewriter, as much out of technological curiosity as for the occasion it offered for satire. The early Remington model he used—clunky and opaque—prevented users from seeing their own writing as they typed. Twain complained about this frequently, and while his tone was often playful, his frustration was real. This machine, designed to mechanize writing, introduced a kind of epistemic blindness: a severing of the visual feedback loop that typically confirms the act of thinking through language. It forced the writer to proceed without the stabilizing presence of the text—a kind of imposed non-coincidence between thought and inscription. But Twain’s engagement with the typewriter went beyond user frustration; it paralleled his lifelong ambivalence about authorship itself and the idea that one’s thoughts originate from a singular, sovereign self.
In fact, Twain often described his ideas as coming to him automatically, unbidden, or fully formed—as though his mind were more conduit than creator. This wasn’t just rhetorical flourish or modesty. It aligned with a broader intellectual tradition at the time that gave serious attention to the idea of “automatic writing.” In the 19th century, the term referred to writing produced without conscious human intention. It was used in spiritualist contexts to describe the channeling of spirits, in psychological contexts to access the unconscious, and in technical contexts to describe the typewriter itself. Secretaries—almost always women—were also sometimes called “automatic writers,” a designation that situated them not as authors, but as intermediaries who received and transmitted thought on behalf of others. The recurrence of the term across these distinct contexts reveals something important: each instance challenges the humanist association between language, intellect, and the self-possessing subject who is imagined to generate meaning with full intentionality.
Twain’s embrace of the automatic, across both metaphysical and mechanical registers, foreshadows the crisis of authorship we now face in the age of AI. He undermined the idea that the writer is a master of thought, and instead presented himself as a kind of vessel—someone to whom ideas arrived, rather than someone who deliberately constructed them. The typewriter, in this sense, became both a literal and conceptual extension of this self-effacing authorship. It mechanized the gap between intention and inscription. Twain’s refusal to claim full ownership over his ideas was not a withdrawal of responsibility, but a recognition that thinking is not fully owned in the first place. This resonates with post-structuralist challenges to authorship, but it also cuts deeper: it stages the mechanical, gendered, and spiritual infrastructures that always already complicate the figure of the solitary human thinker. His work dramatizes the distributed nature of cognition long before we had a language for it.
That, for me, is the crucial bridge to our present moment. As AI-generated writing becomes increasingly common, we’re forced to revisit some of the same anxieties that surrounded automatic writing in its earlier forms. The worry is not just about plagiarism or originality, but about whether language still bears the mark of thought—whether it carries the friction, the difficulty, the situatedness that tells us a human was here, grappling with meaning. But we should be careful not to turn this into a moral panic about automation. After all, as Derrida argued, writing has never been co-present with thought. It always entails deferral, displacement, a certain kind of automatism. But Derrida wasn’t making a relativistic claim that anything goes. He was pointing to the necessity of attending to the conditions under which meaning emerges—conditions that become even more fraught when writing can be generated without struggle, intention, or awareness.
This is why I think the overuse of professional jargon—especially in spaces like EdTech, where the discourse is already unmoored from deep pedagogical knowledge—poses more than a stylistic problem. It becomes a form of thoughtless automatic writing. It simulates conceptual sophistication without requiring the intellectual labor of articulation. Words like “personalization,” “engagement,” and “student-centered” circulate with such ease that their meaning is no longer anchored in anything rigorous or contested. They don’t function as heuristic devices for inquiry; they function as tokens in a professional game, used to signal alignment with trends or values. When this kind of language becomes the default, writing itself starts to resemble a kind of automatic process: fluent, formatted, and largely disconnected from the thinking it purports to represent. And when writing loses that connection to thought, it ceases to do what theory, at its best, is meant to do—create the conditions for transformation, not just fluency.
This concern is especially pressing in public and semi-public academic spaces, where engaging in theoretical discourse is already a precarious act. For many, the risks of being misunderstood, dismissed, or branded as unserious are not abstract. In these contexts, the language we use does more than communicate ideas—it signals our legitimacy, our seriousness, our claim to participate in the discourse at all. And that’s precisely why the flattening of language through jargon is not a neutral phenomenon. It affects who gets heard, what counts as rigor, and how knowledge circulates. Recall the Sokal hoax described earlier: a fake submission, crafted from dense jargon with no coherent argument, was accepted because it sounded “theoretical.” That wasn’t just a lapse in editorial judgment. It was an indictment of a broader academic culture in which sounding like theory had come to replace doing theory.
I’m not exempt from this dynamic. When I entered graduate school, I went through a phase—one I think many do—where I leaned heavily on theoretical language without always earning it. I wasn’t misusing the terms exactly, but I was certainly using them as shortcuts. I’d reach for the first term that approximated my point rather than struggling to find the most precise, most generative language available. That’s a form of conceptual laziness, and it took time, feedback, and self-reflection to recognize it. Learning the language of theory is not a linear process. It involves passing through phases of mimicry, overreach, and disorientation before arriving at something more grounded. But that process only works if we maintain a commitment to the difficulty of thinking rather than the performance of fluency, moving toward language that risks the awkwardness of specificity instead of hiding in abstraction. That awkwardness, I came to learn, is often the only evidence that thought is actually happening. And too often now, I see fluency rewarded at the expense of rigor.
The connection to AI goes deeper, however, because AI-generated writing is built from patterns of linguistic probability. It is optimized for the recognizable, not the original. This makes it particularly susceptible to the overuse of conceptual shorthand—to a fluency that lacks density. AI, in this sense, writes like a first-year graduate student grasping for the most familiar term in a given discursive field. In EdTech marketing, this means accelerating the use of terms like “personalization,” “engagement,” “active learning,” and “student-centered instruction,” all of which are deployed with such regularity that they begin to function as placeholders rather than concepts. In academic settings, AI-generated prose can be jargon-heavy while remaining epistemically hollow—offering the form of theory without its friction. I know this pattern well. I’ve seen it outside of AI for years, which makes it easy to spot when it’s generated by AI. This isn’t about purity tests or gatekeeping who gets to write with AI. It’s about recognizing when writing becomes a simulation of thought rather than a product of it.
What’s often overlooked in Twain’s engagement with the typewriter—and with automatic writing more broadly—is his quiet but sustained rejection of the humanist assumption of authorial sovereignty. Twain resisted the notion that thought originates entirely within the self, portraying authorship instead as something distributed, recursive, and often involuntary. In doing so, he effectively deconstructed the boundary between the self and its so-called internal automations, treating cognition as an emergent process rather than a possession. This was not just a stylistic quirk—it was a conceptual position, one that unsettled the very conditions under which originality, intention, and ownership are claimed. That’s why concerns over plagiarism today, particularly in relation to AI, may be asking the wrong question. The issue is not simply whether a piece of writing is “authored” by a human, but whether it carries the marks of conceptual labor—whether it demonstrates that something has been thought through, not just produced. Twain’s discomfort with fixed notions of authorship reminds us that writing has always involved degrees of automation, mediation, and influence. What distinguishes meaningful work is not provenance alone, but rigor—the capacity of language to hold friction, precision, and depth. That, more than originality or attribution, is what signals that thinking has actually occurred.
What Twain’s ambivalence teaches us—and what the longer history of automatic writing makes clear—is that the ethical challenge of writing in the age of AI is not simply a matter of authorship, plagiarism, or even originality. It is a question of whether language continues to carry the weight of thought, whether it bears the friction of real conceptual work or merely mimics its surface. Technology, in this context, doesn’t so much replace human thought as reveal its value—by contrast, by absence, by the ease with which language can now be generated without depth. The arrival of AI forces us to ask what distinguishes meaningful writing from patterned output, and the answer cannot be reduced to whether a human typed the words. The more pressing question is whether the language itself is doing the work of thought: making distinctions, testing assumptions, extending frames of reference, and inviting us to understand the world in ways we had not yet considered. If AI amplifies the recycling of familiar forms, then our response cannot simply be to defend human authorship—it must be to defend rigorous thinking as an ethical practice, one that resists ease in favor of insight, and cliché in favor of conceptual transformation. This essay is not a critique of AI itself, but of the increasing tendency to turn to it for the kind of work it cannot do: what Deleuze and Guattari, in What Is Philosophy?, define as the creation of concepts—not the recombination of existing ideas, but the invention of new conceptual structures that bring problems into being and make genuine thought possible.
That, more than anything else, is what’s at stake—not just who writes, but what kind of thinking writing invites. Rigorous thinking is not just a technical skill or an exercise in style. Rather, it is an ethical act, one that insists on insight over expedience, and understanding over mimicry.