AI Signals The Death Of The Author

Credits

David J. Gunkel is the Presidential Research, Scholarship and Artistry Professor in the Department of Communication at Northern Illinois University and associate professor of applied ethics at Łazarski University in Warsaw, Poland. His most recent book is “Communicative AI: A Critical Introduction to Large Language Models” (Polity, 2025).

In response to any written document, like the one you are reading right now, it is reasonable to ask who wrote it and who, therefore, can authorize its content. To resolve this, you will probably try to learn a bit about the author, for their identity can help determine the truthfulness of what is in the document. Given, for instance, that my bio tells you I’m a professor of communication studies at an American university, you may assume I’m well-placed to write on the disruption caused by large language models (as I intend to here) — even, perhaps, that what I say is more or less to be trusted. After all, you’ve identified the author and found him to hold some authority on the subject.

But when a text is written or generated by a large language model like ChatGPT, Claude or DeepSeek, the view of the author becomes clouded. Technically speaking, an algorithm wrote the text, but a human had to prompt the algorithm. So who or what is the author? Is it the algorithm, or the human, or a joint venture involving both? Why does it even matter?

Since ChatGPT was launched in 2022, much commentary has been dedicated to bemoaning the end of the human writer. Either LLMs will overtake the act of writing entirely, or humans will cede too many of their own creative powers to them. The advance of this technology will “render us wordless, thoughtless, self-less,” one journalist lamented last year. But, he emphasized, it is not only writers who will be impacted. “If AI does indeed supplant human writing, what will humans — both readers and writers — lose? The stakes feel tremendous, dwarfing any previous wave of automation.”

I hold a different view. LLMs may well signal the end of the author, but this isn’t a loss to be lamented. In fact, these machines can be liberating: They free both writers and readers from the authoritarian control and influence of this thing we call the “author.”

“The death of the author is the birth of the critical reader.”

If you were to ask someone what an author is, they would most probably answer that it is someone who writes a book or some other text and is therefore responsible for what it says. They could reel off the names of people we identify as such: William Shakespeare, Jean-Jacques Rousseau, Virginia Woolf, maybe even this guy David Gunkel. But this understanding of an author is not some kind of universal truth that has existed from the beginning of time. Rather, it is a modern conception. The “author” as we now know it comes from somewhere in the not-so-distant past; it has a history. 

The French literary critic Roland Barthes, in his 1967 essay “The Death of the Author,” traced the roots of this now-commonplace idea to the modern period in Europe, beginning around the mid-16th century. Before then, people did of course write texts — but the idea of vesting responsibility and authority in a singular person was not common practice. In fact, many of the great and influential works of literature — the folklore, myth and religious scripture that we still read today — have circulated in human culture without needing or being assigned an author.

The modern period, however, spawned a number of related intellectual and cultural developments in Europe that centered around what Michel Foucault later called a “privileged moment of individualization in the history of ideas.” In rejecting subservience to the papacy, the Protestant Reformation of the 16th century birthed an individualized faith. Then, in the following century, philosopher René Descartes built his rationalist philosophy on the statement “I think, therefore I am,” making all knowledge dependent on the certainty of self-conscious thought. Accompanying these innovations was the concept of personal property as an individual right, ensured and protected by the state.

The concept of the author, as both Barthes and Foucault demonstrate, emerges from the confluence of these historically important innovations. But this does not mean that the author as the locus of literary authority is just a subject for theory — it also evolved to be a practical matter of law. In 18th-century England and its breakaway North American colonies, the author became the responsible party in a new kind of property law: copyright. The idea of an author being the legitimate owner of a literary work was first introduced in London not out of some idealistic dedication to the concept of artistic integrity, but in response to an earlier technological disruption that permitted the free circulation and proliferation of textual documents: the printing press. 

As Sven Birkerts explains in the book “The Gutenberg Elegies”: “The idea of individual authorship — that one person would create an original work and have historical title to it — did not really become entrenched in the public mind until print superseded orality as the basis of cultural communication.” Once mechanically generated copies of text became easily accessible, and it became possible to make money from them, it was important to identify the author — or, rather, to be identified as the author. Thus, the proper name of the author is not only critical in terms of the origin of a text and its significance and attribution — it’s necessary for commercial transactions and cutting checks. 

“The authority for writing has always been a socially constructed artifice. The author is not a natural phenomenon. It was an idea that we invented to help us make sense of writing.”

The advent of what we now understand as “an author” had several important consequences for modern literary theory. As Barthes wrote: “When the Author has been found, the text is ‘explained.’” Authority came to be seen not in the material of the writing — i.e., in the words themselves — but in the original thoughts, intentions and character of the individual who wrote it. 

Because of this, the primary task of the reader became one of penetrating the surface of the writing, finding the authorial voice behind it, and then comprehending what they originally intended with it. Following this formulation, modern critics and philosophers agreed with Descartes that “the reading of good books” meant “having a conversation with the most distinguished men of past ages.” (The use of the gender-exclusive “men” in this context is not insignificant — the author, like so many of the other authority figures during this period, was usually a white guy.) 

This conceptualization of writing as a medium of expression or communication has a deep intellectual reach and a well-established historical foothold. In Aristotle’s doctrine of signs and their meanings, the written word was characterized as a symbol of mental experiences — in other words, what you write represents or expresses what’s on your mind. This was further theorized in the science of communication, formalized by Claude Shannon and Warren Weaver in the mid-20th century, who gave us a unidirectional model that is taught in every introductory course on the subject: a message passing from source to transmitter, through a channel, to receiver and destination.

Thus, or so the argument goes, the best writings are those that speak clearly and directly so a reader can access, understand and comprehend what the author had in mind. The writing should become virtually transparent and permit the unimpeded flow of information from the mind of the author to the mind of the reader.


If the author as the principal figure of literary authority and accountability came into existence at a particular time and place, there could conceivably also be a point at which it ceased to fulfill this role. That is what Barthes signaled in his now-famous essay. The “death of the author” does not mean the end of the life of any particular individual or even the end of human writing, but the termination and closure of the author as the authorizing agent of what is said in and by writing. Though Barthes never experienced an LLM, his essay nevertheless accurately anticipated our current situation. LLMs produce written content without a living voice to animate and authorize their words. Text produced by LLMs is literally unauthorized — a point emphasized by the U.S. Court of Appeals, which recently upheld a decision denying authorship to AI. 

Criticism of tools like ChatGPT tends to follow on from this. They have been described as “stochastic parrots” for the way they simply mimic human speech or repeat word patterns without understanding meaning. The ways in which they more generally disrupt the standard understanding of authorship, authority and the means and meaning of writing have clearly disturbed a great many people. But the story of how “the author” came into being shows us that the critics miss a key point: The authority for writing has always been a socially constructed artifice. The author is not a natural phenomenon. It was an idea that we invented to help us make sense of writing.

After the “death of the author,” therefore, everything gets turned around. Specifically, the meaning of a piece of writing is not something that can be guaranteed a priori by the authentic character or voice of the person who is said to have written it. Instead, meaning transpires in and from the experience of reading. It is through that process that readers discover (or better, “fabricate”) what they assume the author had wanted to say. 

This flipping of the script on literary theory alters the location of meaning-making in ways that overturn our standard operating presumptions. Previously, it had lain with the author who, it was assumed, had “something to say”; now, it is with the reader. When we read “Hamlet,” we are not able to access Shakespeare’s true intentions for writing it, so we find meaning by interpreting it (and then we project our interpretations back onto Shakespeare). In the process of our doing so, the authority that had been vested in the author is not just questioned, but overthrown. “Text is made of multiple writings, drawn from many cultures and entering into mutual relations of dialogue, parody, contestation,” wrote Barthes, “but there is one place where this multiplicity is focused and that place is the reader. … A text’s unity lies not in its origin but in its destination.” The death of the author, in other words, is the birth of the critical reader. 

This fundamental shift in the location of meaning-making also explains how LLM-generated content comes to have meaning. The critics are correct when they point out, for instance, that LLMs generate seemingly intelligible sequences of words but do not “truly comprehend the meaning behind” them because they “have no access to real-world, embodied referents.” But it would be impetuous to conclude that LLMs simply generate bullshit.

Their writings are and can be meaningful. What they mean is something that comes about through the process of our reading them and then interpreting and evaluating them. But this is not specific to LLMs; instead, as Barthes already demonstrated, it is a defining characteristic of all writing — this essay included, since it is you, the reader, who has had to determine what it means. LLMs simply render all of this legible and obvious.

“What we now have are things that write without speaking, a proliferation of texts that do not have, nor are beholden to, the authoritative voice of an author, and statements whose truth cannot be anchored in and assured by a prior intention to say something.”

But there’s something bigger at play here. The advent of LLM AI also brings into question the concept of meaning itself. When I write the words “large language model,” it is assumed that those words stand for and refer to some real thing out there in the world, like the ChatGPT application. Words have meaning because someone, like an author, who we assume is an embodied human person with access to the real world, uses words to refer to and say something about things. This, after all, is what Aristotle was getting at when he said that language consists of signs that refer to things. And the big problem with LLMs is that they lack this ability: They manipulate words without knowing (or caring) what those words refer to.

But this seemingly common-sense view of how language works has been directly challenged by 20th-century innovations in structural linguistics, which considers language and meaning-making as a matter of difference situated within language itself. The dictionary provides perhaps the best illustration of this basic semiotic principle. In a dictionary, words come to have meaning by way of their relationship to other words. When you look up the word “tree,” you do not get a tree; you get other words — “a woody perennial plant, typically having a single stem or trunk,” and so forth. 

Words, therefore, do not exclusively come to have meaning by direct reference to things; words refer to other words. This is the meaning (or at least one of the meanings) of that famous statement associated with the notoriously difficult French theorist Jacques Derrida: “Il n’y a pas de hors-texte,” or “There is nothing outside the text.” And this fact is especially true for LLMs as there is, quite literally, nothing outside the texts on which they have been trained and that they are prompted to generate. For LLMs, it’s words all the way down. 

Consequently, what has been offered as a criticism of LLM technology — that these algorithms only circulate different words without access to the real-world embodied referents — might not be the indictment critics think it is. LLMs are structuralist machines — they are practical actualizations of structural linguistic theory, where words have meaning not by reference to things but by referring and deferring to other words — and they thereby disrupt the standard operating presumptions of classical (Aristotelian) semiotics.


We should be critical of the promise and peril that LLMs present. After all, they and other forms of generative AI are powerful technologies whose impact on the world will be enormous. ChatGPT is not even three years old and already boasts half a billion weekly users; DeepSeek is one of the fastest-growing platforms worldwide. More will surely follow.  

But in responding to the ways in which these systems challenge how humans access, interpret and convey knowledge, linguists, philosophers and AI experts tend to simply reassert concepts of authorship and authority that have long since been undermined. And the problem is not that these traditional ways of thinking about writing have somehow failed to work in the face of recent innovations with LLMs. It’s quite the opposite. The problem is they work all too well, exerting their influence and authority over our thinking as if they were somehow normal, natural and beyond question.  

A large part of the reason for our misunderstanding of the significance of these machines is the way “artificial intelligence” is understood. Because of its nominal focus on “intelligence,” AI’s outputs are taken to either signify the actual presence of intelligent thought or, in cases where the device spits out nonsense or hallucinates, the lack thereof. Taking the generation of written content as a sign or symptom of intelligence has been the definition of AI since the time of Alan Turing’s imitation game. LLMs, however, produce intelligible textual content without intelligence (or without us knowing for sure whether there is intelligence inside the black box or not, which is actually worse). In doing so, they destabilize the rules of the game.

All this throws up something that has been missed in the frenzy over the technological significance of LLMs: They are philosophically significant. What we now have are things that write without speaking, a proliferation of texts that do not have, nor are beholden to, the authoritative voice of an author, and statements whose truth cannot be anchored in and assured by a prior intention to say something. 

“Large language models open an opportunity to think and write differently.”

From one perspective — a perspective that remains bound to the usual ways of thinking — this can only be seen as a threat and crisis, for it challenges our very understanding of what writing is, the state of literature and the meaning of truth or the means of speaking the truth. But from another, it is an opportunity to think beyond the limitations of Western metaphysics and its hegemony. 

LLMs do not threaten writing, the figure of the author, or the concept of truth. They only threaten a particular and limited understanding of what these ideas represent — one that is itself not some naturally occurring phenomenon but the product of a particular culture and philosophical tradition. Instead of being (mis)understood as signs of the apocalypse or the end of writing, LLMs reveal the terminal limits of the author function, participate in a deconstruction of its organizing principles, and open the opportunity to think and write differently.

But don’t take my word for it. Who or what am I? What authorizes me to assert and exercise this kind of authority over a text? How can you be certain that everything you just read is the product of a human author and not something generated by an LLM or some human-machine hybrid? 

You have no way of knowing for sure. And everything that could be done to resolve this suspicion, like pointing to my name, listing the details of my biography or even having me add a statement asserting that everything you have just read is “100% genuine human-generated content,” will ultimately be ineffectual. It will be so mainly because an LLM can generate exactly the same assurances. No matter the guarantees offered, there will always be room for reasonable doubt.

And that’s the point. The difficulty that has been assumed to be unique to LLM-generated content — that we have words without knowing for sure who or what authorizes what is being said through them — is already a defining condition of all forms of writing, this essay included. The LLM form of artificial intelligence is disturbing and disruptive, but not because it is a deviation or exception to that condition; instead, it exposes that the author’s guarantee was always a fiction.