Large language models (LLMs) are provoking consternation, awe, and eye rolling, depending on one’s stake in their deployment. Which “future-proof” career is going to implode next? Lawyers, copywriters, college students, sources of emotional support?
This did not happen because it was inevitable. Often the setup to such crises becomes problematic long before automation joins the fray. Consider writing, presently said to be under threat by LLMs.
If you were born in the 80s or later, you might have learned about the Flesch-Kincaid readability test: a relatively simple equation that uses word and sentence counts and lengths to produce a numerical rating of language complexity. A short sentence with short words scores as highly readable. A long sentence with long words scores as barely readable.
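To make the equation concrete, here is a minimal sketch of the Flesch Reading Ease variant of the formula, the one where higher scores mean easier text. The syllable counter is a naive vowel-group heuristic I'm assuming for illustration; real implementations use pronunciation dictionaries.

```python
import re

def count_syllables(word):
    # Naive heuristic: count runs of consecutive vowels.
    # An approximation only; real syllable counting needs a dictionary.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    # Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    # Higher scores = easier text.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Short words in short sentences score high; long words in a
# long sentence score low (even negative).
print(flesch_reading_ease("The cat sat. The dog ran."))
print(flesch_reading_ease(
    "Notwithstanding considerable deliberation, the committee "
    "ultimately postponed its determination indefinitely."))
```

Note how crudely mechanical the scoring is: the formula sees only counts, not meaning, which is exactly why treating it as a hard publication gate distorts writing.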
Flesch-Kincaid is useful for guesstimating whether a 10-year-old is ready to read War and Peace. Unfortunately, the score instead became a hard boundary that writers had to clear for publication, and that hard-line approach made writing less comprehensible, not more. The first iterations of content media were guided by Flesch-Kincaid readability cutoffs and SEO. What is now famously termed enshittification began with strict compliance with readability scores and SEO requirements that demonstrably detract from the reading experience.
There are a couple of forces at work here. When readability scores take priority, 1) the human writer is forced to write something a professional writer never would, and 2) the human writer is forced to conform to parameters that favor machine-like output. When these parameters are invisible to human readers (except through enshittification), they hide the full breadth of human ability and make machine writing look like fair competition. This invisible restraint on the human factors in writing leads to performance being graded on a curve that favors machines.
Going back to LLMs, the major problem now lies in the illusion that a machine can act on its own, and proponents of AI are loath to clarify the actual human labor needed to make an LLM work. That may change someday, but today, machines are still tools that require a lot of human input, even if we pretend otherwise. When human performance is graded on a curve that rewards tool capabilities over human abilities, creation stops being a process whereby humans use tools. Instead, humans appear to compete with their tools, with very little said about what would actually happen if humans were banished from the process entirely. The end result has been a decline in the perceived competence of workers and a rise in the perceived competence of tools.