Digital editor and journalist, SAMUEL SHEPHERD, looks into the ways in which human writers have continued to thrive in an AI-focused world.
The past few years have been quite stressful for creatives across the business landscape.
Rapid technological advances, and shifts in the way companies operate, have led to growing uncertainty about the future of many careers once thought to be irreplaceable and inimitable.
One doesn’t have to look too far back to remember the Writers Guild of America strike, which pressed hard on the potential for AI to be used in ways that harm the industry and its professionals, a very visible fight that drew widespread coverage of this growing fear.
However, despite the gloom, the world of professional design has not been overtaken by Midjourney, just as professional publications have not been wholly generated by large language models. Rather, while the hammer of artificial intelligence continues to hang overhead, the optimism of the initial AI boom has shifted quite significantly in the past few years. The tools are still widely used by businesses and creatives alike, but their predicted role as the dominant force in the creative fields doesn’t seem much closer to materialising.
Why is that? Why hasn’t the initial excitement and fascination with these tools quite come to fruition? As an editor and copywriter who has been working within the digital space for over a decade, I wanted to dive deeper into the power of writing as a creative pursuit, and the limitations of artificial intelligence in replicating its impact.
Artificial intelligence isn’t a person, and that’s okay
With any major advancement in technology, there is a level of uncertainty as to the trade-off that comes with its benefits. While the point has been talked to death, many of the concerns people have with new technologies are the same as those levelled at the printing press in the 1440s. We have seen many innovations over the past 100 years alone that have completely uprooted how we think of and identify with particular industries, but for each of these disruptive creations you are likely to hear the same awed words:
“This technology will change EVERYTHING!”
This is, quite frankly, a very heavy burden to place on any technology, and one that, while technically arguable in some senses for machine learning, is rather more extreme than the trends we are actually seeing. When we think about how human beings behave, and how technology has refined and altered those behaviours, we often forget that the end goal generally remains unchanged throughout that process.
As discussed by ScienceDaily in a recent post, research into the efficacy of AI storytelling has found that not only do participants prefer human writing in blind tests, they also tend to rank pieces lower when told they are AI-generated, even when they are not. This suggests not only that AI isn’t up to the task of crafting compelling writing in the same way as a real writer, but that people simply don’t like the idea that they’re being spoken to by a machine.
This isn’t particularly surprising, as it speaks to why people enjoy reading narratives and hearing stories in the first place, whether from novelists or from freelance writers working in the digital space. Storytelling is a deeply human endeavour and a means of interpersonal connection, and while AI learning models trained on the writing of others can produce an admirable facsimile with input and oversight from an editor, they can only put out a work based on other works. An AI learning model has no first-hand experiences, nor can it “think” in the traditional sense. It can only put together information like a puzzle, based on how other puzzles have come together before.
This naturally leads to another issue learning models face when it comes to fully replacing writers.
AI isn’t ‘intelligent’ in the way we are
In an article from The New Yorker simply titled “There Is No A.I.”, computer scientist Jaron Lanier puts this quite well:
“The sooner we understand that there is no such thing as artificial intelligence, the sooner we’ll start managing our new technology intelligently.”
While saying “AI isn’t AI” by itself feels like just another pithy statement, it is an important distinction when we are talking about the ways in which these models will develop from a functional perspective. AI learning models are, to drastically simplify, pattern and feature recognition software that works within a particular task or framework. This isn’t a bug in the system; it’s simply the limitation of what the system is, and while it is an incredible innovation in and of itself, the mythologising it often receives does a disservice to its genuine utility.
Human writers are intelligent, independent beings capable of many things. Rather than simply comprehending a subject through different preexisting works, a good writer brings a world of first-hand experience, ideas and emotional nuance to a piece that can’t be imitated. It’s why even basic ChatGPT writing is often criticised for having no personality to its tone: it cannot have a personality it has not expressly been trained to have, and there are always going to be limitations to that system.
AI lies, but not like people lie
One of the big issues in the AI writing space has always been the quirks of how artificial intelligence actually gathers and compiles data. Namely, the fact that AI systems such as ChatGPT are essentially black box systems, with the inner workings of how they reach conclusions being difficult to parse at the best of times. If a piece of AI writing makes a declarative statement about an event or an idea, and it doesn’t source its work, it will often be very difficult to figure out whether what it’s saying has any merit, or why the output took the form it did. Even when a resource or citation is listed, AI’s penchant for “hallucinations” when filling in the gaps means you can’t just assume the citation exists, or says what the output claims.
This in part comes down to how language models understand structure. If a learning model has been given thousands of scientific journals and told to write a report on a particular subject in the same style, its pattern recognition will detect certain elements, but not necessarily the broader picture of why they’re used in the way that they are. After all, if a learning model detects that after certain types of statements, there tends to be the name of a person, an institution, and a date, then why wouldn’t it just add those things, even if they don’t exist?
This is a problem in many fields of human writing too, with academia especially having an issue where citations are used without the author actually reading them to verify their relevance. However, in the case of a human writer, an error is an error in judgement, and is generally easy to attribute and rectify, while for a learning model you would often need a specialist in the field anyway to determine whether much of what is being said is sound. So, while a writer can be adaptive, weaving finished works while understanding where their sources are and what information comes from what origin, this is an area in which AI continues to struggle.
For example, I asked ChatGPT about the Kanon Trading Card Game, an obscure Japanese card game for which the only proper resource was written by me around two years ago. Here’s a screenshot of its response:
It provides information that is too broad to be helpful, and cannot give any specifics. So, I asked it where it got this information from in the first place:
The interesting thing about this interaction isn’t that it couldn’t give me a resource, given how niche the topic is, but rather that it gave an answer on something it had no information about in its database. It saw Kanon, the 1999 video game, and it saw trading card games, a thing it has data for, and simply fused them together. This is a big problem for data gathering, as it means a language model like this will give you information whether it has that information or not.
Even when I asked about myself as the author, it managed to come up with a biography for me as a writer in the field of gaming and subcultures:
This is impressive. It is amazing that our language models have become so good at putting information together, but if a human writer did something like this, you wouldn’t be mad at them; you would be worried for them. So, with the amount of AI-generated content out there, it makes you wonder how much of what is being said can be validated, and even more worryingly, as AI has been eating itself since as early as 2023, how much of what AI is being fed is just its own hallucinations and half-truths.
Moving forward
We can never truly know what the future will bring, especially for emerging technologies such as AI and machine learning. However, when we look at the present, we can at least see the road that we’re on, and for writers, it is a road on which they remain valuable and necessary parts of the business ecosystem. That said, this hasn’t stopped businesses from trying to replace them, and it is still a time of great upheaval whether the technology is truly there or not.
If you’ve found yourself reading the end of this article, on a news publication run by real writers and editors, then I think you already know the significance of human-written content. So, no matter what position AI holds in the future of publishing, it is clear to me now that this future will still involve capable writers doing what they’ve been doing best since BCE.