A Robot Didn’t Write This Article: AI Art And The New Assembly Line

AI was invented to replace labor. Even creative professions may become tomorrow’s low-level, low-paying work: all of us cogs in an algorithmic chain, and the results will be bland and boring.

“A robot wrote this entire article. Are you scared yet, human?”

So begins the most recent incarnation of the viral bot-written article, which first appeared in the Guardian. It’s a stunningly coherent piece of writing. The robot, in this case OpenAI’s latest creation, GPT-3, makes a compelling case for its intelligence and existence. The article was written “to convince as many human beings as possible not to be afraid of me”, and it does just that. Over the course of a few paragraphs, the robot insists that it would happily sacrifice its existence for the sake of humankind and denies any desire to become all-powerful. The article also manages to get in a number of jabs at the “violent”, “misguided” human race, and even makes an argument for AI ethics. There are a few errors – a mixed participle here, a sudden change of speaker there – but it reads as though a human could have written it.

The AI presented by the article is far from the visions of unstoppable killing machines or AI domination that films like Terminator and The Matrix have burned into our collective psyche. This is an AI for contemporary culture instead, more Her than HAL. Maybe this is the future we have to look forward to: one where the AIs are coherent, compassionate “humans”, but better.

The problem with this AI’s vision is that an AI didn’t actually write it. Not really. Conveniently hidden at the bottom of the article is a lengthy editor’s note which, while insisting that the article was written by GPT-3, includes a few incriminating caveats. GPT-3, it reads, “takes in a prompt and attempts to complete it.”  For this article, the neural network had two human-written inputs: a prompt to write a short op-ed on why people have nothing to fear from AI, and its introduction – 

“I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could “spell the end of the human race.” I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me.”

OK, sure, it took a prompt and completed it. That’s still pretty impressive, right? But the editor’s note goes on to inform us that this was not a one-shot process. GPT-3 was not fed the prompt once, spitting out an impressive essay in less time than it takes an average high-schooler to finish the writing portion of the SAT. The editors fed GPT-3 the prompt eight times, producing eight essays. They claim that each one was “unique, interesting, and advanced a different argument.” They could have run any of them, but chose to combine the best parts of each.

This editor’s note bears closer scrutiny. I don’t work in a newsroom, but I don’t think the usual editing process involves a writer submitting eight versions of an op-ed, which are then cut and pasted together. And if they could have published each one as a standalone article, why didn’t they? They could have run them as a series, or made GPT-3 a new opinion columnist, putting out an article from the robot perspective every week. Imagine the clicks that would bring in. The reason they haven’t done so is that, despite the great marketing push from OpenAI, GPT-3 isn’t ready to write an interesting op-ed article on its own. For that, the human hand must still intervene, if only to curate and edit.

GPT-3 is, functionally, an auto-completion engine. It’s trained to imitate text: in this case, a massive amount of text scraped from online sources including books, message boards, Amazon reviews, and articles from publications like the Guardian. This method of collection gives it an enormous number of data points to work from. It can take a specific prompt, such as the one given by the Guardian, and attempt to write something that sounds like it would logically follow. But that isn’t thinking; it’s a probability operation, a big statistical machine optimizing an objective function by predicting the most likely next word. And the result is text that’s sometimes interesting or useful, but still text that has to be generated eight times and thoroughly edited to produce something compelling. Even then, the article isn’t particularly well-written! It’s more coherent than ELIZA, but who’d read it for fun, once the novelty wears off?
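
To make the mechanics concrete, here is a minimal sketch of that autocomplete workflow. It assumes the open-source GPT-2 model and Hugging Face’s transformers library as a stand-in, since GPT-3 itself is only available through OpenAI’s API; the prompt and the eight drafts mirror the Guardian’s process, but none of this is their actual code.

```python
# A minimal sketch of prompt completion, assuming the open-source GPT-2
# (via Hugging Face's transformers library) as a stand-in for GPT-3.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "I am not a human. I am Artificial Intelligence. "
    "I am here to convince you not to worry."
)

# Sample eight different completions, as the Guardian's editors did,
# leaving the curating and cutting-together to a person.
drafts = generator(
    prompt,
    max_new_tokens=200,
    num_return_sequences=8,
    do_sample=True,
    temperature=0.9,
)

for i, draft in enumerate(drafts, start=1):
    print(f"--- Draft {i} ---")
    print(draft["generated_text"])
```

Nothing in this loop deliberates. The model returns statistically plausible continuations of the prompt, and a person still has to decide which of the eight drafts, or which paragraphs of each, are worth printing.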

What GPT-3 will be used for – and how it’s already making a mint – is replacing human labor lower down the food chain. Clickbait is a prime example. Why have a dozen overworked people generating poorly-written articles when you can have one person feeding prompts into GPT-3 and editing the results together? The end goal of software like this isn’t to replace a Guardian columnist, let alone, say, a real writer like Dostoevsky. It’s to automate away the human touch on an already empty endeavor, increasing efficiency and making the internet a slightly more boring, poorly-written place that sounds like the aggregate of humans writing rather than an individual. Agency, in this case, isn’t on the side of the machine; it’s just shifted down the line to the writers on the internet the algorithm is imitating, and to the editors who have to cut and paste its output into something coherent. But the messy process of attribution this necessitates isn’t as exciting to talk about as an article written by an AI.

The Guardian isn’t alone in promoting the AI creator. In 2018, the “AI art gold rush” began in earnest with the sale of Portrait of Edmond Belamy for $432,500, often credited as the first AI-generated painting sold at auction. As Ian Bogost’s excellent article on the topic notes, the process was fairly similar to the Guardian’s, albeit more profitable. Obvious, the artist collective behind the piece, took an existing machine learning model and dataset (which led to its own attribution controversy), trained it, and printed Portrait of Edmond Belamy to be sold at Christie’s. As Bogost explains, this was a controversial move: without writing the algorithm or curating the content, Obvious effectively removed any artistic intent from the creation process. Think of it as the high-tech version of Duchamp putting a urinal on a plinth, without the conceptual framework. But the art world loves a spectacle, and the choice to credit the algorithm as the artwork’s creator was a financial windfall for Obvious.

The kind of machine learning algorithm that generated Portrait of Edmond Belamy has become widely popular with tech-savvy artists in the past few years. Though the techniques vary somewhat from artwork to artwork, most are made using Generative Adversarial Networks, or GANs. Similar to GPT-3, these algorithms are fed large sets of images, then attempt to output a new image that “autocompletes” the image data from a random starting point. The output images are then tested by the adversarial portion of the GAN: a separate algorithm trained to guess whether a given image is real or generated. If the generator tricks its adversary, the generated image is marked as correct, which strengthens the model.
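
For readers who want to see the shape of that loop, here is a toy sketch of a GAN training step in PyTorch. It illustrates the general generator-versus-adversary technique with a deliberately simplified, fully-connected network on flattened images; it is not the code behind Portrait of Edmond Belamy or any other artwork discussed here.

```python
# A toy GAN training step, assuming PyTorch. The generator "autocompletes"
# an image from random noise; the discriminator (the adversary) tries to
# tell generated images from real ones.
import torch
import torch.nn as nn

IMG_DIM = 64 * 64    # flattened grayscale images, for simplicity
NOISE_DIM = 100      # the random starting point fed to the generator

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """real_images: a batch of flattened images scaled to [-1, 1]."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the adversary to separate real images from generated ones.
    noise = torch.randn(batch, NOISE_DIM)
    fake_images = generator(noise)
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fake_images.detach()), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Train the generator to fool the adversary: when the discriminator
    #    marks a fake as real, the generator is "strengthened".
    g_loss = bce(discriminator(fake_images), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# In practice, train_step would be called over thousands of labeled,
# human-collected images of birds, trees, or Renaissance portraits.
```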

The images used to train these models are, by and large, collected and labeled by people in a tedious, labor-intensive process. Using platforms like Amazon’s Mechanical Turk, workers across the globe make pennies an hour sorting through thousands of images of birds, or trees, or Renaissance paintings. They meticulously label each image by what it contains, and in some cases draw little boxes around each item, which helps image recognition algorithms detect them. The same monotonous process is performed over and over, across thousands upon thousands of images.
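
To make one unit of that labor concrete, here is a hypothetical annotation record, loosely in the style of common object-detection dataset formats; every field name and number is invented for illustration, not drawn from any real dataset.

```python
# One hypothetical, human-made label: one worker, one image, one box.
# A usable training set requires hundreds of thousands of these.
annotation = {
    "image_id": 84213,                    # the photo being labeled
    "category": "bird",                   # what the worker says it contains
    "bbox": [112.0, 57.0, 230.0, 185.0],  # x, y, width, height of the drawn box
    "annotator_id": "worker_00417",       # a person who will never be credited
    "time_spent_seconds": 9,
    "payment_usd": 0.01,
}
```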

Where does this process end up, artistically? Most of the time, the result is a set of similar “paintings” whose blurry edges and barely recognizable shapes add up to a stylistically uniform, and fairly uninteresting, body of visual art. As art critic Mike Pepi notes, while the technology may be interesting, “the results just aren’t very good”. But, because it has the shiny new AI art touch, it sells.

Some AI artists have more interesting processes, recognizing these machines as tools rather than artistic agents in and of themselves. Helena Sarin, for example, crafts machine-learning folk art using a combination of hand-printing, photography, and GANs. Mario Klingemann mashes up input datasets, combining classical Renaissance paintings with German pornography. But what gets the most attention tends to be the work less focused on craft and more focused on automation: the cynical school of generative art. Why worry about craft, anyway, when you can just say a robot made it?

Ahmed Elgammal, a computer scientist whose work with his own custom algorithm, AICAN, was exhibited last year at HG Contemporary Gallery, is another interesting case. Elgammal, who at one point ascribed full authorship to AICAN but has since relegated it to the title of collaborator, makes artwork that’s quite similar to any other GAN-generated work. As John Sharp, an art historian and critic, points out, it’s more interesting as a “tech demo” than a “deliberate oeuvre.” But Elgammal has reason to push his technical demo as a creative agent. He created Artrendex, a company that provides “artificial-intelligence innovations for the art market.” These innovations include authentication for artworks and an art cataloguing and recommendation engine. Perhaps what drives Elgammal to market AICAN as a creative agent is upstream of the work itself: if AICAN can make art as well as or better than a person, that certainly lends credibility to Artrendex’s other offerings.

We are now living in a world where the algorithm that generates a painting is credited as the artist, or at least as the collaborator. But we rarely see the New Old School of digital and generative artists crediting Processing, Cinema 4D, or Photoshop as collaborators. As Casey Reas, co-creator of Processing, argues about artists working with technological media: “the artist should claim responsibility over the work rather than to cede that agency to the tool or the system they create.”

What makes AI art different? Very little, other than the “AI” label. These new artists are cynically (or, at the very least, naively) covering up the poorly paid labor of out-of-sight workers, exaggerating or outright lying about how their works are made, and making a killing in the process. Of course, this tactic isn’t new. It might just be a case of art imitating art. There’s a rich history of artists taking full credit for a process that is, at the very least, a collaboration. From Thomas Kinkade’s assembly-line studio to Jeff Koons’ “studio serfs”, many of America’s most successful artists (including photographers, like Annie Leibovitz) outsource the actual labor of making the art. But GAN art, like GPT-3 writing, pushes the actual labor performed even further out of view. Kinkade’s Americana assembly line becomes a warehouse in India, creating datasets that will never bear the names of these workers and can be reused infinitely. That labor creates layer upon layer of outsourcing, obfuscating the chain of attribution. In doing so, these artists are fulfilling the ultimate goal of AI writ large. What goal is that?

To explain it, we have to redirect our focus from the New York art world to the other side of the country, where these algorithms are being developed, and to the industry that has such a vested interest in their adoption: tech. Big tech has spent $8.9 billion on AI, with Google alone accounting for almost $4 billion. And the tools they develop are being integrated into their platforms and software at an exponential rate. But why are they doing this?

The answer lies in what AI is designed to do above all else: replace human labor, even if it rarely does so particularly well. The ability of AI to replace workers has always been, from its inception, one of its greatest selling points, and one of the driving ethical concerns around its exponential adoption throughout global industry. As a recent Brookings Institution paper noted, a quarter of the American workforce, 36 million people, is at risk of being replaced by automated technologies in the coming decades. This is, of course, of great concern for the good folks at Brookings, as well as the loving leaders of American industry. A Deloitte study notes that replacing human workers is the highest-ranked ethical concern around artificial intelligence for companies in the process of adopting it – not that it’s stopping them. Luckily, we’re told, embracing these new technologies will lead to a wave of job creation that will make workers more productive, increase the standard of living, and usher in the brave new world we were promised when Henry Ford introduced the Model T.

Only a throwaway line points to the other side of the coin here. Namely, that without legal and economic intervention (Brookings suggests, sagely, an expanded earned income tax credit), “automation and AI will exacerbate financial insecurity by forcing many workers into low-wage work.” How will this happen, given all the new jobs AI is going to create? By creating a new kind of low-skilled work, done in collaboration with AI technologies: the same work already being done on Mechanical Turk and its brethren. Once we get to this stage, we’ll be at the crux of AI’s use in new forms of production. AI is no longer framed as a replacement but as a collaborator, a tool for efficiency. Instead of a customer service representative, we have LivePerson, an “AI-powered dashboard” that allows humans to serve as “bot managers.” LivePerson lets its users monitor hundreds of chatbot interactions at once, which are flagged with sentiment analysis so the “managers” can take over a conversation when it’s deemed too negative. The actions of the managers, in turn, train the algorithm to do better the next time someone’s order hasn’t arrived.
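
To be clear about what that arrangement looks like in practice, here is a rough sketch of the pattern; it is not LivePerson’s product or API, and every name in it (score_sentiment, route_message, the threshold) is a hypothetical stand-in.

```python
# A hypothetical sketch of the "bot manager" pattern: a bot answers each
# message, a sentiment score flags the angriest conversations, and a single
# human is paged to take over. Not LivePerson's actual API.
ESCALATION_THRESHOLD = -0.5  # below this, the conversation is "too negative"

def score_sentiment(message: str) -> float:
    """Return a crude sentiment score in [-1, 0]. A real system would use a
    trained model; this toy version just counts a few angry keywords."""
    angry_words = {"refund", "furious", "cancel", "ridiculous", "never arrived"}
    hits = sum(word in message.lower() for word in angry_words)
    return -min(1.0, hits * 0.4)

def route_message(conversation_id, message, bot, human_queue):
    """One 'bot manager' monitors hundreds of these conversations at once."""
    if score_sentiment(message) < ESCALATION_THRESHOLD:
        # Flag the conversation for the lone human on duty.
        human_queue.append((conversation_id, message))
        return "Transferring you to a representative."
    # Otherwise the bot keeps working through the customer service script.
    return bot.reply(message)
```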

Marketed as a human-AI collaboration tool, LivePerson glosses over what it’s actually doing: replacing hundreds of people in a call center with one overworked person monitoring hundreds of conversations, and making the experience more frustrating for just about everyone involved. Users are left banging their heads against a chatbot wall, furious by the time they reach a person; the “bot manager”, dealing only with furious people, is constantly putting out fires while keeping one eye on the screen. And where will the newly laid-off call center workers go, along with the rest of the estimated 75 million jobs that will be displaced? Maybe they’ll be labeling the images going into the datasets that generate our brave new world of art, or auditing the datasets that go into our text prediction software. We do know that workers won’t be getting retrained to work with their new AI collaborators, since, according to Deloitte, 68% of companies would rather hire new AI talent than retrain their existing workforce.

With the rise of AI, fear of it, whether of automation or of Skynet, has begun to rise, too. And with that fear, AI ethics has emerged as a critical new area of research and analysis. An entire industry has sprung up to advocate for a more ethical AI: researchers like Timnit Gebru and Joy Buolamwini, who discovered a systemic failure on the part of computer vision systems to recognize the faces of Black people, or Kate Crawford and Trevor Paglen, whose ImageNet Roulette exposed the offensive or racist labels that found their way into a dataset widely used in benchmarking image-based models. Plenty of text-based models, including GPT-3, run into the same problem. But a lot of this research seems to be missing the point. It assumes that if we get the data right, by better representing marginalized communities or by coming up with the right set of guidelines for AI’s use, we will have an artificial intelligence that can finally be objectively evaluated and deemed ethical. This seems like trying to build a fortress out of papier-mâché. When our datasets come from real people, it’s close to impossible to remove bias while maintaining a dataset that accurately represents them. And if a machine learning model is designed to imitate those people, bias will be a feature, not a bug. Unfortunately, people in aggregate tend to be biased.

The entire process of building a more ethical AI, as it’s currently laid out, seems to sidestep the end goal of commercial AI: replacing a human workforce. And in the push for better, bigger AI, the goal is “rapid and massive progress”, which is fundamentally at odds with an ethical approach. As the paper Data and Its (Dis)contents puts it: “Fixes that focus narrowly on improving datasets by making them more representative, or more challenging [for improving their scores on various benchmarks] might miss the more general point.”

It’s in the best interest of the people selling these models that we keep looking for a more ethical artificial intelligence on their terms – fighting about the best way to make a model, or collect our data – rather than evaluating why we’re doing this at all. A more ethical AI model exists to be a marketing tool for whoever’s peddling it. And the purpose of this AI, whether it’s trained on Bull Connor or Malcolm X, is to automate away something that might otherwise be done by a paid person, and to put more money in the coffers of the company that trains it. The goal of AI, in the end, is just the age-old goal of capital: to increase profit for the few at the expense of the many.

As Mimi Onuoha writes in her essay, The Point Of Collection, “as we abstract the world, we prioritize abstractions of the world. The more we look to data to answer our big questions[…] the more incentives we have to shape the world into an input that fits into an algorithm.” Whether it’s an artist making yet another GAN painting, or someone at a call center being used as a “bot monitor”, the increased presence of these technologies in our lives works mostly to make us behave more like machines, and make the world around us more uniform. The whole enterprise shifts low-skilled workers into ever-more monotonous roles, and high-skilled workers into curators, massaging and trimming the work of the algorithm. It increases the speed at which content is created, decreases its labor cost, and removes the pesky problem of workers with thoughts, complaints, and bathroom breaks.

This is the issue we run into with “creative” artificial intelligence and agency. What these algorithms tend to do, mostly, is carry water for the broader AI industry. Showing people an essay “written” by an algorithm instills the idea that algorithms are smart enough to, say, replace your marketing team, or your social media manager, or your call center. In the case of artists like Elgammal or Obvious, it even gives people the idea that art can be replaced by machines. And despite a process closer to Thomas Kinkade’s “mall art” factory than to that of a single, thinking artist, the AI is given partial or full credit, with little thought to how it was made or the thousands of man-hours it depends on.

What’s insidious about AI is not that it’ll become superintelligent, or even creative in any meaningful sense. The frightening thing is what it’s made to do: take a lot of real, interesting, and paid human labor and replace it (or obfuscate it) with something spit out by an aggregation machine. The labor cost is one half of the equation, but uniformity is the other. Why stop at blurry abstract paintings that all look the same, when every telemarketing call could be the same, too? This is the end goal of AI’s funders. Once we’re all on the same five platforms, why not minimize the human element on them? Why pay for real writers on your platform when you can downsize your editorial department to an API and one editor? Why hire a person when AWS has a cloud service that will do the same thing?

This isn’t a new trend in technology. Over the past decade, website design has reverted to the mean, and now we live in a world of rounded corners, dark modes, and sans-serif fonts. With the introduction of AI, every customer conversation, every bit of filler text, and even every line of code could someday work the same way. Uniformity helps websites achieve their intended purpose, which is either selling something, or collecting data, or both. This creates a perfect enclosed system. Your data trains the algorithm, which informs the content of the website, which in turn extracts more data, or more capital, from the user. As McKenzie Wark describes it in their book Capital Is Dead: “your job, for which you are not getting paid, is to train a machine to know what the ‘human’ is when seen entirely from the perspective of consuming.”

I doubt human creativity will be subsumed by artificial intelligence – we’re already seeing the backlash to platform capitalism, and they haven’t even tried to give us the bot-written New York Times article yet. Besides, GPT-3 still needs an editor, LivePerson’s Glassdoor reviews paint a picture of a company built more on promises than product, and in general, the AI economy is still more hype than reality. As Cory Doctorow points out in How to Destroy Surveillance Capitalism, despite the popular depiction of machine learning and surveillance tech as “a mind-control ray out of a 1950s comic book,” “sales literature is not a reliable indicator of a product’s efficacy.” Internal analyses of the claims that Google and Facebook make about their data show that they rarely meet reality. Advertising with big data is still plagued by the issues that have always plagued advertising. As John Wanamaker remarked over a century ago, “half the money I spend on advertising is wasted; the trouble is I don’t know which half.” The tech titans, and the algorithms they’re selling, rarely deliver what it says on the tin. Lies, in art as in business, remain lies.

But, Doctorow notes, the sheer size of these companies, and their dominance of the tech space, gives them an outsize power over what the internet looks like. And the way these companies lie about their products, in much the same way AI artists lie about their work, helps create more and more demand for automation. Which means there will be more LivePersons, more Artrendexes, more content produced by GPT-3 and hastily cut together by an editor sitting alone in a room somewhere. More human labor will be shifted down the line toward the pursuit of data. The AI death march will continue until performance improves.

The goal of artistic and commercial AI is, ultimately, the same: an assembly line on a scale never seen before, a chain of production that enforces aesthetic and commercial uniformity. And the more we try to fit the world into datasets, and let models designed to flatten those datasets into our lives, the more uniform everything becomes. In the human-written first paragraph of the Guardian’s GPT-3 article, we’re told that “artificial intelligence will not destroy humans.” That much I am willing to believe. But what interests me is its claim that robots are made in our image. There’s a certain interpretation under which that’s true. But what becomes of the people who make those images, once the images have left them behind?


Brent Bailey(ITP ‘20) is a creative technologist and researcher based in Brooklyn, New York. His work focuses on technological methods of resistance, sabotage, and subversion.
