AI is not creative - you are


I subscribe to Dave Anderson’s Substack Scarlet Ink (and recommend it to anyone interested in an insider’s view of managing in the tech industry). A couple of weeks ago, in a bit of a departure from his regular fare, Dave wrote about his grueling attempt to test generative AI’s capacity to write a novel. His post was a painful read, as the resulting 90,000-word tome might be too. The agony, and the insight, of his post was the tremendous amount of yeoman work Dave expended just to coax AI into creating a work that, at best (in his words), sounds like AI drivel.

His post was instructive in light of concern about the future of human creativity. In many of my discussions over the last few years with creatives (authors, musicians, faculty and staff in the humanities, as well as my own family), I hear a sense of powerlessness about an artificial intelligence that might threaten our human agency and creativity. But there’s a different angle to see here. While it’s true that large language model training feasted on a large body of copyrighted work, this doesn’t confer creative superpowers on AI. As science fiction writer Ted Chiang explained in 2024:

When you give a generative-A.I. program a prompt, you are making very few choices; if you supply a hundred-word prompt, you have made on the order of a hundred choices.

If an A.I. generates a ten-thousand-word story based on your prompt, it has to fill in for all of the choices that you are not making. There are various ways it can do this. One is to take an average of the choices that other writers have made, as represented by text found on the Internet; that average is equivalent to the least interesting choices possible, which is why A.I.-generated text is often really bland.

– Ted Chiang, “Why A.I. Isn’t Going to Make Art,” The New Yorker (2024)

As many others have noted, writing is thinking. In college, students pressed for time may weigh using AI to summarize large volumes of reading, or to speed the writing of a paper. As Clay Shirky describes in his recent essay I, Chatbot, however, one risk is that people mistake AI’s output for the product of education, when the real product is the process itself. While some faculty have dug in to learn about AI, others still lag in developing the level of understanding that would be helpful to students:

“Students are using AI all over the place, and they benefit from guidance about how to consider or use AI in discipline-specific settings. To provide that guidance, faculty need a foundational familiarity with the tools.”

We’re all working through the very beginning of the AI era. Berkeley faculty and many staff are deeply engaged with interrogating our AI future. As is very clear to those of us who are thinking a lot about it, AI is not so much an IT or technology problem as a set of complex institutional and societal challenges, not least of which is the nature of human creative work.

*As a footnote: while AI prompting is real human toil, the law has now weighed in, with a recent legal ruling holding that the AI-generated product of this work will not receive copyright protection.


Photo by James A. Molnar on Unsplash

Not by AI Badge