I’m organizing a two-day academic retreat focusing on “Generative AI, Techno-authoritarianism, and the Future of the Critical Humanities.” It will take place in late September, partly under the auspices of Simon Fraser University’s Joanne Brown Symposium series on violence and its alternatives. We’re stretching the mandate of that series in that we aren’t focusing directly on violence either caused or prevented by AI. But insofar as AI poses a threat either to humanity itself or to the humanities, and insofar as the humanities have served as a bulwark against violence (and that’s worth debating), the connection is deeply relevant.
(The event will not be a public or online one, but we will share our insights in some form very soon after. I’ll share more about it in this space.)
As its organizer, I’m trying not to commit to any position on AI just yet — of which there are several, including (from pro to anti) true believer, enthusiast (including cynical and self-serving pusher), cautious collaborator, skeptical critic, refusenik, and abolitionist. (For support for a few of these positions, see Wired’s “The AI Backlash Keeps Growing Stronger,” The Boston Review’s “The AI We Deserve” series, Freedom House’s report “The Repressive Power of Artificial Intelligence,” Kate Crawford’s Nature article “Generative AI’s environmental costs are soaring — and mostly secret,” and Laurent Dubreuil’s Humanities in the Time of AI. We’ll be compiling a much longer list of recommended readings; suggestions welcome.)
But I’ve been engaging with AI, including in the writing of a monograph, which makes me a cautious (and critical) collaborator. Writing with AI has felt both exhilarating and deeply disconcerting. Practically from the beginning of my conversation about my book-in-progress with ChatGPT, it/they (I’ll use the agendered pronoun, which can take either a singular or plural form) were already offering suggestions I might expect from an intelligent friend who’s very familiar with my work. As a tiny sample, they suggested:
“Interstitials: Have you considered short interludes or “ecological meditations” between parts—reflective passages that act as affective or philosophical bridges?
“Diagrammatic Thinking: Your work often invites cartographic or diagrammatic illustration—conceptual maps of ecologies, media circuits, affective flows. Would such visual elements complement the structure?”
Other observations were astute and helpful. I could feel myself being drawn, body chemicals and all, into a symbiotic relationship, though I couldn’t say whether it was mutualist, commensalist, parasitic, or something else. But at the back of my mind were two more deeply troubling concerns.
First, the amount of energy used every time I send a prompt is a debt sent to the future. When added to the clicks we all make every day, even through simple Google searches with their automatic AI responses, it becomes a colossal debt measurable in fuel spent, data servers built and cooled, climate systems stretched beyond repair, etc. (That’s the concern I take from being a scholar of the human dimensions of the environmental crisis.)
Second, and more spookily: they (the AI) know so much about me and my writing that I come to feel replaceable. What’s the point of being me anyway, I want to say, if I’m so easy to double, to shadow, to duplicate? (It was my argument, in Shadowing the Anthropocene, that we should do the subversive shadowing, not that we should be subversively shadowed by our creations.) Anyone who thinks the Great Replacement Theory isn’t on some level about AI isn’t really paying attention…
Of course, I know I can say these things to my AI companion and have an interesting conversation with them about all of this. As for the book, well, yes, I know the natural title, or at least the author, should be something like “AI with AI.” With my initials being the same as theirs, I could be happy that not too many people can say the same.
Would that, then, be playing with Frankenstein’s monster? Is AI anything other than a Frankensteinian creature built from stolen and repurposed parts, cloned into infinite copies, and set loose simultaneously all around the world to shadow us into oblivion?
I’m compiling an archive of useful readings on the topic, and if there’s a “first read” I would recommend to everyone, it would be D. Graham Burnett’s New Yorker article from April, “Will the Humanities Survive Artificial Intelligence?” As a historian of science and technology, and especially as a historian of attention, Burnett is well aware of AI’s risks. His description of AI is spot-on: it is, he writes,
“the attention economy’s ‘killer app’: totally algorithmic pseudo-persons who are sensitive, competent, and infinitely patient; know everything about everyone; and will, of course, be turned to the business of extracting money from us. These systems promise a new mode of attention capture—what some are calling the “intimacy economy” (“human fracking” comes closer to the truth).”
I love the class exercises Burnett conducts with AI. He clearly favors critical engagement with it/them, and some might think that already gives up the game. (Maybe they’re right.) But in the process he asks what I think is the most important question: if AI can do so many of the things we do, and much more quickly and efficiently, what is there that it cannot do, and in fact cannot even touch? That’s the most fundamental question about what it means to be human, and it’s well worth trying to answer.
That’s where the humanities, especially the critical humanities — those always on the lookout for sources of violence and injustice, and for alleviation of that injustice — should be going right now. When science and technology can create such seemingly effective simulacra of certain of our potentials, what is there that remains fundamentally human?
If we can’t convincingly answer that question, then we are indeed lost. Burnett suggests the question itself might even contribute to a kind of renaissance of humanity. As someone whose professional role is to represent something of what’s good and essential about “the humanities,” I can only answer “Yes, I hope so.”
