Like a lot of university faculty these days, I’ve been thinking about, and testing out, chatbots like OpenAI’s ChatGPT and Google’s Bard. In fact, I’ve been quizzing them on various things.
They have answered some of my questions with general-consensus knowledge. For instance, on whether or not it’s too late for humanity to successfully respond to the climate crisis so as to “preserve a civilizationally-conducive climate,” both gave reasonable, concise, “first shot” answers to the question, in the manner of “on the one hand” this, “on the other hand” that. ChatGPT-3.5 assessed the “too late” probability at 20-30% and “not too late” at 70-80%. Bard assessed the first at 40% and the second at 60%. (Bard has a somewhat more up-to-date database.) Both provided the kinds of responses you’d expect from students who’ve read IPCC report summaries and a smattering of other popular writings.
But on some issues they plainly make stuff up. Asked who are the most important authors and writings that have comparatively analyzed the philosophies of A. N. Whitehead and C. S. Peirce, Bard provided a few surprisingly reasonable answers, in no logical order (Corrington, Griffin, Hartshorne, Cobb), but threw in a made-up name, “Donna Orange,” a “professor of philosophy at the University of Vermont” and author of the book “Peirce’s Pragmatism: The Logic of Chance.” No such person is or has been at the University of Vermont (my university for the last twenty years). An actual Donna Orange, who works for NYU, wrote her doctorate (and later a small book) on Peirce’s theism, but not with that title.
Other responses get more wildly fictional. Asked about Ukrainian ecofeminist philosophers, Bard invented two out of four people from scratch, along with books they have supposedly written. Asked to create a course syllabus on “environment in world cinema,” Bard either made up or seriously mangled every single book (or author) it listed. (For the record, ChatGPT didn’t list books, just films, and otherwise tended to do better.)
The usual explanation for this chatbot creativity seems to be that their creators have programmed them to give plausible-sounding answers even when they aren’t sure of those answers (as if AIs could be “sure” of anything). In their haste to respond quickly, they blend facts together into “believable” responses. Clearly, the imperative to satisfy their querents comes before the imperative to be accurate.
But the U. of Vermont reference made me wonder: did Bard throw that in as a kind of Hitchcockian MacGuffin, an empty plot-forwarding device meant to deflect from the fact (while still suggesting it) that Bard is actually playing with me?