Like a lot of university faculty these days, I’ve been thinking about, and testing out, chatbots like OpenAI’s ChatGPT and Google’s Bard. In fact, I’ve been quizzing them on various things.
They have answered some of my questions with general-consensus knowledge. For instance, on whether or not it’s too late for humanity to respond successfully to the climate crisis so as to “preserve a civilizationally-conducive climate,” both gave reasonable, concise “first shot” answers, in the manner of “on the one hand” this, “on the other hand” that. ChatGPT-3.5 assessed the “too late” probability at 20-30% and “not too late” at 70-80%. Bard assessed the first at 40% and the second at 60%. (Bard has a slightly more up-to-date database.) Both provided the kinds of responses you’d expect from students who’ve read IPCC report summaries and a smattering of other popular writings.
But on some issues they plainly make stuff up. Asked who are the most important authors and writings that have comparatively analyzed the philosophies of A. N. Whitehead and C. S. Peirce, Bard provided a few surprisingly reasonable answers, in no logical order (Corrington, Griffin, Hartshorne, Cobb), but threw in a made-up name, “Donna Orange,” a “professor of philosophy at the University of Vermont” and author of the book “Peirce’s Pragmatism: The Logic of Chance.” No such person is or has been at the University of Vermont (my university for the last twenty years). An actual Donna Orange, who works for NYU, wrote her doctorate (and later a small book) on Peirce’s theism, but not with that title.
Other responses get more wildly fictional. Asked about Ukrainian ecofeminist philosophers, Bard invented two out of four people from scratch, along with books they have supposedly written. Asked to create a course syllabus on “environment in world cinema,” Bard either made up or seriously mangled every single book (or author) it listed. (For the record, ChatGPT didn’t list books, just films, and otherwise tended to do better.)
The usual explanation for this chatbot creativity is that their creators have programmed them to give seemingly reasonable answers even when they aren’t sure of those answers (as if AIs could be “sure” of anything). In their haste to respond, they blend facts together into “believable” responses. Clearly, the imperative to satisfy their querents comes before the imperative to be accurate.
But the U. of Vermont reference made me wonder: did Bard throw that in as a kind of Hitchcockian MacGuffin, an empty plot-forwarding device meant to deflect from the fact (while still suggesting it) that Bard is actually playing with me?
So, what do I make of the threat AI chatbots pose to student learning?
Not much, so far. It’s really the evaluation of student learning that is threatened (how do we know our students aren’t relying on chatbots for their answers?). The arms race between students using AI tools to answer questions and educators using AI tools to catch them out will be entertaining to watch. (I share my conversation with ChatGPT about this in the appendix below.)
Chatbots do reasonably well with general questions on topics about which a lot has been written — such as “Compare Gilles Deleuze’s and Maurice Merleau-Ponty’s philosophies of nature.” Or on topics they’re expected to have good answers for, like “What impacts will artificial intelligence have on higher education?” (See the Appendix below for the latter.) They do significantly less well with more projective and speculative questions on very specific topics, like “How might Alfred North Whitehead have evaluated the later writings of Maurice Merleau-Ponty? Write an essay on Merleau-Ponty’s idea of nature, written in the style of Whitehead.”
Asked to “write a poem on the Covid-19 pandemic in the style of T. S. Eliot,” Bard refused to do it at all. ChatGPT-3.5, on the other hand, was asked twice and wrote two different, but remarkably similar, poems that a literate undergrad might have written. Were they creative? No. Were they in the style of Eliot? In a caricaturish way, yes (which is still impressive, for a machine). But I don’t think Eliot would have been impressed.
Meanwhile, both chatbots I tested failed to follow basic instructions (such as “write a 3,000 word essay,” for which they wrote well under 1,000 words each).
At this point, I would say that what chatbots provide is reasonable facsimiles of answers to difficult questions — “templates” of what an answer could look like — but that the answers themselves can leave a lot to be desired. We teach our students the skills to provide much better answers than chatbots do. They remind me of the way PowerPoint “packaged” people’s thinking into bullet-pointed, cue-card sized thoughts, but eroded the capacity to think laterally, creatively, and expansively. (But that’s only if you refrain from experimenting with PowerPoint or its rivals, all of which provide powerful visual and communicative tools if you’re willing to work with them. I tend to use PowerPoint for talks, though often in an unrecognizable form.)
Should we ban chatbots for educational purposes (as a colleague of mine has suggested)? That would be silly. Students can’t be prevented from using them, and should learn about their strengths and weaknesses. Among their strengths is that they can provide reasonable starting points for answering a question, rather as Wikipedia provides reasonable starting points for understanding a topic. Among their weaknesses is that they don’t give you much to go on: their crediting of sources is minimal, and typically nonexistent unless you specifically ask for it.
They also make shit up.
All of that, of course, might change.
But then there are their genuine risks. By working with massive databases, and by not being open about their own parameters, limits, and biases, AI chatbots and other algorithmic, machine-learning systems ingrain biases into evolving informational systems that are becoming ever more powerful. Their power is based on their capacity to elicit and respond to human needs and desires in ways that serve the interests of private entities. And their power is growing in part because they aren’t limited to textual information — the sort of thing my students might be doing with ChatGPT or Bard. They also work with images, sounds, and environments that operate on multiple registers of our perception, cognition, emotion, and sensibility.
Like digital media in general, then — which is what so-called artificial intelligence is a form of — these technologies need more regulation and management in the direction of serving the public interest. That remains the struggle over all digital media. And the stakes keep getting higher.
Appendix
Below, I’m sharing several examples of ChatGPT-3.5’s responses to my questions about AI’s potential impacts on higher education.
As you’ll see, these are reasonable, general-consensus answers. If you have no time to research these questions yourself, consulting a chatbot is a bit like asking a smart and very logical colleague, a Mr. Spock or Data, what they think about something. They start to get weird only if you take them to weird places. But you can also never really be sure where they’re coming from (as in, the limits of their databases, training, etc.), or where they’re taking you (as in, what they’re doing with your data, and what their creators’ financial motivations and goals are). They are our new co-workers, from another planet. And they may have come for your job.
Me:
What impacts will artificial intelligence, including programs like ChatGPT, have on higher education, and how do you recommend that educators and students adapt to it?
ChatGPT:
Artificial intelligence, including programs like ChatGPT, is poised to have a significant impact on higher education in various ways. Here are some potential impacts and recommendations for educators and students to adapt:
Impacts on Higher Education:
- Personalized Learning: AI can tailor educational content and learning experiences to individual student needs, learning styles, and paces. This could lead to more effective and efficient learning outcomes.
- Virtual Classrooms and Remote Learning: AI-powered virtual classrooms, online platforms, and chatbots can facilitate remote learning, making education more accessible and flexible for students around the world.
- Automated Grading and Feedback: AI can assist educators in grading assignments, providing timely feedback, and identifying areas where students might be struggling, allowing instructors to focus on higher-order teaching tasks.
- Curriculum Design: AI can analyze data on student performance and industry trends to help institutions design relevant and up-to-date curricula that align with the demands of the job market.
- Research and Data Analysis: AI can aid researchers in data analysis, literature review, and hypothesis generation, potentially accelerating the pace of scientific discovery.
- Language Translation and Cross-Cultural Communication: AI can facilitate communication and collaboration among students and educators from diverse linguistic and cultural backgrounds.
Recommendations for Educators:
- Embrace AI as an Aid: Educators should view AI as a tool to enhance their teaching rather than a replacement. Utilize AI-powered tools to streamline administrative tasks, offer personalized support, and create engaging learning materials.
- Develop Digital Literacy: Educators should invest time in understanding how AI works and its potential applications in education. This knowledge will enable them to make informed decisions about integrating AI into their teaching practices.
- Create Ethical Guidelines: Establish ethical guidelines for AI usage in education, including issues related to data privacy, bias mitigation, and transparency. Ensure that AI technologies are used in ways that prioritize student well-being.
- Foster Critical Thinking: While AI can provide answers and solutions, educators should continue to emphasize critical thinking, problem-solving, and creativity in their teaching to develop skills that AI cannot replicate.
Recommendations for Students:
- Develop Digital Literacy: Students should become familiar with AI technologies and their capabilities. Understanding how AI works will help them effectively use AI-powered tools for learning and research.
- Manage Time and Distractions: While AI can enhance learning, it’s important for students to strike a balance between AI-assisted learning and other activities. Develop time management skills to avoid distractions and maintain focus.
- Lifelong Learning Mindset: AI is constantly evolving, and students should adopt a mindset of continuous learning to stay updated on new technologies and their implications for their field of study.
- Collaboration Skills: Engage in collaborative projects and discussions with peers from diverse backgrounds, taking advantage of AI-powered communication tools for effective cross-cultural communication.
- Critical Evaluation: When using AI-generated content or information, students should critically evaluate its reliability, credibility, and potential biases.
In conclusion, the integration of AI into higher education has the potential to revolutionize learning experiences, making them more personalized, efficient, and accessible. Educators and students should work together to harness the benefits of AI while also upholding ethical considerations and maintaining the essential human elements of education.
Me:
How should educators tailor their assignments so that students do not over-rely on bots like ChatGPT? What methods would ensure that students are responding critically and creatively to the questions we ask them on assignments?
ChatGPT:
To ensure that students do not over-rely on AI-powered tools like ChatGPT and to encourage critical and creative thinking in their assignments, educators can employ several strategies:
- Clearly Define Assignment Objectives: Clearly articulate the learning objectives and goals of each assignment. Make it explicit that the assignment is designed to assess critical thinking, creativity, problem-solving, and other higher-order skills, rather than just information retrieval.
- Limit AI Usage: Specify guidelines for AI tool usage, such as allowing students to use AI for initial research or inspiration but requiring them to synthesize and present the information in their own words and ideas. Restrict the use of AI for certain parts of the assignment, if necessary.
- Provide Open-Ended Questions: Frame assignments with open-ended questions that require students to analyze, evaluate, and synthesize information. Encourage them to formulate their own arguments, opinions, and solutions.
- Require Original Thought: Emphasize the importance of originality and creativity. Ask students to provide personal insights, real-world examples, or innovative solutions that go beyond what AI tools can generate.
- Use Peer Review: Incorporate peer review sessions where students evaluate each other’s work. This can help students recognize the value of diverse perspectives and stimulate critical thinking through constructive feedback.
- Group Discussions and Debates: Assign group discussions or debates on complex topics. Encourage students to engage in collaborative problem-solving and defend their viewpoints with evidence and reasoning.
- Case Studies and Real-World Scenarios: Present students with real-world scenarios or case studies that require them to apply their knowledge, analyze situations, and propose solutions based on critical thinking.
- Reflection and Self-Assessment: Include reflection components in assignments where students explain their thought processes, decision-making, and the rationale behind their responses. This encourages metacognition and self-assessment.
- Iterative Assignments: Design assignments that involve multiple stages, allowing students to refine and improve their work based on feedback. This promotes continuous learning and refinement of critical thinking skills.
- Incorporate Varied Formats: Offer assignments in a variety of formats, such as essays, presentations, debates, projects, or creative works. This accommodates different learning styles and encourages students to think creatively.
- Rubrics for Assessment: Develop detailed rubrics that explicitly outline the criteria for evaluating critical thinking, creativity, originality, and depth of analysis. Share these rubrics with students to provide clarity on expectations.
- Engage with Current Events: Integrate current events, controversies, or emerging trends into assignments. This encourages students to apply their critical thinking skills to real-time issues.
- Encourage Diverse Sources: Suggest a range of sources beyond AI-generated content, such as scholarly articles, primary sources, interviews, or field research. This encourages students to explore various perspectives.
By implementing these strategies, educators can guide students toward developing essential critical thinking and creative skills, while also leveraging the benefits of AI tools for educational support and enhancement.
Me:
If a student presents something as their own work, but their professor thinks it is mostly retrieved from an AI program like ChatGPT, how can the professor determine whether and how much of the work was accomplished by the student?
ChatGPT:
Detecting the extent to which a student’s work has been derived from an AI program like ChatGPT can be challenging, but there are several approaches that a professor can take to make a fair assessment:
- Comparison with Known Writing Style: Professors who are familiar with a student’s writing style may notice significant deviations or inconsistencies in the submitted work. They can compare the writing in the assignment with the student’s previous assignments or writing samples to identify any discrepancies.
- Inconsistencies in Language Proficiency: If the assignment’s language complexity or vocabulary significantly exceeds the student’s demonstrated language proficiency, it could indicate that the content was largely generated by an AI program.
- Specificity and Originality: AI-generated content might lack specific details, personal insights, or original thought that are expected from a student’s work. Professors can look for signs of generic or boilerplate content that doesn’t reflect the student’s unique perspective.
- Conversational or Informal Language: AI-generated content can sometimes exhibit a slightly unnatural or overly formal tone, or it might display conversational language that doesn’t match the student’s typical writing style.
- Topic Mastery: Professors can assess the student’s understanding of the topic by asking probing questions related to the assignment. If the student struggles to explain or elaborate on the content, it might indicate that they relied heavily on AI-generated material.
- In-Person Discussion: Engage the student in a conversation about their assignment during office hours or in-person meetings. This can help the professor gauge the student’s depth of knowledge and understanding beyond what is presented in the written work.
- Plagiarism Detection Tools: Utilize plagiarism detection tools that are specifically designed to identify AI-generated or heavily copied content. These tools can flag similarities between the submitted work and existing online sources or AI-generated texts.
- Student Engagement: Evaluate the student’s engagement with the assignment throughout the learning process. If the student’s work shows a lack of progression, iteration, or evidence of learning, it might indicate that they did not fully engage with the assignment.
- Request Drafts and Process Documentation: Ask students to submit drafts, outlines, or process documentation that demonstrate their iterative approach to the assignment. This can provide insights into their thought process and original contributions.
- Collaboration Indicators: If the assignment is meant to be individual work, look for signs of collaboration or sharing with others, including consistent phrasing across multiple submissions.
It’s important for professors to approach this situation with fairness, open communication, and a focus on the student’s growth and learning. If there are suspicions of AI-generated content, a respectful conversation with the student can help clarify their contributions and understanding of the assignment. Providing guidance on proper citation, research practices, and critical thinking can also help prevent future instances of over-reliance on AI tools.
Me:
What are the best plagiarism detection tools available today (for university writing assignments)?
ChatGPT:
As of my last knowledge update in September 2021, several plagiarism detection tools are widely used by universities and educators to identify potential instances of plagiarism in student writing assignments. Please note that the availability and effectiveness of these tools may have changed since then. Here are some of the well-known plagiarism detection tools:
- Turnitin: Turnitin is one of the most widely used plagiarism detection tools. It checks submitted papers against a vast database of academic and non-academic content to identify similarities and potential instances of plagiarism.
- Grammarly: In addition to its grammar and spell-check features, Grammarly also offers a plagiarism checker that scans text for similarities and provides a similarity score.
- Copyscape: Copyscape is primarily used to check web content for plagiarism, making it useful for identifying potential instances of online content copying.
- Plagscan: Plagscan provides a comprehensive plagiarism detection service that scans documents against a large database of academic and professional content.
- Unicheck: Unicheck offers plagiarism detection and similarity checking services, integrating with various learning management systems (LMS) used in educational institutions.
- WriteCheck (by Turnitin): WriteCheck is a sister service to Turnitin and allows students to check their work for plagiarism before submitting it for assessment.
- DupliChecker: DupliChecker offers a free online plagiarism checker that scans text for similarities and provides a percentage similarity score.
- Moss (Measure Of Software Similarity): While originally designed for detecting similarities in computer programming code, Moss has also been used for identifying similarities in text-based assignments.
- Ephorus: Ephorus is a plagiarism detection system that is often used by educational institutions to identify similarities in student submissions.
- SafeAssign: SafeAssign is integrated with Blackboard, a popular learning management system, and is used to check student submissions against a database of academic papers, websites, and other content.
It’s important to note that while these tools can be helpful in identifying potential instances of plagiarism, they should be used in conjunction with other assessment methods and educators’ judgment. No tool is foolproof, and it’s essential for educators to review the results and context to make informed decisions.
Before adopting a specific plagiarism detection tool, universities and educators should consider factors such as the tool’s accuracy, ease of integration with their existing systems, cost, user interface, and the level of customer support provided by the tool’s provider. Additionally, staying updated with the latest developments in plagiarism detection technology is crucial to ensure that the chosen tool aligns with current best practices.
Me:
In the long run, how will AI programs like ChatGPT impact higher education? What will change, or what should change, as artificial intelligence improves its capacity to retrieve and generate knowledge?
ChatGPT:
AI programs like ChatGPT are likely to have a profound and transformative impact on higher education in the long run. Several changes are anticipated, and some adjustments should be considered as AI continues to improve its capacity to retrieve and generate knowledge:
1. Personalized and Adaptive Learning: AI can provide highly tailored learning experiences, adapting content and teaching methods to individual student needs and preferences. This could lead to more efficient and effective learning outcomes by catering to different learning styles and paces.
2. Augmented Teaching and Administrative Tasks: Educators can leverage AI to automate administrative tasks such as grading, attendance tracking, and scheduling, allowing them to focus more on engaging with students, facilitating discussions, and providing personalized support.
3. Enhanced Research and Data Analysis: AI can significantly speed up data analysis, literature review, and hypothesis generation for research purposes, leading to faster scientific discoveries and advancements across various disciplines.
4. Accessible and Inclusive Education: AI-powered tools can make education more accessible to individuals with disabilities by offering features like real-time transcription, language translation, and text-to-speech capabilities.
5. Redefined Roles for Educators: Educators may take on more mentorship and facilitation roles, guiding students in critical thinking, problem-solving, and creative applications of knowledge, rather than focusing solely on content delivery.
6. Integration of AI Ethics and Literacy: Higher education institutions should prioritize teaching students about AI ethics, bias, transparency, and data privacy to ensure responsible and informed usage of AI technologies.
7. Shift in Assessment Approaches: Traditional assessment methods may evolve to include more performance-based evaluations, project-based assessments, and practical demonstrations of skills, focusing on applied knowledge and problem-solving.
8. Lifelong Learning and Upskilling: AI-driven education could facilitate continuous learning and upskilling throughout individuals’ careers, helping them stay relevant in rapidly changing job markets.
9. Global Collaborations and Cross-Cultural Exchange: AI-powered communication tools can facilitate collaboration among students and educators from diverse cultural and geographical backgrounds, enabling cross-cultural exchange and international learning experiences.
10. Adaptation of Curricula: Educational institutions may need to update curricula more frequently to align with the evolving demands of the job market and to integrate new AI-related disciplines.
11. Ethical Use of AI-Generated Content: Students will need to be educated about when and how to use AI-generated content ethically, including proper attribution and understanding when human input is required.
12. Creativity and Originality Emphasis: With AI handling more routine tasks, there could be a renewed emphasis on fostering creativity, critical thinking, and original thought in students.
13. Embrace of Hybrid Learning Models: Educational institutions might adopt hybrid learning models that combine AI-powered online resources with in-person interactions, providing flexibility and personalized learning experiences.
As AI continues to improve its knowledge retrieval and generation capabilities, higher education must adapt to ensure that students are equipped with the skills and mindset needed to thrive in an AI-augmented world. This includes fostering a balance between AI-assisted learning and the development of uniquely human attributes like creativity, empathy, ethical decision-making, and complex problem-solving.
Me:
What are the long-term risks of artificial intelligence to human society?
ChatGPT:
Artificial intelligence (AI) holds great potential to benefit society in various ways, but it also presents certain long-term risks that need to be carefully managed. Some of the potential risks of AI to human society include:
1. Job Disruption and Economic Inequality: AI-driven automation could lead to job displacement in certain industries, potentially causing economic inequality and social disruption. Workers in roles susceptible to automation might face challenges in transitioning to new types of employment.
2. Dependence on AI Systems: As AI becomes more integrated into daily life, there is a risk of overdependence on AI systems for critical decision-making, which could lead to reduced human agency, loss of skills, and susceptibility to failures or malicious manipulation of AI systems.
3. Bias and Discrimination: AI algorithms can inherit biases present in the data they are trained on, leading to discriminatory outcomes in areas such as hiring, lending, and criminal justice. This could perpetuate existing societal inequalities and injustices.
4. Privacy Concerns: The proliferation of AI-powered surveillance systems, data collection, and facial recognition technologies raises significant privacy concerns, eroding personal freedoms and potentially leading to mass surveillance.
5. Ethical Dilemmas and Autonomy: AI can pose ethical challenges related to decision-making in critical situations, such as autonomous vehicles deciding who to prioritize in a potential accident. These dilemmas raise questions about how machines should value human life and the extent of human oversight.
6. Loss of Human Connection: Overreliance on AI-driven communication and social interaction might lead to a decrease in meaningful human connections, affecting mental health and societal cohesion.
7. Security Risks: As AI becomes more sophisticated, it could be exploited for cyberattacks, misinformation campaigns, or the creation of sophisticated deepfakes, which could have far-reaching consequences for politics, business, and society.
8. Control and Accountability: Developing AI systems that are transparent, explainable, and controllable is a challenge. The lack of accountability for AI systems’ decisions and actions could lead to unforeseen consequences.
9. Disruption in Healthcare and Biotechnology: Advances in AI could lead to unprecedented breakthroughs in healthcare and biotechnology, but they also raise concerns about genetic manipulation, biosecurity, and the potential for unintended consequences in medical treatments.
10. Existential Risk: In the long term, there is speculation about the potential for AI to surpass human intelligence, leading to scenarios where AI systems could make decisions that are difficult to predict and potentially detrimental to humanity’s survival.
Addressing these risks requires a combination of regulatory frameworks, responsible AI development, collaboration between governments and technology companies, public awareness, and ongoing research to ensure that AI is developed and deployed in ways that prioritize human well-being, safety, and ethical considerations.
And, finally, a separate question relevant to my Media and the Environment course…
Me:
How could social media be developed to serve the public interest rather than commercial/private interests?
ChatGPT:
Developing social media to serve the public interest rather than commercial or private interests is a complex endeavor that requires a combination of regulatory measures, technological innovations, and shifts in business models. Here are some potential approaches that could contribute to this goal:
1. Transparent Algorithms and Content Moderation: Social media platforms should provide transparency about their algorithms and content moderation processes. Users should have a clear understanding of how their feeds are curated and how content is ranked. Openness can help prevent hidden biases and promote fair content distribution.
2. User Empowerment and Data Ownership: Give users more control over their data and the content they see. Empower users to customize their algorithms, decide what data is shared, and have the ability to easily migrate their data to other platforms. This could reduce the platform’s ability to exploit user data for commercial gains.
3. Non-Profit or Cooperative Models: Explore alternative ownership and governance models, such as turning social media platforms into non-profit organizations or cooperatives. This could shift the focus away from profit maximization and prioritize the well-being of users and society.
4. Public Funding and Oversight: Support the development of social media platforms through public funding or grants, with the aim of reducing dependence on advertising revenue. Independent oversight bodies could ensure that platforms adhere to public interest goals and ethical standards.
5. Civic Engagement and Deliberative Spaces: Design features that encourage constructive civic engagement, informed discussions, and deliberative spaces for meaningful dialogue. Promote fact-checking, critical thinking, and respectful interactions to counter misinformation and polarization.
6. Digital Literacy and Media Education: Invest in digital literacy and media education programs to help users develop critical thinking skills, better navigate social media, and distinguish between reliable and unreliable sources of information.
7. Ethical Advertising Practices: Implement strict guidelines for advertising, ensuring that ads are not deceptive, misleading, or harmful. Promote transparency in political advertising and prevent micro-targeting based on sensitive personal information.
8. Peer Review and Accountability: Implement mechanisms for peer review and external audits of platform policies, algorithms, and content moderation practices to hold platforms accountable for their impact on society.
9. Decentralization and Interoperability: Encourage the development of decentralized social media platforms that allow users to interact across different networks. Interoperability would reduce the monopoly power of large platforms and promote competition.
10. Long-Term Perspective and Sustainable Metrics: Measure success based on long-term societal benefits, user well-being, and positive impact on democracy rather than short-term metrics like user engagement or ad revenue.
11. Global Collaboration and Standards: Promote international collaboration to establish ethical guidelines, standards, and regulations for social media platforms, ensuring a consistent approach to protecting public interests across borders.
Developing social media for the public interest is a multifaceted challenge that requires the cooperation of governments, technology companies, civil society, and users themselves. A combination of regulatory actions, technological innovations, and cultural shifts is necessary to create a digital environment that fosters positive social interactions, informed discourse, and the well-being of individuals and society as a whole.