
Gen AI Explorations: Conversation with Faculty Fellow Mark Collier

 This spring Extra Points features a series of conversations focused on how faculty and staff around the University of Minnesota are using generative AI to do University work. 

Interview with Mark Collier about his use of AI tutors in a philosophy class
Adam Brisk (Information Technology Systems and Services) and Bill Rozaitis (Center for Educational Innovation) interviewed Emerging Technologies Faculty Fellow Mark Collier, Professor of Philosophy at UMN Morris. The following has been revised for length and clarity.

Tell us about your role in the Philosophy department and how that is informing your work with generative AI.

Mark Collier: I'm a professor of philosophy. I teach a wide variety of courses, including the history of philosophy, artificial intelligence and ethics, philosophy of mind, and biomedical ethics. Because it's a small department, we have broad coverage. My research has primarily focused on the philosophy of David Hume.

When I entered grad school at UC San Diego, I joined an interdisciplinary program in cognitive science and philosophy. My advisor, Paul Churchland, was exploring the philosophical implications of artificial intelligence, with a particular focus on neural networks, known as connectionism back then. My own research project built on this foundation, using insights from cognitive science, especially neural network models, to evaluate Hume’s account of the mind as a pattern-matching and associative machine.

Tell us about your Gen AI Project

MC: My project centered around introducing AI tutors into my class on ethics and artificial intelligence. I began by creating a Socratic tutor. After completing the assigned readings for Monday and Wednesday, students would formulate a question, engage in a one-on-one tutoring session with the bot, and then post a transcript of that conversation to Canvas. We would use those transcripts as the basis for in-class discussion every Friday.

Building the initial tutor was relatively straightforward. I created a custom GPT with instructions to adopt a Socratic approach and uploaded the course materials into its knowledge base. The goal was to give students access to a tutor who was available 24/7 and who was programmed to lead them to a deeper understanding of the course content. 

At the end of the term, I conducted an evaluation to find out what students thought; the reactions were mixed. One major issue was that the AI struggled to maintain a purely Socratic role. Despite repeated instructions not to answer questions but only to ask them, the GPT often defaulted to being dogmatic. I came to realize that this behavior stemmed from system-level instructions embedded in custom GPTs, which bias the model toward answering user questions. These conflicting directives produce a kind of cognitive dissonance for the AI.

Since then, I’ve begun developing a new version of a Socratic tutor with a developer, supported by funds from the Emerging Technologies Faculty Fellowship. We discovered that you have to build your own custom interface to bypass the default instruction layer of custom GPTs. With this new architecture, the tutor now behaves as intended.
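The approach described above can be illustrated with a minimal sketch. This is not Professor Collier's actual implementation; it is a hypothetical example of the general idea, which is that a custom interface calls the model API directly, so the Socratic instructions are the only system-level directive the model sees rather than being layered on top of a platform's hidden defaults. The prompt text, model name, and function names here are all illustrative assumptions.

```python
# Illustrative system prompt for a Socratic tutor (wording is an assumption,
# not the actual instructions used in the course).
SOCRATIC_PROMPT = (
    "You are a Socratic tutor for a course on ethics and artificial "
    "intelligence. Never answer the student's question directly. Instead, "
    "ask one probing question at a time that leads the student toward "
    "their own answer."
)

def build_request(history, student_message, model="gpt-4o"):
    """Assemble the full message list a custom interface would send on each
    turn. Because the interface owns the request, the system prompt above is
    the only system-level instruction in play."""
    messages = [{"role": "system", "content": SOCRATIC_PROMPT}]
    messages.extend(history)  # prior user/assistant turns, oldest first
    messages.append({"role": "user", "content": student_message})
    return {"model": model, "messages": messages}

# In a real interface this payload would go to a chat-completions endpoint,
# e.g. (requires an API key and network access):
#   from openai import OpenAI
#   reply = OpenAI().chat.completions.create(**build_request(history, msg))

request = build_request(
    history=[
        {"role": "user", "content": "Why do we laugh?"},
        {"role": "assistant", "content": "What happens in you when you laugh?"},
    ],
    student_message="I feel a kind of release.",
)
print(request["messages"][0]["role"])  # the system instruction comes first
print(len(request["messages"]))
```

The design point is simply that owning the request pipeline removes the conflicting directives: there is no platform-supplied instruction biasing the model toward answering questions, only the tutor's own.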

How have your students reacted?

MC: At first, students struggled to use the AI tutor because they weren't sure what they wanted to ask. They would start conversations like, "Hello," and the session would stall. It took students about two weeks to learn that the key to a successful interaction was to be clear about what you want to ask. I provided them with a guide for using the Socratic tutor which explained that, after reading the course material, they should:

  1. identify the big questions they have,
  2. write them down, and 
  3. take time to craft them into a clear and focused paragraph. 

Once students learned to formulate precise questions, the quality of the conversations improved dramatically. This point is crucial because asking the right questions is arguably one of the most valuable outcomes of a liberal arts education, especially for jobs in AI and technology. Philosophy students in particular would make good prompt engineers because they become really good at analytical thinking and figuring out what they really want to understand. 

In general, I was impressed by the conversations between students and the AI tutor. I read hundreds of transcripts and I never thought the tutor’s responses were shallow or missed the point. And while the initial version of the tutor often lapsed into dogmatism, it was also amazingly encouraging. 

How are peers responding?

Collier trains the Hume bot by providing it access to works by Hume.

MC: I gave a presentation at the Academy for Distinguished Teachers in the fall, and people seemed excited about the AI tutor. Many faculty members saw the potential of integrating similar tools into their own classrooms. I would add a note of caution here, though: I came to realize that building a truly effective AI tutor is more complex than it initially appears. It really requires a team to navigate the technical and design challenges involved.

Since that presentation, I’ve expanded the scope of the project. I created an application called "Conversations with Philosophers" where students can interact with philosophers in a variety of modes—conversational, debate, or advisor. I’ve also designed a series of educational games customized for my modern philosophy course where students can engage with philosophical concepts in a more fun and interactive format. These games can also help students prepare for class and enhance in-person discussions. 

Can you give us an example of AI used in the context of your philosophy classes? 

MC: In my modern philosophy class, we look at a debate between Thomas Hobbes and Francis Hutcheson over the nature of laughter. The central question is, "Why do we laugh?" Hobbes defends what's known as the superiority theory, like a Simpsons laugh, where we laugh because we feel superior to others. In contrast, Hutcheson argues that laughter is more sociable and merely responds to incongruities, like when you're giving an important speech and your pants fall down. At its core, this isn't only a debate about humor; it's a clash between two views of human nature.

Hutcheson's critique of Hobbes is based on his analysis of a satirical poem by an 18th-century writer, which has a Scottish Presbyterian sense of humor that falls flat with contemporary students. So, I began to think it would be more fun to allow students to bring in their own examples of humor and analyze those instead.

To support this, I developed an AI-based game set in a virtual comedy cellar. Students begin by selecting jokes from a “laughter menu.” They can either tell their own joke or choose from a curated set of examples from comedians like Richard Pryor, Groucho Marx, or Mitch Hedberg. They can then analyze the jokes to test Hobbes’ and Hutcheson’s rival theories. 

What's amazing about these GPT games is that they have access to so much information. If a student asked me to provide ten jokes for analysis, I'd obviously struggle. But the AI can instantly draw from a vast archive of comedic material across genres and eras. This makes the analysis of the debate more engaging and gives students a richer set of materials for exploring what humor is and how it works. 

What role will this technology play in your courses ongoing?

MC: In a course on ethics and artificial intelligence, the use of AI tools was particularly effective. But I'm not advocating for the use of generative AI in every class; I don't use it in most of my other courses. For example, I'm teaching a small seminar on ancient philosophy right now, and I don't think generative AI is appropriate in that context. The models often perform poorly, at least for now, when it comes to the nuances of ancient texts. In fact, I've banned technology altogether in that class: no laptops, no PowerPoints; just paper, pencils, and the print texts. It feels more like 1895 than 2025. We focus entirely on close reading and discussion, and interestingly, my students seem to be thriving with this slower pace. Clearly, there is no one-size-fits-all approach. The challenge going forward is to figure out when, and when not, to use AI tools in our classrooms.

Resources

To learn more and see demos of gen AI educational tools, watch Professor Collier’s Founders Scholar presentation, March 2025.

"The Future of the Liberal Arts in the Age of Artificial Intelligence" | Mark Collier