
GenAI Explorations: Conversation with Cody Hennesy


This fall Extra Points will feature a series of conversations focused on how faculty and staff around the University of Minnesota are using Generative AI to do University work. 

Lauren Marsh and Sara Schoen are kicking off our new series by interviewing Cody Hennesy. Hennesy is a computational research librarian for the Twin Cities campus and a facilitator for the system-wide Emerging Technologies Faculty Fellows Program. In these roles he explores and supports work in Generative AI (GenAI). He is also a lover of nerdy board games.

As a librarian at the University of Minnesota, how do you use Generative AI? 

Cody Hennesy: I use Generative AI in some specific ways. I use it to help with computer programming: it’s really helpful for suggesting Python code snippets, troubleshooting code that doesn’t work, and suggesting more efficient ways to solve specific problems. Code either works or it doesn’t, so it’s often easy to notice when ChatGPT is wrong in this case, and it’s also possible to build in checks to make sure code is working the way you expect.
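To make that concrete, here’s a minimal sketch of what “building in checks” can look like. Everything in it is hypothetical: imagine ChatGPT suggested the small text-cleaning function below, and you pair it with a few quick assertions so a wrong suggestion fails loudly instead of slipping through.

```python
# Hypothetical example: suppose ChatGPT suggested this function to
# normalize whitespace in a text. The function and the test cases are
# illustrative, not taken from the interview.

def normalize_whitespace(text: str) -> str:
    """Collapse runs of whitespace into single spaces and trim the ends."""
    return " ".join(text.split())

# Build in checks: if the suggested code is wrong, these assertions
# fail immediately rather than letting bad output slip through.
assert normalize_whitespace("  hello   world ") == "hello world"
assert normalize_whitespace("") == ""
assert normalize_whitespace("one\ttwo\nthree") == "one two three"
print("All checks passed.")
```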

I also work with faculty or grad students on specific projects. An example is building an AI assistant with a faculty member so students can ask questions about, for instance, a government document. An AI assistant is a version of a chat tool that is prompted to serve a specific purpose, like being an expert in board game rules, for example. Some systems allow you to upload documents to serve as a "knowledge base" to help answer questions with specific reference to those documents. So a faculty member might load a long public government document into the knowledge base and prompt the AI assistant to answer student questions. What I’m looking for then is: does it actually work? What’s not working? What weird behaviors do you get? The challenge is that, unless you’ve read the 800-page government document, it’s really unclear how accurate the results are. It’s hard to judge, but I try to figure out ways to help people understand what might be missing and how to better set up the AI assistant.
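For readers curious what a "knowledge base" does under the hood, here’s a toy sketch in Python. Real platforms handle this behind the scenes; the chunking, keyword scoring, sample document, and question below are illustrative stand-ins, not how any particular product works.

```python
# Toy sketch of the "knowledge base" idea: split a long document into
# chunks, find the chunk most relevant to a question, and hand only
# that chunk to the chat model as context. All names and text are
# illustrative stand-ins.

import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def chunk(text: str, size: int = 10) -> list[str]:
    """Split a document into roughly fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def top_chunks(question: str, chunks: list[str], k: int = 1) -> list[str]:
    """Rank chunks by naive keyword overlap with the question."""
    q = tokens(question)
    return sorted(chunks, key=lambda c: len(q & tokens(c)), reverse=True)[:k]

# Stand-in for an 800-page government document.
document = (
    "Section 1 covers eligibility requirements for the program. "
    "Section 2 describes funding levels and how funding is allocated. "
    "Section 3 lists reporting deadlines and compliance rules."
)
question = "How is funding allocated?"
context = "\n\n".join(top_chunks(question, chunk(document)))

# The kind of prompt an assistant platform might assemble behind the scenes:
prompt = (
    "Answer the student's question using ONLY the excerpts below.\n\n"
    f"Excerpts:\n{context}\n\nQuestion: {question}"
)
print(prompt)
```

The instruction to answer “ONLY” from the excerpts reflects the same idea Hennesy describes: constraining the assistant to the uploaded document rather than the open internet, though as he notes, the accuracy of the answers is still hard to verify.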

Interviewers: What tools are you using to create assistants?

CH: Hugging Face, which is available for free, can create assistants, but you can’t upload your own documents to serve as a knowledge base. I also use a paid version of ChatGPT because you can load documents there. In ChatGPT they’re called GPTs rather than assistants. I haven’t explored many other platforms, just for lack of time.

When we talk with people about Generative AI, they most often think of ChatGPT. Can you contextualize ChatGPT in the field of GenAI?

CH: I’m not an expert, but I can try. OpenAI, which developed ChatGPT, is one of a few companies building these big foundation models, often called frontier models. ChatGPT is trained on a ton of data to do everything rather than one specific task. Contrast this with companies building smaller models to accomplish specific tasks: for example, Bloomberg built a tool to work with financial data. But Google, Meta, and OpenAI are trying to build these huge models that do everything.

The overall goal of OpenAI and ChatGPT is to create a general artificial intelligence. Apple, on the other hand, is looking to add small features for specific tasks on people’s iPhones. They are looking at how Generative AI can help me make an appointment with my hairdresser more quickly, for example.

How is Generative AI different from machine learning?

CH: GenAI is a kind of machine learning, but machine learning is much broader and has many applications in higher ed/academic research. Here’s a classic example of machine learning: you have a training set of labeled x-rays - some show cancer and some do not. Once you train the machine learning model with enough of these examples, you can then give it a new image that it’s never seen, and it will classify it as cancer or not cancer. 

For the past 20 years, scholars have been using machine learning for things like image recognition, classifying texts into different categories, or character recognition on handwritten documents. Those are all AI and machine learning tasks. It’s not about generating new material; it’s about achieving a specific goal. They are effective, and scholars can often measure the accuracy of machine learning tools. If you’ve measured accuracy and you know, for example, that the tool is right 99 percent of the time, then you can implement it in the real world. Generative AI is pretty risky for scholars because it makes stuff up and it’s difficult to measure its accuracy.
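As a concrete illustration of that workflow, here’s a compact sketch using scikit-learn. It isn’t the x-ray study itself; synthetic data stands in for the labeled images, but the shape is the same: train on labeled examples, then measure accuracy on examples the model has never seen.

```python
# Sketch of the classic supervised-learning workflow: train on labeled
# examples, then measure accuracy on held-out examples the model has
# never seen. Synthetic data stands in for labeled x-ray images.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic "images": feature vectors labeled 1 (cancer) or 0 (not cancer).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Because the held-out labels are known, accuracy is directly measurable,
# which is the property Hennesy contrasts with Generative AI output.
print(f"Accuracy on unseen examples: {accuracy_score(y_test, model.predict(X_test)):.1%}")
```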

What conversations should instructors be having with their students at the beginning of the semester?  

CH: First, students and faculty should know that Generative AI makes stuff up, which is called hallucinating.

Second, we should be talking about bias. Generative AI is trained in part on sources from the internet, which include a lot of ageist, sexist, racist, and homophobic content. These tools reproduce those kinds of biases because that’s what they’ve learned from. For instance, research has shown that when Generative AI is used to help screen job candidates, it will often discriminate, recommending white and male candidates more often than women or people of color.

My sense is that faculty are often telling students that they can or cannot use Generative AI, but they aren’t always providing more context. Instructors should share WHY they allow or don’t allow use of GenAI, including concerns about bias, and it would also help to share specific contexts in which GenAI can be especially useful or risky.

Beyond the basics, it really depends on your discipline. Applicable tools and practices are discipline specific. Your discipline’s professional organizations should be offering some guidelines or applicable tools that could be used. 

When you use GenAI, it’s important that you acknowledge how you used it by providing a citation. The Libraries have a great resource guide on how to cite AI to help you get started.

If you’re not comfortable having these conversations or feel that you don’t know enough, invite your subject librarian to your class! We’re happy to help.

And finally, how do you personally use GenAI? 

CH: My sister is really into complex board games, and my partner hates learning complex board games. So I asked ChatGPT to help teach the rules by providing a simple three-minute overview of a game we were learning. It seemed like a great response, but it’s unclear whether the instructions were right, because it’s probably just pulling from the general internet. It’s not like someone has uploaded all the board game instruction manuals. So you have to go check the actual manual to even know if it’s giving you the right instructions. I think it probably is making stuff up.

Interviewers: What’s your favorite board game? 

CH: Well, we haven't played for a little bit as it’s sort of a winter thing, but Wingspan is probably the one that we like the best. Have you ever played that? It's really beautiful, like it's just all these birds. And it's also mellow. It's not too competitive.

Thank you Cody! 

If you’d like to hear even more from Cody Hennesy, he will be presenting at “Integrating AI into your assignments and exploring the pedagogical implications,” a hands-on session that includes pre-work in advance of a live workshop on August 28.

Cody kindly put together this video on AI Ethics & Efficacy that dives into hallucinations, bias, discrimination, your privacy & copyright, ideas around using (or not using) student and faculty work in GenAI tools and other general tips. 


Like what you read? Subscribe to Extra Points to receive an email notification when the next blog post goes live.