
Gen AI Explorations: Conversation with Faculty Fellow Rob Erdmann

This spring Extra Points features a series of conversations focused on how faculty and staff around the University of Minnesota are using generative AI to do University work.

Interview with Rob Erdmann about his use of gen AI in health science education
Lauren Marsh (Academic Technology Support Services) interviewed Emerging Technologies Faculty Fellow Robert Erdmann, Assistant Professor of Bioinformatics/Data Science at UMN Rochester. The following has been revised for length and clarity.

Tell us about your role in the Center for Learning Innovation at Rochester, and how that informs your work with generative AI.

Robert Erdmann: UMR is organized into a single department, the Center for Learning Innovation (CLI). This means that faculty from all disciplines - biology, math, writing, public health - are in the same unit. This lowers a lot of barriers to cross-disciplinary interactions and collaborations. You get to talk to a lot of people who aren't in your field, and it gets us thinking about teaching in different ways. I don't know that other places foster the same strength of relationship, the same comfort with regular, informal information sharing among folks who aren't disciplinary peers. When we're having a faculty meeting and discussing generative AI and teaching, you hear from folks in writing and literature at the same time as those in STEM. I learn a lot from that, and hopefully I get to share some of what I'm learning with a broader audience than I might have otherwise. That's one of the joys of being in the CLI environment.

Set the context for us and tell us about your generative AI project.

RE: My project involved setting up a local instance of CodeLlama, an open-source large language model, on a University of Minnesota server so that faculty and students in data analytics and other technical courses at UMR can use it. It was important to think about how students could use generative AI tools in a university-sanctioned way. We wanted to avoid a scattering of rogue individual accounts and instead provide a secure environment where student privacy and data are protected from external entities that might not have our students' best interests at heart.
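A locally hosted model like this is typically reached over an HTTP API. As a rough sketch, assuming the deployment exposes an OpenAI-compatible chat endpoint (the URL and model name below are placeholders, not details of the actual UMR setup), a query from a course might look like this in R:

```r
library(httr2)

# Hypothetical sketch: querying a locally hosted CodeLlama instance.
# Assumes an OpenAI-compatible chat endpoint; the URL and model name
# are placeholders, not the actual UMR deployment details.
resp <- request("http://localhost:8000/v1/chat/completions") |>
  req_body_json(list(
    model = "codellama",
    messages = list(
      list(role = "user",
           content = "Explain what this R code does: mean(x, na.rm = TRUE)")
    )
  )) |>
  req_perform()

# Pull the model's reply out of the JSON response.
resp_body_json(resp)$choices[[1]]$message$content
```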

The first half of the project focused on setting up this secure environment - which is now working as hoped! The second half involves incorporating units into my instruction that build on earlier lessons in the term, with the first iteration starting in April 2025. I aim to teach students how to "be a cyborg"—how to augment their skills, work more efficiently, and troubleshoot AI mistakes. At one point, I will challenge students to use AI to implement high-level concepts they haven't been explicitly taught, and then reflect on the challenges of that experience. These will be valuable skills for their academic and working lives, and this project aims to prepare them for that future.

Your students haven't yet experienced the generative AI unit this semester. How are you preparing? 

RE: The students know it's coming, which is very interesting because they have such a wide range of thoughts on generative AI. Some use it all the time for their schoolwork and have no qualms sharing that. Others are a bit more coy and guarded about how they talk about it. Some are actively opposed. Very few have no opinions—people have feelings, regardless of what those are.

I'm really trying to find ways to measure these feelings. I've set up a pre/post data gathering system to measure how they feel about AI going into the unit and then see what changes or stays constant afterward. Regardless of their opinions on AI in general, whether they're in favor or against it, most students think the AI unit for the class is a really good idea. They are curious and enthusiastic about it. It's engaging to them, and they want to see it in their classes. That feels really good going in, and I think it's a great starting point to launch into something new in the class.

Are there other generative AI projects you want to highlight?

RE: Another exciting project is a course I'm co-designing and co-teaching in the fall of 2025, titled "Human-Centered AI for Biomedicine." The University of Minnesota Rochester focuses on health sciences, so we wanted to include a course that helps students understand AI applications in biomedical research and clinical care. This course will cover topics like bioethics, bias, and data security, and explore how AI will transform healthcare. It's designed to be accessible, with no coding or programming experience required, to attract a large number of students. This course is a collaboration with a colleague from UMR and another from Mayo Clinic. We're excited about its potential and hope it becomes a regular part of the curriculum.

I'm also excited about a research project we hope to conduct in the class, involving think-aloud interviews. Students will narrate their thoughts while completing tasks, helping us understand how they construct AI prompts. This method, commonly used in usability testing, will provide valuable insights into students' thought processes at various stages of practice. The AI community could benefit greatly from this research.

Do you have any ah-ha moments to share?

RE: Setting up your own instance of something is hard.

I set up the CodeLlama instance. I don't know why I thought it would be more straightforward than it was—I should have known better. I had a ton to learn, and I had quite a few false starts along the way. 

Your project was technically challenging. How did you find support? Who were your partners?

RE: I had good contacts. I could reach out to people and say, "Hey, I need help, and I don't know who to go to." I can specifically call out Trenton Raygor, who was very helpful in getting me in touch with the right people when I didn't know where to turn. There were chains of communication, like talking to one person who directed me to another, who then had me put in a ticket that went to a fourth person. But at least I had a few key contacts who could point me in the right direction if I was flailing. I'm grateful for the help of many folks in OIT and elsewhere.

We got there in the end. It was definitely a fun journey. Things like that are very gratifying because, at the end, you feel like, "Wow, I did that." It's very exciting.

How are your peers at Rochester responding to this work?

RE: There's been a lot more talk about the overall curriculum lately because of the new gen AI courses going in front of the curriculum committee. People are excited and want to discuss it. The preparation for the fall course has really informed how I can have conversations with peers about curriculum, as we discuss learning objectives and what we will do for different units and days. I really look forward to telling folks how the AI unit goes in April, and will have even more to share with the department after the fall AI course wraps up.

Do you have recommendations for instructors as they approach using generative AI in their own classes?

RE: Give yourself a lot of time to plan and prepare. Using and teaching with generative AI tools requires a different set of skills, and unless you're a machine learning specialist, it's likely outside your disciplinary area. For example, I'm a biologist, so teaching with generative AI is like using a new toolbox. The user interface of generative AI tools is very easy to use—you just type something into ChatGPT or Microsoft Copilot, and something pops out. This ease of use can lull you into a false sense of security, making you think you don't need to prepare as much. I encourage folks to fight that instinct and give this the time it deserves. That preparation will significantly improve what you're able to do with generative AI in your classroom.

Another important point is to show students both low-quality, misleading outputs and high-quality, helpful ones. Show examples of prompts that led to outputs that didn't work at all, as well as outputs that look convincing but are fatally flawed. These real examples, not just hypothetical ones, help students understand the potential pitfalls. For instance, a piece of code that runs but gives a completely wrong answer in a way that's hard to detect can be a powerful teaching tool.
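As one hypothetical illustration of that last pattern (a classic R pitfall, not an example drawn from Erdmann's course), the following code runs without errors or warnings yet quietly returns the wrong number:

```r
# Hypothetical illustration of "runs fine, answer is wrong."
# Dose values accidentally read in as a factor, a common slip in R.
doses <- factor(c("10", "50", "100", "50"))

# Executes cleanly, but as.numeric() on a factor returns the internal
# level indices (1, 3, 2, 3), not the dose values themselves.
mean(as.numeric(doses))
#> 2.25

# The correct conversion goes through character first.
mean(as.numeric(as.character(doses)))
#> 52.5
```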

By showing students the real ways they can run into problems, as well as the good answers, you provide illustrated examples that are essential. Even if the models change and outputs vary, knowing that these issues are real possibilities and seeing how they can play out in real life is invaluable. It emphasizes the importance of understanding what you're doing and not just relying on AI to get you through areas where you lack expertise.

Can you provide examples of misleading outputs that contrast with good outputs in a way that is useful for your students?

RE: One example comes to mind. There's a package in R called ggridges that creates ridgeline plots. These are essentially density plots stacked on top of each other. For instance, if you had 12 months of data, you could see all 12 months stacked in a visually appealing and easy-to-read way. I really love these plots. They're not super common, but I think they're becoming more popular as people realize how powerful they are and how well they adhere to data visualization best practices. 

I asked a gen AI model to provide the code needed to create an example ridgeline plot with the ggridges package, along with some sample data that could be used to execute the code. It produced the sample data set and the plotting code, just as instructed.

When I looked at the code, it was actually solid—it would do what it was supposed to do. However, the sample data was completely incompatible with the type of data needed for a ridgeline plot. The code was spot on, but you wouldn't get a plot at all because the sample data was incorrect. This was an eye-opener for me. I realized I needed to show people these types of mistakes. Imagine a student spending three hours trying to debug this code, thinking they've done everything right with the code itself, but not getting the expected result because the sample data was wrong. The model didn't understand what would make good sample data for this type of plot. That was a mind-blowing moment for me.
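For reference, a minimal working version of that request might look like the sketch below, with hypothetical sample data in the long format ggridges expects: one row per observation and many observations per group. The failure described above corresponds to sample data that gives the plot nothing to build a density from.

```r
library(ggplot2)
library(ggridges)

# Hypothetical sample data in the long format a ridgeline plot needs:
# one row per observation, many observations per month.
set.seed(42)
sample_data <- data.frame(
  month = factor(rep(month.abb, each = 100), levels = month.abb),
  temp  = rnorm(1200, mean = rep(seq(20, 75, length.out = 12), each = 100), sd = 8)
)

# One density ridge per month, stacked for easy comparison.
ggplot(sample_data, aes(x = temp, y = month)) +
  geom_density_ridges()

# The failure mode described above would be data shaped like this: one
# pre-aggregated value per month. The plotting code still looks right,
# but there are too few points to estimate any density, so no ridges appear.
bad_data <- data.frame(month = month.abb, temp = seq(20, 75, length.out = 12))
```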

Is there anything else you'd like to share about your journey with generative AI? 

RE: The journey is very interesting! The programming language Perl has an informal motto: "There's more than one way to do it." I think that's a good way to summarize my philosophy on AI. It's an interesting, useful tool, and we're still working out the many ways it can and will be used. There's not a single right answer.