UMN instructors' perspectives on generative AI: April 2025 focus group results

In April 2025, Academic Technology Support Services (ATSS), in collaboration with academic technology professionals across the University of Minnesota system, conducted a series of focus groups to understand instructor sentiment and explore the complexities of integrating generative AI into higher education. Focus group participants included instructors from multiple University of Minnesota campuses.

The goals were to gauge instructors' feelings about the value and applicability of these tools in and beyond the classroom, and to identify where common assumptions about generative AI break down across different disciplines.

Note: throughout this post, “generative AI” and “AI” are both used to refer to the broad category of artificial intelligence that can produce content such as text, images, video, audio, and/or code.

Focus Group Process

ATSS partnered with Usability Services to establish the session goals, determine participant recruitment criteria, craft the interview protocol, and set the ground rules for the focus group sessions. During the sessions, the project team took notes and recorded their observations, including direct quotations from participants. Following each session, the team shared their impressions, and a facilitator highlighted the most prominent issues. In this study, we focused on issues that had implications for instructors engaging with generative AI.

After all sessions were completed, we met with Nick Rosencrans, a User Experience Analyst. With Nick's guidance, we reexamined the issues in light of how they might inform strategies, policies, and student involvement around the use of generative AI. The final step in the process was a summary report that pulled together key findings and observer recommendations.

Key Findings

The findings and instructor quotes in this section are excerpts from the focus group summary report.

Ethics

The increasing use of generative AI tools in academic and creative fields has raised significant ethical concerns among faculty and researchers. Focus group participants questioned what happens to data, such as text, file uploads, and links, once it is used to train AI models. Even with University agreements intended to prevent misuse, participants cautioned against assuming that all AI models operate under the same ethical standards, and worried that inputs could be put to unintended uses. One instructor asked, "How will this be used in other ways after inputting a prompt or command?" This lack of transparency and accountability is compounded by concerns about intellectual property and fair compensation. As one participant stated, "We aren't paying artists for their work. If materials are being 'recycled,' there is no new art being made." The ethical decision of whether to use generative AI is not universal and can vary significantly by field, underscoring the need for a thoughtful, field-specific approach to navigating these complex issues and ensuring responsible use.

Bias

Generative AI's inherent bias was a significant concern for faculty, who recognized that these tools are only as unbiased as their training data and human creators. Participants noted that students may not always be aware of these biases, which are present in both the data and the tools themselves. The issue is particularly problematic when a single AI model is applied universally, especially in fields such as medicine, where accuracy and fairness are crucial. Participants also emphasized AI evaluative literacy. Speaking about prompt engineering, one participant stated, "The wrong question reveals more bias from the tool than it gives us satisfactory responses," underscoring the need for students and faculty to develop the skills to identify and critically assess potential bias in AI outputs.

Teaching and Critical Thinking

While generative AI is seen as a powerful tool to foster critical thinking, its role in teaching and learning was met with both excitement and apprehension. Instructors are already leveraging generative AI to encourage deeper engagement by having students evaluate AI-generated content for accuracy and hallucinations, an approach that shifts the focus from policing usage to elevating academic standards. Participants shared that by automating time-consuming tasks, such as summarizing case studies or gathering data, generative AI can "raise the bar for student assignments," freeing up valuable time for students to engage in more complex analysis and "dive deeper than they could have gone before." However, these opportunities are balanced against concerns about student over-reliance, the "blithe acceptance of the generated material," and the potential for students to lose their "genuine voice" in reflective pieces. One participant shared a successful assignment in which students compared their own summaries of a rare book with an AI-generated one, illustrating how these tools can empower students to go "farther, deeper, and have a better level of understanding."

Academic Integrity

Discussions about academic integrity revealed a pressing need for open and proactive communication between faculty and students regarding the use of generative AI. Participants expressed concern that if students are afraid to even mention using generative AI, a "cultural split" could emerge in the classroom, leading to significant equity problems. As one participant emphasized, "if you make it so students cannot talk to you about AI at all without getting into trouble, then you are setting yourself up for inequities," as some students may use the tools effectively in secret while others are left behind. The consensus was that if instructors don't openly discuss how generative AI can and cannot be used in their courses, students will make their own assumptions. For example, a student might assume an instructor's adamant opposition to generative AI stems from a simple lack of familiarity with the technology.

Prompt Engineering and Planning

Focus group participants underscored the growing importance of prompt engineering and planning as key skills for interacting effectively with generative AI tools. Participants recognized that knowing "the good questions to ask" is crucial for eliciting valuable responses, while also noting that even a poor prompt can reveal a tool's flaws, highlighting the iterative nature of working with these tools. They also saw potential for students to use generative AI for planning purposes, such as outlining a complex assignment or breaking down the steps of a semester-long project. This feedback suggests a clear need for universities to provide resources and guidance on effective prompt engineering strategies.

Final Thoughts

The focus group conversations revealed that generative AI is not a distant concern but a present reality for participants. The key issues of ethics, bias, critical thinking skills, and academic integrity are interconnected and require a thoughtful, proactive approach. To support instructors in navigating this new landscape, Teaching Support has developed a series of generative AI and academic integrity resources available on the UMN Navigating AI website. We encourage you to explore the materials and join the ongoing conversation.

Contributors

This post was co-authored by Annette McNamara, Jennifer Englund, and Google Gemini. While generative AI assisted with drafting and organization, the final product has been reviewed, analyzed, and edited by humans, representing approximately 85% human-created and 15% AI-assisted content.

Like what you read? Subscribe to the Extra Points Google Group for an email notification when the next blog post goes live.