Gen AI Explorations: Conversation with Faculty Fellow Junhua Wang

This spring Extra Points features a series of conversations focused on how faculty and staff around the University of Minnesota are using generative AI to do University work. 

Interview with Junhua Wang about her use of gen AI in her writing-intensive business class
Adam Brisk (Information Technology Systems and Services) and Lauren Marsh (Academic Technology Support Services) interviewed Emerging Technologies Faculty Fellow Junhua Wang, Associate Professor of Business Communications in the Labovitz School of Business and Economics. The following has been revised for length and clarity.

Tell us about your role in the Labovitz School of Business and Economics (LSBE) at UMD, and how that is informing your work with generative AI.

Junhua Wang: I teach the course Business Communication, which is an upper-division, writing-intensive course required for all students at LSBE. I think generative AI presents significant opportunities to help students in the writing process, such as enhancing their drafting, revising, and editing skills. Recognizing this potential, I began integrating generative AI tools into my course in Fall 2023.

Tell us about the generative AI project that you started with your students. What successes have you had and what challenges have you encountered?

JW: When I started to explore generative AI in the summer of 2023, I didn't know much about the tools. I set my goal right from the beginning: I wanted to explore AI and integrate it into my class in a pedagogically sound way to enhance learning. My research focuses on how I can integrate subject knowledge (i.e., rhetorical analysis skills) into the process of using AI as a writing assistant. My project has two phases.

In the first phase, I focus on researching and exploring various prompting techniques, especially the one I designed myself, which integrates rhetorical genre analysis into the prompting technique. I examined how these techniques impact students' ability to generate effective business communication messages.

To go into more detail, I recruited student participants, introduced them to various prompting techniques, and incorporated AI literacy into the course.

We explored biases, ethical concerns, and user responsibilities that help to mitigate issues and maximize AI's benefits. I developed various assignments using AI, and I also used AI to quickly modify assignment and quiz questions instead of repeating the same ones. I also adjusted my assessments and grading rubrics to better assess students' learning when AI was allowed to enhance the writing process.

For the research project itself, I provided a scenario that required a negative message in which a landlord has to deny a tenant’s request for a two-year lease renewal because of the noise level created by the tenant’s entertainment agency. Student participants were asked to freely use the prompting techniques I introduced to guide the AI tool in generating the negative message. The prompting techniques included:

  • zero-shot (no further guidance provided to the AI beyond the task), 
  • few-shot (providing an outline or sample response), 
  • role prompting, 
  • instructions based on a rhetorical analysis of the writing situation, and 
  • feedback looping. 
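As a rough illustration (not from the interview or the course materials), the five techniques can be sketched as prompt-construction helpers. The scenario text and all function names here are hypothetical stand-ins; the strings they return would be sent to whatever chat tool is in use:

```python
# Illustrative sketch of the five prompting techniques listed above.
# The scenario wording and helper names are invented for this example.

SCENARIO = ("You are a landlord. Write a message denying a tenant's "
            "request for a two-year lease renewal because of noise "
            "from the tenant's entertainment agency.")

def zero_shot():
    # No guidance beyond the task itself.
    return SCENARIO

def few_shot(examples):
    # Provide an outline or sample responses before the task.
    shots = "\n\n".join(f"Example:\n{e}" for e in examples)
    return f"{shots}\n\nNow complete this task:\n{SCENARIO}"

def role_prompt():
    # Assign the model a persona before stating the task.
    return ("You are an experienced business-communication instructor.\n"
            + SCENARIO)

def rhetorical_prompt(purpose, audience_concerns):
    # Embed a rhetorical analysis of the writing situation.
    return (f"{SCENARIO}\n"
            f"Communicative purpose: {purpose}\n"
            f"Audience concerns: {audience_concerns}\n"
            "Use a buffer, a clear rationale, and a goodwill close.")

def feedback_loop(draft, critique):
    # Ask the model to revise its previous draft based on feedback.
    return (f"Here is your previous draft:\n{draft}\n"
            f"Feedback: {critique}\nPlease revise accordingly.")
```

The point of the comparison in the study is that each helper supplies the model with progressively more of the writer's own analysis of the situation.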

Another rater and I then analyzed the impact of each prompting technique on the effectiveness of the AI-generated message.

In phase two, I developed my own rhetorical prompting framework. I wanted to compare this framework with traditional teaching methods to see how AI tools can enhance students' learning. 

We had the opportunity to interview Dan Emery from UMN's Writing Across the Curriculum program. We'd be interested to hear your response to something he said: "I promote writing to learn as an activity in all kinds of environments and all of my work. Now, tools exist that might allow students to generate text without learning, and that's very concerning to me."

JW: When I incorporate AI into a class, I want to ensure that my pedagogy is sound, that real learning can happen, and that AI is used to enhance subject knowledge, not replace it. That's why I teach students how to incorporate rhetorical analysis skills into prompting. When I introduce a project and allow students to work with AI tools, I check their process by collecting their AI chat records along with the final work and their reflections. Sometimes, I have students complete assignments in class to see their understanding directly. It takes different approaches and a lot of effort to ensure actual learning, and it's not easy.

When I introduced this in Spring 2024, I encountered student resistance. My students quickly found that it takes a lot of effort to make AI work effectively. Some students believed AI should be quick and effortless, and requiring them to submit AI chat records and reflections added to that resistance. However, some students appreciate learning in a responsible way. I agree with what Dan said, but there are ways to mitigate issues and maximize benefits.

How do you help students critically assess AI output? How do you build those skills?

JW: When I teach Business Communication, I always tell students, "In 15 weeks, we cannot learn all existing business communication genres; we can only learn a few." Instead of simply memorizing a few genre structures, we learn to analyze each communicative situation and develop rhetorical strategies that are tailored to the specific context. That's the skill I want students to take away from the class when they finish the semester.

When I use AI tools, I teach the necessary subject knowledge in class and provide students with guidance on how to analyze rhetorical situations, understand their communicative purpose, and consider the needs and concerns of their audience. We learn different genre structures and why they are prescribed as they are. All this subject knowledge is reflected in their process of interacting with AI. 

When I check their chat records, I can see if they only focus on changing the format or name, or if they are telling AI their rhetorical purpose for the situation and the audience's concerns. That is how I can assess whether actual learning is happening.

How are your colleagues in LSBE and the business community responding to this information? 

JW: Since Spring 2024, perspectives have shifted. I attended the LSBE AI retreat last fall and my discipline's annual conference, and my colleagues were interested in what I was doing. A reviewer's comment on my research published in the Journal of Technical Writing and Communication, "Improving ChatGPT's Competency in Generating Effective Business Communication Techniques," speaks to your question: 

"This study serves as an exemplary model for integrating ChatGPT into educational settings. It offers valuable insights into effective prompt engineering techniques for technical communicators amidst the evolving technological landscape. The work is exceptional and has the potential to be a highly cited piece in the field." 

What recommendations do you have for other instructors at the University of Minnesota as they approach using AI in their classes?

JW: We do not learn technology for its own sake. We want to ensure the way we use technology aligns with our teaching goals and objectives. I suggest instructors be clear about their teaching goals and objectives. It does take some effort to ensure that the way you incorporate AI matches the goals you set. Every subject is different, but a general principle is to make sure that no matter what you do with AI, it enhances students' learning and matches the goals and objectives you set for the course.

Is there anything else you would like to share with us? Anything we should have asked you?

JW: We all know that AI is not perfect due to limitations in its training data, the risk of hallucinations, biases, etc. However, users can influence how AI tools perform through a process called reinforcement learning, in which users evaluate AI responses and provide feedback that guides the AI toward the kinds of output that are most useful to them. In this way, we play an active role in shaping the AI's output. In other words, the AI-generated content is only as good as the feedback users provide.

When I started developing research ideas with ChatGPT, I realized the tool didn't understand the topic initially. But after providing it with some background information and my own research the tool quickly grasped the idea, and I could have meaningful conversations with it. AI's capability to learn from users was surprising to me. Even though AI tools can be limited and inaccurate, we can guide them with our prompts and feedback to reduce issues and maximize benefits. AI tools can learn from users and be as good as we want them to be.
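The feedback process described here can be sketched as a simple loop. This is an illustration only, not the interviewee's method: the `model` callable stands in for any chat API, `evaluate` stands in for the user's judgment, and the stopping rule is an assumption:

```python
def refine(model, task, evaluate, max_rounds=3):
    """Iteratively improve a response by feeding back an evaluation.

    model:    callable (prompt -> response text); a generic stand-in
              for a chat tool, not any specific product.
    evaluate: callable (response -> feedback string, or None when the
              response is acceptable).
    """
    prompt = task
    response = model(prompt)
    for _ in range(max_rounds):
        feedback = evaluate(response)
        if feedback is None:  # user is satisfied; stop early
            return response
        # Fold the user's feedback into the next prompt.
        prompt = (f"{task}\n\nYour previous answer:\n{response}\n"
                  f"Feedback: {feedback}\nPlease revise.")
        response = model(prompt)
    return response
```

The loop mirrors the interviewee's point: each round of user feedback becomes part of the next prompt, so the quality of the final output depends on the quality of the feedback supplied.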