
Gen AI Explorations: Conversation with Faculty Fellow Darren LaScotte

This spring Extra Points features a series of conversations focused on how faculty and staff around the University of Minnesota are using generative AI to do University work.

Interview with Darren LaScotte about his use of generative AI to promote language learning
Mary Jetter (Center for Educational Innovation) and August Schoen (Academic Technology Support Services) interviewed Emerging Technologies Faculty Fellow Darren LaScotte from the Minnesota English Language Program in the College of Continuing & Professional Studies. The following has been revised for length and clarity.

Give us some context for your work with generative AI: what was the class, and who are your students? 

Darren LaScotte: The basis of my project was seeing how generative AI might promote language learning. I teach a writing-intensive English as a second language course, which prepares students for First-Year Writing (University Writing 1301). We work on teaching students the writing process and provide feedback on things like content and organization in a culturally specific Western context. So, I was exploring AI as a tool to help promote language learning based on specific principles of second language acquisition. I was thinking about AI providing extra help with revising and with other parts of the writing process. I was also thinking about students being able to interact with generative AI to get explanations of things they don't understand and to receive feedback on how they might rephrase something. It's about bringing AI tools into the whole writing process. That's the gist of it - thinking about a tool for promoting these principles of second language acquisition and how that might impact students and language learning.

Tell us more about your project, and how you're using gen AI in your classes.

DL: Early on in the class, I really wanted students to think about appropriate and inappropriate uses of gen AI. At the beginning of the course, we normally talk about plagiarism and cheating and how they are harmful for learning. When students are cheating, they're not learning - then what's the point of education? We used the appropriate-use-of-AI discussion activity that Stephanie Hanson developed, and I tweaked some of the scenarios to be more specific to my class. Students went through the scenarios and marked each as acceptable or not acceptable - first individually, then in groups, and then we discussed them as a class. We talked about transparency - the dos and don'ts. If you're ever outsourcing your learning, that's a don't. We're going to use gen AI in the class, but it's to help us learn, not to outsource our learning.

In previous semesters, students got my feedback and we also did peer review feedback. They then revised and submitted their final draft. However, through this project, AI was a way for them to get additional feedback at different parts of the writing process:

1. Students submitted a first draft and got feedback from me.
2. With their second draft, they received feedback only from Microsoft Copilot.
3. Finally, students submitted a reflection summary that included their entire interaction with Copilot and their critical thoughts about the feedback Copilot gave.

For example, students felt Copilot didn't summarize well. We read an essay and then spent a lot of time talking about how to write summaries - what they are and how to structure them. Students then wrote their own. Then we looked at summaries of the same essay generated by Copilot and by ChatGPT. Students were very critical of some things the tools didn't do well: the AI tools conflated multiple examples from the essay, or gave surface-level detail that would not be clear to someone who had never read the original. So, there were opportunities built into the class to be critical of the tool.

What conversations with your students have been impactful to you or to them?

DL: A big piece of the class is about critical thinking and building students' confidence in what they are able to produce in English so that they can fully participate and give peer feedback. Students tell me that they have become more confident in their other classes because of the things we practice and do. I know from reading their reflections that students have been impacted. For instance, if Copilot changed their wording and it doesn't sound like them anymore, it's no longer their voice, and they aren't comfortable with that. These conversations have been impactful in reminding me of the importance of students hearing and seeing themselves in their writing. A danger of using some tools to totally clean up language is that it may promote the bias that language must be error-free. I remind myself and my students that we're using this as a tool to improve, not to make us sound like someone else.

If you just look at the conversations students are having with Copilot, a lot of the feedback is on things like clarity, word choice, and style. Copilot really pushes for brevity and concise wording, so students get a lot of style suggestions. Sometimes when they get that feedback, they don't understand why: "My original sentence wasn't incorrect, so why do you want me to change this?" That's the tension I've seen in their reflections - about it not sounding like them.

What were your ah-ha moments during this project?

DL: I taught a class in the summer where I was able to pilot parts of this project. I thought about how to integrate AI without outsourcing learning. I wanted to make sure that students had already done some writing before we started using AI because I didn't want them to take the ideas from AI and run with them. Some of my other ah-ha moments were about the limitations of the tools, one being that Copilot has a 500-word prompt limit. Students can't put in a prompt and then copy and paste a whole essay if it's longer than that. So, they weren't able to get holistic feedback; they had to copy and paste part of their essay, get feedback on it, and do that again for subsequent parts. There was also no wording in the original prompt to let Copilot know what exactly the interaction was going to be. There was a lot to learn about prompt engineering. Now I know I have to clearly identify the student's role, the AI tool's role, the nature of the interaction, the task, and what specifically I want in terms of output.
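To illustrate those elements, a student-facing prompt along the lines LaScotte describes might look something like the following sketch. The wording is hypothetical, not the exact prompt used in the course:

"You are a writing tutor for a university student who is learning English as a second language; I am the student. I will paste my essay draft in sections, because it is longer than your input limit. For each section, give feedback on content, organization, and clarity. Do not rewrite my sentences for me - instead, explain what could be improved and why, so that I can revise in my own words. When you finish a section, ask me to paste the next one."

Note how the sketch names the student's role, the tool's role, the nature of the interaction (feedback in sections, within the prompt limit), the task, and the expected output - including the instruction not to rewrite, which keeps the revision work, and the student's voice, with the student.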

What were your students’ ah-ha moments?

DL: That AI is fallible was an ah-ha moment for students. These tools can be wrong or actually give misinformation. Students might think, "I don't have time, so I'll just ask ChatGPT to summarize the content for me," believing this helps with time management or their homework load. But they could get an incorrect summary, or something that does not prepare them for their class.

What recommendations do you have for instructors as they approach using AI in their classes?

DL: The biggest recommendation I have for instructors is about transparency of expectations - having that discussion with students about appropriate uses of AI, and about whether assignments can include AI use at different stages. That was important in this project; those transparent discussions, I think, led to students' success. Many students don't seem to have those discussions with their instructors. They don't know what their instructor's policy on AI use is: whether it's totally banned, or whether it can be used for specific things. I think instructors should have that conversation. Be transparent with students and explain the reasoning behind your choices.