This fall Extra Points will feature a series of conversations focused on how faculty and staff around the University of Minnesota are using Generative AI to do University work.
Adam Brisk, Academic Technologist with ITSS ATCD at UMD, and Lauren Marsh, Academic Technologist with ATSS, interviewed Dan Emery, Assistant Director of Writing Across the Curriculum. Emery collaborates with faculty to enhance learning goals through writing, and this work informs his active and insightful participation in discussions about GenAI at the UMN. The following has been edited for length and clarity.
As Assistant Director of Writing Across the Curriculum at the University of Minnesota, how do you use Generative AI?
Dan Emery: My job in Writing Across the Curriculum serves two programs, the Writing-Enriched Curriculum Program and our Teaching with Writing series. In the context of my work with the Writing-Enriched Curriculum (WEC), I talk about AI with folks who are teaching with writing in the undergraduate curriculum. I work with a slate of about 26 different colleges, departments, and programs, in conversation with faculty about their writing expectations. Much of my initial attention to generative AI came from faculty members across these areas who were concerned about it, particularly in the academic integrity space.
The other part of my job is in the Teaching with Writing program, which is for all instructors across system campuses. We provide programming that addresses how writing might help advance learning goals or the skills, abilities, well-being, and self-efficacy of your students. AI has also emerged in these contexts.
At this point, I use AI to generate sample documents based on faculty assignments or to generate examples that I can use to demonstrate different skills in editing and revision. I've also experimented with AI to automate processes like calendar management. For the most part, I'm really still in an exploratory stage. I'm interested in how it's changing the ways people write.
Are there challenges or opportunities presented by GenAI that are specific to writing or that are showcased by the writing curriculum?
DE: The biggest challenge is understanding that AIs generate text differently than humans do. Humans have our own language learning processes and come to language in the world by reading, writing, speaking, and listening, and through direct instruction and teaching. Generative Pre-trained Transformers are built on a different model, even though the language of neural networks suggests some sort of relationship between them. People who study language have always known that language is rule-governed. Still, the mathematical power of text-mining tools reveals that you can figure things out about a language and make plausible guesses about appropriate responses from how the pieces of language are arrayed. The bizarre thing is that with generative artificial intelligence, all of this happens without consciousness and without meaning. AIs don’t read words to understand; they analyze text tokens to predict plausible next words. All of these relationships are essentially part of the structure of language itself as it is used; large language models don't rely on any kind of inside consciousness to be able to figure it out. We strongly tend to anthropomorphize these technologies and treat them like artificial beings, rather than statistical prediction engines, when we talk about how AI “reads,” “learns,” and “writes.”
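The "prediction without understanding" that Emery describes can be made concrete with a toy sketch. This is not how a transformer actually works (real models use learned neural representations over enormous corpora); it is a deliberately simplified bigram model, with a made-up eleven-word corpus, that shows how purely statistical patterns in how words are arrayed can yield a plausible next word with no meaning involved anywhere.

```python
from collections import Counter, defaultdict

# A tiny, invented corpus; real models train on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most plausible next word.

    Nothing here 'knows' what a cat or a mat is; the prediction
    falls out of the frequency structure of the text alone.
    """
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

predict_next("the")  # → "cat" (it follows "the" most often in the corpus)
```

The point of the sketch is Emery's: everything the predictor "knows" is already present in the arrangement of the language itself, so no inner consciousness is required to produce fluent-looking output.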
We’ve never had a writing technology before that works without consciousness behind it.1 What happens when writing can be generated without a writer thinking or knowing? That's the big question. Most of the AI panic emerged from fears about AI and cheating, but this strange new mode of text generation also informs faculty concerns about using AI in their classrooms, where it might short-circuit the learning activities that go along with writing. I promote writing to learn as an activity in all kinds of environments and in all of my work. Now tools exist that might allow students to generate text without learning, and that's very concerning for me.
Many faculty wonder why the University hasn't invested in tools addressing generative AI and academic integrity. What are your thoughts?
DE: Faculty who have relied on technology like Turnitin to identify examples of borrowed text would like to see something similar for Generative AI. Turnitin is a useful example in both its affordances and its limitations when dealing with borrowed text. If you're teaching a large multi-section course and want to ensure that one roommate isn't borrowing another roommate's paper, Turnitin's a good technology to identify that kind of conduct.
However, it’s a mistake to believe that the originality report can serve as an index of a student's intent to engage in academic misconduct, or that a particular score or number indicates academic misconduct has occurred. Many times, cases emerge where you see something cited inappropriately or not cited at all. In most cases, I think of that gap as a citation error, not an effort to bamboozle a faculty member or take undue credit.
That same misplaced faith in Turnitin is being imported into these probability engines, which will state that the “likelihood” of a document being AI generated could range from 0% to 100%. Currently, most tools are pretty awful at drawing those comparisons and provide numbers between 30% and 60% on almost everything. Different AI detectors might give you different results for the same piece of text. The nature of the computational processes involved and the interactions between humans and texts suggest that reliable detection is exceptionally difficult, if not impossible. Several large AI companies have given up trying to create AI detectors, and the proliferation of AI tools and text generators will only make things harder.
There could be simple ways to detect AI writing. The companies that produce the AIs could embed a watermark or put something in the code or metadata behind the writing as it appears as text, but companies are reluctant to do that because they worry that it will shrink their market share. Perhaps companies will change their minds once the market leaders sort themselves out.
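One family of watermarking proposals from the research literature works roughly like this: the generator quietly favors a "green list" of tokens derived by hashing the previous token, and a detector, knowing only the hash scheme, checks whether a suspicious fraction of a text's tokens are green. The sketch below is an illustration of that idea under invented assumptions (an eight-word vocabulary, a 50% green fraction); it is not any vendor's actual scheme.

```python
import hashlib
import random

# Made-up toy vocabulary; a real model's vocabulary has ~100k tokens.
VOCAB = ["the", "cat", "sat", "mat", "dog", "ran", "on", "a"]

def green_list(prev_word, fraction=0.5):
    """Derive a 'green' subset of the vocabulary from the previous token.

    Because the partition is seeded by a hash, a detector can recompute
    it later without access to the model's weights.
    """
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def watermark_score(text):
    """Fraction of tokens that fall in their predecessor's green list.

    Human text should hover near the green fraction (0.5 here);
    watermarked text, whose generator preferred green tokens, runs higher.
    """
    words = text.split()
    hits = sum(1 for prev, nxt in zip(words, words[1:])
               if nxt in green_list(prev))
    return hits / max(len(words) - 1, 1)
```

The design choice worth noting is the one Emery raises: the signal lives in the text itself rather than in metadata, so it survives copy-paste, but only if the company generating the text chooses to embed it in the first place.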
What most sophisticated users of any text-generating AI are doing right now is getting a starter text by providing a prompt. They will then refine that prompt and offer queries and suggestions within the system. Ultimately, writers get their hands all over their AI outputs; they make sentence-level and vocabulary changes, move things around, and put things differently. Essentially, they bring their understanding of meaning and value to bear on this generated text to make it work more effectively. In these instances, the AI is something like a companion or collaborator, but the human agent is still in charge. I think that’s the origin of names like Gemini and Copilot. Generative AI is presented as a helpful twin in the second chair.
What strategies are effective for faculty teaching writing and faculty dealing with AI?
DE: Talking to students is an incredibly powerful way to understand how they use AI, what benefits they perceive it to have, and whether or not they understand its risks. Ask students, “How are you currently using artificial intelligence? Are you using AIs to produce your academic work? And for what?” If we approach this with a spirit of curiosity and inquiry, we'll better understand how these technologies are shaping what's happening in our courses.
When a faculty member feels like artificial intelligence has written something, I recommend reaching out to make a connection before making any allegation. Ask the student about their writing process:
- Where did you get your idea?
- How did it develop?
- What part did you write first?
- What do you mean in this sentence, or how are these ideas connected?
Even students who are novice writers will be able to answer those questions. But it will be a challenging conversation for a student who hasn't done their writing. Occasionally, the tone of generative AI gives it away, or its brevity or sentence constructions do. Experienced readers develop a sense of what real students sound like. Sensing something is off should trigger a discussion rather than a sanction.
Do you have a script or talking points for engaging in these conversations?
DE: I do, and it’s not that different from what I recommend with any potential plagiarism. I call it a draft procedure because so much depends on the specific disciplinary and teaching contexts you’re in. It may also be challenging to incorporate into a large or asynchronous course where student contact is limited. Still, I think the principles are sound—allow the student to show that they’ve written the work and done the thinking required, rather than relying on technology to “prove” something else did the writing.
I’d also mention that ANY use of AI-generated text without attribution meets the definition of academic misconduct regardless of whether AI use is authorized or prohibited.
The vast majority of students, even if they are using AI, don't intend to be malicious; they might be failing to manage their time and effort, might lack self-efficacy, or might lack understanding. I don't think of most of these problems as the premeditated crimes against scholarship at the heart of malicious plagiarism (like submitting your roommate's paper as your own). Reporting inappropriate use is essential, but the penalties need not be draconian for a first offense. The more experienced the writer, the less grace I tend to extend.
If we understand why writers are making those choices, we will be better positioned to help them make smarter ones. In the classroom, you're not there to be a cop. You're there to mentor and say, “I'm worried that you're doing this the wrong way.”
Are there particular authors or blogs you follow, or researchers who help you stay on top of a fast-moving field?
- Melanie Mitchell's book “Artificial Intelligence: A Guide for Thinking Humans” was probably one of the foundational books for understanding how these technologies work.
- Cathy O’Neil’s “Weapons of Math Destruction” is a great one on algorithmic intelligence more broadly.
- Joy Buolamwini’s “Unmasking AI: My Mission to Protect What Is Human in a World of Machines”
- The Mollicks, Ethan and Lilach, at UPenn, are doing fantastic work on how AI is changing higher education. Ethan writes the “One Useful Thing” blog, which has a great post about how LLMs make decisions.
- Tiziano Bonini and Emiliano Treré have a book, “Algorithms of Resistance: The Everyday Fight Against Platform Power,” that is glaring at me from the top of my TBR pile.
1 Some might argue that the Dada movement in art and poetry explored writing without consciousness, which is absolutely true. However, their emphasis was on randomness and refusal of rationality, which is almost the opposite of a generative pretrained transformer.↩