
Gen AI Explorations: Conversation with Faculty Fellow Ezgi Tiryaki

This spring Extra Points features a series of conversations focused on how faculty and staff around the University of Minnesota are using generative AI to do University work.

Interview with Ezgi Tiryaki about her use of generative AI
Lauren Marsh (Academic Technology Support Services) and Bill Rozaitis (Center for Educational Innovation) interviewed Emerging Technologies Faculty Fellow Ezgi Tiryaki, Professor of Neurology in the Medical School. The following has been revised for length and clarity.

Tell us about your role in the Medical School and how that is informing your work with generative AI.

Ezgi Tiryaki: I am with the Medical School and involved with the Office of Faculty Affairs, focusing on faculty development programming. Until last year, I co-directed a program called Foundations of Leadership Excellence (FLE). I am also helping co-lead the Rothenberger Leadership Academy for mid-career health systems leaders. My involvement and use of AI are primarily in the sphere of faculty development.

Tell us about your generative AI project.

ET: My project was geared toward creating content using generative AI to develop faculty. I wanted to explore how AI could deliver content in an engaging way and expand the reach of leadership programming, making it more accessible to faculty and students. That was the starting point for my project.

Interviewers: What types of content were you interested in developing?

ET: Initially, I noticed it was hard to find suitable images when putting slides together. I thought, could I use AI to generate the kinds of images I need? I experimented with that and found that AI can produce better results when prompted carefully and iteratively.

Here is an example from ChatGPT for an image that introduces the polarity of rest and activity.

[AI-generated image: a woman relaxing in a hammock on the left and a woman running on the right]

Over time, I started experimenting with AI video creators to make teaching videos. One limitation of our programming is that it requires faculty time and is often delivered synchronously. Short teaching videos could be useful for those who miss a session or need to reference it later. I discovered that many AI tools can generate professional-looking videos from slides, documents, or recordings.

Interviewers: Which tools did you use?

ET: I experimented with many tools, most of which have free versions. I tested about seven rigorously, giving each the same prompt and source material to see how they differed. I ended up choosing Lumen5 and Synthesia. Here is a video about reframing a challenge that was created with Lumen5. Synthesia lets you upload a script and create custom avatars with different voices and backgrounds, which works well for short teaching videos on key concepts. Lumen5 can take various content types, summarize them using AI, and create engaging videos with stock photography or videography. Both tools have great user interfaces and are easy to navigate. I'm seeking approval for the paid versions of these products to make sure they are fully compliant with Medical School policies on third-party technology tools, and I hope that will unlock more advanced functionality.

Another use case I'm experimenting with is a virtual AI-supported leadership coach. I created a prototype for a custom GPT leadership coach on ChatGPT. The GPT is programmed to reflect on what the user says, summarize it within a leadership framework, and respond by asking coaching questions for deeper reflection. It's like a conversation partner for leaders that is informed by the leadership theories and tools we teach in our Foundations of Leadership Excellence and Rothenberger Leadership Academy modules. I haven't offered it to program participants yet, but it's easy to use and fun to interact with.
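A custom GPT is configured with natural-language instructions rather than code, so the sketch below is only an approximation of the reflect-summarize-question pattern Tiryaki describes, written against the OpenAI Python SDK. The instruction text and model name are illustrative assumptions, not the actual prototype.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative instructions only; the real custom GPT's instructions
# and leadership frameworks are not published.
COACH_INSTRUCTIONS = """\
You are a leadership coach. For every user message:
1. Reflect back what you heard in one or two sentences.
2. Summarize it within a leadership framework (for example, a
   polarity such as rest versus activity).
3. Close with one open-ended coaching question that invites
   deeper reflection. Do not give direct advice.
"""

def coach_reply(user_message: str) -> str:
    """Return one coaching turn for the user's message."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[
            {"role": "system", "content": COACH_INSTRUCTIONS},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(coach_reply("My team is exhausted, but our deadlines keep moving up."))
```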

Interviewers: What are your plans for implementing the conversation partner?

ET: One limitation is that users need a paid ChatGPT account to use the coach. One of my colleagues used AI fellowship funding to buy paid accounts for his students. We need to figure out how to provide these resources to students and faculty, possibly through platforms the University already supports, like Google or Microsoft. The prototype is easy to create, but we need to test it with learners and refine it based on their feedback.

Once you learn about AI, there are endless opportunities. It's about finding people with similar challenges and partnering up. The Emerging Technologies Faculty Fellowship helped me find like-minded people. 

What are your aha moments? What have you learned?

ET: One thing I had to unlearn was thinking about this technology as intelligence. It's called artificial intelligence, but it really isn't. It is just making predictions based on whatever source material the tool was trained on.

Another aha moment was realizing that every output should be treated as questionable until you have vetted it carefully. Another was understanding how energy- and resource-intensive these tools are and the environmental impact they have. I still have a lot to learn about offsetting any negative impact. It is easy to forget, when using an AI tool, that running the data centers and computing power behind it takes a large amount of energy. I'm also learning that early users bring an initial enthusiasm and perhaps an over-reliance on, or overestimation of, what AI can do. I've read that the impact of these technologies is overestimated in the short term and underestimated in the long term, and I think that's true. I find myself in that early phase right now.

So it's important not to humanize or anthropomorphize generative AI, and not to overestimate what it can do. Discerning where it has a role and how to use it has produced some of my biggest aha moments.

Talk to us about how your peers are responding and maybe engaging with some of this work.

ET: I've done a few things to get peers engaged. I'm a neurologist by background and involved with the American Academy of Neurology (AAN), my professional organization. I was invited to give a talk to the AAN Leadership Development Committee and AAN leadership. I expanded on my project and discussed how a national professional organization could leverage AI for leadership programming, from content creation to engagement, delivery, data collection, planning, and reducing administrative workload. The response was positive.

One interesting activity we did in the Fellowship program was to pull up the website of our professional organization and see what it says about AI. It was eye-opening. Every major academic professional organization has an AI statement now, and a lot of work is going on to understand what AI means for our field, profession, and work. We're all learning together, including our leaders. There are many considerations, such as ethics, copyright, intellectual property, security, and doing no harm. The technology is evolving rapidly, and the guardrails are trying to catch up.

I'm also a member of the Academy of Distinguished Teachers. At our fall retreat last year, one of the big topics was AI. Out of that came an action group called the AI Use Case Cafe. The idea is to bring faculty together for an hour to learn from each other in an informal setting. Two speakers each get about 10 minutes to share what they're doing, followed by 20 minutes of Q&A. It's very informal, with no more than 10 slides, if any. Our goal is to host this forum every other month; we hope it becomes self-organizing, with faculty learning from each other. The technology is so new and rapidly changing that peer learning is crucial for building the necessary skill set and figuring out what we need and can do. We're all learning together.

What recommendations do you have for instructors using AI in their classes for the first time?

ET: Start small; dip a toe in. Even if it's just opening a platform like Microsoft Copilot, Claude, Perplexity, or ChatGPT and writing three prompts to see what happens. Don't try to solve all the big problems or do something groundbreaking. Start with something small that bugs you or is repetitive or boring, and see if a tool can make your life easier. You can briefly describe your problem and ask the AI tool how it could help.

Once you're more comfortable, AI can really expand what you can do. I'm not a video editor, but I can make videos. I'm not a computer programmer, but I want to experiment with tools like Claude Artifacts, which lets you create interactive learning tools by writing natural-language prompts without needing to code. Perplexity Pages, which lets you create interactive content pages, is another tool on my list to try. Start small and experiment.

Another tool I've seen a demo of is Google NotebookLM. It allows you to pull in Google Drive documents or YouTube videos and create study guides, tables of contents, or even podcasts. The podcasts sound incredibly realistic, like two people chatting about the topic you uploaded. It's going to be a transformative tool over time.

Is there anything else you want to share about your journey with generative AI?

ET: Two things have informed my thinking. One is that AI doesn't see context; it doesn't understand context the way humans do. For example, rib and cage make sense together as rib cage, but street and mountain do not. AI has an error rate when judging whether word combinations make sense. While the tools will improve, there's a unique role for human intelligence.
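The rib-cage example can be made concrete with a toy bigram model: a language model, at heart, scores word combinations by how often they appeared in its training data. This is a deliberately minimal sketch (real models use far richer statistics), with a made-up corpus standing in for training data.

```python
from collections import Counter

# A tiny made-up corpus standing in for training data.
corpus = (
    "the rib cage protects the heart and lungs . "
    "she bruised a rib in the fall . "
    "the bird cage sat by the window . "
    "the mountain road wound above the street below ."
).split()

# Count adjacent word pairs (bigrams) seen in the corpus.
bigrams = Counter(zip(corpus, corpus[1:]))

def pair_score(w1: str, w2: str) -> int:
    """How often w1 was followed by w2 in the corpus."""
    return bigrams[(w1, w2)]

print(pair_score("rib", "cage"))        # 1: seen together, plausible
print(pair_score("street", "mountain")) # 0: no supporting evidence
```

The model has no notion of what a rib or a street is; it only knows which combinations its sources made look likely, which is why unvetted output can still be wrong.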

The other thing is that these are learned skills. Nobody is born knowing how to do this. Prompt engineering is a learned skill. It's like driving a car: you don't need to know how the engine works, but you do need to know how to drive it safely. Similarly, we don't need to know everything about the algorithms and machine learning behind AI, but we should become knowledgeable consumers who make informed decisions. Using AI will become a regular part of teaching.