
Gen AI Explorations: Conversation with Fernando Burga

This spring Extra Points features a series of conversations focused on how faculty and staff around the University of Minnesota are using generative AI to do University work.

Interview with Fernando Burga about his use of gen AI in Urban & Regional Planning at the Humphrey School of Public Affairs
Adam Brisk (Information Technology Systems and Services) and August Schoen (Academic Technology Support Services) interviewed Emerging Technologies Faculty Fellow Fernando Burga, Assistant Professor of Urban & Regional Planning in the Humphrey School of Public Affairs.

Tell us about your role in the Humphrey School and how that is informing your work with generative AI.

Fernando Burga: I'm an assistant professor in urban and regional planning at the Humphrey School of Public Affairs. I'm also a specialist in community development at Extension. I teach a variety of courses, including land use planning, site planning, and a class called civic participation. My role is essentially divided into three main areas: research, teaching, and service.

I emphasize the use of AI and large language models in my teaching. I am also exploring their application in service. The classes I teach tend to be technical. Students learn specific techniques and concepts that are applied in urban planning. My goal is to consider ways that students can use large language models to navigate analytical and technical tasks they may encounter in their professional lives. 

Ultimately, I believe the application of AI is really about reading, writing, reviewing, synthesizing, and developing formats that allow for the translation of policy or the enactment of policy-making work. This is my main approach in how I carry out my work.

Can you tell us about your interest in generative AI and how it impacts your teaching?

FB: I'm the type of professor who likes to always experiment with new formats in teaching. I applied for the faculty fellowship to bring AI into the classroom as a tool to improve my pedagogy. My goal was less about efficiency and more about helping students gain critical thinking and interpretation skills they may use in their professional lives. Policy makers are essentially translators of text and policies. They spend time interpreting and explaining policies written by experts, and explaining data, also produced by experts, to fit into people's lives. AI provides a new, multi-faceted tool that allows them to consider ways to do this.

In my exercises, students begin by developing an understanding of the situation they face, then they consider how to apply the AI language model. A key aspect of the exercises I design is having students reflect on their experience of using AI to assess the pros and cons of its application. You may call this a hybrid approach or a hybrid loop.

Interviewers' note: The hybrid loop that Professor Burga refers to appears in his assignments as iterating between human intelligence (HI Path) and artificial intelligence (AI Path). Here is an example from PA 5211 LAND USE PLANNING 2024 - ANALYTICAL EXERCISE A - CPCA Part 1:

This exercise asks you to carry out an in-depth analysis of comprehensive planning language to develop a critical position. This will be done by combining two paths:

    1. Human intelligence Path (HI Path) - Inductive coding: You will select a comprehensive plan of your choice to analyze through inductive coding to identify themes and categories that stand out from your perspective. This analysis will lead to a research question(s) for future exercises.
    2. Artificial intelligence generative language Model Path (AI Path) – ChatGPT prompts: You will apply ChatGPT to consider the generative question that you have identified and explore potential applications and alternatives. You will conclude by considering the pros and cons of using ChatGPT by writing a reflection.

How are your peers responding or engaging with AI?

FB: Faculty come to AI with different perspectives. Some are curious, some are not. Many faculty avoid AI because they think it's wrong. They say, "We can't use AI because it's biased and has hallucinations." 

To me, that's what's interesting about using AI. Of course, it has hallucinations and biases. That's what we need to actually expose and face in order to become literate about fallacies. Faculty have their own internal biases. Research shows that these biases are often reaffirmed. We need to deal with these issues, build critical awareness, and develop new understandings and critical thinking to make assessments. I don't see AI as something independent that takes over our lives. I see it as a practical tool that involves reading and writing. We input data into it, and we need to be aware of what we put in and what it provides as an output. 

That's where I find joy in teaching. As I tell my class, you are smarter than AI. You can understand that AI outputs are incomplete, incoherent, or bland. Your critical thinking, awareness, embodiment, positionality, purpose, and ethical position give value to AI outputs. That's where you make value with AI.

How do you respond to peers who think of generative AI as a tool for cheating?

FB: The concern about cheating, and about how students are learning and how we evaluate their learning, is very valid. The challenge we have as professors is that students are already using AI in their assignments, whether we want them to or not. In this regard, the way forward is not to prohibit or deride, but rather to engage critically and consider all the challenging questions that come with the use of AI.

The last project for my Land Use Planning course is the AI thesis. For this assignment, I invite students to actually lean into the use of AI and then write a reflection about what they've done with AI. The point is not to prohibit AI, but rather to build a critical, ongoing reflection about the challenges of using it and how to develop a reflexive, responsible practice.

Tell us what comes next!

FB: Next semester, I'm teaching a site planning course in which I plan to apply AI to the charrette model. A charrette is an intensive design process workshop where people come together to develop various outputs. Typically, a charrette spans several days and involves a wide group of stakeholders. It's an intensive brainstorming session with specific goals and objectives, usually resulting in plans, drawings, or designs.

I'm considering doing a charrette with the site planning students that focuses on the Hennepin Energy Resource Center (HERC). The HERC is located in the North Loop next to the Target Center. It's a building that, while no longer creating significant pollution, still represents an eyesore due to its past.

The students will work on rethinking the HERC. This area represents a site of environmental injustice. I want to test how we can use AI during these workshops to create visions or scenarios that may inform ideas for alternatives. Students may use AI to identify language that may be useful for their policy analysis and design precedents. In addition, AI visualization tools can be used to generate before-and-after images that may be helpful for scenarios and prototypes.