I have an admission to make. I'm really excited about ChatGPT. I like it because AI is so mind-blowing and thought-provoking, and I love thinking about change and technology acceptance. I also discovered it can be a really useful tool, but like many of you, I have concerns. Because I know it's here to stay, I'm grappling with both how to detect its use in higher ed but also how to use it creatively in the classroom.
On the continuum of technology adopters, I would say I generally fall somewhere in the late early adopter or early early majority categories. I have spent a lot of money on technologies that didn't end up working well for me, and those experiences keep me from being a solid early adopter in most cases.
Perhaps because ChatGPT is free, and I imagine myself a futurist (this Mega Trends and Technologies 2017-2050 "subway map" is one of my favorite things), I started using it early on. First, I asked it to write a poem - not horrible after a few tweaks - but then I got down to business. Since I was in the midst of reading dossiers, I told it to write a teaching philosophy based on culturally relevant pedagogy.
Having talked with many faculty about their contract renewal and tenure documents, I knew that developing and writing a teaching philosophy didn't come naturally to some. I was curious to see what an AI-generated philosophy would look like. It was scary good and very personal. In fact, it sounded like something that could be copied and pasted right into one's dossier.
Next, I asked ChatGPT how I could serve Native Hawaiian students at the college. The tool spit out a list of five practical ways that I, as an educator, could ensure I was doing so. A little creative combination of these two outputs would result in a quick and easy teaching philosophy. I am not freaking out.
Why share this? Aren't I worried that I've given someone ideas? No. I have no doubt that someone somewhere is using AI in exactly this way, but I cannot control that. What I can do is get to know my faculty and have meaningful conversations with them. In the same way that AI-generated text makes cheating easy for students, it makes cheating easy for us, too. Yet, I regularly use ChatGPT in my own life now.
I have had to use my own sense of professional ethics when deciding how and when that's appropriate. For example, a former work colleague asked me to write a letter of recommendation for a selective professional development opportunity. What I would normally do is research the training to see what they were looking for in their candidates. Then I'd compare that list to the colleague's CV, what I knew of their strengths, and my experience working with them. I could then craft a personalized letter of recommendation that carefully matched the person's education and goals with the needs of the organization.
Even if I wanted it to, ChatGPT couldn't do this for me. I could, however, have asked it to "write a letter of recommendation for a colleague to attend a professional development training on [subject]" and then edited the letter to fit the person's skills. I would have ended up with something adequate, but I'd have had that nagging, icky feeling that comes with taking shortcuts like this. Instead, I chose to use the tool to educate myself on the subject of the training, which helped me write a better, more targeted letter.
Regarding my colleagues and how they choose to use it: I fall back on my belief in the inherent goodness of others. Most especially, I choose to believe that as academics, we understand and value original thought, and I trust them to make decisions about how, when, or if to use ChatGPT themselves.
But what about students who have, in most cases, neither the experience nor the confidence in their own abilities to easily choose the more labor-intensive route? Further, they may lack the knowledge of what's allowable and what isn't. The temptation we may feel to use AI to do things for us can be amplified by students' fear of failure, inexperience, and yes, even laziness. But I think it would be a mistake to write it off as something to be banned.
There is already a lot of information out there about how to use ChatGPT productively in the classroom in acceptable ways, and therein lies the work. As we are figuring out for ourselves what's a respectable use of AI and what's not, we can have these conversations with others, including our students.
I recently took a survey for ChatGPT that asked me to indicate how I was using it:
- Helping me with work (e.g., debugging code, creating a template)
- Helping with school (e.g., math, creative writing)
- Learning something new outside of work or school (e.g., a new language, how to code)
- Getting feedback on my creative pursuits (e.g., short stories, poems)
- Creative brainstorming (e.g., generating ideas for a novel or game)
- As a search engine replacement (e.g., to answer random questions)
- Completing daily tasks or chores (e.g., finding recipes, creating to-do lists)
- For entertainment (e.g., telling jokes, telling fictional stories)
- Helping me play or create games, or playing games with me (e.g., writing prompts for D&D)
- To get life advice or support (e.g., therapy, coaching)
- As a general companion (e.g., someone to talk to)
- Something else (please specify)