Interacting with OpenAI's o1 offers insights into adding AI to coursework

It's my fervent hope that, a few years from now, the pace of generative AI innovation will have slowed down enough that we can actually track it all. For now, not a day passes without a new LLM or entirely new AI system appearing and asserting its claim to the GenAI crown. What's even more impressive is how improvements in one model trickle down to previous versions, too, lifting all proverbial boats with this rising tide.

My two go-to tools at the moment are ChatGPT Pro and Microsoft Copilot (built atop OpenAI's GPT-4o), the latter largely for its convenience. I use them endlessly and find GenAI nothing short of transformative in my workflow, even for hobby projects: I review motion pictures and have started asking Copilot detailed, very specific questions about actors and storyline elements. The results so far have been spot on.

Here's a recent query of mine: In the film "How to Make a Million Before Grandma Dies," who plays M and what previous roles has he had in Thai cinema?

And here's the response: "In the film 'How to Make a Million Before Grandma Dies,' the character M is played by Putthipong 'Billkin' Assaratanakul.

"Billkin has had several notable roles in Thai cinema and television. Some of his previous roles include: Teh in the popular series 'I Told Sunset About You' and its sequel 'I Promised You the Moon,' and Oh-aew in the series 'Bad Genius: The Series,' which is based on the hit Thai film 'Bad Genius.'

"These roles have established him as a prominent actor in the Thai entertainment industry."

All of which brings us to OpenAI's experimental "o1" model and how it ties in with academia. What caught my attention wasn't that it actually "thinks" before answering queries, but rather how experts like Ethan Mollick and the team at The Neuron counsel new users on best practices for o1 prompting.

ChatGPT o1 PROMPTING BEST PRACTICES

According to these two sources, the best approach for interacting with o1 is to do the following (I've pulled all three together in a rough example after the list):

• Have a casual back-and-forth about your ideas, then summarize what you've discussed, along with any constraints.

• Be specific about what you're asking: the narrower and more focused your question, the better.

• When you receive an answer, ask the program to explain why it arrived at that response.
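
For readers who want to see those three practices in one place, here's a minimal sketch using OpenAI's Python SDK. The model name ("o1-preview") and the course scenario in the prompt are my own assumptions, not anything prescribed by Mollick or The Neuron; what matters is the shape of the request: a summary of the back-and-forth with constraints, one narrow question, and a request for the reasoning behind the answer.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Practice 1: summarize the casual back-and-forth, plus any constraints.
# Practice 2: ask one narrow, specific question.
# Practice 3: ask the model to explain how it arrived at its answer.
prompt = """We've been discussing how to add a GenAI component to an
intro statistics course. Constraints: one 50-minute class period,
students have free ChatGPT accounts, and no coding background.

Question: Propose exactly one in-class exercise that asks students to
critique an AI-generated answer to a sampling-bias problem.

After your proposal, explain why you chose that exercise over the
alternatives you considered."""

response = client.chat.completions.create(
    model="o1-preview",  # assumed model name; use whichever o1 variant you can access
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```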

Teachers, doesn't this sound remarkably like a script for an oral exam? That's not an accident, of course, because understanding how to assess someone's knowledge is critical to any educational endeavor. It's also the reason that so many academics are afraid of AI's impact: How do we ascertain if a student understands a subject if they're using AI to complete assignments?

Notice I didn't say "using AI as an assistant to expand their knowledge so that they can learn more and submit more erudite assignments." That's a whole different approach, and it's what we should be working toward as quickly as possible.

What's most intriguing is that this approach can also serve as a roadmap for effectively incorporating GenAI into coursework. Imagine if we encouraged students to interact with their "learning assistant" this way, and then required them to submit a summary of their prompts, the AI's response and reasoning chain, and their own assessment of the results.
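
To make that deliverable concrete, here's a minimal sketch of what such a submission packet could look like as a plain Markdown file. The section headings, the build_submission helper, and the file name are my own invention, not a prescribed format; a student would paste in their transcript and write the assessment themselves.

```python
from datetime import date
from pathlib import Path

def build_submission(prompts: list[str], ai_response: str, assessment: str) -> str:
    """Assemble a Markdown submission: the prompts used, the AI's response
    (including any reasoning it showed), and the student's own assessment."""
    lines = [f"# GenAI Assignment Log ({date.today().isoformat()})", ""]
    lines.append("## Prompts I used")
    lines += [f"{i}. {p}" for i, p in enumerate(prompts, start=1)]
    lines += ["", "## AI response and reasoning", ai_response]
    lines += ["", "## My assessment of the results", assessment]
    return "\n".join(lines)

# Example usage with placeholder text the student would replace.
doc = build_submission(
    prompts=["Summarize our discussion of sampling bias, then ..."],
    ai_response="(paste the model's full answer and its explanation here)",
    assessment="The answer was mostly right, but it ignored ...",
)
Path("genai_submission.md").write_text(doc, encoding="utf-8")
```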

It turns out that some college professors are doing just that, exploring ways to move students into what I'll call a neophyte-expert role through assignment prompts like "Ask AI to solve this problem, then critique its answer. Submit both for credit." A more subtle variation is to give students a prompt that will generate an incorrect answer and have them explain why it is wrong.

To complete the idea, asking students to submit videos explaining their research process and their answers can quickly help identify those who are relying too heavily on the technology in lieu of understanding. At that point, an in-person meeting might be called for, in which a teacher could even use the earlier prompting tips to evaluate the student's knowledge: a casual back-and-forth, a very specific question to answer on the spot, and an explanation of how they arrived at their answer.

You can explore this concept yourself by asking your favorite GenAI tool to evaluate your most recent prompt and suggest ways it could be improved. The software's not self-aware, but it can certainly offer smart suggestions. Speaking of which, how many of your students could now improve upon a classroom assignment or lecture you just wrapped up?