ChatGPT: Terror or Tool?

Image by kjpargeter on Freepik

When Christian Terwiesch, a Wharton professor, announced that ChatGPT might earn an MBA, the internet went wild. AI is big news yet again. ChatGPT can generate entire essays on its own and is not limited to simply responding to prompts. This past November, OpenAI, the company behind ChatGPT, released it to the public. In January, about a month later, when I went to sign up for an account, there was already a waiting list. And why not? The promise is tremendous.

Could an AI like ChatGPT really qualify for an MBA? Terwiesch, an operations management professor, tried it out by giving it questions from one of his final exams. He concluded the AI could have earned a B or B- in his course. Extrapolating from a few exam questions seems a bit of a stretch, but it is a striking claim nevertheless. This biz-school version of the Turing test raises a lot of issues, among them questions about how we assess learning.

Did ChatGPT learn the material? Or was its reasoning good enough to glean the relevant material and then compose a seemingly reasonable answer to a test question? Or is that indistinguishable from human learning? How is human learning different from machine learning? Both people and machines learn from examples, but the enormous capacity of machines for analyzing tremendous volumes of cases may give machines an advantage. But does Garbage In, Garbage Out still apply? What about all those cases on the internet? Are they all good? Equally good?

A human learning from a Wharton prof has the advantage of reviewed, curated, and selected cases and learning materials. Do we need to ensure AIs get a higher-quality “curriculum”?

What should schools and universities do about ChatGPT and similar AI? The New York City and Seattle school districts have both banned it for students and teachers alike, presumably to prevent widespread cheating and preserve the established learning process.

But what is cheating in this Brave New World? Even now, students are told to “do your own work,” yet once in the workforce, teamwork and collaboration are the expected norm. What is cheating, when you come down to it? It was once considered cheating to use a calculator in math class; now it is expected.

As our technology changes, our expectations for what to learn, and how much, have changed with it. Does anyone still teach students how to compute square roots by hand? Why would you? It makes sense to understand what a square root is, why it matters, and how it can be used both for practical work and to extend theory. Maybe it even makes sense to walk through one hand calculation in your life, rather like visiting historic Jamestown and letting your kids help make a candle by hand to see how it’s done.
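For the curious, the old pencil-and-paper drill is essentially the ancient Babylonian method: guess, divide, average, repeat. Here is a minimal sketch in Python; the function name and the stopping tolerance are my own illustrative choices, not anything from Terwiesch’s exam or OpenAI’s code.

```python
def babylonian_sqrt(x, tolerance=1e-10):
    """Approximate the square root of x by the Babylonian method:
    repeatedly average the current guess with x divided by that guess."""
    if x < 0:
        raise ValueError("no real square root for a negative number")
    if x == 0:
        return 0.0
    guess = x / 2.0  # any positive starting guess will do
    # Absolute tolerance: fine for a classroom sketch, not for tiny inputs.
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2.0
    return guess

print(babylonian_sqrt(2))  # 1.4142135623746899, vs. math.sqrt(2) = 1.4142135623730951
```

A handful of iterations gets within a dozen decimal places, which is precisely why nobody drills the procedure anymore: the understanding matters, the grinding doesn’t.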

How will learning and work change with AI now? How much will we need to learn and embed in our brains, versus how much to rely on tools that extend what is in our brains? Just as Google, Wikipedia, and the always-with-you smartphone changed what we think we need to remember, has ChatGPT pushed that boundary again?

The CEO of OpenAI and the innovation professor at Wharton both call for embracing generative AI rather than fighting or banning it. And why not? Isn’t working with AI a 21st-century skill? AI clearly won’t go away, and it will only get stronger. Human history is a story of adopting new, more powerful tools, and this is the latest chapter.
