Learning with AI

Photo by Andy Kelly on Unsplash

An Essential 21st Century Skill

ChatGPT and other AIs that can generate natural language text when prompted are big news. Recently, a Wharton business school professor announced that ChatGPT had passed the final exam he had given it and speculated that the AI might eventually earn an MBA. This ignited a firestorm of controversy, speculation, and philosophizing—including from me.

The Stanford School of Education (SUSE) issued a news release about its current thinking and investigations. In a December 20, 2022 article, the Graduate School of Education News noted that ChatGPT can write essays that are difficult to distinguish from ones written by humans, compose poetry, and write computer code. Wow! What does all this mean for, well, almost everything? This week, we’ll focus on education.

Sarah Levine from Stanford summed up most of the discussion around ChatGPT and other generative AIs: “Teachers are talking about ChatGPT as either a dangerous medicine with amazing side effects or an amazing medicine with dangerous side effects. When it comes to teaching writing, I’m in the latter camp.” Much of the discussion seems to fall into either “how do we prevent students from using ChatGPT?” or “how can we teach students to use ChatGPT responsibly?” The overall message of the news post was how to harness this new technology.

ChatGPT and generative AI represent a breakthrough in artificial intelligence, with almost everyone astonished at what they can do (see last week’s post) and many pointing out their shortcomings. But in another way, generative AI is simply the next step in the growing sophistication and scope of technology. Some regressive educators moaned about Google and Wikipedia, which allowed students to look up answers to factual questions quickly and easily instead of needing to memorize them. They complained students could simply copy factual answers for their homework from a quick search rather than plowing through encyclopedias and reference books for hours.

Forward-thinking educators welcome the opportunity to move beyond the teaching of mere facts. They teach higher-order thinking, which requires comprehending facts, recognizing patterns among them, and synthesizing new ideas from existing concepts.

These educators instruct their students about how to use these tools and how to think critically and creatively about the welter of sometimes conflicting information. Generative AI will likewise require new ways of thinking about what we learn and how we evaluate it.

Many have noted that ChatGPT sometimes gets its facts wrong. That is because it gets its facts from the internet—the right and the wrong; the good, the bad, and the ugly. Garbage in, garbage out. When wrong or nonsensical information is used as input for any computer system, including ChatGPT, it produces nonsense output. We all know the internet is rife with errors, gibberish, and outright lies. Critical thinking, the skill of evaluating and assessing information, will become much more important than simply generating information.

For example, even now, anyone can add almost anything to Wikipedia, and you can find false and mistaken information right alongside valuable information. The wiki process relies on the wisdom of crowds to make corrections and updates. Some dedicated and knowledgeable people take pride in editing Wikipedia. Over time, the quality of the data improves, though it is never 100% reliable.

In the same way, the process of science, which requires replication and verification, corrects mistakes over time and builds a solid foundation. Generative AI “learns” by assimilating vast quantities of information from the internet and distilling it into the information it dispenses when queried. So Victor Lee, also from Stanford, calls for “training” generative AI on “the most pertinent and valuable use cases” and curating those cases in order to produce the best versions of generative AI that we can. He says we need to ensure that the use of generative AI is “ethical, equitable, and accountable.”

In a famous cartoon, which I wish I could find to post here, a man walks into his home in the evening, looking exhausted and disheveled. His spouse looks at him questioningly and he says, “The computer broke down today and we had to think.”

The days of shoveling masses of data, hoping it will solve a problem, are over. From now on in the 21st century, we will all need to learn to think more, think harder, and think more critically.
