An AI Pause?


Good Idea or Panic Reaction?

The controversy over ChatGPT and generative AI is heating up.

Generative AI is a type of artificial intelligence that can create new content, such as text, images, and music. This technology trains computers to “learn” and generate new data by analyzing large amounts of existing data.

Generative AI has the potential to be used for a variety of purposes, such as creating new products, generating marketing materials, and producing art. As AI technology advances, generative AI's ability to create more complex and sophisticated content grows rapidly. While many welcome these advances and the benefits they can bring, some people worry that the technology is advancing too quickly.

The Future of Life Institute (FoLI), funded by Elon Musk, is calling for a six-month moratorium on the further development of generative AI. During this time, governments and blue-ribbon committees would address potential risks. The Future of Life Institute’s letter raises two important points.

First, it argues that we need to have social dialogue and regulatory frameworks in place before we develop and deploy generative AI technology any further. The six-month “cooling off” period is meant to allow for this dialogue.

The need for social dialogue and regulatory frameworks around any ground-breaking technology is important, whether for AI, biotech, or facial recognition. This dialogue is already underway in the press and on social media. Do we need to stop AI development cold in order to continue this discussion? Or can we work and talk at the same time?

We also surely need dialogue and regulation around global warming, environmental destruction, nuclear energy, and social justice. These problems have been going on for a long time. Discussion has been going on for a long time. Did FoLI call for a six-month moratorium on environmental destruction? Would six months be long enough to resolve all the potential problems AI could bring? Would committed problem-solving discussions even start within six months? Or would we do what we humans often do, namely, put off starting on something difficult for five months and three weeks and then ask for an extension of the moratorium?

Second, the FoLI letter argues that this technology also has the potential to be misused, such as by creating fake news stories or enabling “cheating” on college tests. Perhaps FoLI has not noticed, but we already have fake news stories, thanks to Russian social media bots and others. Any new technology has the potential for misuse, from guns to factory automation to computer technology.

Self-driving vehicles could be a very beneficial technology, making roads safer, saving gas, and making driving more comfortable. But some people point out potential dangers of this technology, including mechanical failures or computer glitches. FoLI, however, has not called for a moratorium on their development. In fact, the Washington Post (March 19, 2023) quotes Elon Musk himself on his plans. “By the middle of next year, we’ll have over a million Tesla cars on the road with full self-driving hardware,” Musk told a roomful of investors.

While FoLI raises valid concerns about the risks associated with generative AI technology, it ignores the fact that many of these risks are already present in today’s digital world. Similar risks exist in many of the technologies we use now, and new ones emerge continually. Should we pause six months to consider each of them?

Regardless of the need, the wisdom, or the practicality of a moratorium, it’s clear that the development and deployment of generative AI technology requires careful consideration and ongoing dialogue. Maybe the Future of Life Institute would like to sponsor and fund a symposium to kick off this discussion.
