The development of AI tools, such as the ever-popular ChatGPT, has raised many questions about how to safely deploy these shiny new technologies. Instead of shying away from their potential, Northeastern President Joseph E. Aoun says we should embrace innovation.
These questions served as the basis for a discussion this week between The Chronicle of Higher Education and higher education leaders titled, “AI and Higher Ed: the Leadership Perspective.” The event began with a Q&A between the publication's assistant managing editor, Ian Wilhelm, and Aoun.
“Innovation is often scary,” Aoun said when asked about whether specific regulations should apply to the nascent technology — and whether Northeastern has any idea what those rules should look like.
“I wouldn't start with the regulations,” Aoun said. “You can't slow down an innovation and we can't react to an innovation by being protectionist.”
Concern about the pace of AI development has intensified in recent months in response to the widespread use of ChatGPT and other generative AI products. These concerns run the gamut from the explicitly tangible, such as the way chatbots extract people's personal information while scraping data from the internet, to more nebulous and unspecified ones, such as those raised by critics in a recently proposed six-month moratorium on the technology.
Chief among these concerns is the possibility that, without careful planning and preparation, human beings could create a “superhuman” intelligence—the likely result of which would be an artificial intelligence that “doesn't do what we want and doesn't care for us nor for sentient life in general,” writes Eliezer Yudkowsky, author and AI researcher.
Asked about the moratorium, which has received support from billionaire entrepreneur Elon Musk and Apple co-founder Steve Wozniak, among others, Aoun argued that deep social reflection on the implications of AI-based technology is already underway. And since tools like ChatGPT are already being tested by students and researchers across higher education, he thinks much of the chatter around the moratorium is overblown.
That doesn't mean there isn't a need for some serious thought about the safety and ethical application of these technologies, he added.
“I think banning ChatGPT or banning AI from the classroom or from research is not something we want,” Aoun said. “What we want is to understand the tool, the limitations of the tool and how it can be used. After all, we are a center of experimentation and innovation. Let us not be afraid.”
For Aoun, the widespread, almost breathless doomsaying about artificial intelligence at the moment is reminiscent of the anxieties that arose in response to the home computer and the invention of the internet.
“Talking about banning ChatGPT is like what happened when the Internet first started,” Aoun said. “Our colleagues then told us that students should not use information from the Internet to write their essays or conduct their research. Now we laugh about it.”
The Q&A came a day before Northeastern faculty met on the university's Boston campus for a two-hour symposium exploring the ways generative artificial intelligence tools are already being used in the classroom, the workplace and health care.
Tanner Stening is a reporter for Northeastern Global News. Email him at t.stening@northeastern.edu. Follow him on Twitter @tstening90.