LONDON—Does artificial intelligence have the ability to be creative? Should artificial intelligence be regulated by nation states? What are the limitations of artificial intelligence in business?
These were just some of the questions explored by a panel of experts in When Technology Goes Wrong: Navigating the Perils of AI in Modern Society. Held on Tuesday at Devon House at Northeastern University London, the panel prompted participants to think about the challenges presented by AI technologies.
The panelists, all faculty at Northeastern University London, were Alice Helliwell, Sian Joel-Edgar, Yu-Chun Pan, Tobias Hartung and Xuechen Chen, each representing a different field that intersects with AI. Moderated by Alexandros Koliousis, associate professor of computer science at Northeastern University London, the panel inspired more than 100 participants to think about artificial intelligence from a variety of perspectives: philosophy, mathematics, world politics and more.
“Our panel today is a celebration of local talent,” said Koliousis, who is also a researcher at Northeastern's Institute for Experiential AI. “The faculty of Northeastern University London are at the forefront of interdisciplinary research.”
Helliwell, an assistant professor of philosophy specializing in the philosophy of artificial intelligence, ethics, and aesthetics, opened the discussion with the question of whether artificial intelligence can be considered creative.
Like many questions in philosophy, she said, the answer depends on who you ask.
“The field splits two ways,” Helliwell said. “Some people think that only humans can be creative, while others say, ‘Well, I've seen the outputs of these generative systems, and they sure look creative.’”
To answer that question, she said, we need to define creativity, but that's not an easy question to answer either. One theory holds that something creative must have the element of surprise, must have novelty, and must have value. Others would argue that the machine must have agency to be creative.
“It depends on what you think is the right view of what creativity is,” she said.
Helliwell also touched on creative work made with the help of artificial intelligence and whether that counts as creative. For some artists, AI is actually limiting, she said.
“Some artists have suggested that using an AI system takes away some of their own artistic agency,” she said. “They have tried it and find that it actually limits what they're able to produce.”
Many consumers are well aware of these limitations, according to Pan, an associate professor of digital transformation. He talked about the business side of artificial intelligence, specifically how it has been integrated into company operations.
Pan started by asking the audience if they had ever used a customer service chatbot and if it had given them the results they needed. The audience murmured.
“Sometimes it works. Give it some credit,” he said with a laugh.
Studies show that 60 percent of the time a chatbot is used, human intervention is still required to solve the problem, Pan said.
“We know that these applications can be useful, and sometimes they're not,” he said. People tend to find workarounds when they don't see the value of AI applied to a business, he said.
But just because you can do something doesn't mean you should.
Chen is an assistant professor of politics and international relations. She talked about regulation, specifically whether cyberspace should be governed by individual nation-states or by regional entities like the European Union.
One approach, favored by stakeholders such as European nations and the United States, is to apply fundamental norms of democracy and human rights to cyberspace as well. Other countries, such as China and Russia, “prefer a sovereignty-based approach that prioritizes states' rights and puts more emphasis on the role of public authorities in governance,” she said.
Should the public participate in the creation of these AI applications? Joel-Edgar took up that question.
“There's a broader question of whether people need to be involved, because we decide whether technology is good or bad,” she said. She also spoke about explainability and the expectation that AI technology be understandable to the general public.
“If you go to a doctor and they give you a prescription, you don't ask them, ‘Well, can you explain that? What are your credentials?’” she said. “So why do we expect this from AI solutions?”
Hartung rounded out the panel. The assistant professor of mathematics discussed how artificial intelligence learns and makes decisions. If an AI is shown a picture of a cat, for example, it won't definitively say it's a cat, he said. Instead, it might assign an 85% probability that it's a cat, a 10% probability that it's a dog and a 5% probability that it's an airplane.
“What artificial intelligence actually does is try to learn a probability distribution, so if I give it some data, the AI says, ‘OK, this is the most likely explanation for what this data is,’” Hartung said.
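To make Hartung's point concrete, a classifier's output can be read as a probability distribution over candidate labels. Below is a minimal sketch in Python; the labels and raw scores are hypothetical, chosen only to reproduce the 85/10/5 split he describes, and the softmax step is one common way such scores are turned into probabilities, not a detail given on the panel.

```python
import math

def softmax(scores):
    """Turn raw model scores into a probability distribution that sums to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores a model might produce for one image.
labels = ["cat", "dog", "airplane"]
raw_scores = [4.0, 1.86, 1.17]

for label, p in zip(labels, softmax(raw_scores)):
    print(f"{label}: {p:.0%}")  # cat: 85%, dog: 10%, airplane: 5%

# The "most likely explanation" of the data is the label with the
# highest probability -- here, cat.
```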
Ultimately, the panelists left the audience with plenty of food for thought when it comes to artificial intelligence.
“Artificial intelligence is an area – today, a meta-discipline – where development seems to be moving faster than society can keep up with,” Koliousis said. “New technology affects society in complex ways, and its impact is not always uniformly positive. So we have to stop and ask, ‘What will be the effect on people?’”