A majority of Americans worry that artificial intelligence will be used to spread lies during the next presidential election, according to a recent Northeastern University survey.
The survey was conducted by Northeastern's new AI Literacy Lab to gauge public perceptions of artificial intelligence. It found that 83% of respondents are concerned about the spread of AI-generated disinformation during the 2024 presidential campaign.
One thousand American adults aged 18 and over were surveyed from Aug. 15 to Aug. 29. The lab released the findings as part of its official launch during the Institute for Experiential AI's business conference last month.
The research is the first project to come out of the lab, which plans to work with computer scientists, journalists and other media professionals to help them understand and use artificial intelligence.
“What we do is serve as a bridge between the scientific community and the mass media,” says John Wihbey, assistant professor of journalism and media innovation at Northeastern.
Wihbey co-directs the lab with Rupal Patel, a professor at the Khoury College of Computer Sciences and Bouve College of Health Sciences.
The research is the first step and will help inform the AI Literacy Lab as it hosts workshops and stakeholder meetings, Wihbey says.
“Really the goal of this particular research was to look at the ways in which people are informed about this emerging technology and start to look at areas where the public feels dread, anxiety or optimism,” says Wihbey.
The finding that 83% of Americans are concerned about the spread of misinformation during the election reflects both the state of online platforms like X/Twitter and the lack of tools to detect AI-generated content, Wihbey says.
“As a research community, we're really facing a crisis of some access to data, the ability to actually look at some of these systems to detect, say, large-scale disinformation campaigns driven by artificial intelligence,” he says.
“At the same time, we're a year away from a very consequential election showdown that could go a number of different ways, including sideways if misinformation and disinformation run rampant and really disrupt access to the ballot and the election process itself,” says Wihbey.
With artificial intelligence, bad actors can more easily create “troll farms,” which Garrett Morrow, a Ph.D. researcher at the AI Literacy Lab, describes as “an organization that employs people to make provocative posts, spread misinformation or propaganda, harass people online or deliberately engage in other antisocial behavior.”
“The goal of a troll farm is to sow discord, manipulate the public and even make money through ad revenue on their posts,” he says. “A now classic example would be the Russian Internet Research Agency and its actions during the 2016 presidential election, but many different entities have used troll farms, including the national governments of India and the Philippines, and even political campaigns in Western countries such as the US or the UK.”
This is just one of the many ways that generative artificial intelligence, or generative AI, can cause disruption and chaos, Wihbey and Morrow note.
“Generative AI is changing information economies, making it easier for bad actors to create plausible-sounding content in ways that can cross language and cultural barriers,” says Wihbey.
Another interesting insight from the survey results is that women are more skeptical than men about artificial intelligence, Wihbey says.
Of those surveyed, 36.5% of men said that media coverage of artificial intelligence makes them optimistic, compared with 22.2% of women. And 42.8% of men believe artificial intelligence will be developed responsibly, compared with 26.2% of women.
Additionally, those with STEM backgrounds tend to be more optimistic about AI. Of those surveyed, 54.6% with a STEM background said they were optimistic, while 26.2% without a STEM background said the same.
Patel says that people who have spent time learning and engaging with AI tend to have a more positive outlook on it.
“People who have spent a little more time understanding and learning about this technology have a much more balanced view,” she says. “So I think awareness and information is a critical way to debunk anything.”
The survey found that people read about AI in the media (77% consume AI news on a weekly basis), but most have never used it themselves (68% have never tried a large language model like ChatGPT).
“Even though we see a lot about AI in the news, people don't necessarily engage with it that much, at least with generative AI,” says Morrow.
Cesareo Contreras is a reporter for Northeastern Global News. Email him at c.contreras@northeastern.edu. Follow him on X/Twitter @cesareo_r and Threads @cesareor.