Four artificial intelligence experts have raised concerns after their work was cited in an open letter dated March 22 and signed by hundreds of entrepreneurs and scientists calling for an immediate six-month pause in the development of systems "more powerful" than Microsoft-backed OpenAI's new GPT-4, which can hold human-like conversations, compose songs, and summarize long documents.
The letter, published by the Future of Life Institute (FLI) and titled Pause Giant AI Experiments, states that "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs."
Among the signatories are well-known figures such as Elon Musk, one of the founders of OpenAI; Steve Wozniak, co-founder of Apple; academics such as Stuart Russell and Ramón López de Mántaras; and writers such as Yuval Noah Harari. In total, 1,124 signatures.
The impact of ChatGPT
Since GPT-4’s predecessor, ChatGPT, was released last year, rival companies have rushed to launch similar products.
The open letter says that “human-competitive intelligence” AI systems pose profound risks to humanity, citing 12 investigations by experts, including university academics, as well as current and former employees of OpenAI, Google and its DeepMind subsidiary.
Since then, civil society groups in the US and the EU have lobbied lawmakers to rein in OpenAI's research.
Musk’s Shadow
Critics have accused the Future of Life Institute (FLI), the organization behind the letter, which is funded primarily by the Musk Foundation, of prioritizing imagined doomsday scenarios over more immediate concerns about AI, such as racist or sexist biases being programmed into the machines.
Among the research cited was On the Dangers of Stochastic Parrots, a well-known paper co-authored by Margaret Mitchell, who previously oversaw AI ethics research at Google.
Mitchell, now chief ethics scientist at artificial intelligence firm Hugging Face, criticized the letter, telling Reuters it was unclear what counted as "more powerful than GPT-4".
"By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI," she said. "Ignoring active harms right now is a privilege that some of us don't have."
Some experts criticize the letter's bias
Her co-authors Timnit Gebru and Emily M. Bender criticized the letter on Twitter, with the latter calling some of its claims "unhinged."
FLI president Max Tegmark told Reuters the campaign was not an attempt to hamper OpenAI’s corporate advantage.
“It’s quite funny. I’ve seen people say, ‘Elon Musk is trying to slow down the competition,'” he said, adding that Musk was not involved in writing the letter. “This is not about a company.”
Risks now
Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, also took issue with the mention of her work in the letter. Last year, she co-authored a research paper arguing that the widespread use of AI already posed serious risks.
Their research argued that the current use of AI systems could influence decision-making in relation to climate change, nuclear war, and other existential threats.
She told Reuters: “AI doesn’t need to reach human-level intelligence to exacerbate those risks.”
“There are non-existential risks that are very, very important, but they don’t get the same kind of attention on a Hollywood level.”
FLI, author of the letter, defends itself against criticism
Asked to comment on the criticism, FLI’s Tegmark said that both the short- and long-term risks of AI need to be taken seriously.
"If we cite someone, it just means we're saying they are endorsing that sentence. It doesn't mean they are endorsing the letter, or that we endorse everything they think," he told Reuters.
Dan Hendrycks, director of the California-based Center for AI Safety, who was also cited in the letter, defended its content, telling Reuters it was sensible to consider black swan events: those that seem unlikely but would have devastating consequences.
The open letter also warned that generative artificial intelligence tools could be used to flood the internet with "propaganda and untruth."
The opacity of Twitter
Dori-Hacohen also noted the irony of Musk signing the letter, citing a rise in misinformation on Twitter following his acquisition of the platform, documented by the civil society group Common Cause and others.
Twitter will soon roll out a new fee structure for access to its research data, which could hinder research on the subject.
“That has directly impacted the work of my lab, and that done by others studying misinformation and disinformation,” Dori-Hacohen said. “We are operating with one hand tied behind our back.”
Musk and Twitter did not immediately respond to requests for comment.