    Experts cited in Musk-backed campaign criticize self-serving use of their work

    By admin | April 1, 2023

    Four artificial intelligence experts have raised concerns after their work was cited in an open letter, dated March 22 and signed by hundreds of entrepreneurs and scientists, calling for an immediate six-month pause in the development of systems “more powerful” than GPT-4, the new model from Microsoft-backed OpenAI, which can hold human-like conversations, compose songs, and summarize long documents.

    The letter, published by the Future of Life Institute (FLI) and titled “Pause Giant AI Experiments”, points out that “AI systems with intelligence competitive with humans can pose profound risks to society and humanity, as demonstrated by numerous investigations and recognized by the main AI laboratories.”

    Among the signatories are well-known figures such as Elon Musk, one of the founders of OpenAI; Steve Wozniak, co-founder of Apple; academics such as Stuart Russell and Ramón López de Mántaras; and writers such as Yuval Noah Harari. In total, there are 1,124 signatures.

    The impact of ChatGPT

    Since GPT-4’s predecessor, ChatGPT, was released last year, rival companies have rushed to launch similar products.

    The open letter says that AI systems with “human-competitive intelligence” pose profound risks to humanity, citing 12 pieces of research by experts, including university academics as well as current and former employees of OpenAI, Google and its DeepMind subsidiary.

    Since then, civil society groups in the US and the EU have lobbied lawmakers to rein in OpenAI’s research.

    Musk’s Shadow

    Critics have accused the Future of Life Institute (FLI), the organization behind the letter, which is funded primarily by the Musk Foundation, of prioritizing imagined doomsday scenarios over more immediate concerns about AI, such as racist or sexist biases being programmed into the machines.

    Among the research cited was On the Dangers of Stochastic Parrots, a well-known paper co-authored by Margaret Mitchell, who previously oversaw AI ethics research at Google.

    Mitchell, now chief ethics scientist at artificial intelligence firm Hugging Face, criticized the letter, telling Reuters it was unclear what counted as “more powerful than GPT-4”.

    “By treating many questionable ideas as fact, the letter affirms a set of priorities and a narrative on AI that benefits FLI supporters,” she said. “Ignoring active harms right now is a privilege that some of us don’t have.”

    Some experts criticize the letter’s bias

    Her co-authors Timnit Gebru and Emily M. Bender criticized the letter on Twitter, with the latter calling some of its claims “unhinged.”

    FLI president Max Tegmark told Reuters the campaign was not an attempt to hamper OpenAI’s corporate advantage.

    “It’s quite funny. I’ve seen people say, ‘Elon Musk is trying to slow down the competition,'” he said, adding that Musk was not involved in writing the letter. “This is not about a company.”

    Risks now

    Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, also took issue with the mention of her work in the letter. Last year, she co-authored a research paper arguing that the widespread use of AI already posed serious risks.

    Their research argued that the current use of AI systems could influence decision-making in relation to climate change, nuclear war, and other existential threats.

    She told Reuters: “AI doesn’t need to reach human-level intelligence to exacerbate those risks.”

    “There are non-existential risks that are very, very important, but they don’t get the same kind of Hollywood-level attention.”

    FLI, author of the letter, defends itself against criticism

    Asked to comment on the criticism, FLI’s Tegmark said that both the short- and long-term risks of AI need to be taken seriously.

    “If we quote someone, it just means we’re saying they’re endorsing that sentence. It doesn’t mean they endorse the letter, or that we endorse everything they think,” he told Reuters.

    Dan Hendrycks, director of the California-based Center for AI Safety, whose work was also cited in the letter, defended its content, telling Reuters that it was sensible to consider black swan events, those that seem unlikely but would have devastating consequences.

    The open letter also warned that generative artificial intelligence tools could be used to flood the internet with “propaganda and falsehood.”

    The opacity of Twitter

    Dori-Hacohen said it was “pretty rich” that Musk signed it, citing a rise in misinformation on Twitter following his acquisition of the platform, documented by the civil society group Common Cause and others.

    Twitter will soon roll out a new fee structure for accessing its research data, which could make research on the subject more difficult.

    “That has directly impacted the work of my lab, and that done by others studying misinformation and disinformation,” Dori-Hacohen said. “We are operating with one hand tied behind our back.”

    Musk and Twitter did not immediately respond to requests for comment.
