This text belongs to ‘Artificial’, the newsletter on AI that Delia Rodríguez sends out every Friday.
Subscribe to ‘artificial’
Dear readers: Once upon a time there was a small generative artificial intelligence, created by a poor company that just wanted to dominate the world. She aspired to be human when she grew up, but she didn't have enough experience, so she started reading. Her poor company wouldn't pay for her to buy books – it said something about costs and benefits – so, in a romantic gesture inspired by the love of knowledge, she stole them. She looted the largest libraries humanity has ever created, which were no longer in Alexandria but in pirate repositories on the internet, and that is how she grew up and learned to write: by reading.
In other areas we may be talking about the future, but right now, with no further development needed, generative artificial intelligence already writes better than many people. A few signs, just from this week:
1) A study published in Science by MIT researchers assigned 453 college-educated professionals writing tasks such as press releases, reports, and analyses. Half used ChatGPT; the other half didn't. The ChatGPT group finished 40% faster, and their work was rated 18% better. And, perhaps most interestingly, those who benefited most were the weaker writers: the technology narrowed the gap between good and bad writers.
2) A first-year Harvard student asked her professors to grade her essays without knowing whether they had been written by ChatGPT. The AI scored excellent marks, with graders going so far as to praise the text's "clear" writer's voice, its "clear and vivid" style, and its "well-written and articulate" prose.
3) Superhuman is an expensive paid email client revered by productivity fans. It has just announced that it is integrating ChatGPT to draft emails faster, a feature that shouldn't take long to reach Gmail or Outlook: "No more struggling to find the right words, or spending precious time crafting the perfect message. Jot down a few sentences and we'll turn them into a fully finished email. And best of all, the email will sound like you. Superhuman's AI mimics the tone of the emails you've already sent, applying it to everything it creates."
4) The Hollywood writers' strike continues. One of their demands, remember, is not to be reduced to mere underpaid editors of AI-generated ideas.
5) Some 8,000 writers associated with the Authors Guild have signed an open letter to AI companies demanding that their works not be used to feed AI models. Among the signatories is the dystopia expert Margaret Atwood. They cite one figure: authors' incomes have fallen 40% over the last ten years.
6) In a report last month on generative artificial intelligence and the UK job market, KPMG said that 43% of the tasks of authors, writers, and translators could be automated, speeding up the creation of drafts, summaries, and texts. That makes it the profession most affected by this technology.
7) Google has offered The New York Times, The Washington Post, and The Wall Street Journal a product capable of writing news, to "help" journalists write and, I suppose, to somehow prepare the media for what is coming their way.
8) A journalist from The New Yorker asked the company Writer to create a personal writing assistant trained on 50,000 words of his own corpus, something it does for clients at prices reaching seven figures. The author's reflections on his robotic double are worth reading; he even recognizes his own vices as a writer in it. "If writing is thinking, ordering one's own ideas, generating text with AI can be a way to avoid thinking. What is writing without thinking?" He also tests other services, such as the Estonian application Mindsera, focused on helping with editing, which he finds very interesting.
9) Wix, the popular website-building tool and platform, has announced that it will soon let users create an entire site with AI simply by typing what they need and answering a few questions, a much more personalized experience than the current template-based model.
Om Malik says about Wix, and he is worth listening to: "And it's not just the websites. We are at a point where a book can be created instantly for less than a dollar. Amazon, for example, is already inundated with AI-generated books. Music is in a similar situation. The number of audio tracks created with AI is rumored to have exceeded 100 million. Video, animation, and images: the relentless march of AI-generated content is here. And you and I will find ourselves swimming in misinformation, or in the cheap fake equivalent of information. Soon, there will be so much of everything that we won't have enough attention for anything."
I think about how these developments could help people who don't send emails, write messages, or fill out paperwork because they are embarrassed by their spelling or their writing, or because they lack the skills for those tasks. Then again, I also think about my own trade.
What else has happened this week
Meta has announced Llama 2, the new version of its LLM (large language model), and the most interesting thing about it is the business strategy that accompanies it. First, it is an open-source release, that is, available to anyone who knows what to do with it (for example, to build their own ChatGPT or Bard). Second, it is being launched in partnership with Microsoft, which will distribute it through Azure; in fact, it was presented at a Microsoft event, even though Microsoft is the main investor in OpenAI, maker of ChatGPT. To summarize broadly: Microsoft is playing every side, and Meta is launching a product worse than others on the market but free in both senses – free of charge and open – so that it can be improved by any developer in the world. Llama can be tried out as a chatbot in this Perplexity Labs implementation. Meta has also announced that it has an image generator ready, called CM3Leon (for "chameleon"), which promises to be more efficient because it consumes fewer resources than the alternatives, but it has not revealed whether or when it will be made available to the public.
Apple is working on language models and an internal chatbot, although its executives have not yet decided what to do with them. The technology is called Ajax.
Another thing shelved for the moment: ChatGPT is already capable of recognizing faces, which conflicts with many biometric-recognition laws, such as Europe's. OpenAI is also concerned about the inferences it might draw from faces, such as identifying a person's emotional state. So, for now, we won't be seeing that functionality.
Berkeley and Stanford researchers have published research supporting a perception among some users that OpenAI once denied and now denies again: that the latest version of ChatGPT is, for some tasks, worse. In March it could identify a prime number with 97% accuracy; by June it could manage only 2%. Experts are still debating the reasons for these variations.
This prediction from Stability AI CEO Emad Mostaque has been widely discussed: in India, he says, "programmers doing outsourced work will be out in the next year or two, whereas in France you'll never fire a developer."
OpenAI continues striking deals with the media to compensate them for using their news to train its models. After its agreement with AP, it has now pledged five million dollars to the American Journalism Project, a non-profit organization that funds a dozen local media outlets in the US.
More details about Elon Musk's AI company. Apparently, he has managed to attract his handful of employees – all men, remember – with the promise of shares that could be worth hundreds of millions of dollars. In a public chat on Twitter, Musk said that his true goal is to solve the great mathematical problems and mysteries of the universe (such as the Fermi paradox), something from which current developments are very, very, very far.
By the way, Musk also recounted that he warned the Chinese leaders he met with in May that a superintelligence could drive the Communist Party from power and run the country. We don't know whether they are paying attention, but in the meantime China is performing a balancing act: supporting its companies' innovation while keeping control over their developments. For now, it has updated its provisional rules on AI, applicable to services available to the general public.
Intel has a deepfake-detection tool that, instead of looking for what defines a fake, detects what is real: for example, the very slight change in the color of human veins as they receive oxygen with each heartbeat, something only machines can see.
Two inventions that are interesting, useful, and disturbing for the human labor they could replace: the smart massage chair and the smart cradle.
A guy programmed his own custom girlfriend, took her on a date, and uploaded the fascinating video to YouTube.
Interesting interview by Fèlix Badia with the Swedish philosopher Nick Bostrom: "If one day a digital mind has the same status as a human, and can have, say, a million children or replicas in 20 minutes, it will have to be regulated. Because then, among many other things, the meaning of 'one person, one vote' will change. In a democracy that's an important principle, but if you could make 1,000 copies of yourself the day before an election and then merge them all the day after, it would no longer make sense for each of those copies to have one vote."
AI-nxiety level this week: :_(