This text belongs to ‘Artificial’, the newsletter on AI that Delia Rodríguez sends out every Friday. If you want to receive it, sign up here.
Dear readers: how fast and how slow things happen, right?
Last Wednesday I was invited to a meeting where dozens of journalists reflected on artificial intelligence. Divided into teams, we were presented with some quite plausible future scenarios. For example: you are a general-interest outlet and your competition has overtaken you because it has replaced a large part of its staff with machines; what do you do?
Several teams expressed the hope of leaving the dirty work to the AI so they could devote themselves to the nobler tasks of the profession: reporting and investigation. It seems the profession harbors the illusion that technology will free up 80% of its time so it can concentrate on what matters. It is curious how another keyboard-tapping profession, programming, fantasizes in a similar way. The CEO of the GitHub platform says that “sooner rather than later, 80% of the code is going to be written by Copilot [its artificial intelligence assistant]. And that does not mean (…) that the developer will be replaced. That means the developer has more time to focus on the 20% that they are writing.” Right now, the percentage of code written by Copilot at his company is 50%.
I hope I’m wrong, but the times I’ve seen a colleague freed from 80% of his work, he has been closer to being sent home than to reporting a war on the ground. In that respect, late capitalism seems fairly predictable. But nothing guarantees that the goalposts will stay where they are. The advertising market may change, the internet may fill up with so much garbage that it becomes unusable, or search engines as we know them may disappear.
What is certain is that in the coming years journalists will have to sit down to negotiate at every level, starting with ourselves and our own limits and capabilities. Within each outlet, style guides and editorial statutes will need to be adapted. Professional associations will have to review their codes of ethics. Unions will have work to do on collective agreements. Media associations will have to agree on which practices are acceptable. We will have to defend our interests before our own government, even if it merely transposes what Europe dictates, and see whether minimum standards can be agreed with the US. It will be time to review copyright with the technology companies, set limits and negotiate compensation for feeding the language models. Perhaps journalists will join artists or photo agencies in their complaints. Big media outlets may be able to benefit from individual deals.
It is inevitable to feel that this future is near. This week we learned that OpenAI, Google, Microsoft and Adobe are meeting with News Corp, Axel Springer, The New York Times and The Guardian to discuss copyright issues around their chatbots and image generators. Europe’s best-selling newspaper, the German Bild (Axel Springer), has announced cuts worth €100 million (200 jobs) as part of the reorganization of its local business, and has warned the newsroom to expect further editorial cuts due to the “opportunities of artificial intelligence”. Midjourney has launched a photography magazine that collects the best images created by its community, all of them artificial. GPT-Author lets you create a 15-chapter fantasy novel for $4 in a few minutes.
And yet, at the same time, the signs of the apocalypse never quite arrive, as if we were sitting on a crowded terrace waiting for a recession. The Reuters Institute’s annual report on the state of journalism warns that the “new technological disruption of artificial intelligence is just around the corner, threatening to generate a wave of personalized but potentially unreliable content”, yet it reports hardly any cases of actual use.
The Economist has set out to closely monitor signs of AI-driven job destruction and sees nothing noteworthy so far. McKinsey has just published a long and interesting report, The economic potential of generative AI: The next productivity frontier, which says that generative AI could add between $2.6 trillion and $4.4 trillion in annual productivity worldwide… but not today or tomorrow. They estimate that half of current work activities could be automated between 2030 and 2060. “The current capabilities of generative AI, together with those of other technologies, have the potential to automate certain work activities that today absorb between 60% and 70% of employee time. Generative AI has accelerated previous estimates by the McKinsey Global Institute, according to which technical automation had the potential to cover activities that take up half the time employees spend working,” they say.
What else has happened this week
– Well, now we know what Altman was up to on his European tour: just what we imagined. While he publicly called for regulation, he privately lobbied for fewer restrictions on his company, OpenAI. Specifically, he managed to get large general-purpose AI systems removed from the default “high risk” category in the draft, although it is still early days. A Time magazine exclusive.
– Paloma Abad poses an intriguing question in her newsletter: will the cosmetics industry discover its holy grail, the product that really works, thanks to AI?
– Sticking with danger as a marketing strategy: Meta says its voice model, Voicebox, is too good to be released publicly without risk. Meanwhile, ElevenLabs already lets you create an audiobook with your own voice in minutes and has raised $19 million to continue its audio work.
– The opening credits of Secret Invasion, the Marvel series for Disney Plus, were created with artificial intelligence. The result looks like this, and artists and fans have disliked it for various reasons. Pixar used AI to recreate fire (something apparently very complicated in animation) in Elemental.
– A flood of AI-generated child pornography has been detected on the dark web. It seems relatively easy to bypass the tools’ filters and create this kind of image, and the increase in volume makes it harder for investigators to find the real victims. It is also unclear, under US law, how illegal a virtual image of this kind is. Washington Post report, in English.
– The company The Harmony uses Dai-Chan, a 30-centimeter stuffed toy that looks like a child and can hold a conversation, in five nursing homes in Japan.
– Pay attention to this one: the Catalan health system will use AI to help its doctors in four areas: chest X-rays, diabetic retinopathy screening, melanoma diagnosis and the rational use of medication. “The machines will not replace the professional’s diagnosis,” said Josep Malvehy, director of the Clínic’s skin cancer unit, but “right now in Europe there are not enough skin specialists to see all the patients.”
– Good news: Artificial intelligence can predict certain early cases of pancreatic cancer.
– Sónar, heavily focused on AI this year as we mentioned in the last newsletter, has left us some good interviews in La Vanguardia. I don’t understand the artist Daito Manabe, but I wish I did: “the vector information in the latent space is going to be the seed for the next art. Latent space is a very high-dimensional space that only AI can understand.” Kate Darling, a robotics expert at MIT, delivered the most disturbing line of the week: “What worries me is not that your sex robot will replace your partner but that the robot will manipulate you and take advantage of you. Soon we will have to face that.”
– And on that note: if it’s with an AI, does it count as infidelity? A report in The Guardian describes how some users are using the Replika service to create lovers. Testimonies range from those who believe it has saved their marriage because it steers them away from real affairs to people with genuine doubts about the ethics of their own conduct. In English.
AInxiety levels this week: high. These are going to be interesting years in my trade.