Different countries have projects to regulate artificial intelligence. / Photo: AFP
Governments around the world advanced this week with plans to regulate Artificial Intelligence (AI), given its effects on issues such as security, democracy, misinformation and the labor market. But the very creation of a legal framework is a challenge, due to the rapid evolution of the technology, its scope, and disagreements between countries over reaching a common convention.
In an attempt at a form of intercontinental regulation, the European Union (EU) and the United States announced last Wednesday that "very soon" they will present the draft of a "code of conduct" for companies in the sector to sign on a voluntary basis, with the idea that "all like-minded countries" adopt it as well.
The United States and the European Union announced that they will soon present a code of conduct for companies in the sector to sign on a voluntary basis
This lack of obligation could detract from its weight, although there is a precedent of the sector's own actors calling for some type of intervention: at the end of March, a group of academics, experts and businessmen, including Elon Musk (CEO of SpaceX, Tesla and Twitter) and Steve Wozniak (Apple co-founder), called for a six-month moratorium on AI research, warning of "great risks to humanity."
"What's interesting about the current discourse around regulation is that, typically, governments and big tech companies took opposing views. What is unique in the case of AI (particularly ChatGPT) is that experts from many organizations are leading the call to pause," Stan Karanasios, Professor of Information Systems at the University of Queensland (Australia), explained to Télam.
“Governments should take advantage of this opportunity to develop regulations,” said the academic, who is dedicated to studying how technology affects organizations and society.
"Governments should take this opportunity to develop regulation," Stan Karanasios
In that framework, the EU is preparing a series of mandatory rules that would come into effect, at the earliest, by the end of 2025: a distant date for a sector in constant evolution that, although dominated by giants such as Microsoft (main shareholder of OpenAI, the firm behind ChatGPT), Meta and Google, is populated by new platforms that appear every day thanks to open source.
China also has regulatory plans, notably a "safety inspection" of AI tools.
The challenge for any regulation that emerges is not to become obsolete.
The issue is on the political agenda of the Asian giant: President Xi Jinping this week led a meeting of the Communist Party in which they discussed the need to "devote efforts to safeguard political security and improve the governance of Internet and AI data security," the local news agency Xinhua reported.
In Costa Rica, meanwhile, deputies from different parties presented last Thursday a bill to regulate AI, with the curious detail that it was drafted using ChatGPT. "Technology is an instrument at the service of the human being and as such we must control it," said legislator Vanessa Castro, one of the promoters of the project.
An initiative along the same lines was also presented in Brazil. Canada was a pioneer in seeking to advance regulation, which is also under debate in the United Kingdom, Australia and Japan, while the G7 leaders who met two weeks ago in Hiroshima called for "advancing the discussions to achieve a trustworthy AI, in keeping with shared democratic values."
In the background, there is the first ethical framework on artificial intelligence approved by UNESCO in November 2021, with a series of recommendations to its 193 member states to take advantage of technology and reduce the risks it entails.
But beyond this common agreement, the appearance of different proposals across various latitudes already forms a barrier to imposing effective regulation on a technology of planetary scope.
"A more harmonized approach is needed on a global scale, but it is clear that reaching a global consensus will be difficult," Karanasios said.
Another obstacle is deciding which points to regulate, since "experts do not agree on the dangers" of AI, although the academic listed some issues that should be included, such as security, privacy, misinformation, knowledge theft and labor market disruptions.
In that sense, Michele Finck, professor in the chair of Law and Artificial Intelligence at the University of Tübingen (Germany), told this agency that it is "difficult to answer in the abstract" which AI issues should be regulated.
"It is a general-purpose technology that can take the form of software or be integrated into other products. As such, it can be applied in many different ways: it can be a surgical robot, software that generates deepfakes (video, image or voice files manipulated to look hyper-realistic, but fake), or a tool that recommends songs based on personal musical preferences," she said.
And she contrasted: "While a deepfake spread on a large scale through social networks can cause political unrest and alter democratic processes (for example, compromising images of politicians before an election), in the case of surgical robots it is something that should be encouraged if they allow procedures that currently cannot be performed."
On the other hand, she put the current situation in perspective by noting that AI "does not operate in a legal vacuum," since it must be governed by existing regulations, such as personal data protection laws (used by Italy to temporarily ban ChatGPT), liability for defective products (in the case of medical devices) or defamation rules (for the damages that a deepfake can generate).
Beyond this, there is another challenge for all regulation: not to become obsolete in the face of constant technological change.
Lightning speed
"Time is certainly an issue, as AI continues to evolve at a rapid rate. Although this is challenging, it does not mean that regulation is impossible. The EU bill, for example, attempts a definition of AI that refers back to a list of specific technologies which can be updated over time," Finck said.
Karanasios does not share that view: "The time lag between government action and the rapid pace of change makes post-hoc (after the fact) regulation a toothless tiger. By then, AI and the companies that develop it are too powerful and too integrated into society."
"Creating regulation once these AI systems are fully integrated into society will not be effective. We have learned this from the case of digital platforms such as Facebook. It is a conversation that we have to have now as a society," he concluded.