The entire who’s who of American high technology was on Capitol Hill in Washington this Wednesday. For the first time, the sector’s great heavyweights, from Tesla CEO Elon Musk to Meta (Facebook) CEO Mark Zuckerberg, along with Alphabet (Google) chief Sundar Pichai and Microsoft founder Bill Gates, appeared as a group in the Senate. The reason: to offer their suggestions in a closed-door session on one of the hottest issues of the moment, the regulation of artificial intelligence. It is a goal everyone agrees on; the differences lie in how, and to what extent, to do it. One thing does seem clear: there are not many enthusiasts of the European model.
Elon Musk upon his arrival at the Senate. JULIA NIKHINSON (REUTERS)
That the twenty executives, whose companies’ combined revenues exceed the GDP of some countries, answered the call issued by the leader of the Democratic majority in the Senate, Charles Schumer, already speaks to the importance of the matter. Several of them have spoken out on various occasions in favor of control measures for a field into which investments are pouring and which has aroused enormous interest throughout society since the launch of the ChatGPT chatbot less than a year ago.
“It is important that we have a referee,” Musk told the press upon leaving the meeting, where he described this technology as “a double-edged sword.” A regulator, he noted, is necessary “to guarantee that companies take measures that are safe and that benefit the public.” Zuckerberg, for his part, said that “it is best that those who set the standards be American companies that can collaborate with our government to shape these models on important issues.”
Schumer hopes to pass legislation over the course of next year, before the presidential elections in November, to avoid possible interference by this technology in the electoral process. The aim is legislation that, on the one hand, encourages the rapid development of artificial intelligence so the country can reap its benefits and, on the other, puts a stop to the dangers posed by this technology before its full incorporation into daily life becomes a fait accompli. Legislators aim to control risks such as electoral interference, the spread of false information and attacks on key infrastructure.
The idea is to avoid a repeat of what happened with other technological sectors such as social networks, which were allowed to expand without regulation. Now that they have become tools in regular use among the population, they bring with them a whole series of problems, from the spread of fake news and harmful content to accusations of fueling mental health problems among adolescents and children, yet they have proven difficult to restrict in the United States. Numerous attempts in the US Congress to pass bills limiting them have come to nothing, at least for the moment, partly due to pressure from powerful technology companies and partly due to disagreements among legislators themselves.
On this occasion, it remains to be seen whether lawmakers will finally succeed. New Jersey Senator Cory Booker indicated that at the forum all participants agreed that “the government has a regulatory role.” But he also noted that writing a bill that can move forward will be a challenge.
The regulation of artificial intelligence, Schumer himself pointed out on the eve of the session in an interview with the AP news agency, will be “one of the most difficult issues we can ever tackle,” owing to its technological complexity, its constant evolution and its “enormous, broad impact throughout the world.”
“Today we begin a huge, complex and fundamental task: creating a foundation for an artificial intelligence policy that has the support of both parties (Democratic and Republican) and that Congress can approve,” Schumer declared in his remarks opening the session. “Congress must play a role, because without Congress we will not get the most out of artificial intelligence or reduce its risks.”
He later made it clear that his plans do not include replicating the EU’s approach. “If you go too fast, you can hurt things. The European Union has moved too fast,” he said, referring to the legislation on this technology that the European Parliament approved in June and which is now awaiting approval by the Council and consultations with the 27 member countries.
The European measure on artificial intelligence, the first in the world, covers any product or service that uses these tools and classifies them into four levels of risk, from minimal to unacceptable. It requires disclosure when material generated by artificial intelligence is included and contains safeguards against illicit content. But in an open letter, more than 160 business leaders have argued that the bill endangers the EU’s competitiveness and technological sovereignty.
Among those present at the Capitol this Wednesday were, in addition to those already mentioned, the CEOs of Nvidia, Jensen Huang; of IBM, Arvind Krishna; of OpenAI, Sam Altman; and of Microsoft, Satya Nadella. Joining them was the president of the AFL-CIO labor federation, Liz Shuler.
The presentation of the ChatGPT chatbot less than a year ago sparked interest in a sector and capabilities that until recently might have sounded like science fiction. These content generation systems can create images and sound, computer programs or text indistinguishable from those produced by a human hand. While these tools open up enormous possibilities for people and businesses, they also raise fears about how they can be used and their impact on existing jobs, and have sparked calls for greater transparency about their use.
In March, Elon Musk and a group of entrepreneurs and artificial intelligence experts called for a six-month pause in the development of systems more powerful than OpenAI’s GPT-4, citing possible risks to society. In May, Altman warned Congress that “my biggest fear is that this technology will go wrong. And if it goes wrong, it can go very wrong.” That same month, 350 executives and experts in the sector warned that this technology poses a “risk of extinction” for humanity. Geoffrey Hinton, one of the fathers of this technology, left Google because he believes these programs could lead to the end of civilization within a matter of years. A report by the market analysis firm Forrester estimates that by 2030 artificial intelligence could replace 2.4 million jobs in the United States.
Senators will not necessarily accept all the suggestions of technological leaders. But session participants hoped the meeting would lead to a better understanding in Congress of the realities of the sector, its risks and benefits, and what can be done.
Some concrete proposals have already been presented, including a key bill that would require warning labels on, for example, political advertising generated by artificial intelligence that includes sounds or images that could be misleading. Another initiative envisions the creation of a regulatory body that would review certain artificial intelligence systems before granting them a license to operate.
In July, the White House proposed a series of voluntary commitments to artificial intelligence companies, which seek to ensure that the sector’s capabilities are not used for harmful purposes. Among other things, the commitments call for including a seal or watermark in content generated by artificial intelligence, given the difficulty, or impossibility, of distinguishing images or text produced by these programs from reality. The White House is also preparing an executive order on artificial intelligence.
On Tuesday, eight companies in the sector, including Adobe, IBM and Nvidia, announced that they were signing on to the voluntary commitments requested by the White House.