
OpenAI's Altman and other AI giants back warning about 'extinction' risk of advanced AI

Make way for another headline-grabbing AI policy intervention: hundreds of AI scientists, academics, tech CEOs and public figures, from OpenAI CEO Sam Altman and DeepMind CEO Demis Hassabis, to veteran AI computer scientist Geoffrey Hinton, MIT's Max Tegmark and Skype co-founder Jaan Tallinn, to musician Grimes and populist podcaster Sam Harris, to name a few, have joined a statement urging global attention to the existential risk of AI.

The statement, hosted on the website of a privately funded, San Francisco-based nonprofit called the Center for AI Safety (CAIS), seeks to equate the risk of AI with the existential harms posed by the apocalypse, and calls on policymakers to focus attention on mitigating what its signatories say is a "catastrophic" risk of extinction from AI.

Here is the statement in full:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

According to a brief explanation on the CAIS website, the statement has been kept "succinct" because its authors are concerned that their message about "some of the most severe risks of advanced AI" not be drowned out by discussion of other "important and urgent AI risks," which they nonetheless imply are getting in the way of the debate about extinction-level AI risk.

However, in recent months we have heard the same concerns loudly and repeatedly, as AI hype has grown thanks to increased access to generative AI tools like OpenAI's ChatGPT and DALL-E, which has led to a glut of headlines about the risk of "superintelligent" killer AIs. (Like this one, from earlier this month, in which Hinton, a signatory to the statement, warned of the "existential threat" of AI taking over. Or this one, from last week, in which Altman called for regulation to prevent AI from destroying humanity.)

There is also the open letter signed by Elon Musk (and dozens of others) in March, which called for a six-month pause on the development of AI models more powerful than OpenAI's GPT-4 to allow time for shared safety protocols to be designed and applied to advanced AI, warning of the risks posed by "increasingly powerful digital minds that no one, not even their creators, can reliably understand, predict or control".

So, in recent months, there has been a barrage of highly publicized warnings about AI risks that don't exist yet.

This bombardment of hysterical headlines has arguably diverted attention from a deeper examination of existing harms. For example, the free use of copyrighted data to train AI systems without permission or consent (or payment); or the systematic scraping of personal data online in violation of people's privacy; or the lack of transparency from AI giants about the data used to train these tools. Or, indeed, flaws such as misinformation ("hallucination") and risks such as bias (automated discrimination). Not to mention AI-driven spam. And the environmental toll of the energy spent training these AI monsters.

It is not surprising that, following a meeting last week between the British Prime Minister and a number of top AI executives, including Altman and Hassabis, the government appears to be changing course on AI regulation, with a sudden interest in existential risk, reports The Guardian.

Talk of the existential risk of AI also distracts from issues around market structure and dominance, as Jenna Burrell, Research Director at Data & Society, pointed out in this recent Columbia Journalism Review article analyzing media coverage of ChatGPT, arguing that we need to stop focusing on red herrings, such as the potential "sentience" of AI, and focus instead on how AI is further concentrating wealth and power.

So, of course, there are clear commercial motivations for the AI giants to want to direct regulatory attention toward the distant theoretical future, with talk of an AI-powered doomsday, as a tactic to divert policymakers' attention away from more fundamental competition and antitrust considerations in the here and now. (And data exploitation as a tool to concentrate market power is nothing new.)

It certainly says a lot about existing AI power structures that executives at AI giants like OpenAI, DeepMind, Stability AI and Anthropic are so happy to come together when it comes to publicly amplifying talk of the existential risk of AI, and how much more reluctant they are to come together to discuss the harms their tools may be causing right now.

OpenAI was a notable non-signatory of the aforementioned open letter (signed by Musk), but several of its employees support the CAIS statement (while Musk apparently does not). So the latest statement appears to offer an (unofficial) corporate response from OpenAI (and others) to Musk's previous attempt to hijack the existential AI risk narrative to serve his own interests (which no longer favor OpenAI leading the AI charge).

Rather than calling for a pause in development, which could freeze OpenAI's lead in the field of generative AI, the statement pressures policymakers to focus on risk mitigation, and does so while OpenAI is simultaneously funding efforts to shape "democratic processes for directing AI," as Altman put it. The company is thus actively positioning itself (and deploying its investors' wealth) to influence the shape of any future mitigation guardrails, alongside ongoing in-person lobbying efforts directed at international regulators. Recently, Altman also publicly threatened to pull OpenAI's tools out of Europe if the draft EU AI regulation was not relaxed to exclude its technology.

Then again, some of the signatories to Musk's letter have simply been happy to seize another publicity opportunity, stamping their names on both (hi, Tristan Harris!).

But what is CAIS? There is little public information about the organization hosting the statement. However, it does acknowledge lobbying policymakers. Its website says its mission is to "reduce societal-scale risks from AI" and states that it is dedicated to fostering research and field-building to this end, including funding research, and that it has a stated policy advocacy role.

The website's FAQ offers limited information about its financial backing (it says it is funded by private donations). In response to the question "Is CAIS an independent organization?", it states briefly that it "serves the public interest":

CAIS is a nonprofit organization funded entirely by private contributions. Our policies and research directions are not determined by individual donors, ensuring that our focus remains on serving the public interest.

We have contacted CAIS to ask questions.

In a Twitter thread accompanying the statement's launch, CAIS director Dan Hendrycks expands on the explanation given above, naming "systemic bias, misinformation, malicious use, cyberattacks, and weaponization" as examples of "important and urgent risks from AI... not just the risk of extinction."

"These are all important risks that must be addressed", also suggests, downplaying concerns that policymakers have limited bandwidth to address the harms of AI by arguing: “Societies can manage multiple risks at once; It is not 'either/or', but 'yes/and'. “From a risk management perspective, just as it would be unwise to prioritize only current harms, it would also be unwise to ignore them.”

The thread also credits David Krueger, an assistant professor of computer science at the University of Cambridge, with the idea of a single-sentence statement on AI risk and with "jointly" helping to develop it.
