
Can 'we the people' keep AI in check?

Technologist and researcher Aviv Ovadya is not sure that generative AI can be governed, but he believes the most plausible way of keeping it in check might be to entrust those who will be affected by AI with the collective decision on how to curb it.

That means you; that means me. It is the power of large networks of people to solve problems faster and more equitably than a small group of people could alone (even, say, in Washington). It is essentially a bet on the wisdom of crowds, and it is already playing out in many fields, including scientific research, business, politics, and social movements.

In Taiwan, for example, civic-minded hackers formed a platform, “Virtual Taiwan,” in 2015, which “brings together representatives of the public, private and social sectors to debate policy solutions to problems primarily involving the digital economy,” as Taiwan's digital minister Audrey Tang explained in the New York Times in 2019. Since then, vTaiwan, as it is known, has addressed dozens of issues “relying on a mix of online debate and face-to-face discussions with stakeholders,” Tang wrote at the time.

A similar initiative is Oregon's Citizens' Initiative Review, which was signed into law in 2011 and informs the state's voters about ballot measures through a citizen-driven “deliberative process.” Roughly 20 to 25 citizens who are representative of the entire Oregon electorate convene to debate the merits of an initiative; they then collectively write a statement about that initiative that is sent to the state's other voters so they can make better-informed decisions on election day.

So-called deliberative processes have also successfully helped address issues in Australia (water policy), Canada (electoral reform), Chile (pensions and health care), and Argentina (housing, land ownership), among other places.

“There are obstacles to making this work” when it comes to AI, acknowledges Ovadya, an affiliate of Harvard's Berkman Klein Center whose work is increasingly focused on the impacts of AI on society and democracy. “But empirically, this has been done on every continent in the world, at every scale” and “the faster we can implement some of these things, the better,” he notes.

Letting people decide what acceptable guardrails around AI should look like may sound strange to some, but even technologists believe it is part of the solution. Mira Murati, CTO of the prominent artificial intelligence startup OpenAI, told Time in a new interview: “[We are] a small group of people and we need a lot more input into this system, and a lot more input that goes beyond the technologies, specifically from regulators, governments and other similar entities.”

Asked whether she fears that government involvement could stifle innovation, or whether she thinks it is too early for lawmakers and regulators to get involved, Murati replied: “It's not too early. It is very important for everyone to start getting involved, given the impact these technologies are going to have.”

In the current regulatory vacuum, OpenAI has taken a stand-alone approach for now, creating guidelines for the safe use of its technology and releasing new iterations in a trickle, sometimes to the frustration of the general public.

Meanwhile, the European Union has been drafting a regulatory framework, the AI Act, which is making its way through the European Parliament and aims to become a global standard. The law would assign AI applications to three risk categories: applications and systems that create “unacceptable risk”; “high-risk applications,” such as a “CV-scanning tool that ranks job applicants,” which would be subject to specific legal requirements; and applications neither explicitly prohibited nor classified as high-risk, which would largely remain unregulated.

The US Department of Commerce has also drafted a voluntary framework to guide businesses, but binding regulation is absent at a moment when it is urgently needed. Beyond OpenAI, tech giants such as Microsoft and Google, despite having been burned by earlier versions of their own AI that misfired, are once again racing publicly to launch AI-enabled products and applications so as not to be left behind.

A body along the lines of the World Wide Web Consortium, the international organization created in 1994 to set standards for the World Wide Web, would seem to make sense. Indeed, in that Time interview, Murati observes that “different voices, such as philosophers, social scientists, artists, and people in the humanities” must come together to answer the many “ethical and philosophical questions that we must consider.”

Perhaps the industry will start that way, with so-called collective intelligence filling in many of the gaps between those broad visions.

Some new tools could help achieve that end. OpenAI CEO Sam Altman is also a co-founder, for example, of a retina-scanning company in Berlin called WorldCoin that wants to make it easier to authenticate people's identities. Questions have been raised about the privacy and security implications of WorldCoin's biometric approach, but its potential applications include distributing a global universal basic income as well as enabling new forms of digital democracy.

Either way, Ovadya believes that turning to deliberative processes involving large numbers of people from around the world is the way to create boundaries around AI while also giving industry players more credibility.

“OpenAI is getting some criticism right now from everyone,” including for its perceived liberal bias, says Ovadya. “It would be helpful [for the company] to have a really concrete answer” about how it sets its future policies.

Ovadya similarly takes aim at Stability.AI, the open-source AI company whose CEO, Emad Mostaque, has repeatedly suggested that Stability is more democratic than OpenAI because its technology is available everywhere, while OpenAI's is available only in countries where it can currently provide “safe access.”

Says Ovadya: “Emad from Stability says he is 'democratizing AI.' Well, wouldn't it be nice to use democratic processes to find out what people actually want?”
