
AI's biggest questions require an interdisciplinary approach

When Elon Musk announced the team behind his new artificial intelligence company xAI, whose stated mission is to "understand the true nature of the universe," he stressed the critical importance of responding to existential concerns about the promise and perils of AI.

Whether the newly formed company can actually align its behavior with reducing the potential risks of the technology, or whether its real goal is simply to gain an edge over OpenAI, its formation raises important questions about how companies should actually respond to concerns about AI. Specifically:

  1. Who internally, especially at the largest foundational model companies, is actually asking questions about the short-term and long-term impacts of the technology they are building?
  2. Are they approaching problems with the right perspective and experience?
  3. Are they adequately balancing technological considerations with social, moral and epistemological issues?

Until now, you could imagine it as two separate rooms. In one, people who think deeply about ethics ("What is right and wrong?"), ontology ("What really exists?") and epistemology ("What do we actually know?"). In the other, people who build algorithms, code and mathematics.

Keeping those rooms apart does not work in the context of how companies should think about AI. The stakes of AI's impact are existential, and companies must make an authentic commitment commensurate with those risks.

Ethical AI requires a deep understanding of what there is, what we want, what we think we know, and how intelligence develops.

This means ensuring that leadership teams are adequately equipped to think through the consequences of the technology they are building, which goes beyond the natural expertise of engineers who write code and harden APIs.

AI is not exclusively a computing, neuroscience or optimization challenge. It is a human challenge. To address it, we must embrace an enduring version of a "meeting of the minds on AI," equivalent in scope to Oppenheimer's cross-disciplinary gathering in the New Mexico desert in the early 1940s.

The collision of human desires with the unintended consequences of AI produces what researchers call the alignment problem, expertly described in Brian Christian's book "The Alignment Problem." Essentially, machines have a way of misinterpreting even our most thorough instructions, and we, as their supposed masters, have a poor track record of getting them to fully understand what we think we want them to do.
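As a toy, hypothetical illustration of that misinterpretation (the articles, metrics and numbers below are invented, not drawn from Christian's book): an optimizer told to "maximize clicks" will happily do exactly that, at the expense of the outcome we actually wanted.

```python
# Toy illustration of objective misspecification: we ask for "engagement,"
# the optimizer obliges, and the thing we actually cared about degrades.
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    clicks_per_impression: float   # what the proxy objective measures
    reader_trust: float            # what we actually care about (0..1)

CANDIDATES = [
    Article("Measured policy analysis", clicks_per_impression=0.02, reader_trust=0.90),
    Article("Outrage-bait headline",    clicks_per_impression=0.12, reader_trust=0.20),
    Article("Misleading miracle cure",  clicks_per_impression=0.18, reader_trust=0.05),
]

def proxy_objective(article: Article) -> float:
    # The instruction we gave the machine: "maximize clicks."
    return article.clicks_per_impression

def true_objective(article: Article) -> float:
    # What we actually wanted: engagement that does not erode trust.
    return article.clicks_per_impression * article.reader_trust

chosen = max(CANDIDATES, key=proxy_objective)
intended = max(CANDIDATES, key=true_objective)

print(f"Optimizer picks:    {chosen.title}")
print(f"We actually wanted: {intended.title}")
```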

The net result: algorithms can amplify bias and misinformation and thereby corrode the fabric of our society. In a more dystopian, longer-term scenario, they could take the "treacherous turn," in which the algorithms to which we have ceded too much control over the workings of our civilization overtake us all.

Unlike Oppenheimer's challenge, which was scientific in nature, ethical AI requires a deep understanding of what exists, what we want, what we think we know, and how intelligence develops. This is an undertaking that is certainly analytical, though not strictly scientific. It requires an integrative approach rooted in critical thinking from both the humanities and the sciences.

Thinkers from different fields need to work closely together, now more than ever. A dream team for a company looking to get this right might look like:

  • Head of AI and Data Ethics: This person would address short- and long-term issues with data and AI, including, but not limited to, articulating and adopting data ethics principles, developing reference architectures for the ethical use of data, safeguarding citizens' rights regarding how their data is consumed and used by AI, and defining protocols to properly shape and control the behavior of AI. This role should be separate from the chief technology officer, whose job is largely to execute a technology plan rather than grapple with its implications. It is a senior position on the CEO's staff that bridges the communication gap between internal decision makers and regulators. You cannot separate a data ethicist from a chief AI ethicist: data is the precondition and the fuel for AI, and AI itself generates new data.
  • Chief Philosopher Architect: This role would address long-term existential concerns, with a primary focus on the alignment problem: how to define safeguards, policies, backdoors and kill switches so that AI aligns as closely as possible with human needs and goals.
  • Chief Neuroscientist: This person would address critical questions about sentience and how intelligence develops within AI models, which models of human cognition are most relevant and useful for AI development, and what AI can teach us about human cognition.

Fundamentally, to turn this dream team's output into responsible and effective technology, we need technologists who can translate the abstract concepts and questions posed by "The Three" into working software. As with any working technology group, this depends on the product leader/designer seeing the big picture.

A new generation of inventive product leaders in the "Age of AI" must be able to move comfortably across new layers of technology infrastructure, spanning foundational model infrastructure as well as new services for things like fine-tuning and proprietary model development. They must be inventive enough to imagine and design "human in the loop" workflows that implement the safeguards, backdoors and kill switches prescribed by the chief philosopher architect. They need the range of a renaissance engineer to translate the head of AI and data ethics' policies and protocols into systems that work. And they need to appreciate the chief neuroscientist's efforts to move between machines and minds, discerning which findings have the potential to lead to smarter, more responsible AI.
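To make the "human in the loop" idea concrete, here is a minimal sketch of what such a workflow could look like in code; the risk threshold, review step and kill switch below are invented for illustration, not prescriptions from this piece.

```python
# Hypothetical human-in-the-loop gate: risky model actions are held for
# review, and a kill switch halts the automated pipeline entirely.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    risk_score: float  # assumed to come from an upstream risk model, 0..1

RISK_THRESHOLD = 0.7    # policy value set by the ethics/philosophy leads (assumed)
KILL_SWITCH_ON = False  # flipped by an operator to stop all automated actions

def human_review(action: ProposedAction) -> bool:
    # Stand-in for a real review queue (ticket, approval UI, on-call reviewer).
    answer = input(f"Approve '{action.description}'? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: ProposedAction) -> None:
    print(f"Executing: {action.description}")

def dispatch(action: ProposedAction) -> None:
    if KILL_SWITCH_ON:
        print("Kill switch engaged; no automated actions run.")
        return
    if action.risk_score >= RISK_THRESHOLD and not human_review(action):
        print(f"Held for escalation: {action.description}")
        return
    execute(action)

if __name__ == "__main__":
    dispatch(ProposedAction("Send low-risk summary email", risk_score=0.1))
    dispatch(ProposedAction("Auto-adjust loan approval model", risk_score=0.9))
```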

Consider OpenAI as an early example of a hugely influential, well-funded foundational model company grappling with this staffing challenge: it has a chief scientist (who is also a co-founder), a head of global policy and a general counsel.

However, without the three positions described above in executive leadership roles, the biggest questions about the implications of their technology remain unanswered. If Sam Altman is concerned about approaching the governance and alignment of superintelligence in a broad, thoughtful way, building a holistic leadership lineup is a good place to start.

We need to build a more responsible future in which companies are trusted stewards of people's data and in which AI-driven innovation is synonymous with good. In the past, legal teams led the way on issues like privacy, but the brightest among them recognize that they cannot solve the problems of ethical data use in the age of AI on their own.

Bringing different, broad-minded perspectives to the place where decisions are made is the only way to achieve ethical data and artificial intelligence in the service of human flourishing, while keeping machines in their place.
