
Anthropic in search of a better and understandable AI

Less than a year ago, in 2021, Anthropic was founded by Dario Amodei, the former vice president of research at OpenAI, with the intention of conducting research in the public interest to make AI more trustworthy and explainable. Its initial funding of $124 million was remarkable at the time, but nothing foreshadowed that the company would raise $580 million less than a year later.

“With this fundraise, we are going to explore the predictable scaling properties of machine learning systems, while taking a closer look at the unpredictable ways in which capabilities and safety issues can emerge as these systems grow,” said Amodei.

His sister Daniela, with whom he co-founded the public benefit corporation, said that after building the company, “we are focusing on ensuring that Anthropic has the culture and governance to continue to responsibly explore and develop safe AI systems as we grow.”

Again, the key term: growth. Because that is the category of problem that motivated Anthropic in the first place: how to better understand AI models that are increasingly used across industries even as they grow beyond our ability to explain their logic and outcomes.

The company has already published several papers looking at, for example, reverse-engineering the behavior of language models to understand why and how they produce the results they do. Something like GPT-3, probably the best-known language model in existence, is certainly impressive, but there is something worrying about the fact that its inner workings are essentially a mystery even to its creators.

As the funding announcement itself explains:

The purpose of this research is to develop the technical components necessary to build large-scale models that have better implicit safeguards and require fewer post-training interventions, as well as to develop the tools needed to drill down into these models to be sure the safeguards really work.

If you don't understand how an AI system works, you can only react when it does something wrong — for example, when it exhibits bias in facial recognition, or tends to draw or describe men when asked about physicians and CEOs. That behavior is built into the model, and the only remedy is to filter its results after the fact instead of preventing it from acquiring those erroneous "notions" in the first place.

This amounts to a fundamental change in the way AI is built and understood, and as such it requires big brains and big computers, neither of which are cheap. There's no doubt that $124 million was a good start, but apparently the early results were promising enough for Sam Bankman-Fried to lead this huge new round, along with Caroline Ellison, Jim McClave, Nishad Singh, Jaan Tallinn and the Center for Emerging Risk Research (CERR).

It's interesting that none of the usual tech investors appear in that group, but of course Anthropic's purpose is not profit, which is a decisive factor for venture investors.

Anthropic's latest research can be followed here.
