
Trustworthy AI: can hallucinations be overcome?

As children, we could answer elementary school math problems simply by filling in the answer.

But when the work wasn't shown, teachers deducted points; a correct answer wasn't worth much without an explanation. Yet those high standards of explainability for long division somehow don't seem to apply to AI systems, even those making crucial decisions that affect people's lives.

The major AI players making today's headlines and fueling stock market frenzies (OpenAI, Google, Microsoft) run their platforms on black-box models. A query goes in one end and a response comes out the other, but we have no idea what data or reasoning the AI used to arrive at that answer.

Most of these black-box AI platforms are built on a decades-old technological framework called the neural network. These models are abstract representations of the vast amounts of data they are trained on; they are not directly connected to that training data. As a result, black-box AIs infer and extrapolate what they believe is the most likely answer, rather than drawing on actual data.

Sometimes this complex predictive process goes off the rails and the AI hallucinates. Black-box AI is inherently untrustworthy because it cannot be held accountable for its actions. If you can't see why or how the AI makes a prediction, you have no way of knowing whether it used false, compromised, or biased data or algorithms to reach that conclusion.

While neural networks are incredibly powerful and here to stay, another, under-the-radar AI framework is gaining prominence: instance-based learning (IBL). And it is everything that neural networks are not: AI that users can trust, audit, and explain. IBL traces each decision back to the training data used to reach that conclusion.

IBL can explain every decision because the AI does not generate an abstract model of the data; it makes decisions from the data itself. And users can audit AI built on IBL, interrogate it to find out why and how it made a decision, and then intervene to correct errors or biases.

All of this works because IBL stores the training data ("instances") in memory and, following nearest-neighbor principles, makes predictions about new instances based on their proximity to existing ones. IBL is data-centric, so individual data points can be compared directly to one another for insight into the data set and its predictions. In other words, IBL "shows its work."
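To make the nearest-neighbor idea concrete, here is a minimal sketch using scikit-learn's k-nearest-neighbors classifier. The loan-decision data and feature names are invented for illustration; the point is that the prediction can be traced back to the exact stored instances that produced it.

```python
# A minimal sketch of the IBL idea with k-nearest-neighbors.
# The dataset and feature names are invented for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each row is a stored "instance": [annual_income_k, debt_ratio]
X_train = np.array([[45, 0.40], [80, 0.15], [30, 0.65], [95, 0.10], [50, 0.55]])
y_train = np.array(["deny", "approve", "deny", "approve", "deny"])

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)  # "training" just stores the instances

applicant = np.array([[60, 0.30]])
prediction = model.predict(applicant)

# Audit trail: retrieve the exact stored instances behind the decision.
distances, indices = model.kneighbors(applicant)
print("Prediction:", prediction[0])
for d, i in zip(distances[0], indices[0]):
    print(f"  supported by instance {i}: {X_train[i]} -> {y_train[i]} (distance {d:.2f})")
```

Unlike a neural network's weights, every line of that audit trail is a real record from the training set, which is what makes intervention and correction possible.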

The potential of such comprehensible AI is clear. Enterprises, governments, and any other regulated entity that wants to deploy AI in a reliable, explainable, and auditable way could use IBL to meet regulatory and compliance standards. IBL will also be particularly useful for applications prone to allegations of bias: recruitment, university admissions, legal cases, and so on.

Companies are already using IBL. For example, one commercial IBL framework is used by customers such as large financial institutions to detect anomalies in customer data and to generate auditable synthetic data that complies with the EU General Data Protection Regulation (GDPR).

Of course, IBL is not without its challenges. Its main limiting factor is scalability, a challenge neural networks also faced for 30 years until modern computing hardware made them viable. With IBL, each piece of data must be queried, cataloged, and stored in memory, which becomes more difficult as the data set grows.

However, researchers are building fast query systems based on advances in information theory to speed this process up significantly. This cutting-edge work has allowed IBL to compete directly with neural networks on computational feasibility.
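The article doesn't detail those query systems, but the general idea of indexing instances for fast lookup can be sketched with a standard tree index. This example uses scikit-learn's BallTree, a long-established structure, purely to illustrate sub-linear neighbor queries; it is not the specific information-theoretic technique alluded to above.

```python
# A sketch of one standard way to speed up nearest-neighbor queries:
# build a spatial index once, then answer queries without scanning
# every stored instance. Data is random, for illustration only.
import numpy as np
from sklearn.neighbors import BallTree

rng = np.random.default_rng(0)
X = rng.normal(size=(200_000, 8))  # 200k stored instances, 8 features

tree = BallTree(X, leaf_size=40)   # build the index once

query = rng.normal(size=(1, 8))
distances, indices = tree.query(query, k=5)  # fast lookup of 5 neighbors
print(indices[0], distances[0])
```

The trade-off is index build time and memory up front in exchange for queries that no longer grow linearly with the size of the instance store.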

Despite these challenges, the potential of IBL is clear. As more and more companies pursue secure, explainable, and auditable AI, black-box neural networks will no longer be enough. Any company, whether a small startup or a large enterprise, should start working with IBL models; some tips are provided below.

An agile, open mindset

With IBL, it works best to explore your data for the insights it can generate, rather than assigning it a particular task, such as "predict the optimal price" of an item. Keep an open mind and let IBL guide your own learning. IBL may tell you it can't predict an optimal price very well from a given data set, but that it can predict the times of day when people make the most purchases, how customers communicate with the company, or which items they are most likely to buy.
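As a rough illustration of that exploratory workflow, the sketch below scores how predictable several candidate targets are from the same set of instances, instead of committing to one question up front. All column names and data are invented; one target is deliberately noise and one is deliberately learnable.

```python
# A hypothetical "explore, don't prescribe" loop: cross-validate a
# nearest-neighbor model against several candidate targets and report
# which ones the data can actually predict.
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(7)
n = 400
df = pd.DataFrame({
    "basket_size": rng.integers(1, 20, n),
    "days_since_last_order": rng.integers(0, 90, n),
    "visit_hour": rng.integers(0, 24, n),
})
df["price_paid"] = rng.normal(50, 20, n)                      # pure noise
df["items_bought"] = df["basket_size"] + rng.normal(0, 1, n)  # learnable

features = ["basket_size", "days_since_last_order", "visit_hour"]
for target in ["price_paid", "items_bought"]:
    scores = cross_val_score(
        KNeighborsRegressor(n_neighbors=10),
        df[features], df[target], cv=5, scoring="r2",
    )
    print(f"{target}: mean R^2 = {scores.mean():.2f}")
```

A low score is itself an insight: it tells decision makers which questions this data set cannot answer, which is exactly the kind of feedback the paragraph above describes.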

IBL is an agile AI framework that calls for collaborative communication between decision makers and data science teams, not the usual "throw a question over the transom and wait for an answer" approach we see in many organizations implementing AI today.

“Less is more” for AI models

In traditional black-box AI, a single model is trained and optimized for a single task, such as classification. In a large enterprise, this can mean thousands of AI models to maintain, which is expensive and unwieldy. In contrast, IBL allows for versatile, multitask analysis. For example, a single IBL model can be used for supervised learning, anomaly detection, and synthetic data generation, all while providing full explainability.
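Here is a minimal sketch of that "one instance store, many tasks" idea, using simple nearest-neighbor heuristics (majority vote, distance-to-neighbor scoring, neighbor interpolation). These are illustrative stand-ins, not how any particular commercial IBL framework works.

```python
# One shared instance store serving three tasks via nearest neighbors.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))                 # the shared instance store
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # labels for the supervised task

nn = NearestNeighbors(n_neighbors=6).fit(X)

# 1. Supervised prediction: majority vote over the nearest stored instances.
query = rng.normal(size=(1, 4))
_, idx = nn.kneighbors(query)
prediction = int(np.round(y[idx[0]].mean()))

# 2. Anomaly detection: instances far from their neighbors are suspicious.
dist, _ = nn.kneighbors(X)
anomaly_scores = dist[:, -1]                  # distance to the 6th neighbor
outliers = np.argsort(anomaly_scores)[-5:]    # the five most isolated rows

# 3. Synthetic data: interpolate between an instance and a real neighbor.
i = rng.integers(len(X))
j = nn.kneighbors(X[i:i + 1])[1][0][1]        # nearest non-self neighbor
synthetic = X[i] + rng.uniform() * (X[j] - X[i])

print(prediction, outliers, synthetic)
```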

This means IBL users can create and maintain fewer models, allowing for a more agile and adaptable AI toolbox. Adopting IBL still requires programmers and data scientists, but there is no need to invest in hundreds of senior specialists with deep AI experience.

Mix and match your AI toolset

Neural networks are great for any application that doesn't need explanation or auditing. But when AI helps companies make major decisions, such as spending millions of dollars on a new product or completing a strategic acquisition, it must be explainable. And even when AI is used for smaller decisions, like hiring a candidate or promoting someone, explainability is key. No one wants to find out they missed out on a promotion because of an inexplicable black-box decision.

And companies will soon face litigation over exactly these kinds of cases. Choose your AI framework according to the application: opt for neural networks when you only need fast data ingestion and rapid decision making, and use IBL when reliable, explainable, and auditable decisions are required.

Instance-based learning is not a new technology. Over the past two decades, computer scientists have developed IBL in parallel with neural networks, but it has received far less public attention. Now IBL is gaining new attention amid the ongoing AI arms race, having shown that it can scale while maintaining explainability: a welcome alternative to hallucinating neural networks that spit out false, unverifiable information.

With so many companies blindly adopting neural network-based AI, the next year will undoubtedly see many data breaches and lawsuits over accusations of bias and misinformation.

Once the mistakes made by black-box AI start to hit companies' reputations, and their bottom lines, the slow-and-steady IBL will hopefully have its moment of glory. We all learned the importance of "showing our work" in elementary school, and we can certainly demand the same rigor from the AI that decides the direction of our lives.
