CAN WE TRUST ARTIFICIAL INTELLIGENCE? (CAN WE TRUST OURSELVES?)

Artificial intelligence (AI) has the potential to reshape the way we do many things for the better. However, like most technologies, it is not without its pitfalls.


AI systems are becoming more and more sophisticated, but they are still susceptible to errors and inaccuracies. And AI is not to blame for all of them!

How can AI be wrong?

There are three main ways that AI has proven it can produce incorrect or inaccurate results.

Errors

If AI makes a mistake or produces inaccurate output compared to the expected or correct result, this is an error.

Errors are typically inaccuracies in the broad sense, covering a wide range of issues, from simple misclassifications to more complex systematic failures in understanding or processing data.

Hallucinations

A hallucination is a specific type of error, particularly relevant in natural language processing and content generation, where the AI generates plausible but entirely fictional or incorrect information.

Let’s be clear amongst ourselves: hallucination is a fancy name for an error, and an error is an error!

Hallucinations generally suggest a deeper misunderstanding or misrepresentation of reality by the AI, potentially indicating issues with how the AI has learned to represent and generate information.

Bias

Bias refers to a systematic error or unfair preference in an AI’s outputs, often arising from the data the AI was trained on or the methods used to develop it. See this blog post about bias for more detail.

What are the risks of AI errors?

Depending on the context and how the AI is used, its errors may have significant consequences, for example:

  • Errors or bias in AI systems that make decisions about people’s lives, such as predictive policing or hiring tools, could lead to people being treated unlawfully or unfairly

  • Errors in autonomous vehicles could lead to accidents

  • A misdiagnosis by an AI system in healthcare could delay treatment, cause unnecessary anxiety or lead to incorrect medical treatment

  • Errors in AI used in financial forecasting or trading could lead to poor investment decisions and significant financial loss

What are the reasons behind AI errors?

Imperfect data

The data the AI has been trained on may be incorrect, biased, not diverse enough, or incomplete. This can lead to the AI ‘learning’ things that aren’t true and making up (or hallucinating) outputs based on the flawed patterns it has detected in its training data.

For example: if the training dataset is biased or not representative, the AI will replicate those biases; incorrectly labelled datasets, or datasets containing errors, can lead to incorrect assumptions or decisions; and insufficient variability may mean the AI struggles to handle edge cases or less common scenarios.
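As a purely illustrative sketch (the post itself contains no code, and the data, library and numbers below are hypothetical), here is how a non-representative training set can skew a model: the classifier looks accurate overall, but performs much worse for the group it has barely seen.

```python
# Illustrative sketch only - hypothetical data showing how a non-representative
# training set skews a model's behaviour.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Training data: 95% of examples come from group A (label 0), only 5% from
# group B (label 1) - the dataset is not representative.
n_a, n_b = 950, 50
X_train = np.vstack([rng.normal(0.0, 1.0, (n_a, 2)),
                     rng.normal(1.0, 1.0, (n_b, 2))])
y_train = np.array([0] * n_a + [1] * n_b)

model = LogisticRegression().fit(X_train, y_train)

# Test data in which both groups appear equally often.
X_test = np.vstack([rng.normal(0.0, 1.0, (500, 2)),
                    rng.normal(1.0, 1.0, (500, 2))])
y_test = np.array([0] * 500 + [1] * 500)

print("overall accuracy:", round(model.score(X_test, y_test), 3))
print("recall for the under-represented group:",
      round(recall_score(y_test, model.predict(X_test)), 3))
```

The overall accuracy can look respectable while the model routinely misses the under-represented group - exactly the kind of skew a regular data audit is meant to catch.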

Overfitting/underfitting

This is what happens when the AI model has been trained too specifically or not specifically enough. If the model learns the specific details of its training data without understanding the underlying principles, it may perform poorly on new data (overfitting). Conversely, if the model is too simple to learn the underlying pattern of the data (e.g. because it is not complex enough or has been trained on too little data), it can fail to capture essential trends, even in its training data (underfitting).
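As an illustration only (nothing in the post prescribes a particular library; this sketch uses scikit-learn and made-up data), a classic way to see both failure modes is to fit models of different flexibility to the same noisy data:

```python
# Illustrative sketch only - underfitting vs overfitting with polynomial regression.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X_train = np.sort(rng.uniform(0, 1, 30)).reshape(-1, 1)
y_train = np.sin(2 * np.pi * X_train).ravel() + rng.normal(0, 0.2, 30)  # noisy samples of a curve

X_new = np.linspace(0, 1, 200).reshape(-1, 1)   # unseen data from the same curve
y_new = np.sin(2 * np.pi * X_new).ravel()

for degree in (1, 4, 15):   # too simple, about right, too flexible
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    print(f"degree {degree:2d}: "
          f"training error {mean_squared_error(y_train, model.predict(X_train)):.3f}, "
          f"error on unseen data {mean_squared_error(y_new, model.predict(X_new)):.3f}")
```

The degree-1 model does badly even on its own training data (underfitting), while the degree-15 model fits the training points almost perfectly but typically does worse on unseen data than the moderate one (overfitting).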

Unsuitable model

Choosing the wrong type of model for a particular task can lead to poor performance (e.g. using a linear model for a complex, non-linear problem). The model’s internal parameters - in particular its weights and biases - also need to be adjusted correctly.
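To illustrate the “wrong type of model” point (again a hypothetical sketch, not something from the post): a linear classifier cannot learn an XOR-style rule no matter how carefully it is tuned, while even a small non-linear model handles it easily.

```python
# Illustrative sketch only - a linear model applied to a non-linear problem.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (1000, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)      # XOR-like rule: same sign -> class 1

linear = LogisticRegression().fit(X, y)
nonlinear = DecisionTreeClassifier(max_depth=4).fit(X, y)

print("linear model accuracy:    ", round(linear.score(X, y), 3))     # close to chance (~0.5)
print("non-linear model accuracy:", round(nonlinear.score(X, y), 3))  # close to 1.0
```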

Out of date model

Many AI systems are trained on historical data. In many situations this is sufficient, because the answer or future events reflect past patterns. However, in dynamic environments, where conditions are changing, AI can struggle to adapt.

Lack of firepower

Sometimes the hardware and software infrastructure that the AI model runs on can affect its performance - which is one of the main reasons we are seeing a growth in AI-specific hardware and data centres.

Humans (of course!)

The way the model is set up, how it learns, how it is maintained and updated, and how it is used all depend on human decisions. These decisions are not always flawless, which can affect the accuracy of the AI’s output.

Who is liable for AI errors?

Liability might be established under contracts, or under applicable laws (such as the EU’s AI Act).

Developers and training data providers may be liable if errors can be traced back to poor design, inadequate testing, biased or flawed data, etc.

Users and deployers of these systems can also be held liable if the errors produced cause harm or damage to their customers, patients or third parties.

Many AI providers will include a disclaimer of liability for errors produced by the AI, so users/deployers should check this and take care accordingly. Do not blindly trust that the contract terms will be on your side if you are a user of someone else’s AI product.

Mitigating the risks of AI errors

From a technical perspective, it is important to use diverse, high-quality data and to audit it regularly, to undertake rigorous testing and validation across a range of scenarios, and to use cross-validation techniques rather than relying on a single train/test split.
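By way of a small, hypothetical illustration of the cross-validation point (using scikit-learn and one of its bundled datasets, neither of which the post specifies), the idea is simply to score the model on several different splits of the data rather than on one:

```python
# Illustrative sketch only - cross-validation scores a model on several
# train/validation splits instead of trusting a single split.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000)

scores = cross_val_score(model, X, y, cv=5)   # 5 different train/validation splits
print("accuracy per fold:", scores.round(3))
print("mean accuracy:", round(scores.mean(), 3), "spread:", round(scores.std(), 3))
```

If one fold scores markedly worse than the others, that is a useful warning that the model’s apparent performance depends heavily on which data it happens to see.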

From an organisational perspective, it is important to be aware of and conduct due diligence on the AI supply chain. Who is involved, who has what role, how have they done it and what assurances do they give? It is crucial to decide on an appropriate level of human oversight.

As well as the above, the role of AI literacy is critical. AI is a tool, used by humans, and the humans are the first line of defence.

THANKS FOR READING

NEED HELP - LET’S TALK!
