Sunday, July 3, 2022

AI has a dangerous bias problem — here’s how to manage it

AI now guides numerous life-changing decisions, from assessing loan applications to determining prison sentences.

Proponents of the approach argue that it can eliminate human prejudices, but critics warn that algorithms can amplify our biases — without even revealing how they reached the decision.

As a result, AI systems have led to Black people being wrongfully arrested and to child services unfairly targeting poor families. The victims are frequently from groups that are already marginalized.


Alejandro Saucedo, Chief Scientist at The Institute for Ethical AI and Engineering Director at ML startup Seldon, warns organizations to think carefully before deploying algorithms. Speaking to TNW, he shared his tips for mitigating the risks.


Machine learning systems need to be transparent. That can be a challenge with powerful AI models, whose inputs, internal operations, and outcomes aren’t obvious to humans.

Explainability has been touted as a solution for years, but effective approaches remain elusive.

“The machine learning explainability tools can themselves be biased,” says Saucedo. “If you’re not using the relevant tool or if you’re using a specific tool in a way that’s incorrect or not fit for purpose, you are getting incorrect explanations. It’s the usual software paradigm of garbage in, garbage out.”
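To make the point concrete, here is a minimal sketch (not from the article; the toy model and feature names are hypothetical) of permutation importance, one common explainability technique. It estimates a feature's influence by shuffling that feature and measuring the drop in accuracy. Applied to the wrong model or the wrong data, the same procedure happily produces misleading explanations.

```python
import random

# Hypothetical toy model: approves a loan based on income alone.
# A biased model might instead lean on a proxy feature like zip_code.
def model(income, zip_code):
    return 1 if income > 50_000 else 0

def permutation_importance(model, rows, labels, feature_idx, seed=0):
    """Accuracy drop after shuffling one feature column.

    A crude sketch of how some explainability tools estimate feature
    influence; misapplied, it yields incorrect explanations.
    """
    rng = random.Random(seed)
    base = sum(model(*r) == y for r, y in zip(rows, labels)) / len(rows)
    col = [r[feature_idx] for r in rows]
    rng.shuffle(col)
    permuted = [
        tuple(s if i == feature_idx else v for i, v in enumerate(r))
        for r, s in zip(rows, col)
    ]
    perm = sum(model(*r) == y for r, y in zip(permuted, labels)) / len(rows)
    return base - perm

rows = [(60_000, 94110), (30_000, 94110), (80_000, 10001), (20_000, 10001)]
labels = [1, 0, 1, 0]
print(permutation_importance(model, rows, labels, 0))  # income has influence
print(permutation_importance(model, rows, labels, 1))  # zip_code has none
```

Because this toy model ignores `zip_code`, shuffling it changes nothing; a real audit would need to check whether seemingly neutral features act as proxies for protected attributes.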

While there’s no silver bullet, human oversight and monitoring can reduce the risks.

Saucedo recommends identifying the processes and touchpoints that require a human-in-the-loop. This involves interrogating the underlying data, the model that is used, and any biases that emerge during deployment.

The aim is to identify the touchpoints that require human oversight at each stage of the machine learning lifecycle. 

Ideally, this will ensure that the chosen system is fit for purpose and relevant to the use case.
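One way such a deployment touchpoint might look in practice is a simple batch check that flags a model's outputs for human review when outcomes diverge too far between groups. The sketch below (all names and the threshold are illustrative assumptions, not Saucedo's method) uses the demographic parity gap, one of several fairness metrics a real monitoring setup would track.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between groups.

    A minimal sketch of one deployment-time bias check; real monitoring
    would track several metrics over time, not a single snapshot.
    """
    pos, total = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        total[group] += 1
        pos[group] += pred
    rates = {g: pos[g] / total[g] for g in total}
    return max(rates.values()) - min(rates.values())

def needs_human_review(predictions, groups, threshold=0.2):
    """Route the batch to a human-in-the-loop when the gap is too wide."""
    return demographic_parity_gap(predictions, groups) > threshold

# Group "a" is approved 75% of the time, group "b" only 25%.
preds  = [1, 1, 1, 0, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.5
print(needs_human_review(preds, groups))      # True
```

The threshold itself is a judgment call, which is exactly why the check routes to a human rather than acting automatically.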

Alejandro Saucedo is discussing AI biases on July 16 at the TNW Conference