The AI bias problem hasn’t gone away, you know

In his regular column, Jonathan McCrea, an avid user of AI himself, advocates for not handing over decision-making to the machine.

In what now feels like a billion years ago, I gave a talk at Electric Picnic about the unintended but, as I saw it, inevitable consequences of building a self-driving car. This was long before the days of ChatGPT – and Waymo was just a dream.

Bear with me. It was based on the trolley problem – a philosophical game in which players have to choose whom to save in an either/or scenario. Say a train is approaching a track switch, beyond which the line splits in two. If you pull the switch, the train will change course, and on that track one of the people you hate most in the world lies tied down. On your current trajectory lies a complete stranger, unable to move – just a random person you haven’t met.

You now have to decide what to do and whom to save. If you do nothing, the train will continue on its course and an innocent person will die. Can you really tell yourself you did nothing wrong? It’s a pretty blunt tool, but it’s hard to deny that as a game, it can at least hint at our underlying values.

Ethical limbo

When we build autonomous systems and allow them to make decisions for us, we enter a strange world of ethical limbo. A self-driving car forced into a similar decision – protect the driver or a pedestrian in a potentially fatal crash – will have far more time than a human to make its choice. But what factors influence that choice?

In the talk, I suggested that the cultural norms of the people who code the car could have subtle effects on how it prioritises life in a lose-lose situation. What if the decision matrix were coded in El Salvador, possibly the most Catholic and pro-life country in the world? If an AI-powered car could tell that the driver was pregnant, would that influence how it behaves? Whose life should it prioritise in a head-on collision?
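To make that concrete, here is a deliberately toy sketch of how a ‘decision matrix’ can quietly absorb its authors’ values. Everything in it – the attributes, the weights, the team names – is invented for illustration and describes no real vehicle’s software; it simply shows that the same scoring algorithm, tuned by two teams with different value systems, ends up protecting two different people.

```python
# A deliberately toy, hypothetical sketch. Every attribute, weight and name
# here is invented for illustration; it does not describe any real system.

def collision_priority(person, weights):
    """Score how strongly the system would act to protect this person.

    The `weights` dict is where the developers' value judgements live:
    which attributes count, and by how much.
    """
    score = weights.get("base", 1.0)
    if person.get("pregnant"):
        score += weights.get("pregnant", 0.0)
    if person.get("outside_vehicle"):
        score += weights.get("outside_vehicle", 0.0)
    return score

driver = {"pregnant": True, "outside_vehicle": False}
pedestrian = {"pregnant": False, "outside_vehicle": True}

# Two hypothetical engineering teams, two value systems, one algorithm.
team_weights = {
    "team A": {"base": 1.0, "pregnant": 2.0, "outside_vehicle": 0.0},
    "team B": {"base": 1.0, "pregnant": 0.0, "outside_vehicle": 1.5},
}

for team, weights in team_weights.items():
    people = {"the driver": driver, "the pedestrian": pedestrian}
    protected = max(people, key=lambda name: collision_priority(people[name], weights))
    print(f"{team} prioritises {protected}")
```

Nothing in that code is labelled ‘ethics’, yet the weights are doing ethical work – and they would pass every unit test you threw at them.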

If that scenario sounds ridiculous, you’re probably right – at least for now. But if you don’t believe that value systems are dramatically shaping our world in the age of AI, you only need to listen to the alarm bells that are ringing across the globe. Where social media is undeniably a real threat to facts and transparency, AI has the potential to be a monster.

Take, for example, the accusation in 2023 that Meta was taking down peaceful, pro-Palestinian content. “Examples it cites include content originating from more than 60 countries, mostly in English, and all in ‘peaceful support of Palestine, expressed in diverse ways’,” wrote the Guardian at the time. The company that runs Facebook and Instagram was the focus of a 51-page report by Human Rights Watch, which detailed a widespread policy of throttling any content that appeared supportive of Palestinians and prompted questions from senator Elizabeth Warren about how and why content was removed by the company.

Human and AI bias

Last month, Grok, the AI developed by Elon Musk’s xAI, suffered what can only in the most generous of minds be called a ‘glitch’, in which it gave itself the nickname “MechaHitler” and spewed antisemitic content across the X platform for far too long before it was finally ‘fixed’. Here is one of the most popular platforms for discussion in the world, influencing thought and powered by an AI that wrote several posts praising Hitler because, according to xAI, the chatbot’s internet searches picked up a meme about its antisemitic rant and ran with it. In other news, Musk has just announced that he will release a “kid-friendly” AI chatbot he’s calling “Baby Grok”. You, as they say, literally could not make it up.

It’s not just the AI systems shaping the narrative, raising some voices while quieting others. Organisations made up of ordinary flesh-and-blood people are doing it too. Irish cognitive scientist Abeba Birhane, a highly regarded researcher of human behaviour, social systems, and responsible and ethical artificial intelligence, was recently asked to give a keynote at the AI for Good Global Summit.

According to her own reports on Bluesky, a meeting was requested just hours before she was due to present her keynote: “I went through an intense negotiation with the organisers (for over an hour) where we went through my slides and had to remove anything that mentions ‘Palestine’ ‘Israel’ and replace ‘genocide’ with ‘war crimes’…and a slide that explains illegal data torrenting by Meta, I also had to remove. In the end, it was either remove everything that names names (Big Tech particularly) and remove logos, or cancel my talk.”

It’s impossible to say whether or not this censorship was initiated by the companies themselves, but the net result is the same – at a summit supposedly aimed at using AI to make a better world, the critical words of a black, Irish researcher were muted for what can only be described as political reasons.

We haven’t even talked about inherent system bias, for which the EU is desperately trying to hold large AI companies to account in an effort to prevent a large-scale widening of inequality across Europe. But I’ll leave that for another day.

Make no mistake about it, the AI systems upon which we are increasingly dependent have many flaws. They are open to manipulation, reflect back some of the worst of human society and are very likely influencing millions of users online. As we give these same systems more access and control over our lives, we run the very real risk of handing over our decision-making to an AI agent who might one day decide to call himself MechaHitler.

In the AI trolley problem, we’re not just stepping away from the lever, we’re letting someone else pull it for us. The question is no longer just “who do we save?” but “who gets to decide who we save?” Anyone see a problem with that?

