It is not, at first glance, what you would call a miscarriage of justice. You’ve been waiting at a bar for over 10 minutes. All around you people are shouting for attention as the bartender glides back and forth, as capricious as a Greek god. You watch them bestow their favour on those who arrived long after you, chatting for what seems like an aeon. And you wonder: what makes them more deserving than you?
Small beer it may be, but this is a matter of justice – of the fair apportionment of goods, rights and privileges. Most people would probably agree that every bar customer has an equal right to be served. We would also agree that this right can be revoked for the under-age and the excessively drunk. We might differ as to what qualifies one person to be served before another, but we probably all think there should be a system for it.
Which is what makes Thursday’s announcement from DataSparQ, a British technology company, so interesting. DataSparQ claims that the average British drinker spends over two months of their life waiting at a bar, so it has developed a face recognition system which tracks customers, assigns them a place in a virtual queue and lets bar staff know who to serve first – as well as spotting people who look under-25.
On the surface, this seems like a positive use of Artificial Intelligence (AI). We have always attempted to outsource our moral decisions to automated systems. A queue is just an algorithm for deciding who gets served first. Indeed, the problem with existing bar systems is that they are not automated enough, leaving too much to the judgment and favouritism of human staff.
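To make the point concrete: stripped of the face recognition, the fairness rule such a system enforces is nothing more than first-come, first-served. A minimal sketch (the class and method names here are hypothetical illustrations, not DataSparQ’s actual API):

```python
from collections import deque

class VirtualQueue:
    """First-come, first-served: a FIFO queue, made explicit."""

    def __init__(self):
        self._waiting = deque()

    def arrive(self, customer_id):
        # Record each customer the moment they are first spotted at the bar.
        if customer_id not in self._waiting:
            self._waiting.append(customer_id)

    def next_to_serve(self):
        # The fairness rule in one line: whoever has waited longest goes first.
        return self._waiting.popleft() if self._waiting else None

q = VirtualQueue()
q.arrive("alice")
q.arrive("bob")
q.arrive("alice")  # being spotted again doesn't jump the queue
print(q.next_to_serve())  # alice
```

The entire moral content of the system lives in that one `popleft()` line; everything else is plumbing.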
Law itself is a similar kind of automated system, removing decisions from individual chieftains and outsourcing them to a system of rules. As Morpheus, an AI character in the science-fiction video game Deus Ex, puts it: “God was a dream of good government.”
Nevertheless, there are reasons to be very cautious of automated justice. For instance, DataSparQ’s system will also identify how drunk people are in order to “avoid fights”. But current face recognition systems suffer from known biases, misidentifying black people at a much higher rate than white people and women at a higher rate than men. Often these biases are the result of the data on which the AI has been trained, but no data set is neutral.
It’s easy to see how a bar AI trained primarily on one ethnic group might systematically misattribute drunkenness to another, or how an AI trained mainly on people with restrained body language might unfairly single out people with more extroverted mannerisms, or indeed people with motor disabilities.
Such biases can be fixed with careful engineering, yet they highlight the broader futility of trying to design AI systems that are totally fair. AI is like a genie, or the brooms from The Sorcerer’s Apprentice: it does exactly what we tell it to, whether or not that’s actually what we want. When AI goes “wrong” that usually means it has exposed gaps, flaws or special exceptions in the instructions we give it.
How many human ethical systems have no such holes – have not been punctured by a thought experiment that follows their logic to a conclusion which is intuitively abhorrent? AI can follow such systems to the letter, but it cannot identify a perverse result, and it cannot tell us which system we should use.
That should be fine, because obviously we should only use AI as a tool to support human decision-making, not leave it unsupervised to exercise the judgment of Solomon. Except that, very often, we do the latter. David Walliams’s enduring line “computer says no” is not just a joke: we’ve all encountered situations where systems designed to automate decisions are treated as unquestionable.
The Home Office has just had to pay £45,000 in compensation to a man detained for five months based on mistaken identity. Numerous US citizens have been wrongly imprisoned or deported due to computer errors. Worse, the inner workings of AI systems are often guarded as trade secrets by the companies that sell them, impeding democratic oversight of their decisions.
In one sense these are pathologies of bureaucracy, not of AI, but AI supercharges them, because it operates with the illusion of impartiality. Since moving to the Telegraph’s Silicon Valley bureau, I have spoken to many tech workers, high and low, who are responsible for policing online speech. That is a tough job, requiring them to balance free expression against safety and to pick between the competing rights of different parties. What surprises me is how often they appear to believe that these questions actually have a correct answer that can be implemented by technology. I’m not sure they do.
So perhaps the French were on to something when they chose to call the computer an “ordinateur” – a word with theological roots, meaning the one who puts things in their right order. In our technocentric culture we are eager to believe that a machine can eliminate the bias from which we know we all suffer. We want to believe that problems of justice have objectively right answers which we can reach with enough computing power.
But they don’t, and we can’t, and there is no escape from the messiness of morality. We have built the god of Morpheus with our own hands. Yet it cannot give us justice if we are not just.