Terence Mauri, an artificial intelligence (AI) expert and self-described “global disruption thinker”, claimed last month that robot judges would be commonplace in UK courts within the next 50 years.  Legal Cheek reported:

“In a legal setting, AI will usher in a new, fairer form of digital justice whereby human emotion, bias and error will become a thing of the past. Hearings will be quicker and the innocent will be far less likely to be convicted of a crime they did not commit.”

For years now, authors and the media have been trying to convince us that the AI revolution would see driverless cars and mass unemployment coming our way.  Robots will perform surgery, eliminating the risk of error.  Even human writers and painters will one day be redundant, their talent unable to compete with algorithms.

AI and machine learning have been shaping the legal process, especially in the area of disclosure, for many years.  These forces are now subtly sliding their way into dispute resolution.  For example, in the US and other jurisdictions such as Mexico, predictive analytics tools that forecast the outcome of litigation have been employed.  And in 2017, the British Ministry of Justice consulted on the introduction of an automatic online conviction procedure which would allow some defendants in appropriate cases to resolve their matters entirely online.  The responses to the consultation were mixed and no further action has been taken to roll out the scheme.

Nothing is more human than the fear of the unknown.  It is natural to feel repelled by the thought of algorithms alone making legal decisions, especially in criminal law.  But could machines, free from the human traits of prejudice and bias towards particular situations and people, achieve the utopian principle that every person is equal before the law?

Does judicial bias exist?

The phrase ‘nemo judex in causa sua’ (or ‘nemo judex in sua causa’), “no one is judge in his own cause”, is often attributed to Lord Coke.  However, the principle has been dated as far back as the 1400s.  Expanding on the principle, Sir Matthew Hale, Chief Justice of the King’s Bench between 1671 and 1676, created a list of “Things Necessary to be Continually had in Remembrance” sometime in the 1660s.  The fourth of these declared:

“That in the execution of justice, I carefully lay aside my own passions, and will not give way to them however provoked”.

However, as Tom Bingham points out in The Rule of Law:

“Of course, since judges and other decision makers are human beings and not robots, they are inevitably, to some extent, the product of their own upbringing, experience, and background.  The mind which they bring to the decision of issues cannot be a blank canvas.”

If we look at the background of those who make up the English judiciary today, the canvas Lord Bingham refers to is undeniably one of exceptional privilege.  Figures from 2016 showed nearly three quarters (74%) of the top judiciary were educated at independent schools and the same proportion (74%) went to Oxbridge.  In 2020, women represented just over a quarter (26%) of judges in the senior courts (the figure in the tribunals is more promising at 47%).  And only 8% of court judges and 12% of tribunal judges identified as BAME on 1 April 2020.

Looking at the statistics, the English judiciary is overwhelmingly made up of white, wealthy males.  The question, given these stark statistics, is not “does bias exist” but “how could it not?”

Reliable empirical evidence of judicial bias is, unfortunately, lacking.  The old maxim that correlation does not equal causation applies.

For example, one useful source of information regarding judicial bias is the 2017 Lammy Review, which gave a damning indictment of BAME communities’ experience within the criminal justice system.  It reported that although juries were consistent in their decision-making irrespective of the ethnicity of the defendant, sentencing in criminal cases was not so steady.  BAME offenders were significantly more likely to receive a custodial sentence than white offenders for comparable crimes.  Likewise, in Magistrates’ Courts, BAME women were on average 24% more likely to be found guilty than white women, with the range going from 22% more likely for black women to 43% for Chinese/other women.

However, one of the limitations of the Lammy Review was that while it showed sentencing variability between BAME and white offenders, the reasons why were not established.  Well-established sentencing guidelines significantly reduce the potential for bias within the criminal justice system.  It may be that a large proportion of BAME defendants’ backgrounds result in more aggravating than mitigating factors being present.  If this is the case, societal-level change is needed – no amount of anti-bias training or further transparency will make a difference to sentencing outcomes.

The Courts themselves also provide checks and balances against bias.  The recent case of Serafin v Malkiewicz [2020] UKSC 23 demonstrated that where there is apparent bias on the part of a judge, senior courts will take the matter seriously.  In this case, the Supreme Court ordered a full retrial after it was concluded (following a full review of the oral hearing) that the trial judge had shown bias.  The Court confirmed that during the presentation of oral evidence, judges should interfere as little as possible.  Litigants in person were less able to withstand judicial pressure than professional advocates, and judges had to temper their conduct accordingly.

“…when one considers the barrage of hostility towards the Claimant’s case, and towards the Claimant himself acting in person, fired by the Judge in immoderate, ill-tempered and at times offensive language at many different points during the long hearing, one is driven, with profound regret, to uphold the Court of Appeal’s conclusion that he did not allow the claim to be properly presented; that therefore he could not fairly appraise it; and, that, in short, the trial was unfair. Instead of making allowance for the Claimant’s appearance in person, the Judge harassed and intimidated him in ways which surely would never have occurred if the Claimant had been represented. It was ridiculous for the defendants to submit to us that, when placed in context, the Judge’s interventions were “wholly justifiable”.”

It appears that despite fertile ground for judicial bias, current safeguards leave little room for it to flourish.  The Lammy Review makes for excellent Guardian headlines but does not provide any solid evidence as to the causes of the differences in judicial sentencing decisions, only that they exist.  Whilst it cannot be concluded that there is no bias within the judiciary, there is no evidence that it is running rampant.  However, that is not to say improvements are not required, especially in terms of diversity on the bench and the treatment of litigants in person.

Therefore, given where we are, could we, and if so, should we, entrust machines and algorithms with delivering justice?  Or when it comes to enhancing the judicial system, should our focus be elsewhere?

Robot judges – reality or sci-fi fantasy?

To eliminate bias, AI needs to move beyond being programmed by fallible humans, allowing it to learn independently.  Furthermore, those machines need the intelligence to read people’s emotions and interact with compassionate and agile responses.  At present, such AI super-intelligence does not exist to any meaningful extent.

In Judge v Robot? Artificial Intelligence and Judicial Decision-Making, Tania Sourdin references David Harvey’s interpretation of the processes a robot-led court would take:

“[The steps an] AI judge would be required to take [can be illustrated] using the example of algorithms already present in legal databases.  These databases employ natural language processing to assist with the sourcing of relevant material based on search terms. An AI judge would be required to go further than these databases, by reducing returned sources to a manageable and relevant sample and then deploying tools to compare these sources of law to a present case and engaging in analysis to make a determination of the outcome.  Harvey explains that this final step requires ‘the development of the necessary algorithms that could undertake the comparative and predictive analysis, together with a form of probability analysis to generate an outcome that would be useful and informative’. However, human judge decision-making is largely retained in Harvey’s model.”
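To make the pipeline Harvey describes concrete, the sketch below shows, in very simplified form, what its final steps (comparing retrieved sources to the present case and producing a probability of an outcome) might look like.  This is purely illustrative: the case texts, function names and the bag-of-words similarity measure are our own assumptions, a stand-in for the far more sophisticated natural language processing a real system would need.

```python
# Illustrative sketch only: (1) represent each case as word counts,
# (2) score precedents by cosine similarity to the present case,
# (3) keep the most relevant sample, (4) output a rough probability
# as the similarity-weighted share of successful outcomes.
import math
from collections import Counter


def vectorise(text: str) -> Counter:
    """Bag-of-words term counts (a crude stand-in for real NLP)."""
    return Counter(text.lower().split())


def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


def predict_outcome(present_case: str,
                    precedents: list[tuple[str, bool]],
                    top_n: int = 3) -> float:
    """Estimate P(claim succeeds) from the top_n most similar
    precedents, weighting each outcome by its similarity score."""
    v = vectorise(present_case)
    scored = sorted(
        ((cosine_similarity(v, vectorise(text)), won)
         for text, won in precedents),
        reverse=True,
    )[:top_n]
    total = sum(score for score, _ in scored)
    if total == 0:
        return 0.5  # no relevant precedents: no information either way
    return sum(score for score, won in scored if won) / total
```

Even this toy version makes Sourdin’s point visible: the “judgment” is only as good as the precedents retrieved and the similarity measure chosen, which is why Harvey’s model retains human decision-making at the final step.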

According to Ms Sourdin:

“… as AI researchers have had a number of clear successes outside of the legal field, these successes suggest that predictive analysis, even where [there] are significant variations in terms of novelty, can be ‘learned’.”

We can therefore conclude that robot judges could exist in some form.  But should we rely on future technology to eliminate what little bias there appears to be in the judiciary?  Or is there another way?

What about the human element of justice?

Given the enormous pay cut many successful lawyers must accept in order to join the bench, for some, at least, the opportunity to provide fair access to justice and give back to society must play a part in accepting a judicial position.  This motivation, along with compassion, insight into the complexities of the human condition, and intuition, all play a part in how a judge makes decisions.  Often legislation leaves room for judicial discretion.  As algorithms do not, and many argue could not, possess the nuances that make us human, it is difficult to see how they could make decisions in cases that turn on their own facts.  No amount of machine learning can cover every human situation.  People are too extraordinary for that.

Final words

Technology can make a significant difference in helping vulnerable people access justice and eliminating much of the dull, repetitive work of the law, leaving legal professionals with more mental and physical energy to work creatively for their clients.  As for the risks of bias and prejudice generated by the narrow pool from which judges are selected, rather than develop machines to eliminate judicial bias, energy must be focused on encouraging that girl or boy (or non-binary person) with the cut-glass mind who grew up on a housing estate and attended their local state school to set their ambitions on the bench.

And keep them there.

The Legal Copywriting Company is dedicated to helping law firms and barristers achieve their marketing goals by creating engaging, SEO-friendly content for their websites and marketing materials and managing their social media.  To find out more, please fill in our contact form or email corinne@thelegalcopywritingcompany.co.uk or phone 01691 839661.