
Legal profession concerned about algorithmic bias

Algorithms, the sets of instructions computers follow to carry out a task, have become an integral part of everyday life, and they are steadily making their way into law. In the U.S., judges in some states can use algorithms as part of the sentencing process. Many American law enforcement officials use them to predict when and where crimes are likely to occur. They have been used for years in law firm recruitment. And with advances in machine learning, they are also being used to conduct legal research, predict legal outcomes, and find out which lawyers win before which judges.

Most algorithms are created with good intentions, but questions have surfaced over algorithmic bias at job-hunting websites, credit reporting bureaus, social media sites and even in the criminal justice system, where sentencing and parole decisions appear to be biased against African Americans.

And the issue is likely to gain traction as machine learning and predictive coding become more sophisticated, particularly since deep learning systems, which learn autonomously, can reach a point where humans can often no longer explain or understand them, said Nicolas Vermeys, assistant director of the Cyberjustice Laboratory in Montreal.

AlphaGo is a case in point. When AlphaGo, Google’s artificial intelligence system, defeated Lee Sedol, the 18-time world champion, at the ancient Chinese board game Go, a game of deep complexity and intuition, it was not just a demonstration of yet another computer beating a human at a game. Go, a game with simple rules but profound complexity, has more possible positions than there are atoms in the universe, leading some to describe it as the Holy Grail of AI gaming. The feat was remarkable because AlphaGo was not hand-programmed with winning strategies. It learned how to play, and win, by playing millions of games, using a form of AI called deep learning, which relies on neural networks that let computer programs learn from experience much as humans do. More than that, the victory showed that computers can now display something akin to intuition, long thought to be a uniquely human ability.
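For readers curious what “learning by playing” looks like under the hood, here is a minimal sketch in Python. It is an illustration only, and every detail is an assumption for the sake of the example: a tiny two-layer network discovering a hidden rule from labelled examples, nothing like AlphaGo’s actual architecture, which pairs far larger networks with tree search.

```python
# Minimal sketch of deep learning with plain NumPy. Purely illustrative:
# the data, the hidden rule, and the network sizes are all invented here.
import numpy as np

rng = np.random.default_rng(0)

# Toy "positions": random feature vectors, with a hidden rule deciding
# which ones are "winning". The network is never told the rule.
X = rng.normal(size=(1000, 9))                        # 1000 positions, 9 features
y = (X[:, 0] + X[:, 3] * X[:, 5] > 0).astype(float)  # hidden rule

# Two-layer neural network with randomly initialized weights.
W1 = rng.normal(scale=0.5, size=(9, 16))
W2 = rng.normal(scale=0.5, size=(16, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    # Forward pass: the network's current win-probability estimates.
    h = np.tanh(X @ W1)
    p = sigmoid(h @ W2).ravel()
    # Backward pass: nudge the weights to reduce prediction error.
    err = (p - y)[:, None] / len(X)
    W2 -= lr * h.T @ err
    W1 -= lr * X.T @ ((err @ W2.T) * (1 - h**2))

print("accuracy:", ((p > 0.5) == y).mean())
# The learned behaviour lives entirely in W1 and W2: thousands of numbers
# with no human-readable rules, which is why such models are hard to explain.
```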

Another example is Deep Patient. The brainchild of a research group at Mount Sinai Hospital in New York, it is a machine learning tool that was trained to detect illness using data from approximately 700,000 patients. Deep Patient turned out to be good at detecting hidden patterns in the hospital data that indicate when people are becoming ill. It also appears to be remarkably good at anticipating the onset of schizophrenia, a disease that is very difficult for physicians to predict. But the researchers behind Deep Patient do not yet understand why it is so good at predicting schizophrenia, or how it arrives at its predictions.
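The general pattern Deep Patient follows, training a model on past records so it can flag risk in new ones, can be sketched in a few lines. Everything below is hypothetical: synthetic data and an off-the-shelf scikit-learn network stand in for Mount Sinai’s actual system, which learned its own features from raw electronic health records.

```python
# Hedged sketch of the Deep Patient idea: flag at-risk patients from
# past records. Synthetic data stands in for real health records.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in for patient records: each row is one patient, each column a
# coded fact (diagnoses, lab results, prescriptions, and so on).
X, y = make_classification(n_samples=5000, n_features=100,
                           n_informative=15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300,
                      random_state=0)
model.fit(X_train, y_train)

# The model can rank patients by predicted risk of a future diagnosis...
risk = model.predict_proba(X_test)[:, 1]
print("test accuracy:", model.score(X_test, y_test))
# ...but nothing in its learned weights says *why* a patient scored high,
# which mirrors the puzzle the Mount Sinai researchers describe.
```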

“We have no idea how algorithms arrived at their decision and therefore cannot evaluate whether the decision has value or not,” said Vermeys, whose research institution is studying the issue of algorithmic bias. “There is a risk to relying completely on machines without necessarily understanding its reasoning.”

No human is completely objective, and the same is true of algorithms, since they are written by human programmers, noted Ian Kerr, a law professor at the University of Ottawa and the Canada Research Chair in Ethics, Law and Technology. Programmers operate on certain premises and presumptions that are not tested by anybody else, Kerr added, which leads to results based on those premises and presumptions and, in turn, gives rise to bias.
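Kerr’s point is easy to see in miniature. The toy scoring function below is invented for illustration; its factors and weights are assumptions, not taken from any real tool, but each line is the kind of untested judgment call he describes.

```python
# Deliberately simplistic sketch of how a programmer's presumptions get
# baked into a score. All factors and weights here are hypothetical.

def risk_score(prior_arrests: int, age: int, postal_code: str) -> float:
    """Toy 'risk' score. Every line below is a human design choice."""
    score = 2.0 * prior_arrests            # presumes arrests measure conduct,
                                           # though arrest rates vary by policing
    score += 3.0 if age < 25 else 0.0      # presumes youth implies risk
    if postal_code in {"H2X", "H3N"}:      # presumes neighbourhood is relevant,
        score += 4.0                       # a proxy that can track race or income
    return score

# Two people with identical conduct can score differently purely because
# of where they live: bias introduced by an untested design choice.
print(risk_score(prior_arrests=1, age=30, postal_code="H2X"))  # 6.0
print(risk_score(prior_arrests=1, age=30, postal_code="H4A"))  # 2.0
```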

On top of that, it is very difficult to challenge such decisions because “whoever owns the algorithms has trade secrets, isn’t likely to show you the source code, isn’t likely to want to talk about the secret source and what makes the algorithm work,” said Kerr. “What justifies the algorithm is its success or perceived success, which is very different from whether or not it operates in biased ways.”

Aaron Courville, a professor with the Montreal Institute for Learning Algorithms, shares those concerns. “We are really in a phase where these algorithms are starting to do interesting things, and we need to take seriously the issues of responsibility,” said Courville.

Europe is taking a serious look at these issues. Under the European Union’s new General Data Protection Regulation (GDPR), automated individual decisions that “significantly affect” users will be restricted, argue Bryce Goodman of the Oxford Internet Institute and Seth Flaxman of the University of Oxford’s Department of Statistics in a paper. Expected to come into force in 2018, the GDPR will also effectively create a “right to explanation,” according to the authors. In other words, users can ask for an explanation of an algorithmic decision that was made about them.

“This is where Europe and the U.S. go wild in their disagreements,” explained Kerr, who has also written about the issue of a right to explanation. “Europe starts with this principled approach that makes sense. If a decision is about me and it has sort of impacts on my life chances and opportunities, I should be able to understand how that decision was made. It invokes large due process concerns.

“The due process idea is that no important decision should be made about me without my own ability to participate. I have a right to a hearing. I have a right to ask questions. So all of these kinds of rights are kind of bound up in this notion of the duty to an explanation. And the hard thing is that an algorithm isn’t in the habit of explaining itself, which means that if that kind of law prevails then people who use algorithms and design algorithms will have to be a lot more forthcoming about the mechanisms behind the algorithm.”
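What a crude, machine-generated “explanation” might look like today can be sketched with standard tools. The example below uses scikit-learn’s permutation importance on made-up data; it is an assumption-laden illustration, not anything resembling a GDPR-compliant explanation.

```python
# Hedged sketch of one crude form of "explanation": permutation
# importance, which measures how much a model's accuracy drops when each
# input column is scrambled. Data and model are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=2000, n_features=8, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X, y)

# Ask which inputs the model's decisions leaned on most.
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
# This says which inputs mattered on average, not why any one decision
# about one person came out the way it did -- the gap Kerr describes.
```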


Further reading:


Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks, by ProPublica, an American independent, nonprofit news organization.

Chief Justice John Roberts is a Robot, by University of Ottawa law professor Ian Kerr.


And for the technologically inclined:
Mastering the Game of Go with Deep Neural Networks and Tree Search, by David Silver, the lead researcher on the AlphaGo project.
