Crowdsourcing “can accurately predict court decisions 80% of time” says study


8 January 2018


Judicial decisions: crowdsourcing accurate, study finds

Crowdsourcing is an accurate predictor of court judgments, proving correct in more than eight out of ten cases at its best, according to a rigorous analysis.

A team of academics arrived at the conclusion after assessing the results of a massive public competition to predict the outcome of US Supreme Court cases, involving cash prizes of up to $10,000 (£7,375) for the winners.

The authors called the study “one of the largest explorations of recurring human prediction to date”.

They noted that “the field of predictive analytics is a fast-growing area of both industrial interest and academic study”.

As well as producing a detailed statistical model of their own, they examined results from the FantasySCOTUS competition, which started in 2009 and has produced 600,000 predictions from over 7,000 participants, relating to 10 separate Supreme Court justices.

Analysing competition results between 2011 and 2017, they found that crowdsourcing of likely Supreme Court outcomes “robustly outperforms” a simple baseline model that requires no case-by-case judgment.

The authors explained that this baseline – known as “always guess reverse” – reflects the reality that the Supreme Court takes cases in order “to correct an error below, not to affirm it” nearly 62% of the time.

The best-performing crowdsourcing approach easily beat this baseline, arriving at the correct outcome 80.8% of the time.
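
That comparison is straightforward to reproduce in miniature. The sketch below is a rough illustration rather than the authors' method, and the per-case data in it are invented: it simply scores an equal-weight majority-vote crowd prediction against the “always guess reverse” baseline on a handful of hypothetical cases, where 1 means the lower court is reversed and 0 means it is affirmed.

```python
# Illustrative sketch only (not the study's method); the case data are made up.
# Encoding: 1 = reverse the lower court, 0 = affirm.
from collections import Counter

cases = [
    # (actual outcome, individual crowd predictions for that case)
    (1, [1, 1, 0, 1, 1]),
    (0, [0, 1, 0, 0, 0]),
    (1, [1, 0, 1, 1, 0]),
    (0, [1, 1, 0, 1, 0]),
    (1, [1, 1, 1, 0, 1]),
]

def majority_vote(predictions):
    """Return the most common prediction; ties default to 'reverse' (1)."""
    counts = Counter(predictions)
    return 1 if counts[1] >= counts[0] else 0

# "Always guess reverse" baseline: predict 1 for every case.
baseline_correct = sum(outcome == 1 for outcome, _ in cases)
crowd_correct = sum(outcome == majority_vote(preds) for outcome, preds in cases)

print(f"Always-guess-reverse accuracy: {baseline_correct / len(cases):.0%}")
print(f"Majority-vote crowd accuracy:  {crowd_correct / len(cases):.0%}")
```

The study itself examines a range of crowd configurations rather than a single equal-weight vote, which is where the 80.8% figure comes from.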

The academics said their research provided “support for the use of crowdsourcing as a prediction method”.

They concluded that by applying “empirical model thinking” to the question of crowdsourcing for the first time, they could “confidently demonstrate that… crowdsourcing outperforms both the commonly accepted ‘always guess reverse’ model and the best-studied algorithmic models”.

The authors of Crowdsourcing accurately and robustly predicts Supreme Court decisions were Daniel Katz, Michael J Bommarito II, and Josh Blackman, respectively of universities in Chicago, Stanford and Houston.

In 2016 British and American academics deployed artificial intelligence to predict decisions of the European Court of Human Rights with 79% accuracy.

 




3 Responses to “Crowdsourcing “can accurately predict court decisions 80% of time” says study”

  1. Richard Moorhead on January 8th, 2018 at 3:54 pm

     As ever, Dan Katz and his colleagues’ work is interesting and valuable. It is worth bearing in mind one thing about their model. It is based in part on their algorithm selecting the best predictors (‘super-judges’) from a large pool and then weighting their judgments more heavily. It’s quite difficult to see how this model would work in practice: where would one find a large pool of people willing to predict the outcome of cases, with reasonable incentives to take it seriously? Perhaps that’s a failure of imagination on my part.

  2. Daniel Martin Katz on January 9th, 2018 at 4:48 pm

     Thanks Richard – this particular paper is not based upon our algorithm (that paper is here – http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0174698) but rather on Fantasy SCOTUS crowdsourcing alone. If you review Figure 3, one of the interesting results is that there is an optimal crowd configuration at around 10-12 individuals (https://arxiv.org/pdf/1712.03846.pdf).

     More generally, although this paper uses #SCOTUS as its use case, the formalization is in general form, with implications for a wide variety of problems including #Crypto #Oracles #Crowdsourcing. For example, the paper was recently highlighted in the Augur Weekly Development Update (https://medium.com/@AugurProject/augur-weekly-development-update-january-3rd-1eb60e6a7580). Augur is one example of a low-friction method to create and sustain crowds through incentives.

     The current industrial organization of the legal profession is certainly a barrier to deploying some of these ideas. In particular, this is a field where we do not really keep score and track the long-term performance of experts. Instead, we use weak signals such as pedigree and reputation. My interest in the intersection of law and finance has been driven by the idea that we need problems (such as litigation funding, etc.) where folks actually have a financial incentive to care about material improvements in prediction and performance. See Fin Legal Tech here:
     https://www.slideshare.net/Danielkatz/fin-legal-tech-laws-future-from-finances-past-some-thoughts-about-the-financialization-of-the-law-professors-daniel-martin-katz-michael-j-bommarito

  3. Richard Moorhead on January 9th, 2018 at 6:59 pm

     Thanks for replying, Dan.

     I used ‘algorithm’ to mean the model(s) you talk about in your paper – apologies for the confusion, I was commenting from memory. Although I think the models can also properly be described as algorithms, I see why you want to draw the distinction between your algorithmic methods (fancy-pants machine learning on a social science dataset) and crowdsourced methods (a crowd plus simpler algorithms/equations/models – whatever language you prefer).

     I am afraid I could not see from the Augur post how Augur might work in practice. I’d be really interested in understanding how it might.

     I’m not sure it’s the industrial organisation that’s the barrier here. I am interested in where a sustainable crowd of (collectively) super-predictors might come from, with sufficient incentive to pay attention to the legal decisions that need taking or predicting. It would be interesting to hear what that might look like: what information would they need, how would they be incentivised to take a decision, how big might the crowd be, and so on.

     That’s the point I was trying to make. As you might expect, I agree we should not laud reputation or weak signals.
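
The optimal crowd size point in Dan Katz's reply above lends itself to a quick numerical illustration. The toy simulation below is my own, not taken from the paper, and it makes the simplifying assumption that every predictor votes independently with a fixed 70% individual accuracy; the paper's Figure 3 measures the real effect on FantasySCOTUS data. Under that assumption, the gain from adding voters to a strict-majority vote flattens out after roughly a dozen.

```python
# Toy Condorcet-style simulation (not from the paper): estimate how often a
# strict-majority vote of independent predictors picks the true outcome.
import random

def crowd_accuracy(crowd_size, individual_accuracy=0.7, trials=20000, seed=1):
    """Monte Carlo estimate of majority-vote accuracy for a given crowd size."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        # Count how many crowd members independently pick the true outcome.
        votes_for_truth = sum(rng.random() < individual_accuracy for _ in range(crowd_size))
        if votes_for_truth > crowd_size / 2:
            correct += 1
    return correct / trials

# Odd crowd sizes avoid ties; accuracy climbs quickly and then flattens out.
for size in (1, 3, 5, 9, 11, 15, 25):
    print(f"crowd of {size:>2}: {crowd_accuracy(size):.1%}")
```

In practice, correlated errors between predictors reduce these gains, which is one reason a modest, well-chosen crowd can perform nearly as well as a much larger one.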

