Crowdsourcing “can accurately predict court decisions 80% of the time”, says study


Judicial decisions: crowdsourcing accurate, study finds

Crowdsourcing is an accurate predictor of court judgments, proving correct in more than eight out of ten cases at its best, according to a rigorous analysis.

A team of academics arrived at the conclusion after assessing the results of a massive public competition to predict the outcome of US Supreme Court cases, involving cash prizes of up to $10,000 (£7,375) for the winners.

The authors called the study “one of the largest explorations of recurring human prediction to date”.

They noted that “the field of predictive analytics is a fast-growing area of both industrial interest and academic study”.

As well as producing a detailed statistical model of their own, they examined results from the FantasySCOTUS competition, which started in 2009 and has produced 600,000 predictions from over 7,000 participants, relating to 10 separate Supreme Court justices.

Analysing competition results between 2011 and 2017, they found that the crowdsourced view of the likely outcome of Supreme Court decisions “robustly outperforms” a baseline model in which guesswork is eliminated.

The authors explained that this baseline – known as “always guess reverse” – is based on the reality that, nearly 62% of the time, the Supreme Court chooses cases for decision in order “to correct an error below, not to affirm it”.

The best-performing crowdsourcing approach easily beat this baseline, arriving at the correct outcome with 80.8% accuracy.
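
To make the comparison concrete, here is a minimal sketch in Python (using invented case outcomes and votes rather than data from the study) of how a simple majority-vote crowd prediction can be scored against the “always guess reverse” baseline; the study’s own aggregation methods are more sophisticated than a raw majority vote.

```python
# A minimal, illustrative sketch (not the authors' code or data): scoring a
# simple majority-vote crowd prediction against the "always guess reverse"
# baseline. All outcomes and votes below are invented for illustration.

from collections import Counter

# 1 = the Supreme Court reverses the lower court, 0 = it affirms
actual_outcomes = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # hypothetical docket

# Each inner list is one participant's predictions for the ten cases above
crowd_predictions = [
    [1, 1, 0, 1, 0, 1, 0, 0, 0, 1],
    [1, 0, 0, 1, 1, 1, 1, 0, 0, 1],
    [1, 1, 1, 1, 0, 1, 1, 0, 1, 1],
]

def accuracy(predicted, actual):
    """Fraction of cases where the prediction matched the actual outcome."""
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

# Baseline: always predict that the Court reverses the decision below
always_reverse = [1] * len(actual_outcomes)

# Crowd: take the majority vote across participants, case by case
majority_vote = [Counter(votes).most_common(1)[0][0]
                 for votes in zip(*crowd_predictions)]

print(f"Always-guess-reverse accuracy: {accuracy(always_reverse, actual_outcomes):.1%}")
print(f"Majority-vote crowd accuracy:  {accuracy(majority_vote, actual_outcomes):.1%}")
```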

The academics said their research provided “support for the use of crowdsourcing as a prediction method”.

They concluded that by applying “empirical model thinking” to the question of crowdsourcing for the first time, they could “confidently demonstrate that… crowdsourcing outperforms both the commonly accepted ‘always guess reverse’ model and the best-studied algorithmic models”.

The authors of “Crowdsourcing accurately and robustly predicts Supreme Court decisions” were Daniel Katz, Michael J Bommarito II, and Josh Blackman, respectively of universities in Chicago, Stanford and Houston.

In 2016 British and American academics deployed artificial intelligence to predict decisions of the European Court of Human Rights with 79% accuracy.

 

    Readers Comments

  • Richard Moorhead says:

    As ever, the work of Dan Katz and his colleagues is interesting and valuable. It is worth bearing in mind one thing about their model: it is based in part on their algorithm selecting the best judges from a large pool of other judges and then relying more heavily on the judgments of those ‘super-judges’. It’s quite difficult to see how this model would work in practice – where would one find a large pool of people willing to predict the outcome of cases, with reasonable incentives to take it seriously? Perhaps that’s a failure of imagination on my part.

  • Daniel Martin Katz says:

    Thanks Richard – this particular paper is not based upon our algorithm (that paper is here – http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0174698) but rather just Fantasy SCOTUS crowdsourcing alone. If you review Figure 3, one of the interesting results is that there is an optimal crowd configuration at around 10-12 individuals (https://arxiv.org/pdf/1712.03846.pdf).

    More generally, although this paper is about #SCOTUS as a use case, the formalization is in general form, with implications for a wide variety of problems including #Crypto #Oracles #Crowdsourcing. For example, the paper was recently highlighted in the Augur Weekly Development Update (https://medium.com/@AugurProject/augur-weekly-development-update-january-3rd-1eb60e6a7580). Augur is one example of a low-friction method to create and sustain crowds through incentives.

    The current industrial organization of the legal profession is certainly a barrier to deploying some of these ideas. In particular, this is a field where we do not really keep score and track the long-term performance of experts. Instead, we use weak signals such as pedigree and reputation. My interest in the intersection of law and finance has been driven by the idea that we need problems (such as litigation funding, etc.) where folks actually have a financial incentive to care about material improvements in prediction/performance. See Fin Legal Tech here:
    https://www.slideshare.net/Danielkatz/fin-legal-tech-laws-future-from-finances-past-some-thoughts-about-the-financialization-of-the-law-professors-daniel-martin-katz-michael-j-bommarito

  • Richard Moorhead says:

    Thanks for replying Dan.

    I used ‘algorithm’ to mean the model(s) you talk about in your paper – apologies for the confusion, I was commenting from memory. Although I think the models can also properly be described as algorithms, I see why you want to make the distinction between your algorithmic methods (fancy-pants machine learning on a social science dataset) and crowdsourced methods (a crowd plus simpler algorithms/equations/models – whatever language you prefer).

    I am afraid I could not see from the Augur post how Augur might work in this context. I’d be really interested in getting a sense of how it might.

    I’m not sure it’s the industrial organisation that’s the barrier to this. I am interested in where a sustainable crowd of (collectively) super-predictors might come from, with sufficient incentive to pay attention to legal decisions that need taking or predicting. It’d be interesting to hear what that might look like: what information might they need to have, how would they be incentivised to take a decision, how big might the crowd be, and so on?

    That’s the point I was trying to make. As you might expect, I agree we should not laud reputation or weak signals.

