Algorithmic Anxiety: let’s stop fighting the last war and focus on the future

I have been teaching about the ethics of innovation this week. As a result, I have been doing a good deal of reading. It is an area which, having burgeoned in Science and Technology Studies for some time, is now beginning to develop in the legal sphere too. With artificial intelligence, and online dispute resolution, moving on apace, it is well and truly time.

My overarching message is that there are issues here rather more important than how independent the SRA is, or whether there should be a mega-regulator for lawyers. Those issues are the scope of legal services regulation and the readiness of legal service regulators to regulate genuine innovation, most especially the application of artificial intelligence in legal services.

Let me briefly deal with scope first. That scope is determined by whether services are provided by approved providers (solicitors, barristers, legal execs, and so on) and/or whether the services provided fall into the rather narrow confines of reserved legal services. The vast majority of interesting innovation either does not involve, or can work around, reserved legal services, and one particular area of importance (online dispute resolution) is excluded from regulation under the Legal Services Act (unless it is provided by approved persons). So ODR is likely impossible to regulate as a legal service unless only lawyers do it. Not much chance of that.

Now let me turn to the issue of ethics and technology. There are acres of things to say here, many of which I am not ready to say yet. Quite a lot of what is emerging from my reading in the area gets very well captured by a piece on automated copyright enforcement by Niva Elkin-Koren and others (referenced at the end, and well worth a read). [1]

What does it tell us? Algorithms are non-transparent by nature; their decision-making criteria are concealed behind a veil of code that we cannot easily read and comprehend. Additionally, these algorithms are dynamic in their ability to evolve according to different data patterns. Their decision-making models evolve and change. Moreover, algorithms that enforce online activity are mostly implemented by private, profit-maximizing entities, operating under minimal transparency obligations. They can be self-interested, incompetent, virtuous or skilled.

Because of this, “We do not know what decisions are made, how they are made, and what specific data and principles shape them.” Algorithmic decision-making is written in code that is impenetrable to most of us, often protected by trade secrecy, and usually further enveloped in mathematical complexity. Because algorithms adapt to the data they learn from, their ‘rules’ or meanings change as they learn, and they may be founded upon “immense volumes of unintelligible data.” Disclosure of the ‘rules’ or ‘data’ behind a decision is thus likely to be banal, mathematically complex, unintelligible and overwhelming. Transparency may be “partial, biased or even misleading.” It will not work on its own, if it can work at all.
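A toy illustration of the point (mine, not the paper’s, and with entirely invented data): suppose a tiny classifier learns to label items from examples. Even with full “disclosure”, its complete decision rule is just a handful of numbers, which tell us nothing intelligible about why any given item was blocked.

```python
import random

random.seed(0)

# Hypothetical training data: two numeric features per item,
# label 1 = "infringing", 0 = "permitted" (entirely invented).
data = [((0.9, 0.1), 1), ((0.8, 0.3), 1), ((0.2, 0.7), 0), ((0.1, 0.9), 0)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0

# Train a minimal perceptron: the "rules" emerge from the data,
# not from any stated principle.
for _ in range(100):
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

def decide(x1, x2):
    """The automated decision: a threshold over learned weights."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Full "transparency": the entire disclosed rule is three floats.
print("disclosed rules:", w, b)
```

Disclosing those three numbers is complete transparency in one sense, and utterly uninformative in another; with thousands of features and constantly updating weights, the problem only worsens.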

The Blockchain has prompted a great deal of excitement about smart contracts. Enthusiasts wonder at the potential for contracts to self-execute. Yet, according to the paper, systems which apply and execute law without direct human agency are already well-established in automated copyright enforcement. Algorithms are widely used by all major intermediaries “to monitor, filter, block, and disable access to allegedly infringing content.”[2] They apply discretionary standards rather than bright-line rules to develop ‘codish’ interpretations of concepts such as “originality”, “substantial similarity” and “permissible use”. They decide automatically whether material posted online (e.g. on Twitter or Facebook or YouTube) may breach copyright: at the point of posting, by searching for suspect material with web-crawling robots, or in response to complaints. Those complaints may themselves be generated by robots, producing high volumes of AI-generated, proto-legal claims to be tackled by automated decision-makers. All this virtual activity has a potentially significant impact on intellectual property, and its commercial exploitation, but also on free speech. Given the variation in performance of these algorithms that the authors found, we must also wonder at the quality of some of them. There is the potential for manipulation, abuse of power, barriers to competition and innovation, and damage to basic rights. There is also the potential for just being a bit rubbish.
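To see what a ‘codish’ interpretation of a discretionary standard looks like, here is a deliberately crude sketch (again mine, not the systems the paper studied) of how “substantial similarity” might be reduced to a single numeric threshold:

```python
def shingles(text, n=3):
    """Break text into overlapping word n-grams."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b):
    """Jaccard similarity of the two texts' n-gram sets."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

# The legal judgment "is this substantially similar?" becomes one number.
BLOCK_THRESHOLD = 0.5  # arbitrary: the value judgment hides here

def moderate(upload, copyrighted_work):
    """Automated takedown decision: block or allow, no human in the loop."""
    if similarity(upload, copyrighted_work) >= BLOCK_THRESHOLD:
        return "block"
    return "allow"

original = "the quick brown fox jumps over the lazy dog"
print(moderate("the quick brown fox jumps over the lazy dog today", original))
print(moderate("an entirely different sentence about something else", original))
```

Note where the discretion has gone: into the choice of `BLOCK_THRESHOLD` and the similarity measure itself, decisions made once by a designer and then applied automatically at scale, with no allowance for fair dealing, parody or context.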

If we imagine the application of such systems to legal services, automated decisions about whether to complain, or what steps to take in litigation, might, if inappropriate, constitute vexatious litigation, taking advantage of third parties, inadequate professional services, or worse. Decisions about advising on or defending criminal proceedings may weaken or strengthen the rule of law and defendants’ rights. Automated negotiation may discriminate between one type of party and another. Data which enters the system from one source, or more likely one set of sources at volume, may pose conflicts of interest with another. I could go on.

My point is not to suggest that these systems are bad. I think they are fascinating and potentially liberating. Whilst lawyers like to highlight stories about the biases that algorithms have, we know too that they can correct for, or drain out, social prejudice from noisy, flawed, human systems. Similarly, automated or intelligent systems may protect against or correct for problems where they are not defeated by the social world’s complexities. Imagine, for instance, a contractual explanation system like nift being used to test whether clients read client care information, or better, whether they understood it, or better again, one that assessed the fairness of their terms and renegotiated them. My point is that the evidence shows that each system is different and its application is a human phenomenon – whether these systems are good or bad is unknown and, in important senses, unknowable – especially at the individual level of the consumer or a complaint.

The potential sophistication of such systems is not purely scientific. The errors and skills of their designers, and their values and presumptions about what data to look at and how to weight it, contain sometimes ineluctable value judgments. Some of those value judgments will very likely relate to the central values of the legal system and the provision of legal services. An interesting question will be whether this leads not just to the merging of professional regulators (yaaaaaawn) but to a merging of responsibilities for regulating legal services and the legal system as a whole. Understanding this services-system singularity (if it comes) will be important to fighting the battle after next. For now, though, we need to ask some more basic questions about the fitness of the current regulatory settlement to engage with innovation in legal services; to understand it, where some seem more inclined simply to worship it; and to know whether and how regulators should regulate it. I suspect the powers of the regulator and the type of thinking going on need a step change. And there is not yet an app for that.


[1] Niva Elkin-Koren and others, ‘Black Box Tinkering: Beyond Transparency in Algorithmic Enforcement’, accessed 3 March 2017.
