Algorithmic government: Does it have all the answers?

30 May 2017


If algorithms ran government, we could make a constant stream of evidence-driven decisions that met clearly identified customer needs. Management of public services would be transformed and the public wouldn’t be able to complain of bias, right? Well, right…ish.

Charles Orton-Jones

The ancient Greek city of Locris employed a simple method to deter foolish law-making. The proposer would stand with his head in a noose. If the law failed to gain approval, the man would be strangled on the spot. It worked.

In more than 200 years, only a single statute passed. The historian Polybius called Locris “a most well-governed city”. It was a harmonious place indeed, credited with the first written laws in Greece. Lawmakers today have no such restraints. In fact, we’re amid a deluge of law-making.

In 2010, 3,506 new laws were introduced in the UK, a record high. Similarly, the 2015-16 editions of Tolley’s Yellow Tax Handbook and Orange Tax Handbook together come in at 21,602 pages.

The problem isn’t just one of quantity. The decaying fabric of the Palace of Westminster is an emblem of wider issues, of arcane processes and questionable results. The EU referendum result in Britain can be read as the final rejection of the traditional, bureaucratic system of government.

Evidently we need another way to make laws. But what? Experiments are already under way...


In 2012, Iceland drafted a new constitution. In a world-first move it delegated the task to the Icelandic people. It used a three-step election process. First, a National Forum comprising 950 randomly selected citizens gathered for a one-day meeting to list the values and principles they wished to see enshrined. It was all pretty obvious stuff, such as democracy, human rights, equal access to healthcare and, since this was after the financial crisis, a strongly regulated financial sector.

Next, from a field of 522 citizen candidates, an election produced a drafting council of 10 women and 15 men. Politicians were excluded. Each draft was opened up to the wider public through social media. Comments came through Facebook, Twitter and email; more than 3,600 arrived.

Finally, the new constitution was put to a public vote. Two-thirds of Icelanders said ‘yes’. Alas, the Bill stalled in parliament. But the point was made. The wisdom of the crowd can be used for making laws.

One close observer was Tanja Aitamurto, a Finnish academic who helped run a similar project in her homeland. “I proposed to the parliament of Finland, to the Committee of the Future, that we do something like this. And it worked!”

The subject was typically Finnish: off-road traffic laws. Snowmobiles and all-terrain vehicles drive off piste, and the laws needed updating. The Ministry of Environment created an online platform, describing the legal need, and offering a simple way to participate.

Contributors could be anonymous, or state their name. Visitors could vote and leave comments. Around 700 users generated 500 ideas and 4,000 comments, with 24,000 up or down votes.
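The platform’s internals aren’t public, so purely as an illustration, here is a toy Python sketch (with invented idea titles and vote counts) of how such a site might surface the most-approved ideas for legislators:

```python
from dataclasses import dataclass, field

@dataclass
class Idea:
    title: str
    up: int = 0
    down: int = 0
    comments: list = field(default_factory=list)

    @property
    def net_score(self) -> int:
        # Simple net approval; a real platform might weight by recency
        # or use a confidence-adjusted score instead.
        return self.up - self.down

def top_ideas(ideas, n=3):
    """Return the n ideas with the highest net approval."""
    return sorted(ideas, key=lambda i: i.net_score, reverse=True)[:n]

ideas = [
    Idea("Require mufflers on older snowmobiles", up=120, down=30),
    Idea("Open marked trails to all-terrain vehicles", up=80, down=95),
    Idea("Licence checks at trailheads", up=60, down=20),
]
for idea in top_ideas(ideas, n=2):
    print(idea.title, idea.net_score)
```

As Aitamurto stresses, a ranking like this is an input for drafters, not a binding vote.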

“We had no problem with trolls,” says Aitamurto. “People think that’s because we are Finland, and everyone behaves well. Actually, we have done something similar with a planning project in California, and the results are also really good.”

In Chile, a crowdsourcing project to reform the country’s constitution generated 30,000 submissions. Again, good sense predominated.

“Crowdsourcing is efficient,” says Aitamurto. “Law-making is slow and expensive. It means hiring consultants to find information and do research. And that research might not reflect people’s experiences. Now we can get information from the crowd. We save money and time, and create policies that better address people’s needs.”

More and more countries are being guided by petitions. Finland has the Citizens’ Initiative Act: when a petition passes 50,000 signatures within six months, it must be discussed in parliament. The US has a platform called ‘We the People’: more than 100,000 signatures gathered in a month triggers a response from a White House expert.

Under the UK’s system, a petition attracting more than 100,000 signatures is considered for debate in Parliament. Such systems really can fine-tune government operations.
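The threshold rules above are mechanical enough to state in code. Here is a minimal Python sketch; the function name and the rolling-window reading of “in six months” and “in a month” are my assumptions, not the schemes’ published rules:

```python
# Thresholds as described above; window lengths are approximations.
FINLAND = {"threshold": 50_000, "window_days": 183}          # ~six months
US_WE_THE_PEOPLE = {"threshold": 100_000, "window_days": 30}

def petition_qualifies(daily_counts, threshold, window_days):
    """daily_counts: signatures gathered each day, oldest first.
    True if any rolling window of window_days reaches the threshold."""
    total = 0
    for i, count in enumerate(daily_counts):
        total += count
        if i >= window_days:
            total -= daily_counts[i - window_days]  # drop the day that fell out
        if total >= threshold:
            return True
    return False

# 4,000 signatures a day for a month comfortably clears the US bar.
print(petition_qualifies([4_000] * 30, **US_WE_THE_PEOPLE))  # True
```

The point is how little discretion is involved: once the counting rule is fixed, triggering a debate is pure arithmetic.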

The UK’s Red Tape Challenge was set up to address the perennial moan that businesses are hamstrung by pointless legislation. The coalition government asked for public comments through an online portal, divided into industry sectors. It led to 132 regulations being improved, and 73 earmarked for repeal.


In the film Minority Report, criminals are picked up before they’ve committed a crime. The ‘Precrime’ unit uses psychics called ‘precogs’ to identify malefactors. Today, a Californian algorithm company offers a similar service to US police forces.

PredPol uses historical crime data – offence types, locations and time patterns – to forecast where and when crimes are most likely to occur. Beat officers can be sent to 500ft-by-500ft areas (‘boxes’) to await villains.
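PredPol’s actual model is proprietary (it is reported to build on self-exciting point processes borrowed from seismology), so the following Python sketch is only a crude stand-in: score each grid box by a recency-weighted count of past incidents and patrol the top scorers. The box size, decay rate and incident data are all invented:

```python
from collections import defaultdict

def forecast_hotspots(incidents, box_size=150, decay=0.9, top_n=3):
    """incidents: list of (x_metres, y_metres, days_ago).
    Returns the grid boxes with the highest recency-weighted counts."""
    scores = defaultdict(float)
    for x, y, days_ago in incidents:
        box = (int(x // box_size), int(y // box_size))
        scores[box] += decay ** days_ago  # recent incidents count for more
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

recent = [(10, 20, 0), (40, 60, 1), (55, 70, 2), (900, 900, 30)]
print(forecast_hotspots(recent, top_n=1))  # the cluster near the origin wins
```

The output is just a list of boxes; everything else – what officers do when they get there – remains human judgment, as the LAPD account below makes clear.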

The Los Angeles Police Department experimented with PredPol. Division Captain Sean Malinowski said: “We told [officers] to go into the boxes and use their knowledge, skills and experience to determine what should be done… They may stay there for just 15 minutes to a half-hour and let people see them walking around the area. Would-be offenders see the police activity and are deterred from committing a crime there. All we are trying to do is deny them the opportunity to commit that crime in that time and place.

“During our test, we probably disrupted criminal activity eight to 10 times a week.”

New York City established a big-data office to engage with exactly this type of work. By cross-referencing hundreds of data sets relating to housing, it could identify dwellings likely to be occupied illegally. The hit rate of inspectors rose from 13% to 70%.
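The office’s actual models aren’t published, but the underlying idea – combining many weak signals to rank inspection targets – can be sketched in a few lines of Python. The dataset names and building IDs here are invented:

```python
def prioritise_inspections(buildings, signals, top_n=2):
    """buildings: list of building ids.
    signals: dataset name -> set of building ids flagged by that dataset.
    Rank buildings by how many independent datasets flag them."""
    score = {b: sum(b in flagged for flagged in signals.values())
             for b in buildings}
    return sorted(buildings, key=score.get, reverse=True)[:top_n]

signals = {
    "tax_arrears": {"B1", "B3"},
    "utility_anomaly": {"B1"},
    "complaints": {"B1", "B2"},
}
print(prioritise_inspections(["B1", "B2", "B3", "B4"], signals))
```

Sending inspectors only to the top of such a ranking, rather than working through complaints in order, is what lifts the hit rate.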


The new frontier for data is criminal sentencing. More than 20 US states rely on data programs for risk assessment when sentencing. Indiana uses a system called LSI-R to weigh 54 items, such as criminal history, financial status and marital status, to help predict recidivism.
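LSI-R’s 54 items and their weightings are proprietary, so this Python sketch uses invented factors and weights purely to show the shape of a weighted risk checklist:

```python
# Invented factors and weights - NOT the real LSI-R instrument.
RISK_WEIGHTS = {
    "prior_convictions": 3,
    "unemployed": 2,
    "unstable_housing": 2,
    "substance_misuse": 2,
    "antisocial_peers": 1,
}

def risk_band(answers):
    """answers: factor -> bool. Returns (score, band)."""
    score = sum(w for factor, w in RISK_WEIGHTS.items()
                if answers.get(factor))
    if score >= 7:
        band = "high"
    elif score >= 3:
        band = "medium"
    else:
        band = "low"
    return score, band

print(risk_band({"prior_convictions": True, "unemployed": True}))
```

Note that the contentious part is not the arithmetic but the choice of factors and weights, which is exactly where the objections discussed below bite.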

The direction of travel is clear. The criminal justice system is moving towards relying on data to determine the optimum prison sentence and parole conditions for each criminal.

It’s not just crime. The Bank of England uses a ‘bipartite matching algorithm’ to analyse the housing market. Milton Keynes Council is working with Indian consultancy Tech Mahindra to send rubbish collections along optimal routes (waste bins with sensors send alerts when they are full), as part of its transformation into a smart city run by algorithms.
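Neither the Bank’s matching model nor Milton Keynes’ routing system is public, but the bin-routing idea reduces to: visit only the bins whose sensors report full, in a sensible order. A greedy nearest-neighbour sketch in Python, with invented bin names, coordinates and fill threshold:

```python
import math

def plan_route(depot, bins, full_at=80):
    """bins: list of (name, x, y, fill_pct). Visit bins at or above
    full_at percent full, nearest-neighbour from the current position."""
    to_visit = [(n, x, y) for n, x, y, fill in bins if fill >= full_at]
    route, pos = [], depot
    while to_visit:
        nxt = min(to_visit, key=lambda b: math.dist(pos, (b[1], b[2])))
        route.append(nxt[0])
        pos = (nxt[1], nxt[2])
        to_visit.remove(nxt)
    return route

bins = [("A", 1, 1, 90), ("B", 5, 5, 95), ("C", 2, 2, 10)]
print(plan_route((0, 0), bins))  # ['A', 'B'] - C is barely filled, so skipped
```

Nearest-neighbour is not optimal in general; a production system would solve something closer to a full vehicle-routing problem, but the sensor-driven principle is the same.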

Opposition exists. In the US, in 2014, then attorney general Eric Holder set out his concerns about data-driven law enforcement. He argued that using education and employment records may entrench racial divisions.

America already gives young black males a fifth longer in jail for the same crime as their white counterparts. Holder said data will benefit “those on the white-collar side who may have advanced degrees and who may have done greater societal harm – if you pull back a little bit – than somebody who has not completed a master’s degree, doesn’t have a law degree and isn’t a doctor”.

There’s also the issue of transparency. Who understands the algorithm? Judges almost certainly don’t. A hallmark of big-data analytics is that the results may be true, but inexplicable.

Supermarkets using big data have found, for example, that avocados are commonly bought with soy sauce. Why is anyone’s guess; perhaps the same shoppers favour certain dishes, such as tuna tartare with avocado. The link doesn’t need to be understood for a discount offer to be effective.

When it comes to justice, however, we need to know. Irish academic Dr John Danaher published a widely read work last year called The Threat of Algocracy. He warns of algorithms creeping into public life with little transparency about how they work.

They also threaten our sense of democratic participation. “We may be on the cusp of creating a governance system that severely constrains and limits the opportunities for human engagement,” he warned.

Yet the potential gains may make it too much to resist, just as we accept computer decisions in other areas, such as mortgage applications. Danaher concluded: “This may be necessary to achieve other instrumental or procedural gains, but we need to be sure we can live with the trade-off.”

The military must reflect on these trade-offs every day. Drone strikes, say, can be triggered by facial recognition, with no human in the loop. UN special rapporteur Christof Heyns told the UN Human Rights Council in 2013: “Machines lack morality and mortality, and as a result should not have life and death powers over humans.”

The Atlantic Council, a US body that advises NATO, argued in a 2013 paper that autonomous weapons were being normalised: “The Obama administration employs so-called ‘signature’ strikes, wherein ‘intelligence officers and drone operators kill suspects based on their patterns of behaviour – without positive identification’. With signature strikes, the CIA doesn’t know the identity of the persons it is killing.”

The paper says broadcaster NBC quoted “a former senior intelligence official”, as claiming that “at the height of the drone programme in Pakistan in 2009 and 2010, as many as half of the strikes were classified as signature strikes”.

The Atlantic Council’s conclusion was damning: “Algorithms are essential and mostly reliable, but the world is highly vulnerable should they be misused, hacked, sabotaged or simply fail. Measures to protect society and government from algorithm risks start with recognition of… the risks they pose, the range of systems and types of algorithms that run them, and the vulnerability of these algorithm-powered systems.”

The same logic will apply to algorithms across government. They can be gamed or hacked, they can miscalculate, and they can exclude vital data – just like politicians, but without the ability to haul them in front of a committee.

Crowdsourcing and algorithms clearly offer a lot for lawmakers. As devices for consultation, they are terrific. But for the act of law-making? Even a hardcore advocate such as Tanja Aitamurto sees the limitations: “It’s important to note that we aren’t talking about crowd voting,” she says. “The crowd is one data point. It’s not about direct democracy.”


Philosopher AJ Ayer suggested a reason why we’ll always have political disagreements. Morality, said Ayer, is unprovable and untestable. There are dozens of competing models: Jews use the Torah, Benthamite utilitarians use the greatest happiness of the greatest number, human rights advocates rely on unchallengeable golden rules, and so on.

There is no way of resolving this. Government “will never run the way Silicon Valley runs because, by definition, democracy is messy”, declared former US president Barack Obama.

Continuing his address to the White House-hosted Frontiers Conference in October 2016, he said: “So sometimes I talk to CEOs, and they come in and they start telling me about leadership, and here’s how we do things. And I say: ‘Well, if all I was doing was making a widget or producing an app, and I didn’t have to worry about whether poor people could afford the widget, or I didn’t have to worry about whether the app had some unintended consequences – setting aside my Syria and Yemen portfolio – then I think those suggestions are terrific.’

“But the reason I say this is sometimes we get, I think, in the scientific community, the tech community, the entrepreneurial community, the sense of we just have to blow up the system, or create this parallel society and culture because government is inherently wrecked. No, it’s not inherently wrecked; it’s just government has to care for, for example, veterans who come home. That’s not on your balance sheet, that’s on our collective balance sheet, because we have a sacred duty to take care of those veterans. And that’s hard and it’s messy, and we’re building up legacy systems that we can’t just blow up.”

Nevertheless, Obama published a list of 23 recommendations for how the government can work with algorithms in Preparing for the Future of Artificial Intelligence. But the president’s caution was clear in the document.

Current methods of law-making are flawed. But moving to a new system is perilous. The noose of Locris served a purpose.