The woes of government algorithms


Monika Sarder, Monash University

Algorithmic decision-making has enormous potential to do good. From identifying priority areas for first response after an earthquake to flagging those at risk of COVID-19 within minutes, its applications have proven hugely beneficial.

But things can go drastically wrong when decisions are trusted to algorithms without ensuring they adhere to established ethical norms. Two recent examples illustrate how government agencies are failing to automate fairness.

1. The algorithm doesn’t match reality
This problem arises when a one-size-fits-all rule is implemented in a complex environment.

The most recent devastating example was Australia’s Centrelink ‘robodebt’ debacle. Welfare payments made on the basis of self-reported fortnightly income were cross-referenced against an estimated fortnightly income, calculated as a simple average of the annual earnings reported to the Australian Tax Office. Any discrepancy was used to auto-generate a debt notice, without further human scrutiny or explanation.

This assumption is at odds with how Australia’s highly casualised workforce is actually paid. For example, a graphic designer who was unable to find work for nine months of the financial year but earned $12,000 in the three months before June would have had an automated debt raised against her. This is despite no fraud having occurred, and this scenario constituting exactly the kind of hardship Centrelink is designed to address.
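The arithmetic behind that failure is simple enough to sketch. The figures below follow the graphic designer example above; the calculation is an illustration of the averaging assumption, not Centrelink’s actual code.

```python
# Illustrative sketch of the 'robodebt' averaging assumption.
# A worker earns $12,000 for the year, all of it in roughly the
# final 3 months (about 7 fortnights), and truthfully reports $0
# for the 19 fortnights she was unemployed.

FORTNIGHTS = 26
annual_income = 12_000
worked_fortnights = 7

# The averaging assumption: income spread evenly across the year.
averaged = annual_income / FORTNIGHTS              # ~$461.54 per fortnight

# For every fortnight she truthfully reported $0, the averaged figure
# makes it look as though she under-reported her income.
zero_income_fortnights = FORTNIGHTS - worked_fortnights   # 19
spurious_shortfall = zero_income_fortnights * averaged

print(round(averaged, 2))            # 461.54
print(round(spurious_shortfall, 2))  # 8769.23 -- an apparent 'debt'
                                     # despite entirely honest reporting
```

No fraud occurs anywhere in this scenario; the spurious shortfall is produced entirely by assuming lumpy income was earned evenly.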

The scheme ultimately proved to be a disaster for the Australian government, which must now pay back an estimated $721 million in wrongly issued debts after the Federal Court ruled the scheme unlawful. More than 470,000 debts were wrongfully raised by the scheme, primarily against low income earners, causing significant distress.

2. Inputs embed racism
The stunning scenes of police violence in US cities have underscored the extent to which systemic racism influences law and order processes in the United States, from police patrols right through to sentencing. Black individuals are more likely to be stopped and searched, more likely to be arrested for low-level infractions, more likely to have prison time included in plea deals, and incur longer sentences for comparable crimes when they do go to trial.

This systemic racism has been repeated, more insidiously, in algorithmic processes. One example is COMPAS, a controversial ‘decision support’ system designed to help parole boards in the United States decide which prisoners to release early, by providing a probability score of their likelihood of reoffending.

Rather than rely on a simple decision rule, the algorithm used a range of inputs, including demographic and survey information, to derive a score. The algorithm did not use race as an explicit variable, but it did embed systemic racism by using variables that were shaped by police and judicial biases on the ground.

Those being assessed were asked a range of questions about their interactions with the justice system, such as the age at which they first came into contact with police, and whether family or friends had previously been incarcerated. Their answers fed into the final ‘risk’ score.

As Cathy O’Neil put it in her book Weapons of Math Destruction: “It’s easy to imagine how inmates from a privileged background would answer one way and those from tough inner streets another.”
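The actual COMPAS model is proprietary, but the proxy effect is easy to demonstrate with a toy score. The weights and inputs below are entirely made up for illustration; the point is that race never appears as a variable, yet every input reflects how heavily a neighbourhood is policed rather than the individual’s own conduct.

```python
# Hypothetical risk score (NOT the real COMPAS model, which is
# proprietary) showing how survey inputs act as proxies for race
# even when race itself is excluded.

def risk_score(age_first_police_contact, relatives_incarcerated,
               prior_arrests):
    """Toy linear score: higher means judged more likely to reoffend."""
    score = 0.0
    # Earlier first contact with police raises the score -- but in
    # heavily policed areas, first contact comes earlier regardless
    # of behaviour.
    score += max(0, 25 - age_first_police_contact) * 0.4
    # Family incarceration reflects sentencing disparities, not the
    # individual's own conduct.
    score += relatives_incarcerated * 1.5
    # Arrest counts embed biased stop-and-search rates.
    score += prior_arrests * 1.0
    return score

# Two people with identical personal conduct, different neighbourhoods:
privileged = risk_score(age_first_police_contact=24,
                        relatives_incarcerated=0, prior_arrests=0)
over_policed = risk_score(age_first_police_contact=14,
                          relatives_incarcerated=2, prior_arrests=3)

print(round(privileged, 1), round(over_policed, 1))  # 0.4 10.4
```

The second person scores dramatically higher for reasons that trace back to policing patterns, not to anything they personally did.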

What is going wrong?
Using algorithms to make decisions isn’t inherently bad. But it can turn bad if the automated systems used by governments fail to incorporate the principles real humans use to make fair decisions.

People who design and implement these solutions need to focus not just on statistics and software design, but also ethics. Here’s how:

– consult those who are likely to be significantly affected by a new process before it is implemented, not after

– check for potential unfair bias at the process design phase

– ensure the underpinning rationale of the decisions is transparent, and the outcomes are relatively predictable

– make a human accountable for the integrity of decisions and their consequences

It would be ideal if the developers of social policy algorithms put these principles at the core of their work. But in the absence of accountability in the tech sector, numerous laws have been passed, or are being passed, to deal with the problem.

The European Union’s General Data Protection Regulation (GDPR) states that algorithmic decisions that have significant consequences for any person must involve a human review component. It also requires organisations to provide a transparent explanation of the logic used in algorithmic processes.

The US Congress, meanwhile, is considering a draft Algorithmic Accountability Act that would require institutions to consider “the risks that the automated decision system may result in or contribute to inaccurate, unfair, biased, or discriminatory decisions impacting consumers”.

Legislation is a solution, but it is not the best one. We need to develop and embed ethics and norms around decision-making into organisational practice. For this we need to boost the public’s data literacy, so they have the language to demand accountability from the tech giants to which we are all increasingly beholden.

A transparent and open approach is vital if we are to make the most of the technologies on offer in our data-rich world, while retaining our rights as citizens.

Monika Sarder, Senior Strategic Analyst, Monash University

This article is republished from The Conversation under a Creative Commons licence. Read the original article.
