Artificial intelligence and machine learning have enabled widespread automation of government decision-making in Western liberal democracies. Yet vulnerable populations have been seriously harmed because of the difficulties they face in challenging automated decisions.
In Australia, the automated social security compliance system pejoratively dubbed “Robodebt” erroneously identified overpayments purportedly owed by social security beneficiaries. A flawed income-averaging methodology led to inflated debt calculations for over 600,000 individuals. Desperate people falsely accused of owing large debts rang the social security agency, Centrelink, only to be put on hold for eight hours.
These incorrect calculations led to grave repercussions for vulnerable, low-socioeconomic debtors, including severe mental health crises and, in some cases, suicide. In 2019, the Federal Court held that the basis for raising debts under the Robodebt program was irrational and therefore unlawful. Following this, the Australian government agreed to settle a class action on behalf of some 600,000 people affected by Robodebt, including $112 million in compensation, without any admission of liability. The judge presiding over the case decried the Commonwealth’s handling of its online compliance system as a “very sorry chapter in Australian public administration”. The Robodebt Royal Commission later lambasted the scheme as an “extraordinary saga” of “venality, incompetence and cowardice”.
Across the globe, in Michigan, Republican Governor Rick Snyder, a former venture capitalist who vowed to run government like a business, decided to put in place a new automated unemployment insurance system. Unemployment agency staff were laid off to achieve cost savings. In their place, a $46 million automated fraud detection system, the Michigan Integrated Data Automated System (MiDAS), was introduced in 2013. MiDAS was programmed to treat any data discrepancy as evidence of illegal conduct and to trigger an automatic default fraud finding against the individual. The program was a windfall for the Michigan budget, collecting over $60 million in just four years and bringing the state budget into the green.
However, it turned out that the system had a 93% error rate. Between 2013 and 2015, it falsely accused more than 40,000 Michigan residents of fraud, seized their tax refunds, garnisheed their wages and imposed civil penalties of 400% of the amount allegedly owed, plus interest. These people wrongfully lost access to unemployment payments and faced fines as high as $100,000.
As a result of being wrongfully accused of unemployment insurance fraud, 1,100 people filed for bankruptcy, losing jobs, homes and livelihoods. With a quadruple penalty plus interest, people fell into significant debt. Their credit ratings plummeted, leaving a stain on their records for seven years, and as a consequence they were unable to get jobs, rent houses, or buy cars or homes. Some attempted suicide.
In a 2017 settlement, Michigan’s unemployment agency agreed to stop using MiDAS’ automated functions without human review. The settlement also required the agency to reverse and refund certain fraud determinations. In 2022, the Michigan Supreme Court ruled that people in Michigan can sue state agencies for monetary damages for violations of their constitutional rights.
In the UK, the Post Office scandal saw 736 sub-postmasters wrongfully prosecuted between 2000 and 2014 for theft, fraud and false accounting at their branches. Post Office Ltd, a company wholly owned by the British government, aggressively pursued these sub-postmasters through the courts, resulting in numerous criminal convictions, at least four suicides, prison sentences for some (including a teenager and a pregnant woman), and bankruptcy and financial ruin for many.
Yet none of them had done anything wrong: the Horizon accounting software system, designed by Fujitsu to automate the post office network, had produced false accounting shortfalls due to bugs, errors and defects in the IT system. Widely decried as one of the biggest miscarriages of justice in British history, this multi-million pound IT disaster has spanned more than twenty years and generated widespread condemnation.
A question that immediately arises is: how did things go so wrong in these advanced liberal democracies, with their sophisticated bureaucracies and well-developed systems of checks and balances?
For each of these large-scale catastrophes to occur, deliberate choices were made at key decision points by elected politicians and public servants to procure and deploy faulty technology at scale on vulnerable populations. Hundreds of thousands of people were harmed, for years, as a result of these decisions.
These scandals demonstrate the need to scrutinise more closely the regulation of automated decision-making across government, including how such systems are designed, how AI technologies are rolled out, and whether appropriate auditing practices are put in place to mitigate potential ill effects. My new book, Combatting the Code: Regulating Automated Government Decision-Making in Comparative Context, examines the legal, political and managerial controls that are needed to prevent large-scale AI disasters from occurring.
The legal controls focus on judicial review for individual redress and the rationality of AI decisions, anti-discrimination law for challenging AI inputs, privacy legislation for the protection of personal data, and freedom of information legislation for transparency.
There are three main avenues for challenging government AI decision-making: challenges to the inputs, the outputs, and the use of data.
First, a person affected by an AI decision can challenge the inputs to the decision-making, in the form of faulty code and data. The person can argue, on anti-discrimination grounds, that the data fed into the AI system is biased and produces discriminatory outcomes. This has not proven to be a fruitful avenue for challenging automated decision-making, as anti-discrimination tests based on intention, causation and impact fail to capture the nuances of AI systems and are difficult to prove in an AI context.
Second, a person can challenge the outputs or results of the decision-making, where they can show that the computer produces decisions that are substantively flawed. This can be done through challenges to the rationality of decisions in the UK and Australia, or to due process in the US, which consider the relationship between the inputs and outputs of government decision-making. All of the jurisdictions covered have seen successful challenges on this basis.
Third, a person can challenge the use of the data, in terms of data sharing and retention, by claiming privacy protections over the government’s use, dissemination and retention of AI data. Bolstered by supranational requirements flowing from the General Data Protection Regulation (GDPR) and the European Convention on Human Rights (ECHR), the UK is the only jurisdiction examined in which this ground of challenge has succeeded, suggesting a need for specially calibrated laws elsewhere.
Political controls cover the operation of oversight bodies such as parliamentary committees, ombudsmen and auditors. These controls ensure that decisions are transparent and support the rationality of decision-making, for example through the process of auditing.
Managerial controls focus on the internal controls within agencies to regulate their AI decision-making. The actors that enforce managerial controls are the departments and agencies that sit within the executive branch, including central departments and line agencies. Managerial controls primarily emphasise efficiency and consistency of agency decision-making, but can also be used to enhance the transparency and participatory processes for government automated systems.
My book argues that this three-pronged approach of legal, political and managerial controls is required to address all dimensions of the design, implementation and auditing of automated technologies through both internal and external oversight processes, as well as prospective and retrospective measures of control.
In short, governments in liberal democracies have jumped on the AI bandwagon and have been enthusiastic adopters of new technologies. However, automation in government has often been trialled and deployed punitively on the most vulnerable sections of society: to predict and punish criminal recidivism and unemployment fraud, and to automate social security debt notices, rather than to predict insider trading by the wealthy.
The then United Nations Special Rapporteur on extreme poverty and human rights, Professor Philip Alston, warned of the risks of the digital welfare state, in which ‘digital data and technologies … are used to automate, predict, identify, surveil, detect, target and punish’. In this context, it is time to take up the gauntlet and address one of the most significant problems of our time: how to ensure that automated government decision-making is implemented in a fair, effective and accountable manner, without harming vulnerable citizens.