Lies, damned lies, and AI

As algorithmic decision-making spreads across a broadening range of policy areas, it is beginning to expose social and economic inequities that were long hidden behind “official” data.

The question now is whether we will use these revelations to create a more just society.

CAMBRIDGE – Algorithms are as biased as the data they feed on. And all data are biased. Even “official” statistics cannot be assumed to stand for objective, eternal “facts.” The figures that governments publish represent society as it is now, through the lens of what those assembling the data consider relevant and important. The categories and classifications used to make sense of the data are not neutral. Just as we measure what we see, so we tend to see only what we measure.

As algorithmic decision-making spreads to a wider range of policymaking areas, it is shedding a harsh light on the social biases that once lurked in the shadows of the data we collect. By taking existing structures and processes to their logical extremes, artificial intelligence (AI) is forcing us to confront the kind of society we have created.

The problem is not just that computers are designed to think like corporations, as my University of Cambridge colleague Jonnie Penn has argued. It is also that computers think like economists. An AI, after all, is as infallible a version of homo economicus as one can imagine: a rationally calculating, logically consistent, ends-oriented agent capable of achieving its desired outcomes with finite computational resources. When it comes to “maximizing utility,” it is far more effective than any human.

“Utility” is to economics what “phlogiston” once was to chemistry. Early chemists hypothesized that combustible matter contained a hidden element – phlogiston – that could explain why substances changed form when they burned. Yet, try as they might, scientists never could confirm the hypothesis. They could not track down phlogiston for the same reason that economists today cannot offer a measure of actual utility.

Economists use the concept of utility to explain why people make the choices they do – what to buy, where to invest, how hard to work: everyone is trying to maximize utility in accordance with their preferences and beliefs about the world, and within the limits posed by scarce income or resources. Despite not existing, utility is a powerful construct. It seems only natural to suppose that everyone is trying to do as well as they can for themselves.

Moreover, economists’ notion of utility is born of classical utilitarianism, which aims to secure the greatest amount of good for the greatest number of people. Like modern economists following in the footsteps of John Stuart Mill, most of those designing algorithms are utilitarians who believe that if a “good” is known, then it can be maximized. But this assumption can produce troubling outcomes. Consider, for example, how algorithms are being used to decide whether prisoners are deserving of parole.
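As a side note for the mathematically inclined – this formalization is standard microeconomics, not something from the article itself – the “maximizing utility within limits” that economists have in mind is usually written as a constrained optimization problem:

```latex
% Textbook consumer problem (a standard formalization, not from the article):
% choose a bundle of goods x to maximize utility u, given prices p and income m.
\[
  \max_{x \ge 0} \; u(x)
  \quad \text{subject to} \quad
  p \cdot x \le m
\]
```

Everything on the constraint side – prices and income – is observable; the objective, the utility function itself, never is. That is precisely the phlogiston point.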

An important 2017 study finds that algorithms far outperform humans in predicting recidivism rates, and could be used to reduce the “jailing rate” by more than 40% “with no increase in crime rates.” In the United States, then, AIs could be used to reduce a prison population that is disproportionately black. But what happens when AIs take over the parole process and African Americans are still being jailed at a higher rate than whites?

Highly efficient algorithmic decision-making has brought such questions to the fore, forcing us to decide precisely which outcomes should be maximized. Do we want merely to reduce the overall prison population, or should we also be concerned about fairness? Whereas politics allows for fudges and compromises to disguise such tradeoffs, computer code requires clarity.
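To make that last point concrete, here is a minimal, hypothetical sketch – mine, not taken from the 2017 study or the article – of what “clarity” means in code. All names and numbers are illustrative; the point is that an algorithmic parole policy cannot avoid stating an objective, including an explicit weight on fairness.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    predicted_risk: float  # model's estimated probability of re-offending
    group: str             # demographic group label, e.g. "A" or "B"

def release_rate(candidates, decisions, group):
    # Fraction of candidates in `group` who are released (decision == 1).
    flags = [d for c, d in zip(candidates, decisions) if c.group == group]
    return sum(flags) / len(flags) if flags else 0.0

def policy_score(candidates, decisions, fairness_weight):
    """Score a set of release decisions (1 = release, 0 = detain); higher is better.

    `fairness_weight` makes the tradeoff explicit: 0.0 means "only minimize
    expected recidivism among those released"; larger values increasingly
    penalize unequal release rates across groups.
    """
    expected_recidivism = sum(
        c.predicted_risk for c, d in zip(candidates, decisions) if d
    )
    parity_gap = abs(
        release_rate(candidates, decisions, "A")
        - release_rate(candidates, decisions, "B")
    )
    return -expected_recidivism - fairness_weight * parity_gap

# Illustrative usage with made-up data:
pool = [Candidate(0.10, "A"), Candidate(0.60, "A"),
        Candidate(0.15, "B"), Candidate(0.55, "B")]
plan = [1, 0, 1, 0]  # release the two low-risk candidates
print(policy_score(pool, plan, fairness_weight=5.0))
```

Two designers who set fairness_weight to 0 and to 10 are encoding different answers to the question above; the code refuses to leave the choice implicit, in a way that political compromise does not.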

That demand for clarity is making it harder to ignore the structural sources of societal inequities. In the age of AI, algorithms will force us to recognize how the outcomes of past social and political conflicts have been perpetuated into the present through our use of data.

Thanks to groups such as the AI Ethics Initiative and the Partnership on AI, a broader debate about the ethics of AI has begun to emerge. But AI algorithms are, of course, just doing what they are coded to do. The real issue extends beyond the use of algorithmic decision-making in corporate and political governance, and strikes at the ethical foundations of our societies. While we certainly need to debate the practical and philosophical tradeoffs of maximizing “utility” through AI, we also need to engage in self-reflection. Algorithms are posing fundamental questions about how we have organized social, political, and economic relations to date. We now must decide if we really want to encode current social arrangements into the decision-making structures of the future. Given the political fracturing currently occurring around the world, this seems like a good moment to write a new script.
