Should Algorithms Decide?

14 August 2018

Our topic this week: the ethics and morality of algorithms. Strictly speaking, an algorithm is not a moral agent. It’s just a step-by-step, foolproof, mechanical procedure for computing some mathematical function. Think of long division, for example. From this perspective it may seem a bit like a category mistake, or a contradiction in terms, to talk about the ethics or morality of algorithms.

But what’s really relevant isn’t the strict mathematical notion of an algorithm, but a broader notion, roughly coextensive with whatever it is that computers do. And as it happens, we are delegating more and more morally fraught decisions to computers and their algorithms. In the strict sense of the term ‘algorithm,’ there is no algorithm that would allow us to precisely compute the value of a human life in a mechanical, step-by-step, foolproof manner.

But that doesn’t stop us from programming a computer to assign weights to various factors (say, age, income, race, or health status), perform some calculations, and spit out a number that tells us whether we ought or ought not to give a person a potentially life-saving treatment. Many find the prospect of such a thing truly alarming. It’s hard to blame them for that. After all, how many of us would be willing to trust our own lives to a computer algorithm?
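To make the worry concrete, here is a minimal sketch of the kind of weighted scoring at issue. Everything in it (the factors, the weights, the threshold) is invented for illustration; no real triage system is being described.

```python
# Hypothetical illustration only: a weighted score standing in for the
# decision whether to give someone a treatment. All weights and the
# cutoff are invented for this sketch.

def treatment_score(age, income, health_status):
    """Combine weighted factors into a single number."""
    weights = {"age": -0.05, "income": 0.0001, "health_status": 2.0}
    return (weights["age"] * age
            + weights["income"] * income
            + weights["health_status"] * health_status)

THRESHOLD = 5.0  # hypothetical cutoff

def should_treat(age, income, health_status):
    """The morally fraught part: reduce a life to a yes/no comparison."""
    return treatment_score(age, income, health_status) >= THRESHOLD

print(should_treat(age=70, income=30000, health_status=4))  # True
```

The arithmetic is trivial; the moral weight lies entirely in what the factors and weights are made to stand for.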

But in fact, we do so all the time—every time we fly on an airplane, for example. The air traffic control system is basically the domain of the computer. It’s a domain in which human pilots are called upon to act only on an as-needed basis. And thanks to the wonders of modern computation, human intervention is, in fact, rarely needed. And the remarkable thing is that air travel is much safer as a result.

Now one might object that air traffic control is a relatively easy case. Basically, all the computers have to do is to keep the planes far enough apart to get them from point A to point B without bumping into each other. But we are approaching new frontiers of algorithmic decision making that are both much more computationally difficult and morally fraught than air traffic control. Think of the coming onslaught of self-driving cars. All those crowded city streets, with cars and pedestrians and cyclists traveling every which way in much closer proximity to each other than planes ever get. Do we really want computers to decide when a car should swerve and kill its passenger in order to save a pedestrian?  

Unfortunately, I doubt that it matters what any of us want. The day is coming, and fast, when the degree of computer automation in what we might call the ground traffic control system will rival or exceed the degree of automation in the air traffic control system. Some will say that the day is coming much too fast and in too many spheres. Computers are already in almost complete control of the stock market. They’re gradually taking over medical diagnosis. Some even want to turn sentencing decisions over to them. Perhaps things are getting out of control.   

But let’s not forget that we humans aren’t exactly infallible decision makers ourselves. Think of the mess that our human-centered sentencing system has become. Given the mess that judges have made, why should we trust them over a well-programmed computer? I grant that human judges have something that computers lack. Judges are living, breathing human beings, with a sense of duty, and responsibility, and empathy. But those very same judges can also be full of racial biases, hidden political agendas, and overblown emotional reactions. With a computer, we just encode the sentencing guidelines specified by the law into the algorithm and let the computer decide without fear or favoritism.
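In spirit, the “just encode the guidelines” picture looks something like the sketch below. The offense levels, ranges, and adjustments are hypothetical, invented for illustration rather than drawn from any real guideline.

```python
# Hypothetical sketch of rule-based sentencing: published guideline ranges
# applied mechanically, with no room for fear or favoritism. All numbers
# are invented for illustration.

SENTENCING_RANGES = {  # offense level -> (min months, max months)
    1: (0, 6),
    2: (6, 18),
    3: (18, 48),
}

def guideline_sentence(offense_level, prior_convictions):
    low, high = SENTENCING_RANGES[offense_level]
    # A fixed, published adjustment rather than a judge's gut feeling.
    adjusted = low + 3 * prior_convictions
    return min(adjusted, high)

print(guideline_sentence(offense_level=2, prior_convictions=1))  # 9
```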

If only it were really that simple, though. Unfortunately, it is not. Until we develop fully autonomous, fully self-programming computers, we’re stuck with human beings, with our racial biases and hidden agendas, doing the bulk of the programming. And although some programmers may think of themselves as young Mr. Spocks—all logic and no emotion—in the end they are just humans too. And they are unfortunately just as prone to bias and blinders as the rest of us. Nor can we easily train them to simply avoid writing their own biases into their algorithms. Most humans aren’t even aware of their biases.

In the old days, the days of so-called good old-fashioned AI, this might not have been such a big deal. Even if we couldn’t eliminate the biases from the programmers, we could still test, debug, and tweak their programs. By contrast, you can’t rewrite a judge’s neural code when you discover he’s got this thing against black people. If you think about it that way, who wouldn’t still take the computer over the judge any day?

Unfortunately, good old-fashioned AI programming (GOFAI)—the kind where you had to explicitly program every single line of code in order to stuff humanlike knowledge into the computer by “brute force”—is quickly becoming a thing of the past. GOFAI is rapidly giving way to machine learning algorithms in many, many spheres. With this kind of computational architecture, instead of trying to stuff the knowledge into the computer line by line, you basically give the computer a problem and let it figure out how to solve the problem on its own. In particular, you give the machine a bunch of training data, and it tries to come up with the right answers. When it gets a wrong answer, you (or the world) give it an error signal in response. The network then adjusts its own weights, basically without human intervention, until it gets the right answers on all the data in the training set. Then we turn it loose on the world to confront brand new instances of the problem category not in the original training set. It’s a beautiful and powerful technique.
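As a toy illustration of that loop, here is a minimal sketch of learning from error signals: a single linear unit (a perceptron) nudging its weights whenever it misclassifies a training example. Real systems are vastly larger and subtler; this is just the shape of the idea.

```python
import random

def train(examples, n_features, learning_rate=0.1, epochs=100):
    """Adjust weights from error signals until the training set comes out right."""
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        errors = 0
        for features, label in examples:  # label is 0 or 1
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction  # the "error signal"
            if error != 0:
                errors += 1
                # Nudge each weight toward the answer it should have given.
                weights = [w + learning_rate * error * x
                           for w, x in zip(weights, features)]
                bias += learning_rate * error
        if errors == 0:  # every training example now classified correctly
            break
    return weights, bias

# Toy, linearly separable problem: does a point lie above the line y = x?
points = [(random.random(), random.random()) for _ in range(200)]
examples = [((x, y), 1 if y > x else 0) for x, y in points]
weights, bias = train(examples, n_features=2)
```

Notice that no one ever writes down a rule; the “knowledge” ends up smeared across the learned weights.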

But now suppose you’ve got some tech bros in Silicon Valley training a machine to do, say, face recognition. Maybe they pick a bunch of their friends to be the training set. We can be pretty sure that training set won’t be representative of the population at large! This means that, at the very least, if we don’t want to introduce biases into the network’s representation of the problem domain, we have to make sure to use statistically sound methods to design our training sets. But that, as we discuss in the episode, is much easier said than done, at least in the general case. That’s because the only data reasonably available to us, in, for example, the case of sentencing decisions, may be riddled with the effects of a history of bias.
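A first, crude sanity check is simply to compare the demographic makeup of a training set against the population the system will be used on. The group labels and shares below are hypothetical; and note that this check catches skewed sampling, not the deeper problem of historically biased labels.

```python
from collections import Counter

# Hypothetical sanity check: how far does a training set's demographic mix
# depart from the target population's? Group labels and shares are invented.

POPULATION_SHARE = {"group_a": 0.60, "group_b": 0.30, "group_c": 0.10}

def representation_gap(training_labels):
    counts = Counter(training_labels)
    total = len(training_labels)
    return {group: counts.get(group, 0) / total - share
            for group, share in POPULATION_SHARE.items()}

# A training set drawn from "a bunch of friends" might look like this:
skewed = ["group_a"] * 95 + ["group_b"] * 5
print(representation_gap(skewed))
# roughly {'group_a': +0.35, 'group_b': -0.25, 'group_c': -0.10}
```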

But there’s an even harder problem. These networks can sometimes be inscrutable black boxes. That’s because the network basically decides on its own how to partition and represent the data and what weights to assign to what factors. And these “decisions” may be totally opaque to their human “teachers.” That means that if something goes wrong, we can’t even get in there and debug and tweak the network, as with old-fashioned AI. At least with those systems, we knew exactly what the algorithm was supposed to be doing.
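The contrast shows up even in the toy perceptron sketched above: with GOFAI, the rule is the code, and we can read it; with a trained network, all there is to read is a list of numbers. A hedged illustration:

```python
# With GOFAI, the rule IS the code, so we can inspect and debug it directly:
def gofai_risk(age, prior_convictions):
    return "high" if prior_convictions > 2 and age < 25 else "low"

# With a trained network, inspection yields only learned numbers, e.g.:
#   weights = [3.17, -2.96], bias = -0.10
# (illustrative values of the kind the toy perceptron above might learn).
# The training process "decided" on them, and it left no stated rationale.
```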


Now I don’t want to sound like a Luddite. I recognize the decided upsides of moving from human to automated decision making. And even though I still have a soft spot for old-style knowledge representation from the heyday of GOFAI, I appreciate the amazing success of newfangled machine learning architectures. Still, I’m not really in a hurry to farm out too much of our moral agency to machines. I think that before rushing pell-mell into the breach, we need to slow down and think this through much more systematically. Perhaps you can help us out.

Comments (5)


Michael Haddon

Wednesday, August 15, 2018 -- 9:50 PM

During your presentation on the Ethics of Algorithms, reference was made to the difference in arrests for drugs between blacks and whites, while rates of drug use are approximately equal. This was claimed to demonstrate a bias in the way drug arrests occur. Having recently read Thomas Sowell’s Discrimination and Disparities, I recalled purchasing drugs in my youth. Most purchases were made in someone’s house, or perhaps in their parents’ garage. ‘Someone,’ in this case, being a white dealer. I never heard of a white dealer selling on the streets. Black drug dealers on the sidewalks were at the time rather obvious in certain parts of Oakland, where I grew up. I presume you’re much more likely to get arrested selling on the street than in your parents’ garage. (The statute of limitations has long since run on my youthful indiscretions.)

I’m not sure y’all talk to the folks at the Hoover Institution, but a presentation regarding how you determine bias with Thomas Sowell as a guest would certainly be lively.

Beloved'lil'king

Monday, August 20, 2018 -- 11:05 AM

The greatest concern I have regarding the implementation of algorithms or algorithmic mechanisms to expedite the current criminal justice system (CJS) comes in at least two parts:
A. One (1) and negative one (-1): No matter how great you could ever construct any algorithm or how impervious you might build a tool to wield your altogether flawless algorithm even with some purposefully double-blind mechanisms to offer unfettered constraint where may be applicable; even in that, there are always a minimum of two separate variables required to compute anything:
1. Therefore, even if, for this example, the social wisdom or intellectual value of the algorithm was 100,000,000,000 (100 billion) times the value of anything it judged/digested to social interpretation to achieve a better, more balanced CJS, the ultimate solution will be equally contingent upon the second variable, that is, the thing being judged/examined.
2. Because the notion of the algorithm in construct is to alleviate the burden upon CJS to expedite ‘more’ or somehow ‘better’ justice upon the people(s), whatever the concern in question to be judged, the total second variable value is not an ‘absolute value’ because people are always hurting out of hurt and criminalized by crimes so the second variable is essentially negative.
3. While the algorithm may not see bias in the way people see bias, because of the inherently flawed dataset where all people will develop at slightly different rates and their experiences, even those of equal social force/value, will cause varying impressions which may cause differing effects; any algorithmic mechanisms applied will observe and apply a simple value to each circumstance without the understanding to know which evolutionary processes have or have not been impressioned upon.
4. Talent is fashioned to a world lottery system but opportunity is always cultivated by the more just society. For this reason, people selected from a group of peers to offer reasoned and compassionate verdict to then be further examined and sentence/justice applied is the best solution for equitable fair-minded CJS.
5. Variable 1 cannot be infinite fashioned from the metaphysical realm despite near-infinite capability. Variable 2 always drawn from a broken/negative/selfish/criminal world.
B. The current burden of the CJS was cultivated by racial and discriminatory prejudice(s), e.g. “redlining” and/or lack of social mobility for all peoples/creeds/nations/cultures under the jurisdiction of the CJS, and moreover this discrimination has been taught and/or instilled by the very educational institutions established to protect and perpetuate a notion of a free, indiscriminate CJS. Turning to any algorithmic mechanism or machine learning to expedite the solution is like cheating on a test, or turning to the Cat in the Hat for some magical non-logical solution, rather than apologizing and putting in the more difficult work of social equality for all under the jurisdictional umbrella. It is analogous to taking Hydroxycut instead of practicing exercise and then wondering why, after it seemed to be working, your heart suddenly fails. Trial/Temptation to Perseverance to Proven Character to Hope. Everyone needs Hope.

Beloved'lil'king

Tuesday, August 21, 2018 -- 8:03 AM

IF, bent upon the application of algorithms to vacate personal responsibility, you must first account for:
A. Physiologically dormant hibernation disorder(s) and the varying rates under which those disorders are prompted for expression based upon measured social interaction.
B. Generationally dormant psychological disorder(s), which may only be expressed or ‘activated’ every other or third generation, or sometimes even more subtly, down to the fourth generation, and be able to quantify their correlated intermittent social interactions.
C. Determining social values and implementation of the attributed or ‘perfected’ person to become cannot be referenced by any singularity or steady control because with each passing generation the history, carved in battle and selfish ambition usually almost always, reset(s) and complicates the variables in question by manipulating the control through the progression of time. In other words, no one society ever lasts long enough to establish a proper control to identify the ‘perfected’ person.
D. No one generation is ever ‘perfect’, not a single one. There is no ah-ha and now all past history is irrelevant. Every culture/tongue/creed/people(s) have value. The perfect was created and then because of selfish illogical behavior was marred and defaced beyond recognition from what purity of blessed character was originally established by your Maker. For this reason, what appears as petty or under the guise of pride to be wasteful by some, that is to Love all people indiscriminately without measure and help one another in loving-kindness and compassionate support; this is actually the accurate metric for reestablishing the foundation of humanity by imitating Jesus, who is the Christ, Son of God and Son of Man. He, Jesus, brought the essential road map and compass down to teach those, both male and female, He Loved how to restore the Joy of Salvation and Peace inexplainable which overflows from a life Honoring the Love of God.
E. Virtue: The nature of all virtue is divine. Manipulated or warped virtue tainted by selfishness is unholy and cannot serve any benevolent purpose. In other words, generosity is not generous if you carve for yourself some personal (in)tangible gain. But when you seek to love one another in effort to Honor the Love of God do so in secret no showing or boasting but work so that others do not notice because God the Father, who sent us his only Son of God, Jesus, watches and takes note of all things even to reward each according to what he/she/they have done according to His Mercy.
E1. The virtues were provided for to resonate within person(s) in symmetrical extension from the chief virtue, Love, as it has been made intelligible to us. Within Love there is Justice, Righteousness, Loving-Kindness, Compassion, Faithfulness established in Holy promise by God, your God.
E2. By extension from the Perfect Holy Pious Love down to the created people(s), to ascend into the divine order you must first harness at least two virtuous faculties and ‘back’ into their higher parent virtue. This is as though climbing a ladder not by sight but by faith, in trust for what there is good reason to hold is all Goodness, Righteousness and Truth. The lens of algorithm(s) is mathematics, and it cannot accurately quantify the condition of the heart.

Tim Smith

Saturday, March 27, 2021 -- 2:15 PM

Algorithms can and do quantify conditions of the heart. Virtue is enacted by humans. Previous generations have never faced machine learning or artificial intelligence. Perfection is not the standard. Genetic or epigenetic disorders can be screened by algorithms.

Tim Smith

Saturday, April 10, 2021 -- 9:14 AM

Algorithms are not the problem, nor are they unethical by nature or use. It is the height of rationality to use an algorithm to solve uncertainty. Much of what we consider inevitable without proof is likely not certain at all. Though algorithms often don't provide certainty in their answers, they can prove the best solutions possible to philosophical questions that themselves defy logical proof. These are some of the most interesting questions in philosophy. What is rational? What is the nature of our thought? How to Live?

COMPAS, the public safety assessment (PSA) program featured in Holly's report, was the subject of a ProPublica story in 2016. This specific program was accurate in predicting recidivism across Black and White defendants in 61% of cases. However, in the 39% of cases where it failed, White defendants were twice as likely to be classified as low risk and then go on to commit new crimes, while Black defendants rated as high risk were twice as likely not to recidivate. This injustice is a huge problem. I'm not sure how the judicial system did before using the program, but it seems this reflects the training set. If algorithms at best only point this out, we shouldn't use them to reinforce these models.
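To see how a single accuracy figure can hide mirror-image error rates, here is a hypothetical confusion-matrix sketch; the numbers are invented for illustration, not ProPublica's.

```python
# Invented figures showing equal accuracy alongside unequal error rates.

def error_rates(tp, fp, tn, fn):
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "false_positive_rate": fp / (fp + tn),  # rated high risk, didn't reoffend
        "false_negative_rate": fn / (fn + tp),  # rated low risk, did reoffend
    }

group_1 = error_rates(tp=300, fp=200, tn=400, fn=100)
group_2 = error_rates(tp=100, fp=150, tn=600, fn=150)
print(group_1)  # accuracy 0.70, FPR ~0.33, FNR 0.25
print(group_2)  # accuracy 0.70, FPR 0.20,  FNR 0.60
```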

The case of Loomis v. Wisconsin is similarly discomfiting, as the judge in this case used COMPAS, a system intended for bail hearings only, as a sentencing tool.

No one is going to deny the goodness of a computer. We are all too immersed in the standard of living computers provide for us all. Something is alluring about algorithms and computational artifice.

Iris Berent's 'The Blind Storyteller' (she was a guest on PT's The Examined Year: 2020) posits dualism as innate human behavior. Berent's argument is based on her own and cited studies, reflecting experimental philosophy at its best. I would extend this innate dualism to an uninterrogated and unhealthy surrender of humanity to the power of models, both internal and external, as suggested and validated by algorithms.

In addition to exposing existing models of bias and truth, algorithms can also create false models beyond the human mind's capacity to understand. Inaccurate AI-driven models of thought and action are the best argument not to surrender matters of justice, ethics, and morality to an algorithm. I wish Ken and Josh had discussed this, along with the capacity of algorithms to derive truth beyond human understanding.

I share Ken's resignation that this human surrender to the coming singularity is unavoidable. A few brains are working on algorithmic safety as a project. The keys to this safety are transparency, fairness, and accurate representation of human qualia. Built on these three pillars, I see a safe path for humanity. Barring this, we need to bow down to the one we serve. Regardless, we are going to get the one we deserve.

Cross-post with the 2021 show: https://www.philosophytalk.org/shows/ethics-algorithms