The Ethics of Algorithms

Sunday, April 18, 2021
First Aired: Sunday, August 12, 2018

What Is It

Recent years have seen the rise of machine learning algorithms surrounding us in our homes and back pockets. They're increasingly used in everything from recommending movies to guiding sentencing in criminal courts, thanks to their being perceived as unbiased and fair. But can algorithms really be objective when they are created by biased human programmers? Are such biased algorithms inherently immoral? And is there a way to resist immoral algorithms? Josh and Ken run the code with Angèle Christin from Stanford University, author of Metrics at Work: Journalism and the Contested Meaning of Algorithms.

Transcript

Josh Landy  
Would you be willing to trust your life to an algorithm?

Ken Taylor  
Well, aren't computers less prone to bias than human beings?

Josh Landy  
Do we really want to turn over our moral agency to software?

Comments (16)


Harold G. Neuman

Friday, August 10, 2018 -- 10:34 AM

There are many sorts of algorithms beyond machine learning and, in fact, long previous thereto. Inasmuch as algorithms are tools, systems, and protocols for apprehending and solving problems with some level of comfort and reliability, your auto mechanic has an established and comfortable 'algo' for repairing a worn timing belt mechanism on your car. If you have been paying attention to that strange noise, perhaps a whirring roar coming from your engine compartment, he will gladly disassemble the mechanism and install new parts for somewhere around $1,000-$1,500. You might be inclined to think this PRICE, uh, immoral. But it is the cost of his services, and probably better than buying a new car? Morality is a slippery subject. As are web journalism and criminal justice. There is something worth reading on page 28 of John Rawls' A Theory of Justice (original edition)---not because it is ABOUT morality, but because it contrasts nicely with what I have said about slippery subjects generally. The paragraph begins with "Justice denies that the loss of freedom for some is made right..." and ends with "...are not subject to political bargaining or to the calculus of social interests." Check it out. Justice and morality are related, and if it walks, talks, thinks and acts, it must be human, don't you think?

Tim Smith

Saturday, April 10, 2021 -- 1:58 PM

Harold,

As was brought up in the most recent episode on John Rawls, the relation between justice and morality in his work sits behind the "veil of ignorance," where a person would frame justice so as to allow their moral sense volition under whatever roles they might have the luck or misfortune to be born into. In this sense, morality relates to justice, but morality is subordinate to it.

Here is the entire paragraph which you reference; the passage you are quoting is toward the end.

"It has seemed to many philosophers, and it appears to be supported by the convictions of common sense, that we distinguish as a matter of principle between the claims of liberty and right on the one hand and the desirability of increasing aggregate social welfare on the other; and that we give a certain priority, if not absolute weight, to the former. Each member of society is thought to have an inviolability founded on justice or, as some say, on natural right, which even the welfare of everyone else cannot override. Justice denies that the loss of freedom for some is made right by a greater good shared by others. The reasoning which balances the gains and losses of different persons as if they were one person is excluded. Therefore in a just society, the basic liberties are taken for granted and the rights secured by justice are not subject to political bargaining or to the calculus of social interests." - A Theory of Justice – Chapter I Justice as Fairness 6. Some Related Contrasts – which starts on page 24 of the revised edition.

In his preface to the revised edition, Rawls clarifies his concept of utilitarianism as 'the principle of (average) utility,' which replaced his use of 'the difference principle.' In no reference frame does he discuss 'doing the best you can' regarding justice or science overreach, as you most recently do below. There is nothing slippery here, especially in light of Rawls's revisions and later works.

cohenle

Sunday, August 12, 2018 -- 11:42 AM

This is not an either/or question. Algorithms can be quite helpful in helping humans make decisions. I am a physician and rely on algorithms to point out when I may have forgotten something or to help with clinical judgment. A judge can similarly use an algorithm, which may point out when his/her decision is way off base or not.

Anotherstudent

Thursday, April 22, 2021 -- 11:12 PM

I agree that algos can be a useful tool, but doesn't the medical community already have one called "standard of care" that ties physicians' hands to the decisions of insurance companies?

I don't believe a human body should be considered like a piece of machinery. One doctor can have 10 patients with the same disease, but depending on their individual bodies and the cause of that disease, they may require treatment in different ways. If we algorithm the medical system more than we already have, we might as well all just be cattle waiting for the slaughter.

We need to stop dehumanizing humanity.

[edit] Just heard this episode tonight and just realized it aired in 2018. So maybe the conversation would be different now.

Tim Smith

Sunday, April 25, 2021 -- 7:50 AM

Moo!

Not too different, and way, way more germane. Your point is well taken. Standard of care isn't quite AI or GOFAI, but algorithm it is.

Cohenle is fooling themselves in thinking the algorithm is informing rather than dictating in the long run.

On the flipside, I have avoided some bills by looking up my warts before biting the medical apple.

Let's see what the humans of 2024 have to say. Hopefully that will be different. Complacency is not going to get it done.

chwarden

Monday, August 20, 2018 -- 2:55 PM

Algorithms go thru versions throughout their lives: 1.0, 1.1, 1.2, 2.0, 2.1, 2.2, 3.0, 3.1, 3.2 and so on forever. This means that using algorithms for legal purposes must deal, up front and before use, with what to do when mistakes happen. What happens when version 1.1 says you released people who should still be in prison, but who were released based on prior analysis with 1.0? What happens when version 2.0 says you re-imprisoned someone after 1.1 who should actually be free? Algorithms may work well in medicine, because medicine is explicitly experimental, but law does not seem so welcoming to correction of mistakes. My impression is that even though version 10 and up might be fair and blind to race, gender, and other group identities, getting there is not currently possible for law. Continuous, accurate, open collection of data is required for improvement, and I simply think that egos, money, and traditions will prevent continuous quality improvement for legal uses of algorithms.

Harold G. Neuman

Saturday, March 13, 2021 -- 5:15 AM

Other comments on this question are well-received and insightful, IMHO. I regard algorithms on what I think is a more fundamental level. Machine learning is a human invention, a foundation of the AI phenomenon. Morality and ethics are, likewise, human constructs, useful or not so much, depending upon who is talking about, applying, or enforcing them, and just what the motivation(s) of those factions may be. One claim I have heard and considered says that algorithms are TOOLS. That seems to comport with the discussion. Thinking about another PT post, would/should we think of them as an overreach of science? I'm not certain that could make sense to anyone other than a die-hard techno-phobe: when we speak of tool-making, we are usually talking about improvement of one sort or another. Should someone elect to use a tool for nefarious or self-serving reasons, that is not a concern of the tool, which bears no ethical or moral responsibility whatsoever.

Granted, there are layers of culpability and responsibility. Those rest on the tool-makers and the tool-users.
Airliners crash, lives and property are lost. Boeing still builds them and we (some of us) still fly... I strongly suspect this will continue, unless some better algorithm comes along...

Harold G. Neuman

Saturday, March 13, 2021 -- 7:01 AM

A few last remarks. Let's assess the proliferation of social media over the last half-dozen years or so. Creators of this system likely had good intentions as well as a healthy sense of capitalism. Fair enough. But there are always some who will exploit a good thing to further motives that may be questionable. Trolling and bullying emerged, corrupting the tool for others. Worse than this, an elected official repeatedly abused privilege. When the late Christopher Hitchens avowed that religion poisons everything, he was only looking at ONE iceberg. Or was he? Algorithms are only as good (or bad) as we make them.

Harold G. Neuman

Tuesday, March 23, 2021 -- 8:13 AM

I wanted to say something more on algos and science overreach. First, I will pose some questions:
1. Is language an algorithm, or does it facilitate our development of them?
2. Is science overreach harmful, or is that just an example of doing the best you can, with what you have and what you know?
See, to me they are connected. And again, I am not sure where ethics ends and utility begins. Let's consider the use of an unapproved vaccine for emergency application. There is an acronym for that which I knew when getting a pandemic vaccination... EUA or something like that. So here is an algorithm which has definite ethical and moral implications. Worldwide. While some of us were/are hesitant to receive the vaccine(s), others have followed the science and embraced an alternative to sickness and/or death. Utility trumps risk, seems to me. Pragmatism trumps ethics/morality(?)...

Harold G. Neuman

Thursday, April 8, 2021 -- 1:23 PM

...and, lastly, reality trumps the Duke of Denial. Hopefully, as with RMN, we won't have him to kick around anymore (???????????????). If you don't remember RMN, Google it---the initials ought to be all you need.

Tim Smith

Saturday, April 10, 2021 -- 2:05 AM

Algorithms are not the problem, nor are they unethical by nature or use. It is the height of rationality to use an algorithm to resolve uncertainty. Much of what we consider inevitable without proof is likely not certain at all. Though algorithms often don't provide certainty in their answers, they can offer the best solutions available to philosophical questions that themselves defy logical proof. These are some of the most interesting questions in philosophy. What is rational? What is the nature of our thought? How to live?

COMPAS, the risk-assessment program featured in Holly's report, was the subject of a ProPublica story in 2016. The program predicted recidivism correctly for both Black and White defendants in 61% of cases. However, in the 39% of cases where it failed, COMPAS was twice as likely to classify White defendants as low risk who then went on to commit new crimes, while Black defendants rated as high risk were twice as likely not to recidivate. This injustice is a huge problem. I'm not sure how the judicial system did before using the program, but it seems this reflects the training set. If algorithms in the best case only point this out, we shouldn't use them to reinforce these models.
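The disparity described above can be made concrete with a small sketch. The counts below are made up for illustration (they are not ProPublica's actual data); the point is only that two groups can share similar overall accuracy while the kinds of errors differ sharply between them.

```python
# Illustrative only: hypothetical counts, not the real COMPAS dataset.
# For each group we compute two error rates:
#   false positive rate: rated high risk, but did not reoffend
#   false negative rate: rated low risk, but did reoffend

def error_rates(high_risk_no_reoffend, total_no_reoffend,
                low_risk_reoffend, total_reoffend):
    fpr = high_risk_no_reoffend / total_no_reoffend
    fnr = low_risk_reoffend / total_reoffend
    return fpr, fnr

# Hypothetical counts chosen to mimic the pattern described above:
# group A absorbs the "high risk but harmless" errors, group B the
# "low risk but reoffends" errors, even with similar overall accuracy.
groups = {
    "A": error_rates(high_risk_no_reoffend=45, total_no_reoffend=100,
                     low_risk_reoffend=20, total_reoffend=100),
    "B": error_rates(high_risk_no_reoffend=22, total_no_reoffend=100,
                     low_risk_reoffend=48, total_reoffend=100),
}

for name, (fpr, fnr) in groups.items():
    print(f"group {name}: false positive rate {fpr:.0%}, "
          f"false negative rate {fnr:.0%}")
```

Overall accuracy alone hides this: a single accuracy number averages over both error types, which is exactly why the per-group breakdown in the ProPublica analysis mattered.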

The case of Loomis v. Wisconsin is similarly discomfiting: the judge in that case used COMPAS, a system intended for bail hearings only, as a sentencing tool.

No one is going to deny the goodness of a computer. We are all too immersed in the standard of living computers provide for us all. Something is alluring about algorithms and computational artifice.

Iris Berent's 'The Blind Storyteller' (she was a guest on PT's The Examined Year: 2020) posits dualism as innate human behavior. Berent's argument is based on her own and cited studies reflecting experimental philosophy at its best. I would extend this innate dualism to an uninterrogated and unhealthy surrender of humanity to the power of models, both internal and external, as suggested and validated by algorithms.

In addition to exposing existing models of bias and truth, algorithms can also create false models beyond the human mind's capacity to understand. Inaccurate AI-driven models of thought and action are the best argument not to surrender matters of justice, ethics, and morality to an algorithm. I wish Ken and Josh had discussed this along with the capacity of algorithms to derive truth beyond human understanding.

I share Ken's resignation that this human surrender to the coming singularity is unavoidable. A few brains are working on algorithmic safety as a project. The keys to this safety are transparency, fairness, and accurate representation of human qualia. Built on these three pillars – I see a safe path for humanity. Barring this, we need to bow down to the one we serve. Regardless, we are going to get the one we deserve.

Harold G. Neuman

Sunday, April 18, 2021 -- 6:20 AM

A lot has been said about algorithms, in this post on morality and another on ethics. I have no more to add, so I'll not repeat myself. Wittgenstein said it best: whereof one cannot speak, thereof one must be silent.

TaylorSpencePhD

Sunday, April 25, 2021 -- 10:17 AM

Hello!? POC much? Ruha Benjamin and Safiya Noble have written extensively on this topic.

Tim Smith

Monday, April 26, 2021 -- 8:26 PM

Dr. Spence,

Hello.

POC, as a verb, casts shade.

Listen again.

This show aired in 2018, before either Ruha or Safiya published their latest books. Angèle Christin's book came out in 2020. However, both Benjamin's and Noble's works are in the bibliography, with discussion shaped by and attributed to the thought of both.

What points would you add on Ruha or Safiya's behalf? No one is overlooking their work here.

Let's bring light to the ethics of algorithms. Let's discuss this. That is the project here.

Other researchers have written on this, from Joy Buolamwini to Merrill Flood. Still, there is more to discuss.

What exactly is your Point Of Concern? Let's discuss that without shade, and with much light, or at least lightness.

Blog much? Let's do this.

Best to you.

Harold G. Neuman

Monday, April 26, 2021 -- 2:52 PM

Whoops. Not familiar with the acronym, professor. Those you mention are not writers (thinkers?) I have read. And respect. Have encountered folks like you before. No worries here. You like acronyms? Suss this one, then: my stepson suffers from post-traumatic stress disorder. Do you know what that is? My advice to him was PSDP --- providence smiles on determination and purpose. Turns things around a bit, doesn't it? But you may not know that, right? What you know, or don't know, is only as important as what I do or do not know. Good luck and good night. I did not say that. Look it up.

Tim Smith

Monday, April 26, 2021 -- 8:32 PM

Dr. Spence,

Re - done with my replies in their place, and posts to their point.

Blogging is fun.