The Ethics of Algorithms

Sunday, August 12, 2018

What Is It

Recent years have seen machine learning algorithms come to surround us, in our homes and in our back pockets. They're increasingly used for everything from recommending movies to guiding sentencing in criminal courts, thanks in part to being perceived as unbiased and fair. But can algorithms really be objective when they are created by biased human programmers? Are such biased algorithms inherently immoral? And is there a way to resist immoral algorithms? Josh and Ken run code with Angèle Christin from Stanford University, author of "Algorithms in Practice: Comparing Web Journalism and Criminal Justice."

Comments (3)

Harold G. Neuman

Friday, August 10, 2018 -- 10:34 AM

There are many sorts of

There are many sorts of algorithms beyond machine learning, and in fact long predating it. Algorithms are tools, systems, and protocols for apprehending and solving problems with some level of comfort and reliability: your auto mechanic has an established and comfortable 'algo' for replacing a worn timing belt mechanism on your car. If you have been paying attention to that strange noise, perhaps a whirring roar coming from your engine compartment, he will gladly disassemble the mechanism and install new parts for somewhere around $1,000 to $1,500. You might be inclined to think this PRICE, uh, immoral. But it is the cost of his services, and probably better than buying a new car. Morality is a slippery subject. As are web journalism and criminal justice. There is something worth reading on page 28 of John Rawls' A Theory of Justice (original edition), not because it is ABOUT morality, but because it contrasts nicely with what I have said about slippery subjects generally. The paragraph begins with "Justice denies that the loss of freedom for some is made right..." and ends with "...are not subject to political bargaining or to the calculus of social interests." Check it out. Justice and morality are related, and if it walks, talks, thinks, and acts, it must be human, don't you think?

cohenle
Sunday, August 12, 2018 -- 11:42 AM

This is not an either/or

This is not an either/or question. Algorithms can be quite helpful in assisting humans as they make decisions. I am a physician and rely on algorithms to point out when I may have forgotten something, or to help with clinical judgment. A judge can similarly use an algorithm, which may point out when his or her decision is way off base.

chwarden
Monday, August 20, 2018 -- 2:55 PM

Algorithms go through versions

Algorithms go through versions throughout their lives: 1.0, 1.1, 1.2, 2.0, 2.1, 2.2, 3.0, 3.1, 3.2, and so on forever. This means that using algorithms for legal purposes must deal, up front and before use, with what to do when mistakes happen. What happens when version 1.1 says you released people who should still be in prison, but who were released based on a prior analysis with 1.0? What happens when version 2.0 says you re-imprisoned someone under 1.1 who should actually be free? Algorithms may work well in medicine, because medicine is explicitly experimental, but law does not seem so welcoming to the correction of mistakes. My impression is that even though version 10 and up might be fair and blind to race, gender, and other group identities, getting there is not currently possible for law. Continuous, accurate, open collection of data is required for improvement, and I simply think that egos, money, and traditions will prevent continuous quality improvement for legal uses of algorithms.
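The version problem the comment raises can be sketched concretely. The toy example below uses two invented "versions" of a risk-scoring rule; the functions, weights, and release threshold are all made up for illustration and are not drawn from any real risk-assessment tool. The point is only that a version change can retroactively contradict a decision made under the earlier version:

```python
# Hypothetical sketch: two "versions" of a made-up risk-scoring rule
# disagree about the same person, illustrating how upgrading an algorithm
# can contradict decisions made under the previous version.

def risk_score_v1_0(prior_offenses, age):
    # Version 1.0: weighs prior offenses heavily.
    return 10 * prior_offenses - (age - 18)

def risk_score_v1_1(prior_offenses, age):
    # Version 1.1: a "fix" that weighs age more, flipping some outcomes.
    return 6 * prior_offenses - 2 * (age - 18)

RELEASE_THRESHOLD = 20  # release if score is below this (invented cutoff)

person = {"prior_offenses": 3, "age": 25}

v1_0 = risk_score_v1_0(**person)  # 10*3 - 7 = 23 -> at/above threshold: detain
v1_1 = risk_score_v1_1(**person)  # 6*3 - 14 = 4 -> below threshold: release

print("v1.0 decision:", "release" if v1_0 < RELEASE_THRESHOLD else "detain")
print("v1.1 decision:", "release" if v1_1 < RELEASE_THRESHOLD else "detain")
```

Under version 1.0 this person is detained; under version 1.1 the very same inputs say release. Medicine treats such a reversal as an expected correction; the comment's worry is that the law has no comparable process for it.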