Social credit systems


#1

Over the past couple of years, I've heard that China is implementing a "social credit" system that determines how trustworthy people are in the eyes of the government. More recently, New Jersey opted to get rid of cash bail in favor of a risk-assessment system that estimates how much of a flight risk a defendant is.

I've been conflicted about both of these systems. First, let's tackle China's social credit system. For this system to exist, there must be mass government surveillance, as well as corporate surveillance that collects data on purchasing behavior, tastes, interests, likes, and dislikes – though of course, much of this information can be inferred from social media behavior alone. Is it really so strange to score citizens based on their social media behavior? I'm undecided – sway me.

As for New Jersey's bold move to (mostly) eliminate cash bail, there are some interesting implications, the first and most important being that wealthy criminals can no longer hide behind their money. The worst offenders often have large cash reserves from their criminal activity and use them to post bail, while poorer defendants cannot afford that luxury (even when they're unlikely to flee). Another positive element of this system is that repeat offenders and genuine flight risks are more likely to be denied release.

However (and there's always a "however"), what if you come from a family that is less than friendly with the law? What if you briefly associate with some unsavory characters – should that ruin your standing in society? What if you come from a neighborhood that is statistically more likely to engage in criminal activity? Are you to be coldly cast aside because you didn't meet the minimum heuristics of an unfeeling computer system? Or is a computer better able to make these decisions because it can be programmed to be impartial?

If the latter is true, then we as software developers must hold ourselves to a higher moral and empathetic standard than usual, because the systems we design can make or break someone's life. Is it worth it?


#2

I'd be concerned about the long-term effects of labeling citizens as "untrustworthy". Let's say we run a study with 500 high school kids. We randomly label 250 of them "trustworthy" and the other 250 "untrustworthy". Throughout their three years of high school, each student is required to wear their label daily as a large, visible pin.

I would gladly bet that the group labeled "trustworthy" would get better grades, commit fewer crimes, and generally be more successful in life.

It’s very dangerous when we empower one group by disempowering another group. Why can’t we come up with a system that is generally empowering to all citizens?


#3

Excellent point. I agree that visibly labeling people in that way would be destructive, no doubt. It would be even worse if the scores were secret. And who’s in control of these social credit scores? Who determines what constitutes trustworthiness?

Please note, when I said I was undecided, I was flippantly playing devil’s advocate. I wanted to see what the arguments might be in favor of such a system. I am absolutely, positively against any government or corporate system that disempowers any group of citizens.

I am still in favor of New Jersey's elimination of cash bail, with a few caveats: I want the system to be open to criticism and improvement; I want it to do no harm, i.e., it should only ever approve people for bail – any case where bail might be denied would go to a human judge; and I want such systems to be regulated to meet minimum standards of human rights.
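To make that "do no harm" caveat concrete, here's a minimal sketch of the structure I have in mind – the names and threshold are hypothetical, not anything New Jersey actually uses. The point is that the algorithm's only two possible outputs are "approve" and "refer to a human judge"; denial simply isn't in its vocabulary.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    APPROVE_RELEASE = "approve"   # the algorithm may grant release
    REFER_TO_JUDGE = "refer"      # everything else goes to a human

@dataclass
class Assessment:
    flight_risk: float  # hypothetical score in [0, 1], lower is safer

# Hypothetical cutoff; in an open system this would be set by public review.
APPROVAL_THRESHOLD = 0.2

def decide(assessment: Assessment) -> Decision:
    """The system can do exactly two things: approve release, or hand
    the case to a human judge. It is structurally unable to deny bail."""
    if assessment.flight_risk <= APPROVAL_THRESHOLD:
        return Decision.APPROVE_RELEASE
    return Decision.REFER_TO_JUDGE

print(decide(Assessment(flight_risk=0.1)))  # Decision.APPROVE_RELEASE
print(decide(Assessment(flight_risk=0.7)))  # Decision.REFER_TO_JUDGE
```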

I think we're going to have to face the idea of algorithms determining our fate, whether through credit, bail, or other systems. It's already starting, and it's kind of scary. A few months ago I was talking to my sister about machine judges – AIs that determine your fate. Can you imagine? I can only hope such systems are open, although if they're government or corporate systems, we can probably count that out.

If these systems are to be built, if they must be built, how do we as developers build empathy into them? Can we? Or are we doomed to extinction at the hands of our AI masters? (I’m being flippant again. I need to stop that. What I’m really asking is, what are the possibilities in between those two extremes?)

I'd like to echo your final question as well: how can we come up with systems that are empowering to all citizens? Is this possible with the systems I've described above? How can we create systems that automate away a human "judge" (for lack of a better word) and, at the same time, preserve or improve upon the empathy of the humans they replace?


#4

What about allowing the community involved to vote on the bail price? People who live closest to the person have the most at stake and should get more say in the vote. Of course, not everyone knows the person, so many members would likely choose not to vote. Or maybe a member knows and trusts someone who is familiar with the person, in which case they could anonymously delegate their vote to that trusted member.

Even better, this whole system could work nicely with AI. Maybe, as a member, you don't trust anyone in the community to make a fair decision, but you do trust a particular algorithm, so you give your vote to that algorithm. The final tally could be a mix of algorithms and community members.
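To make the mechanics concrete, here's a rough sketch of how such a delegated vote might be tallied. Everything in it is made up – the members, the amounts, and the stand-in "risk_model" algorithm – but it shows direct votes, delegations to people, delegations to an algorithm, and abstentions all resolving into one proposed bail figure:

```python
from statistics import median

# Each member either votes an amount, delegates to another member,
# delegates to a named algorithm, or abstains. All names are hypothetical.
votes = {
    "alice": ("amount", 5000),
    "bob":   ("delegate", "alice"),       # bob trusts alice's judgment
    "carol": ("delegate", "risk_model"),  # carol trusts an algorithm instead
    "dave":  ("abstain", None),           # dave doesn't know the person
}

def risk_model() -> int:
    """Stand-in for an algorithmic delegate; a real one would score the case."""
    return 7500

algorithms = {"risk_model": risk_model}

def resolve(member, seen=()):
    """Follow a delegation chain down to a concrete amount (None = abstain)."""
    if member in seen:                    # guard against delegation cycles
        return None
    kind, value = votes[member]
    if kind == "amount":
        return value
    if kind == "delegate":
        if value in algorithms:
            return algorithms[value]()
        return resolve(value, seen + (member,))
    return None                           # abstention

amounts = [a for m in votes if (a := resolve(m)) is not None]
print(f"proposed bail: ${median(amounts):,.0f}")  # -> proposed bail: $5,000
```

Taking the median rather than the mean keeps a single extreme vote – or a single buggy algorithm – from dragging the result too far.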


#5

This is a great idea. I wonder whether it's better to have an impartial jury or people who know the defendant. Should family get more of a say? Can anyone be truly impartial? Or should we measure which way each person leans and try to balance out the resulting voter pool?

I’m incredibly intrigued by the idea of replacing decision makers with AIs, but it scares me at the same time. I wonder who will hold the keys, who will make the decisions about the AIs. Who the hell am I, as a software developer, to write code that makes decisions that could destroy someone’s life? Are my bosses any better equipped to make such a decision?

That's where distributed consensus makes a difference, where it matters most. The more people voting on an issue, or delegating their decision to an AI, the less risk there is that a single bad decision will be made.

How would you train an AI that determines bail price? What sort of data would you feed it?
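For what it's worth, here's one possible (and very debatable) sketch of an answer, using synthetic data and a plain logistic regression. The features are hypothetical; I've deliberately left out neighborhood, family, and associates – the very proxies I worried about earlier – though dropping a column doesn't guarantee the remaining features aren't standing in for it anyway:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Predict failure-to-appear from a defendant's own record only.
# Columns: prior failures to appear, prior convictions, employed (0/1).
# Data here is synthetic, purely so the sketch runs end to end.
rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(0, 5, n),   # prior failures to appear
    rng.integers(0, 10, n),  # prior convictions
    rng.integers(0, 2, n),   # currently employed
])
# Fake labels loosely tied to the first feature.
y = (X[:, 0] + rng.normal(0, 1, n) > 2).astype(int)

model = LogisticRegression().fit(X, y)
print("P(failure to appear):", model.predict_proba([[1, 2, 1]])[0, 1])
```

Even a toy like this raises the questions from earlier in the thread: who audits the training data, who chooses the features, and who gets to see the weights?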