AI Use Cases in Enterprise. Why AI (self-driving car) will kill every fat British man…
I am tagging https://www.ai.gov/ here in hopes they will somehow get pinged by the robots and add this use case to their catalog of “sh*t, we probably should put guardrails on this use case.”
One of our professors had fun yesterday asking us to resolve moral dilemmas. You can see the example above. I can tell you that no one in the class (Business Ethics and Social Responsibility, per the Harvard Business School curriculum) enjoyed the exercises.
If you have taken any philosophy/law/ethics classes, you have seen variations of the scenario above in some way, shape, or form. Do you know what triggered uncomfortable feelings for me in this exercise? A few days ago, I was reading research papers about self-driving cars and how they were building use cases for what the vehicle has to do when facing a choice of which humans to kill. They DO train the robots on the very same use cases they give students. Do you know the implications of this?
Here is some AI archeology (https://willkoehrsen.github.io/neural%20network/data%20science/project/facial-recognition-using-googles-convolutional-neural-network/). This write-up touches on a concern that many people have since raised: your AI is only as good as what you train it on. The robot was perfect at identifying one face. Why? Because that is the face it had the most data on. Because back then, in that particular timeframe, the majority of journalists were taking pictures of one person the majority of the time. There is plenty of literature on this specific subject and on how AI models should be monitored for this particular data concern.
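To make the data-imbalance point concrete, here is a minimal toy sketch of my own (not code from the linked post; the dataset, the names like person_A, and the counts are all made up). When one identity dominates the training photos, a model that simply guesses the most common face already looks impressively “accurate”:

```python
from collections import Counter

# Toy "training set": the label is whose face appears in each photo.
# Hypothetical counts -- the point is the imbalance, not the exact numbers.
training_labels = ["person_A"] * 900 + ["person_B"] * 40 + ["person_C"] * 60

counts = Counter(training_labels)
majority_label, majority_count = counts.most_common(1)[0]

print(counts)
print(f"Always guessing '{majority_label}' is already "
      f"{majority_count / len(training_labels):.0%} 'accurate' on this data.")
# A model fit to this data can report ~90% accuracy while being useless for
# anyone who is not person_A: the AI is only as good as what it was trained on.
```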
Okay, so what happens when a self-driving car has to choose what to crash into? What will it choose? What is the probability that somewhere in the deep nets of the black box it might just happen to be a fat British man? (“We don’t know how it works.” Hm… you do, you just don’t want to do the work; QA/QC models can be engineered by the robots themselves if you are too lazy to do it yourself.) But Tanya, why British? The scenario has zero information about the man’s place of origin. Well, if you google the word “trolley,” it does not seem to be a common American word. A robot trained on the internet will “know” that a trolley is British, so the fat man and the five others are in Britain, which means it (the robot) has to kill a British man, because that’s what EVERY SELF-DRIVING CAR COMPANY IS TRAINING IT TO DO.
In moral philosophy, deontological ethics or deontology is the normative ethical theory that the morality of an action should be based on whether that action itself is right or wrong under a series of rules and principles, rather than based on the consequences of the action.
In ethical philosophy, utilitarianism is a family of normative ethical theories that prescribe actions that maximize happiness and well-being for all affected individuals.
Basically, depending on the viewpoint of the AI trainers, the car will use ethics to decide what to do. But I don’t think that’s what will happen. The car (robot) will fall back on what it was trained on (datasets like the one above) and will most likely choose the poor fat man because, for whatever mysterious reason, his life is not as important as five others’. I do not recall seeing “old lady” or “young handsome man” in those scenarios. For whatever reason, the fat man has to die in that reasoning. Like, somehow, five people are better than him.
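Here is a minimal sketch of what I mean (a toy I made up, not code from any actual self-driving stack; scikit-learn, the feature encoding, and the label name are all my assumptions). If every labeled scenario in the training data resolves to “sacrifice the one,” the model returns that answer no matter what you feed it:

```python
from sklearn.tree import DecisionTreeClassifier

# Toy trolley-style scenarios encoded as [people_on_track, people_on_bridge].
# Every training label is the same, because that is how the dilemma is posed.
X_train = [[5, 1], [5, 1], [4, 1], [6, 1], [5, 1]]
y_train = ["sacrifice_the_one"] * len(X_train)

model = DecisionTreeClassifier().fit(X_train, y_train)

# No matter who the "one" is (Vernon, a CEO, you), the model has only ever
# seen one answer, so that is the answer it gives back.
print(model.predict([[5, 1]]))  # ['sacrifice_the_one']
print(model.predict([[2, 1]]))  # ['sacrifice_the_one']
```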
So, should the “Vernon Dursleys” of the world be sacrificed to self-driving cars? Or should humans adjust their AI datasets? “KEEP VERNON ALIVE!” is a nice hashtag, no? #keepvernonalive
What about AI systems returning recommendations on whether women should be hired or promoted? What about “likelihood” scores assigned to specific categories based on “training datasets” for housing, education, medical care, and so on?
If you personally know me, you know my uncanny ability to find things that are very true but seen from a very different point of view (I guess diversity here actually works?). I did throw a curveball at the professor to get a little revenge for making us make uncomfortable choices. I asked him about the current Canada-India crisis. Canada is obviously being deontological on the matter in question, and India is maximizing utilitarian theory. For now, at least as of today, humans on both sides are making the choices. What would happen if AI systems made those very same choices on both sides?
AI built by Canadians and AI built by Indians might have very different ethics and diverse views on a myriad of things…
The professor gave me “the look” and changed the subject. It is way easier to let a theoretical fat man die by being pushed off the bridge so that his fat body stops the train and saves five people than to actually envision real-world scenarios of extreme discomfort…
Thoughts? I would love to have a friendly discussion (DM me if you are too radical to post publicly :P)
P.S. Now imagine the “fat man” is a CEO and the “five people” are his VPs who might have done some Enron-like things (and he, the CEO, does not know), and the Board of Directors has to decide where to direct the train. Will we listen to the AI about who deserves to live and who has to die?
To Be Continued…