Disclaimer: This blog is my own personal opinion. It does not reflect my employer's views and should not be attributed to my employer in any way.
Imagine it is years in the future and you are riding in a self-driving car. Suddenly, a cyclist appears in front of the car, and to avoid a collision, the car must make a decision in a split second. Here are some scenarios:
The car swerves onto the pavement, avoiding the cyclist, but it may hit a pedestrian walking by.
The car brakes hard to avoid the cyclist, but it may roll over and injure the driver (in this case, you).
The car keeps going straight, keeping the driver and the pedestrians safe, but it may hit the cyclist.
Which one should it choose?
The example above is just another form of the Trolley Problem – a thought experiment and moral psychology problem which was introduced long ago and still poses many questions for our society:
There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person tied up on the side track. You have two options:
- Do nothing, and the trolley kills the five people on the main track.
- Pull the lever, diverting the trolley onto the side track where it will kill one person.
Which is the most ethical choice?
Here is another form of the Trolley Problem in the context of our modern society, where the car has information about the people involved: their ages, races, profiles, criminal histories, backgrounds, etc.
And the car's control software can run simulations and estimate the consequences based on this information.
What kind of information should the car consider?
The question beyond
The bigger question in my mind is: "Who should be chosen to design the solution?"
73 years ago, when the two nuclear bombs were dropped on Hiroshima and Nagasaki, humankind was immediately confronted with an existential question: which countries should be allowed to own nuclear weapons or nuclear power, and to what extent?
73 years later, we are being challenged with a similar question, this time about AI.
Who should be chosen to design the legal and ethical frameworks for AI?