Our cars are being programmed to kill us.

Driverless cars will have to make life-or-death decisions for us - some already have - and it's something we should probably be talking about much more.

Driverless cars present a kind of high-tech trolley problem. Imagine your car is shuttling you along the road when a street sign detaches itself and comes tumbling down toward your windscreen.

Now, imagine this is a small, single-lane road with footpaths on either side. On one footpath is a child; on the other, an elderly person.

You have no time to take over and make the decision yourself - so your car must choose whether to swerve towards (and potentially kill) the child or the elderly person, or to collide with the falling sign.

A lot must be factored into this kind of decision-making - how does an algorithm prioritise one life over another? Is it appropriate to do so? Should your car choose to kill its driver, and would you buy one if you knew it might?

Driverless cars will be safer than those operated by humans – but will we be comfortable swapping fatalities caused by human error for those decided by raw, probabilistic machine thinking?

MIT researchers recently set up a giant, global survey called the Moral Machine to pose these kinds of questions to people worldwide.

The experiment presented participants with unavoidable accident scenarios like the one above, imperilling various combinations of pedestrians and passengers.

Participants decided which course the car should take on the basis of which lives it would spare. The Moral Machine ended up recording almost 40 million such decisions.

It found people generally want their cars to spare the largest number of lives, prioritising the young, and valuing humans over animals.
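To make that abstract idea concrete, here is a minimal, purely illustrative sketch of how such aggregate preferences might be encoded as a scoring function over possible manoeuvres. It is not any manufacturer's actual logic; every category, weight and name in it is an assumption made up for the example.

```python
# Purely illustrative: encodes the Moral Machine's aggregate preferences
# (spare more lives, prioritise the young, value humans over animals)
# as a simple scoring function. All weights are arbitrary placeholders.
from dataclasses import dataclass

@dataclass
class Outcome:
    humans_spared: int
    young_spared: int     # subset of humans_spared under some age threshold
    animals_spared: int

def preference_score(o: Outcome) -> float:
    # Larger weights reflect stronger aggregate preferences in the survey;
    # the exact numbers here are invented for illustration only.
    return 10.0 * o.humans_spared + 3.0 * o.young_spared + 1.0 * o.animals_spared

def choose(outcomes: list[Outcome]) -> Outcome:
    # Pick the manoeuvre whose predicted outcome scores highest.
    return max(outcomes, key=preference_score)

# Example: swerving left spares two adults; braking spares one child and a dog.
swerve_left = Outcome(humans_spared=2, young_spared=0, animals_spared=0)
brake = Outcome(humans_spared=1, young_spared=1, animals_spared=1)
print(choose([swerve_left, brake]))  # swerve_left wins on raw headcount
```

Even in a toy like this, the hard question is obvious: someone has to choose the categories and the weights.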

There were some interesting findings among these – such as a strong preference in France for sparing women and athletic individuals, and a tendency for participants from countries with greater income inequality to take social status into account when deciding whom to spare.

Is it right to program a computer to decide that the life of an elderly person is worth less than a young life, or that the rich are more valuable than the poor?

An earlier study found most people wanted driverless cars to choose to save the lives of pedestrians – even if it meant their own passengers dying instead.

However, most people also said they would prefer to ride in a driverless car that protected its own passengers at all costs.

Interest in cars programmed to sacrifice their passengers dropped even further when people imagined their own family members on board.

This wide variety of moral standards may well end up reflected in the market for driverless cars.

“It is not difficult to imagine the segmentation of the autonomous car market,” says Professor Mary-Anne Williams, Director of Disruptive Innovation at the Office of the Provost at the University of Technology Sydney (UTS).

“Cars that always sacrifice the passengers might sell for 10 per cent of cars that preserve them. Wealthy people may be happy to subsidise the technology to obtain guarantees of protection.

“One can imagine a new insurance industry built on the need to service people who can pay for personal security on the roads and as pedestrians - a subscription service that prioritises life according to the magnitude of premiums.”

These questions remain hypothetical while self-driving cars are in their infancy, and the manufacturers have been pretty tight-lipped about how potentially fatal decisions will be made in the near future.

The self-driving car fatalities recorded so far have been attributed to human error (last-second, erratic movements, or drivers of only partly autonomous cars failing to maintain appropriate attention and control) and technological failures (sensors misidentifying vehicles, pedestrians and stationary hazards).

The drive to introduce these cars for economic gain is stronger than the drive to make them safe, so without significant public pressure it is sadly likely that the moral decisions baked into their software will only be exposed by picking apart fatalities after the fact.

For the software engineers who actually program these decisions, there is the joint IEEE-CS/ACM Software Engineering Code of Ethics, which imposes a duty to:

“Approve software only if they have a well-founded belief that it is safe, meets specifications, passes appropriate tests, and does not diminish quality of life, diminish privacy or harm the environment.”

However, given that driverless cars will only become safer relative to human drivers, it may not be long before we are questioning the ethics of having a person behind the wheel at all.

When people were asked whether they would buy an autonomous vehicle (AV) if government regulation required it to be programmed to save the lives of others, their interest dropped by two-thirds.

“Our results suggest that such regulation could substantially delay the adoption of AVs, which means that the lives saved by making AVs utilitarian may be outnumbered by the deaths caused by delaying the adoption of AVs altogether,” the researchers warned.

By Tim Hall, CareerSpot Editor