The biggest issue facing future cars: Who gets to decide how self-driving cars make ethical decisions in emergency situations?

The trolley problem is a hypothetical moral quandary.

If you intervene in a deadly situation to alter the path of an unstoppable trolley (or tram to us) and cause it to run over one person instead of the five it would have initially, are you responsible for one death, or for saving five lives? Would you have been responsible for the inverse if you’d left it as it was?

It’s a grim scenario and some variations of the trolley problem don’t have a ‘morally correct’ answer, which is why it’s brought up in relation to the development of autonomous cars.

Say you’re driving a car with a couple of passengers and you’re being tailed closely by a truck at the moment a pedestrian suddenly appears in front of you, having stepped onto the road without looking.

You might have time to brake and avoid hitting the pedestrian, but do you also have time to check the truck in the mirror to determine how hard you brake in case it hits your car?

Now what if it’s a dog instead of a pedestrian? Or what if you’re on a highway and it’s a kangaroo?

As a person, you’re the one who has to decide in a split-second what the best course of action is for as many people to be as safe as possible.

But in an autonomous car, the vehicle must be programmed to make these decisions, meaning there’s a paper trail of code that leaves its manufacturer as the decision-maker.
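That “paper trail of code” can be made concrete with a toy sketch. The rules, categories and thresholds below are entirely invented for illustration – nothing here reflects how Mercedes-Benz or any manufacturer actually implements hazard responses – but it shows how a split-second judgement becomes an explicit, auditable policy someone had to write down:

```python
# Purely illustrative: a toy hazard-response policy showing how emergency
# decisions end up encoded as explicit, reviewable rules. Every category
# and threshold here is invented for the sketch.

from dataclasses import dataclass

@dataclass
class Hazard:
    kind: str                 # e.g. "pedestrian", "dog", "kangaroo"
    stopping_margin_m: float  # distance to hazard minus braking distance
    rear_gap_m: float         # gap to the vehicle following behind

def choose_response(h: Hazard) -> str:
    """Return 'full_brake', 'moderate_brake' or 'warn_only'."""
    if h.stopping_margin_m > 0:
        # We can stop in time; brake hard only if the truck behind
        # has room, otherwise brake less aggressively.
        return "full_brake" if h.rear_gap_m > 15.0 else "moderate_brake"
    # Cannot stop in time: the policy itself -- not a human in the
    # moment -- decides the trade-off. This is the paper trail of code.
    return "full_brake" if h.kind == "pedestrian" else "warn_only"
```

The point is not the specific numbers but that each branch is a decision a manufacturer’s engineers and committees must sign off on in advance.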

So, who at Mercedes-Benz gets to decide how a car intervenes when there’s danger, or even if it intervenes at all?

At Mercedes-Benz’s Intelligent Drive Insight event in Melbourne, the brand’s Manager for Field Validation of Driver Assistance Systems, Jochen Haab, told CarsGuide it’s not as simple as programming a new function and seeing how it behaves in the real world.

“It’s a moral question, a technical question, and it’s a legal question. And we have what we call an ethical legal committee,” says Haab.

“And if you look at history and how we developed up to now, we had councils. So technology people of course, we’re reasonable people. We are responsible people, and we developed to our best knowledge.

“And then we went to the lawyers, the internal and external ones, and to ethical people, internal and external experts, to insurance people and said, ‘Hey, what do you think about it?’

2024 Mercedes-Benz EQE

“And then they said, ‘Ah good’ or ‘Uhh maybe change that’. And we changed it, or not, or whatever and decided not to do it.”

An example of the latter, where a technology or function was abandoned or greatly altered through feedback before it launched, is Mercedes’ Evasive Steering Assist, one of Jochen Haab’s favourite functions.

He says it’s “very, very seldom” a function is stepped back once it’s on the market.

“I would like to say never. We improve them, yes, we learn, but to step back once they’re on the market, I wouldn’t think of one function that we had to draw back on.”

Evasive Steering Assist was changed during its development phase, rather than on the road in the real world, once it was found that in test environments the way it was conceived on paper didn’t quite gel with instinctive human reactions.

The basic function, as originally conceived, was to intervene and steer away from a hazard in front of the car in cases where braking alone wouldn’t be enough. But in testing, once the wheel had already turned and the driver noticed, they would grab it and hold the car on a wayward trajectory.

Haab says the decision was made that the driver must intervene first, and the car would then assist by guiding the driver’s reaction into more precise motions rather than panicked swerving.
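That driver-first principle can be sketched in a few lines. This is a hypothetical toy model, not Mercedes’ actual control code – the deadband and gain values are invented – but it captures the revised behaviour Haab describes: the system does nothing until the driver commits to a steering input, never opposes that input, and only refines it toward a computed evasive path:

```python
def assisted_steering(driver_torque: float, evasive_target: float) -> float:
    """Toy model of driver-first evasive assist (all numbers invented).

    driver_torque:  steering input from the driver (signed)
    evasive_target: steering the system computed as ideal (signed)
    """
    DEADBAND = 0.5  # below this, the driver hasn't committed: no assist
    GAIN = 0.4      # fraction of the remaining correction the car adds

    if abs(driver_torque) < DEADBAND:
        return driver_torque  # driver hasn't acted: the car stays passive
    if driver_torque * evasive_target <= 0:
        return driver_torque  # never steer against the driver's choice
    # Guide the driver's panicked input toward the precise evasive line.
    return driver_torque + GAIN * (evasive_target - driver_torque)
```

The design choice mirrors the quote: the human initiates, the machine shapes the manoeuvre into something more precise than a panicked swerve.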


For Mercedes-Benz at the moment, the driver’s own decision is still the key to the brand’s safety systems. The car will do as much as possible to alert the driver and let them make a decision before its systems intervene to try to minimise the danger.

Jochen Haab is acutely aware it won’t always be like this if self-driving cars are to eventuate.

“When approaching higher levels of autonomy or automation, we decided to have a committee which meets – depending on the project phase – at least every two weeks, in the high phases twice a week, and it consists again, of course, of engineers, all different kinds of engineers: software engineers, mechanical engineers, testing engineers.

“Plus legal people, product liability experts, litigation experts, ethical experts, moral experts, if you want, again, internal external ones.”

Haab adds that it’s not just a one-and-done process – the committee is along for the whole development process for each new function or technology, assisting in collectively coming to what is hopefully the best outcome for safety and for the law.

“That’s a committee that’s always there. As you develop, as a part of the development phase, you go there as an engineer, say, ‘Hey, this is what we plan. What do you think about it?’

“Then in the process itself, even while testing, they come and visit us and see how it behaves. We decide, very responsibly, what we do with situations or with situation types. And that would be the committee deciding what to do, for example, with animals in the future.”


Haab’s team isn’t alone in navigating the moral difficulties of autonomy in motoring. In fact, a scientist working at the Massachusetts Institute of Technology (MIT) created an online platform to gather respondents’ feedback on varying moral dilemmas.

Iyad Rahwan, a Syrian-born alumnus of the University of Melbourne and now a director at the Max Planck Institute for Human Development, created the Moral Machine, a website that uses variations on the trolley problem to gather data on the collective morals of groups of people around the world.

Rahwan’s findings, perhaps unsurprisingly, showed a general trend that respondents wanted to minimise harm, but the way people chose which hypothetical people in each scenario were more deserving of protection varied with cultural or generational differences.

In a September 2016 TEDx talk, Rahwan even pointed out an anonymous respondent who seemed to put the lives of dogs over those of humans – not a typical user of his platform, he admits, but still potentially someone who’s currently in charge of a car themselves.

The committee detailed by Jochen Haab at Mercedes has a lot of work ahead of it to find solutions for self-driving cars that are able to make these decisions in split-seconds and hopefully save lives – of humans and of dogs.


source: https://www.carsguide.com.au/car-news/the-biggest-issue-facing-future-cars-who-gets-to-decide-how-self-driving-cars-make-ethical

