Friday, October 26, 2018

Sometimes, they do it very poorly

How people judge good from bad
North Carolina State University

New research sheds light on how people decide whether behavior is moral or immoral. The findings could serve as a framework for informing the development of artificial intelligence (AI) and other technologies.

"At issue is intuitive moral judgment, which is the snap decision that people make about whether something is good or bad, moral or immoral," says Veljko Dubljević, lead author of the study and a neuroethics researcher at North Carolina State University who studies the cognitive neuroscience of ethics.

"There have been many attempts to understand how people make intuitive moral judgments, but they all had significant flaws. In 2014, we proposed a model of moral judgment, called the Agent Deed Consequence (ADC) model -- and now we have the first experimental results that offer a strong empirical corroboration of the ADC model in both mundane and dramatic realistic situations.

"This work is important because it provides a framework that can be used to help us determine when the ends may justify the means, or when they may not," Dubljević says. "This has implications for clinical assessments, such as recognizing deficits in psychopathy, and technological applications, such as AI programming."

Moral judgment is a tricky subject. For example, most people would agree that lying is immoral. However, most people would also agree that lying to Nazis about the location of Jewish families would be moral.

To address this, the ADC model posits that people take three things into account when making a moral judgment: the agent, which is the character or intent of the person who is doing something; the deed, or what is being done; and the consequence, or the outcome that resulted from the deed.

"This approach allows us to explain not only the variability in the moral status of lying, but also the flip side: that telling the truth can be immoral if it is done maliciously and causes harm," Dubljević says.

To test the model against this complexity, the researchers developed a series of scenarios that were logical, realistic and easily understood by both laypeople and professional philosophers. All of the scenarios were evaluated by a group of 141 professional philosophers with training in ethics.

In one part of the study, a sample of 528 study participants from the U.S. also evaluated different scenarios in which the stakes were consistently low. This means that the possible outcomes were not dire.

In a second part of the study, 786 study participants evaluated more drastic scenarios -- including situations that could result in severe injury or death.

In the first part of the study, when the stakes were lower, the nature of the deed was the strongest factor in determining whether an action was moral: whether the agent was lying or telling the truth mattered more than whether the outcome was good or bad. When the stakes were high, however, the nature of the consequences was the strongest factor.

The results also show that when the outcome was good (for example, the survival of an airplane's passengers), the difference between a good and a bad deed, while still relevant to the moral evaluation, was less important.

"For instance, the possibility of saving numerous lives seems to be able to justify less than savory actions, such as the use of violence, or motivations for action, such as greed, in certain conditions," Dubljević says.

"The findings from the study showed that philosophers and the general public made moral judgments in similar ways. This indicates that the structure of moral intuition is the same, regardless of whether one has training in ethics," Dubljević says. "In other words, everyone makes these snap moral judgments in a similar way."

While the ADC model helps us understand how we make judgments about what is good or bad, it may have applications beyond informing debates about moral psychology and ethics.

"There are areas, such as AI and self-driving cars, where we need to incorporate decision making about what constitutes moral behavior," Dubljević says. "Frameworks like the ADC model can be used as the underpinnings for the cognitive architecture we build for these technologies, and this is what I'm working on currently."