Douglas Todd: The morality of religion, driverless cars and everything else
By Douglas Todd | Mon, 17 Feb 2020
Azim Shariff is becoming known as a go-to expert on the ways morality relates to religion, driverless cars, IQs, organ transplants, artificial intelligence and how we view people who are “busy.”
That doesn’t mean the University of B.C. social psychologist comes across as at all moralistic — even when I asked how he exposed the big mistake in one of the decade’s most “surprising” and widely cited studies on how religion shapes morality.
Shariff led a team that discovered a high-profile University of Chicago research project had incorrectly concluded that children in Christian and Muslim households are more stingy than youngsters in non-religious households.
In doing so, Shariff also uncovered some favourable findings about the children of Canada.
Since his own research had suggested a religious upbringing often increases kindness towards others, he led four colleagues in asking for access to Jean Decety’s extensive data on the generosity of 1,200 children in six countries.
Shariff’s group discovered there had been a coding error: The Chicago team’s calculations in 2015 of how children performed in sharing coloured stickers with one another had, in effect and without being too technical about it, mixed up nationalities with religiosity.
What’s worse, as a few international media outlets reported in 2019, the Chicago team’s eventual retraction didn’t receive a fraction of the scientific and media coverage that trumpeted the original negative findings about religion.
In addition, virtually no media outlet has publicized a key result of the corrected data: that Canadian children emerged as the most willing to share.
The Canadian children were found to be more generous than those from South Africa, Turkey, the United States, Jordan and China.
Shariff doesn’t want to overemphasize the results, however. He says the children weren’t too far apart based on nationality and, anyway, no one knows what factors produced the Canadian children’s relatively strong so-called “pro-social behaviour.” For instance, many children in China are their parents’ only offspring, Shariff said, which is often associated with being self-focused.
Shariff doesn’t have an axe to grind either for or against religion or any country. And he’s wary of those who do. He appreciates the chance to collaborate with the Chicago group to expand their research, which will now test the moral behaviour of children in 12 nations.
Born in Toronto to a Muslim family from India, the 38-year-old associate professor and Canada Research Chair says he drifted away from the family faith in his early teens. But, while studying psychology at the University of Toronto, he became intrigued by religion and ethical reasoning.
Teaching at UBC, he has noticed many students crave being taught morality, even in a “paternalistic” way. Today’s young people, he says, seem to be reacting against the so-called “values-free” education typically offered by secular colleges and universities.
Shariff’s own original research has found when people — including Christians, Muslims, Jews, atheists and agnostics — are asked to think about religion, they become more generous not only to members of their own group, but also to those outside their group.
And a new, potentially controversial international study led by Britain’s Cory Jane Clark, in which Shariff took part, found religiosity and high IQs were important factors in reducing violence, since each is associated with self-control.
After looking at data across many countries (not named in the paper), the scholars theorize that religion is more effective in reducing violence in societies with relatively lower average IQs. They don’t want people to jump to conclusions, however. Shariff notes IQs are determined both by genetics and environment, and a society’s average IQ tends to correlate to levels of education.
There are, of course, many things beyond religiosity and IQ that influence morality, including the presence of police, taxation measures and secular philosophy, says Shariff.
In contrast to ancient religions, another thing increasingly swaying our ethics is high technology.
That’s particularly the case with the kind of algorithms created to make life-altering medical ethics decisions about who gets priority for a donated organ and about the moral programming of self-driving cars.
Every time a driver makes a decision about how close to veer toward a bicyclist or whether to tailgate, he or she is making a moral choice, Shariff says. So he and colleagues have been researching whether the algorithmic decisions of driverless cars are compatible with wider human morality.
MIT researcher Edmond Awad collaborated with Shariff and others to create a web-based experimental driving platform called Moral Machine, which has now been played by several million people.
Moral Machine throws virtual drivers into stark moral dilemmas, such as having to choose between crashing into innocent pedestrians and saving their own lives.
Shariff says the kinds of ethical responses that driverless car algorithms — and lawyers worried about legal liability — come up with to these somewhat “cartoonish” scenarios are not necessarily those of empathetic humans.
That said, the researchers are finding that people’s moral judgments vary by country.
When Moral Machine asked online drivers to choose between mowing down a business executive or a homeless man, Shariff and colleagues found people in egalitarian Nordic countries were just as likely to protect the homeless person as the executive.
But citizens of highly unequal countries, such as those in South America, were more protective of the business executive. Another Moral Machine experiment found Asians more likely than North Americans to save elderly pedestrians, not just young ones, from harm.
Since researchers are discovering humans show a startling range of responses to driving dilemmas, the question arises: Who will be granted the authority to program the life-and-death decision-making of self-driving vehicles?
On the subject of robots, Shariff has also been looking into the economic morality associated with working, particularly if many are going to lose their jobs to automation.
His experiments show that most of us, from Europe to Korea, view busyness as “a moral signal of character,” even when the busy person is not doing anything meaningful.
A key question Shariff is working on is how society will view the masses of people, such as truck drivers, who could be replaced by artificial intelligence and might end up living on a basic income supplied by taxpayers. “Will we think they are valuable members of society?”
The moral decisions linked with everything we interact with are profound — and Shariff and his adventurous colleagues are helping us pay attention to their powerful consequences.