More information doesn't necessarily help people make better decisions

The work, led by Samantha Kleinberg, associate professor of computer science at Stevens, is helping reframe how we use the mountain of data produced by artificial intelligence and machine learning algorithms, and how healthcare professionals and financial advisors present that information to their patients and clients.

“Being accurate is not enough for information to be useful,” said Kleinberg. “It’s assumed that AI and machine learning will uncover great information, we’ll give it to people and they’ll make good decisions. However, the basic point of the paper is that there is a step missing: we need to help people build upon what they already know and understand how they will use the new information.”

For example, when doctors communicate information to patients, such as recommending blood pressure medication or explaining risk factors for diabetes, those patients may also be weighing the cost of the medication or alternative ways to reach the same goal. “So, if you don’t understand all these other beliefs, it’s really hard to treat them in an effective way,” said Kleinberg, whose work appears in the Feb. 13 issue of Cognitive Research: Principles and Implications.

Kleinberg and colleagues asked 4,000 participants a series of questions about topics with which they would have varying degrees of familiarity. Some participants were asked to make decisions about scenarios they could not possibly be familiar with, such as how to get a group of mind-reading aliens to accomplish a task. Others were asked about more familiar topics, such as choosing how to reduce risk in a retirement portfolio or deciding between specific meals and activities to manage body weight.

For some participants, the scenarios had a causal structure, meaning participants could make the correct decision based on the causal relationships laid out either in text or as a diagram. The team could then compare whether people did better with the new information or by relying only on what they already knew.

Kleinberg and her team, including former Stevens graduate student Min Zheng and cognitive scientist Jessecae Marsh from Lehigh University, found that when people make decisions in novel scenarios, such as those involving mind-reading aliens, they do very well on those problems. “People are just focusing on what’s in the problem,” said Kleinberg. “They are not adding in all this extra stuff.”

However, when a problem with the same causal structure was instead framed in terms of finances and retirement, for example, people became less confident in their choices and made worse decisions, suggesting that their prior knowledge got in the way of choosing the best outcome.

Kleinberg found the same to be true when she posed a problem about health and exercise as it relates to diabetes. When people without diabetes read the problem, they took the new information at face value, believed it and used it successfully. People with diabetes, however, started second-guessing what they knew and, as in the previous example, did much worse.

“In situations where people do not have background knowledge, they become more confident with the new information and make better decisions,” said Kleinberg. “So there’s a big difference in how we interpret the information we are given and how it affects our decision making when it relates to things we already know vs. when it’s in a new or unfamiliar setting.”

Kleinberg cautions that the point of the paper is not that information is bad. Rather, she argues that to help people make better decisions, we need to better understand what people already know and tailor information to that mental model. The National Science Foundation recently awarded Kleinberg, in collaboration with Marsh, a grant titled “Uniting Causal and Mental Models For Shared Decision-making in Diabetes” to address this very issue.

“People hold a certain set of beliefs about disease and treatment, finances and retirement,” said Kleinberg. “So more information, even with explicit causal relationships, may not be enough to steer people to make the best decisions. It’s how we tailor that information to this existing set of beliefs that will yield the best results — and that’s what we want to figure out.”
