The question of providing people with enough information to make a “good” decision, without either (1) trivializing / over-simplifying, or (2) overwhelming the audience, is a fascinating one to me and a genuinely serious issue as information becomes cheaper and cheaper and “attention” becomes the commodity. Both the Nudge reading and the “Humans as Stupid Terminals” reading relate to this issue. Unfortunately, I did not find the Nudge reading particularly helpful, and while “Humans as Stupid Terminals” was interesting, it was more a restatement of the problems of information glut and overload than what I consider particularly useful guidance.
There is a significant body of work on presenting information to people where there is a relatively binary decision that does not require a prior value judgement. For example, there is voluminous information on the best way to design traffic signs or voting booths so that drivers and voters can get to where they want or vote for whom they want. These are situations where the “user” has typically made a decision *before* the actual process is presented — “I know I want to go to Denver, I just need to know where to turn off” — “I know I want to vote for the independents, I just need to know what boxes to check.” And, of course, the Metro and stove examples in the book.
Similarly, there is an enormous amount of information on how to effectively influence individuals to make a decision in a manner that you would like. The arts of marketing, advertising, lobbying, and even effective management are all built around the notion of influencing users in a direction you would like to steer them. This is the whole issue of “influence.”
However, I have yet to see a really meaningful answer to the question “How much information does a user need in order to make a rational decision?” (the “mappings” question raised in Nudge). The medical choices issue is a good one, but it highlights that there are relatively few doctors a patient will deal with who do not have a bias — surgeons lean toward surgery, radiation specialists toward radiation treatment (and presumably the same holds in chemotherapy). As Nudge points out, there are no experts in “wait and see.” Nor are there likely to be. Similarly, a simple comparative cell service pricing chart is likely to be simple and effective, but is unlikely to ever effectively communicate the “softer” issues such as customer service. More importantly, no cell carrier would *ever* willingly participate in such a plan, and would use all its own influence to prevent it.
But let’s take a simpler issue — food labeling. I was struck this weekend by the difference in food labeling between countries when we purchased a large quantity of Coca Cola imported from Mexico. It is still made with sugar rather than corn syrup. The bottles looked virtually identical to US bottles with the exception of (1) the “made in Mexico” label, and (2) an ugly, obvious sticker with nutritional information, added in the US to comply with FDA regulations and obviously not required in Mexico.
My question, of course, is how useful are such labels, and how do you make them “good enough” that people make “the right decision”? Forgetting for a moment that a “right decision” is extremely relative, we’re going to play god and assume that the “right” decision is that Coca Cola is fine in moderate amounts (1 or 2 bottles a week) for people who do not have diabetes, sensitivity to caffeine or other ingredients, or serious weight issues, and who take care of their teeth regularly.
A label with calories and nutritional information does not convey that. The US cigarette label comes close, but has no comparative feature.
We know how to label poisons — “This Will Kill You.” We know, to a lesser extent, how to label less-than-lethal but still dangerous substances — “Do Not Operate Heavy Machinery.” We don’t seem to know how to manage beyond that — “This is moderately enjoyable, but also somewhat addictive, and a certain percentage of people get sick or die each year as a result of its use. That percentage is small in the individual case and larger in the aggregate.”
In short, how do we communicate enough, clearly and not too much and yet not incorporate too much bias into the communication?
People are just plain bad at assessing risks. Some data:
“… your chances of dying in an airplane crash? A one-year risk of one in 400,000 and one in 5,000 lifetime risk. … Drowning? A one-year risk of one in 88,000 and a one in 1100 lifetime risk. In a fire? About the same risk as drowning. Murder? A one-year risk of one in 16,500 and a lifetime risk of one in 210. … And the proverbial being struck by lightning? A one-year risk of one in 6.2 million and a lifetime risk of one in 80,000.”
These numbers seem very, very flawed to me. A lifetime risk of murder of 1 in 210? And yet, this piece suggests similar numbers.
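One quick way to probe whether the quoted figures are as flawed as they feel is simple arithmetic. A sketch (using only the one-in-N numbers from the excerpt above): dividing each one-year N by the corresponding lifetime N gives the lifespan the two figures jointly imply, and comparing lifetime odds gives the relative risk. The figures turn out to be internally consistent with a ~78–80 year lifespan, and the murder-to-plane-crash ratio comes out near 24 — which is roughly the “25 times” claim.

```python
# One-in-N risk figures quoted in the excerpt above: (one-year N, lifetime N).
risks = {
    "plane crash": (400_000, 5_000),
    "drowning": (88_000, 1_100),
    "murder": (16_500, 210),
    "lightning": (6_200_000, 80_000),
}

# Sanity check: if lifetime risk is roughly one-year risk accumulated over
# a lifetime, then (one-year N) / (lifetime N) should land near a plausible
# life expectancy for every cause.
implied_lifespan = {cause: one_year / lifetime
                    for cause, (one_year, lifetime) in risks.items()}

# Relative lifetime risk: murder (1 in 210) vs. plane crash (1 in 5,000).
murder_vs_plane = risks["plane crash"][1] / risks["murder"][1]

for cause, years in implied_lifespan.items():
    print(f"{cause}: implied lifespan ~{years:.0f} years")
print(f"murder vs. plane crash: ~{murder_vs_plane:.1f}x more likely")
```

Whatever the source of the numbers, they were at least generated consistently — which says nothing, of course, about whether the underlying risks are right.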
So, no answers here, just the question: how on earth do you get people to recognize that they are roughly 25 times more likely to be murdered than to die in a plane crash? (No jokes in poor taste here.) I have no idea.