The Poorhouse was recently reading a study that showed just how illogical people can be, even when the logic at first sight seems simple to follow. Moreover, since it specifically concerns the topic of risk assessment of major terrorist attacks, there is a clear and present danger of decision makers falling prey to these illogicalities.

Not only do we not want to get blown up by being silly enough to ignore the risk; it is also vital not to make too *much* of the risk, given certain politicians' fondness for discarding civil liberties and imposing military-rule-esque regimes on the poor, innocent, ignorant civilian population that we apparently consist of. As a bonus, the study also shows that the potential bias inherent in survey responses needs to be taken even more seriously than the average layperson might think.

The first illogicality in risk assessment Mandel investigates in his study "Are risk assessments of a terrorist attack coherent?" is that of the effects of unpacking a given situation into constituent parts and assigning probabilities to them individually.

For instance, if the Poorhouse bag o' fun contains 3 red balls, 4 blue balls and 3 green cubes, then a general "packed" probability assessment question might be "What is the likelihood of randomly pulling out a ball from the bag?". The answer here of course is 0.7 (70%) - that being the 7 balls divided by the 10 total objects.

The unpacked version would be "What is the likelihood of randomly pulling out either a red ball or a blue ball from the bag?". The answer is of course - apparently obviously - the same, right? Seems simple enough. But investigations show that humans are prey to an unpacking effect that results in illogical risk assessments, admittedly with slightly trickier questions involved.
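For the avoidance of doubt, here is the bag example worked through in a few lines of Python (the bag and its contents are of course hypothetical):

```python
from fractions import Fraction

# Contents of the hypothetical Poorhouse bag o' fun:
# 3 red balls, 4 blue balls, 3 green cubes.
bag = {"red ball": 3, "blue ball": 4, "green cube": 3}
total = sum(bag.values())  # 10 objects

# "Packed" question: probability of drawing any ball at all.
p_ball = Fraction(bag["red ball"] + bag["blue ball"], total)

# "Unpacked" question: probability of a red ball OR a blue ball.
p_unpacked = Fraction(bag["red ball"], total) + Fraction(bag["blue ball"], total)

print(p_ball, p_unpacked)    # 7/10 7/10
print(p_ball == p_unpacked)  # True - a coherent assessor gives the same answer
```

A coherent assessor treats the two phrasings identically; the unpacking effect is precisely the failure of real respondents to do so.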

Here's the master question Mandel asked his experimental subjects about (p4):

Within the next {2, 4, or 6} months in the U.S. there will...

(a) be at least one major terrorist attack.

(b) be at least one major terrorist attack that is plotted by al Qaeda.

(c) be at least one major terrorist attack that is NOT plotted by al Qaeda.

(d) NOT be at least one major terrorist attack.

The full details of the methodology are available in the full study, but to summarise, each participant was asked the risk-of-attack question formed by appending one of the four endings to the main stem, with the answer to be expressed as a decimal probability between 0 and 1. The median response for each question type was then taken for analysis. Check the results:

(for further details on the differences between the numbered experiments see the full study).

Taking the first line of results, where the question was packed into a single "what's the probability of a terrorist attack within 2 months?", the median response was 0.10. However, the sum of the medians for "what's the probability of a terrorist attack by al Qaeda?" and "what's the probability of a terrorist attack not by al Qaeda?" - two questions that together should cover exactly the same event as the packed version - was 0.14.
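To make the arithmetic concrete, here is a small Python sketch. The packed median (0.10) and the summed unpacked medians (0.14) are as reported in the study, but since only the sum is quoted here, the 0.08/0.06 split used below is purely illustrative:

```python
def unpacking_gap(p_packed: float, p_components: list[float]) -> float:
    """Summed component estimates minus the packed estimate.

    A positive gap is the subadditivity Mandel describes: the parts
    are judged more probable than the whole they jointly partition.
    """
    return sum(p_components) - p_packed

# 2-month timeframe: packed median 0.10; the study reports the unpacked
# medians summing to 0.14 (the 0.08/0.06 split is made up for illustration).
gap = unpacking_gap(0.10, [0.08, 0.06])
print(f"{gap:+.2f}")  # +0.04 - the parts outweigh the whole
```

A coherent set of responses would give a gap of zero for any exhaustive, mutually exclusive unpacking.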

This was seemingly no fluke, the effect being replicated in the other questions and experiments within this study and several others. To quote the study author (p2):

...unpacking yields subadditive probability assessments because the probability assigned to the implicit hypothesis is less than that assigned to the sum of its parts when it is unpacked.

Therefore, if you want someone to think something is *more* likely than they otherwise would, ensure that they think of it broken down into component parts rather than as one whole event.

The next illogicality was termed refocusing: namely, changing from directly assessing the probability of something happening to deriving it from an assessment of it *not* happening.

Back to the bag of objects: the answer to "What is the likelihood of pulling out a ball?" (0.7 in this case) should always be one minus the answer to "What is the likelihood of pulling out something other than a ball?" (0.3). 1 - 0.3 = 0.7, so this is a coherent pair of answers. Again, humans aren't always the best at keeping this in mind.
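The complement rule is easy enough to state in code - a quick Python sanity check on the bag numbers:

```python
from fractions import Fraction

p_ball = Fraction(7, 10)      # P(draw a ball)
p_not_ball = Fraction(3, 10)  # P(draw something other than a ball)

# Binary complementarity: the two estimates must sum to exactly 1,
# so either one can be recovered from the other.
print(p_ball + p_not_ball == 1)  # True
print(1 - p_not_ball)            # 7/10
```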

Mandel's questionnaire results were analysed again, this time comparing the packed probabilities of a terrorist attack happening, found in the previous exercise, to the median probabilities obtained by asking "What is the probability of there not being a terrorist attack during this period?". The results were as follows:

Take the top line again. The median estimate of the likelihood that a terrorist attack would *not* happen within 2 months was 0.50 (50%). Compare this to the previous response to how likely it is that a terrorist attack *will* take place within 2 months - 0.10 (10%). Notice the logical problem?

That leaves a 0.40 perceived probability of neither of these two events happening - one of which clearly *must* occur. The SUM column of the table is the addition of the two medians, which one would logically expect to add up to 1. However, in these initial experiments it never does; it is always well below. People here believe that a terrorist attack is relatively unlikely, whilst at the same time believing that *no* terrorist attack is also relatively unlikely.

However, to be fair, this illogicality was much reduced or even eliminated depending on the structure of the questioning. When asked the two questions right after each other, people generally realised that the probability of one event must be the complement of the probability of the other. When they were distracted in some way, or the questions were split between two different populations, the two probabilities never came close to summing to the logical total of one.

Lastly, it's time for "monotonicity violations". As the study author explains:

Extensional logic requires that risk assessments across timeframes be adjusted monotonically - namely, they should never decrease as timeframe increases, nor should they increase as timeframe decreases.

To go back to our delightful balls-in-bag example, imagine that you are picking out 1 object from the bag every minute. Think about what the probability of pulling out a ball within one minute would be. Now compare that to the probability of pulling out a ball within *two* minutes under the same rules. It is probably obvious that the probability of the latter happening is higher, and most *definitely* could not possibly be lower, than the former. If something has a probability of happening within a timeframe of length X, it cannot ever have less of a probability of happening within timeframe X + 1.
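Here is the two-minute version of the bag example in Python, assuming drawn objects are not replaced (the post doesn't actually say, but the monotonicity point holds either way):

```python
from fractions import Fraction

# 7 balls and 3 cubes in the bag; one object drawn per minute,
# not replaced (an assumption for this sketch).
def p_ball_within(minutes: int) -> Fraction:
    """P(at least one ball is drawn within the first `minutes` draws)."""
    p_no_ball = Fraction(1)
    cubes, remaining = 3, 10
    for _ in range(min(minutes, 3)):  # only 3 cubes exist to be drawn
        p_no_ball *= Fraction(cubes, remaining)
        cubes -= 1
        remaining -= 1
    return 1 - p_no_ball

print(p_ball_within(1))  # 7/10
print(p_ball_within(2))  # 14/15 - a longer window never gives a lower probability
```

The probability can only rise (or stay flat) as the window lengthens, which is exactly the monotonicity constraint.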

But is it so self-evident humans never stray from this rule? Well, you will be unshocked to hear that no, it apparently isn't.

As we saw above, Mandel asked his terrorism question under three different timeframes - i.e. will an attack happen within 2 months, versus within 4 or 6 months? Clearly someone who thinks there is a 20% probability of an attack happening within 2 months should logically rate the probability of an attack within 4 months at 20% or more. Mandel however noted several violations of this rule. The percentages of respondents involved are shown in the table below.
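If you wanted to screen a set of responses for such violations yourself, a simple (hypothetical) checker might look like this in Python:

```python
def monotonicity_violations(assessments: dict[int, float]) -> list[tuple[int, int]]:
    """Flag timeframe pairs where a longer window received a lower probability.

    `assessments` maps a timeframe (in months) to the assessed probability
    of at least one attack within that timeframe.
    """
    frames = sorted(assessments)
    return [(a, b) for a, b in zip(frames, frames[1:])
            if assessments[b] < assessments[a]]

# A made-up respondent who rates 4 months as less risky than 2 months:
print(monotonicity_violations({2: 0.20, 4: 0.15, 6: 0.25}))  # [(2, 4)]
```

Checking consecutive timeframes suffices: any non-adjacent violation implies at least one adjacent one.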

Again, you can see that the illogical answers dried up rather a lot by experiment three, where the respondents were asked the questions one after the other with no distractions. However, in the earlier experiments, where they did not necessarily have their previous risk assessment answers to hand, several logical mistakes were made, with respondents rating an attack as *less* probable over a *longer* period of time.

Previous studies had suggested that people have more difficulty thinking logically about non-events than about events. This is borne out by these results too: the monotonicity errors on questions about the probability of *no* attack were substantially more frequent than those on questions about the probability of an attack. If you want to confuse people, ask them questions with plenty of negations in.

The above examples of illogicality show how easy it is for humans to hold two incompatible thoughts at the same time. When risk assessments contain such obvious logical errors, it stands to reason that at least one of the assessments, and possibly both, must be wrong. It is an interesting psychological phenomenon to be sure, but it also has practical implications in fields like the one Mandel was asking about. Not to over-dramatise, of course, but these sorts of risk assessments are clearly what military intelligence, policy makers et al. rely on, at least in part, in these days of vague, hidden, underground threats to society. They, and those of us they "represent", need to be extremely wary of these impossible occurrences occurring in the processes of evaluation, assessment and decision-making.

Just before you go away and feel stupid (or conclude that everyone else is), here are a couple of snippets of trivia referenced in the intro to the study (p2):

- "people are willing to pay more to reduce the probability of an improbable risk “in half” rather than reduce it from, say, .00002 to .00001"
- "People are also more likely to attend to the risk if it is described as occurring 10 times per 1,000 rather than 1 time per 100"

Reference: Mandel, D. R. (2005). Are risk assessments of a terrorist attack coherent? *Journal of Experimental Psychology: Applied*, 11(4), 277-288.

NB: If anyone would like to explain to the Poorhouse how he totally misunderstood it all, is an illiterate fool and so on, please feel free to write in with some tasty corrections.

| Attachment | Size |
|---|---|
| Are risk assessments of a terrorist attack coherent? | 98.62 KB |
