Cognitive Bias
Modern computers have limited performance: simple estimates show that they cannot exhaustively analyze even a game as simple as draughts. When modeling laboratory experiments, researchers often take advantage of parallel computing. In this case, however, they have to deal with various scale effects induced by the choice of boundary conditions. Unfortunately, these effects become dominant near phase transitions, where researchers have to switch to the Cyber-Physical System methodology to obtain valid results. Modern research therefore often suffers from a lack of resources, and reducing costs is one of its top priorities. To deal with this constraint, scientists working in the field of statistics often use samples that are smaller than theoretical estimates require. Unfortunately, Daniel Kahneman overlooked this subtlety in his book Thinking, Fast and Slow. Specifically, he did not take it into account when judging professional estimates of suitable sample sizes against the respective theoretical ones. This judgment led him to the misleading conclusion that the human mind is not capable of solving statistical problems well.
Arriving at the Misconception about Statistics
The two-system model of human intellectual activity. The book presents an approach to understanding human thinking that describes the work of the human mind as the interaction of two distinct systems. System 1 is responsible for intuitive thinking (Kahneman, 2011, p. 16). Figure 1 shows the expression of a human face when only System 1 is active. Everyone who looks at it easily realizes that the lady does not control her emotions and is about to say some harsh words. Such a realization comes to the mind of an ordinary person almost instantly, and this is another example of the activity of System 1. However, if, for instance, an average person needs to know the result of multiplying two two-digit numbers, he or she needs much more than a few seconds to come up with the answer. This is an example of a task in which only System 2 is active. The approach is sophisticated enough to take into account the interaction of these two systems. System 2 works only when the person pays attention and stops otherwise. Paying attention often requires self-control, which consumes scarce resources. However, a human being can be in a state of flow, in which concentrating on a task is effortless. System 2 can change the way System 1 acts by “programming the normally automatic functions of attention and memory”.
Influence System 2 exerts on System 1. For instance, when one forces oneself to look out for a lady with a specific color of hair, one substantially improves the chances of recognizing her in the crowd that moves from a specific gate in the airport. Unfortunately, attention capacity is finite. Indeed, the results of psychological tests indicate that people become effectively blind while working intensively on something. Moreover, there are situations in which System 2 cannot change the way the first one performs. The broadly known Müller-Lyer illusion depicted in Figure 2 is one of them.
The description of the Müller-Lyer illusion. Any person contemplating this drawing assumes that the upper horizontal segment is shorter than the bottom one. Nevertheless, the ruler shows that the two segments are in fact equal in length.
Influence System 1 exerts on System 2. System 1 is always active and sometimes prevents System 2 from achieving the maximum possible performance. The experiment presented in Figure 3 illustrates this point.
System 1 as a source of distraction. If you perform the experiment yourself, you will certainly notice that in the first run the first column requires less attention than the second one. The opposite is true for the second run.
System 1 as a statistician. Kahneman argues that System 1 “automatically and effortlessly identifies causal connections between events, sometimes even when connections are spurious”. To illustrate this point, he analyzes the reaction of the human mind to the results of a statistical “study of the incidence of kidney cancer in the 3,141 counties in the United States”. According to these results, the lowest incidence of kidney cancer occurs in largely rural counties with low population density “located in traditionally Republican states in the Midwest, the South, and the West”. Kahneman states that when a human being analyzes such a result, his or her System 2 initially searches memory and formulates hypotheses. Meanwhile, System 1 interacts with associative memory, retrieving “the facts and suggestions”. It is likely that the person “rejected the idea that Republican politicians provide protection against kidney cancer”. However, he or she could become curious about “the fact that the counties with low incidence of cancer are mostly rural”. Professional statisticians assert that “it is both easy and tempting to infer that their low cancer rates are directly connected with clean living of the rural lifestyle – no air or water pollution, an access to fresh food without additives”. However, the same study reports that the highest incidence of kidney cancer also occurs in largely rural counties with low population density “located in traditionally Republican states in the Midwest, the South, and the West”. The hypothesis that “their high cancer rates might be directly related to the poverty of the rural lifestyle – no access to good medical care, high-fat diet, too much alcohol and tobacco” could mislead some researchers due to the above-mentioned drawback of System 1. However, the observation that in this case “the rural life style” accounts for “both very high and very low incidence of kidney cancer” suggests considering other interpretations as well.
Kahneman’s confusion. Kahneman states that it is the sample size that is responsible for these incidents occurring in rural states. Specifically, according to him, the low population density of these regions is likely to result in the choice of samples that are too small. It seems to me that Kahneman’s System 2 is not attentive enough here to notice that such a choice could also be driven by cost considerations. A sample that is too small possesses abnormally increased variability, which perfectly matches the observed pattern: the same small counties produce both the highest and the lowest observed rates. Kahneman describes in Thinking, Fast and Slow the experiment in which he measured how well scientists specializing in different mathematical disciplines, including statistics, estimated suitable sample sizes for various experiments. It is the significant discrepancy between their predictions and the respective theoretical estimates that forced Kahneman to look for an explanation of this phenomenon in the drawbacks of human thinking. Statistical practitioners, however, treat these theoretical estimates as upper bounds for the sizes of the samples they use. For instance, American weather forecasters achieved very good accuracy, and they did it to the greatest extent with the help of timely feedback. In other words, they prefer practical observations to theoretical estimates.
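The point about small-sample variability can be illustrated with a short simulation. The sketch below is mine, not Kahneman’s: it assumes a single nationwide incidence rate and two hypothetical county sizes (all numbers invented for illustration), and shows that the smallest counties produce both the most extreme low and the most extreme high observed rates.

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_RATE = 0.0001  # assumed underlying incidence, identical in every county

def observed_rate_spread(population, n_counties=1000):
    # Draw case counts for n_counties of the same size and return the
    # lowest and highest observed incidence rates among them.
    cases = rng.binomial(population, TRUE_RATE, size=n_counties)
    rates = cases / population
    return rates.min(), rates.max()

for pop in (1_000, 1_000_000):  # a small rural county vs. a large urban one
    lo, hi = observed_rate_spread(pop)
    print(f"county population {pop:>9,}: observed rates from {lo:.5f} to {hi:.5f}")
```

Although every county shares the same true rate, the small counties report both zero incidence and rates many times above the truth, while the large counties cluster tightly around the true value.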
Probability Assessment Techniques
Researchers often use samples to estimate various probabilities or probability distributions. Therefore, asking an expert to come up with a relevant sample size is one of the methods of probability or probability distribution assessment. These methods comprise a broad spectrum. In the case of the assessment of individual probabilities, it includes direct assessments and the probability wheel. Among the methods of probability distribution assessment, there are the probability method and the graph drawing one.
The method of direct appraisal. Direct assessment assumes eliciting “a probability from a decision-maker” by posing “a direct question” about the value of the probability of a certain outcome. The respondent usually replies by putting a mark on “a scale that runs from 0 to 1”. However, some decision makers are more accustomed to communicating their appreciation of probability in terms of odds. For instance, they prefer the statement “odds of 25 to 1 against the occurrence of an event” to asserting that its probability is equal to 0.038.
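Moving between the two phrasings is a one-line conversion. A minimal sketch (the helper name is mine, not from the source):

```python
def prob_from_odds_against(a: float, b: float = 1.0) -> float:
    # "Odds of a to b against" an event imply a probability of b / (a + b).
    return b / (a + b)

print(f"{prob_from_odds_against(25):.3f}")  # 1/26, i.e. roughly 0.038
```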
Methods that assume device use. The probability wheel method assumes that the respondent keeps choosing between two bets with identical rewards until both of them are equally attractive to him or her. The first bet concerns the outcome for which the method seeks the probability. As for the second one, the respondent receives the reward when, “after spinning the wheel once, the pointer is in the white sector”, and otherwise receives nothing. The size of the white sector is adjustable, which allows gradual refinement of the trial value of the probability that the method strives to find.
A flaw of the probability wheel method. Although this device “enables the decision maker to visualize the chance of an event occurring,” it performs poorly when assessing “events that have either very low or very high probability of occurrence” due to the difficulty of differentiating “between the sizes of small sectors”. However, substituting the wheel with “an urn filled with 1000” balls of two different colors can remove this drawback. Specifically, in this case the respondent has “to imagine an urn filled with,” for instance, 400 red balls and 600 blue ones and subsequently to “choose between betting on the event in question occurring or betting on a red ball being drawn from the urn”. The choice assumes that both alternatives “offer the same reward”.
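One way to picture the wheel/urn procedure is as an indifference search over the reference probability. The sketch below rests on my own simplifying assumptions: the decision maker’s answers are replaced by a hidden subjective probability, and bisection stands in as one plausible adjustment schedule, whereas a real session lets the analyst adjust the sector (or the ball counts) freely.

```python
# Simulated stand-in for the decision maker's judgment; in a real
# elicitation, prefers_event_bet would be a question, not a function.
HIDDEN_SUBJECTIVE_P = 0.37

def prefers_event_bet(reference_p: float) -> bool:
    # True if betting on the event looks better to the respondent than
    # betting on the wheel sector (or on drawing a red ball from the urn).
    return HIDDEN_SUBJECTIVE_P > reference_p

lo, hi = 0.0, 1.0
for _ in range(10):  # ten halvings narrow the interval to about 0.001
    mid = (lo + hi) / 2
    if prefers_event_bet(mid):
        lo = mid  # event bet preferred: the white sector is still too small
    else:
        hi = mid  # device bet preferred: the white sector is too large
print(f"elicited probability is roughly {(lo + hi) / 2:.3f}")
```

The loop terminates at approximately 0.37, the point of indifference between the two bets.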
The probability method. One of the techniques of probability distribution assessment is the probability method. Decision makers often “quote too narrow a range within which they think the uncertain quantity will lie”. If the distribution median is the first value they estimate, the created anchor results in adjustments toward other small values. The probability method takes care of such effects through the following seven-step procedure.
1. The analyst establishes “the range of values within which” the respondent “thinks that the uncertain quantity will lie”.
2. The questioner asks “the decision maker to imagine scenarios that could lead to the true value lying outside the range”.
3. The analyst revises “the range in the light of the responses in step 2”.
4. The analyst divides “the range into six or seven roughly equal intervals”.
5. The questioner asks “the decision maker for the cumulative probability at each interval”. This probability “can either be” the one “that the uncertain quantity will fall below each of these values” or the one that the quantity “will exceed each of these values”; the choice is up to the respondent.
6. The analyst fits a “curve, by hand, through the assessed points”.
7. The analyst conducts two checks. In the first one, he or she splits “the possible range into three equal intervals” and determines whether “the decision maker would be equally happy to place a bet on uncertain quantity falling in each interval”, making “appropriate revisions to the distribution” if the respondent has some preferences. In the second check, the questioner controls “the modality of the elicited distribution”: if the inflections of “the cumulative curve” suggest more than one mode, the analyst has to “ask the decision maker if he or she does have single best guess as to the value that uncertain quantity will assume” and subsequently makes corrections to “the distribution, if necessary”.
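A small sketch of steps 4 through 7 under hypothetical elicitation answers may make the procedure concrete. All numbers below are invented, linear interpolation stands in for the hand-drawn curve of step 6, and the first check of step 7 is read as splitting the range into three equally likely intervals:

```python
import numpy as np

# Hypothetical answers from steps 4-5: values dividing the revised range,
# and the elicited cumulative probabilities P(X <= value) at each of them.
values    = np.array([100., 150., 200., 250., 300., 350., 400.])
cum_probs = np.array([0.00, 0.05, 0.20, 0.50, 0.80, 0.95, 1.00])

def quantile(p: float) -> float:
    # Invert the fitted curve; linear interpolation stands in for the
    # analyst's hand-drawn curve of step 6.
    return float(np.interp(p, cum_probs, values))

# First check of step 7: boundaries of three equally likely intervals.
# The decision maker should be indifferent between bets on each interval;
# otherwise the analyst revises the distribution.
t1, t2 = quantile(1/3), quantile(2/3)
print(f"equally likely intervals: (100, {t1:.0f}), ({t1:.0f}, {t2:.0f}), ({t2:.0f}, 400)")
```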
Graphical appraisal of probability distributions. Graph drawing is a collection of probability distribution assessment techniques. This class encompasses the approach in which the questioner prepares “a set of graphs, each representing different probability density function (pdf), and then asks the decision maker to select the graph that most closely represents his or her judgment”. Another subgroup of the graph drawing collection assumes that the analyst asks the respondent “to draw a graph to represent either a probability density function or a cumulative distribution function (cdf)”. Graph drawing also includes the widely known method of relative heights.
The method of relative heights. In this method, the analyst asks the respondent “to identify the most likely value of the variable under consideration”. Afterwards, the analyst draws a vertical line one hundred units long. Following the respondent’s percentage answers to inquiries about how likely other values are in comparison with the first one, the analyst draws lines with lengths representing these responses. The final step of this method is normalization.
Probability distribution assessment. One can apply this method “to assess probability density functions for continuous distributions” by fitting “a smooth pdf curve across the tops of the lines” that represent the elicited “relative likelihood of a few values”.
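The normalization step is simple enough to show directly. A minimal sketch with invented elicitation answers for a discrete variable (the continuous case would instead fit a smooth curve through the same normalized heights):

```python
import numpy as np

# Hypothetical elicitation: the most likely value gets the full 100-unit
# line; the others are drawn relative to it from the respondent's answers
# ("half as likely" -> 50 units, and so on).
values  = np.array([20, 40, 60, 80, 100])
heights = np.array([10., 50., 100., 40., 5.])

probs = heights / heights.sum()  # normalization: heights become probabilities
for v, p in zip(values, probs):
    print(f"P(X = {v}) = {p:.3f}")
```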
Comparing the probability appraisal methods. According to Goodwin and Wright, the probability method is better than the method of relative heights since it deals with overcoming “the tendency of decision makers to estimate distributions that have too narrow range”. Some researchers argue that probability assessment “should generally start with the assessments based on the probability wheel” due to the fact that “many people have difficulty in making direct judgments”. As for other methods, these researchers suggest using them “at a later stage as consistency checks”. A number of studies have aimed at comparing “the methods, but these have not identified single best method”. Their results indicate “that variety of different methods should be used during the elicitation process”.
Conclusion
Thus, obtaining new scientific discoveries nowadays assumes extensive usage of scarce resources, and modern researchers therefore care about saving costs. Statistics is, to a great extent, an experimental science. Consequently, statisticians often take practical considerations into account while choosing the values of parameters such as a sample size. Unfortunately, Kahneman was not aware of this subtlety when he was assessing how good statistical experts are at estimating suitable sample sizes for different experimental studies. Comparing their responses with the respective theoretical estimates, Kahneman arrived at the conclusion that the human brain is not good at dealing with statistical problems. He views human thinking as an interaction of two distinct systems: System 1 and System 2. The first one provides intuitive thinking, whereas the second one does the deliberate thinking. System 2 never works in isolation. The famous Müller-Lyer illusion (see Figure 2) provides evidence that System 1 can permanently mislead System 2. According to Kahneman, it is System 1 that is responsible for the human brain’s poor performance in solving statistical problems, since this system tends to make cause-and-effect inferences about various events even when the respective causal connections are dubious. Researchers frequently use sample data to estimate the probabilities of various events. Therefore, the sample size assessment that Kahneman used in his quest to study the ability of the human brain to solve statistical problems is analogous to probability assessment. Hence, one can compare it with other techniques of probability and probability distribution assessment. These include direct assessments, the probability wheel method, the probability method, and the method of relative heights. Studies devoted to their comparison arrive at the conclusion that one should use each method only in conjunction with other techniques.