Decision Making under Uncertainty: Rules and Biases. Kahneman D., Slovic P., Tversky A.

Decision making under uncertainty arises when the probabilities of the various scenarios are unknown. The decision maker is then guided, on the one hand, by his risk preference and, on the other, by a criterion for selecting among the alternatives in the compiled "decision matrix". Decision making under risk assumes that each possible development of events can be assigned a probability of occurrence. This makes it possible to weight each of the effectiveness values by its probability and to select for implementation the alternative with the lowest level of risk.

The justification and selection of specific management decisions involving financial risk rest on the concepts and methodology of decision theory. This theory assumes that decisions involving risk are always characterized by uncertainty about the behavior of the initial parameters, which makes it impossible to determine the final results of those decisions exactly. Depending on the degree of that uncertainty, a distinction is drawn between risk conditions, in which the probability of individual events affecting the final result can be established with varying degrees of accuracy, and uncertainty conditions, in which, due to a lack of the necessary information, such probabilities cannot be established. The theory of decision making under risk and uncertainty rests on the following assumptions:

1. The object of the decision is clearly defined, and the main risk factors associated with it are known. In financial management such objects include an individual financial transaction, a specific type of security, a group of mutually exclusive real investment projects, etc.

2. For the decision object, an indicator has been chosen that best characterizes the effectiveness of the decision. For short-term financial transactions this indicator is usually the amount or level of net profit; for long-term ones, net present value or the internal rate of return.

3. An indicator characterizing the level of risk has been chosen for the decision object. Financial risk is usually characterized by the degree of possible deviation of the expected performance indicator (net profit, net present value, etc.) from its average or expected value.

4. There is a finite number of decision alternatives (a finite number of alternative real investment projects, specific securities, ways to carry out a given financial transaction, etc.).

5. There is a finite number of scenarios that may unfold under the influence of changes in risk factors. In financial management, each such scenario characterizes one of the possible future states of the external financial environment. The range of scenarios considered in the decision-making process should run from extremely favorable (the most optimistic) to extremely unfavorable (the most pessimistic).

6. For each combination of decision alternative and scenario, a final indicator of the decision's effectiveness can be determined (a specific value of net profit, net present value, etc., corresponding to that combination).

7. For each of the scenarios under consideration, it may or may not be possible to estimate the probability of its occurrence. This possibility divides the whole system of risky decisions into the conditions discussed above ("risk conditions" or "uncertainty conditions").

8. The decision is made by selecting the best of the alternatives considered.
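The two long-horizon indicators named in assumption 2 can be computed directly. The sketch below is illustrative and not from the source: the cash flows are made up, and the bisection-based IRR solver is an assumed implementation choice.

```python
# Illustrative sketch: net present value (NPV) and internal rate of
# return (IRR) for a series of cash flows. Values are invented.

def npv(rate, cash_flows):
    """Discount a series of cash flows; cash_flows[0] is the outlay at t=0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Rate at which NPV is zero, found by bisection.

    Assumes NPV is decreasing in the rate (one initial outlay followed
    by positive inflows), so there is a single sign change.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid      # NPV still positive: the rate is too low
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-1000, 400, 400, 400]  # invest 1000, receive 400 for three years
print(round(npv(0.10, flows), 2))  # slightly negative at a 10% rate
print(round(irr(flows), 4))        # the break-even rate is just below 10%
```

At a 10% discount rate the project is marginally unattractive, so its internal rate of return sits just under 10%.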

The methodology of decision making under risk and uncertainty involves constructing, in the process of justifying risky decisions, a so-called "decision matrix" of the following form (Table 1).

Table 1. "Decision matrix" built in the decision-making process under conditions of risk or uncertainty

Decision        Scenarios (states of the environment)
alternatives    C1     C2     ...    Cn
A1              E11    E12    ...    E1n
A2              E21    E22    ...    E2n
...             ...    ...    ...    ...
An              En1    En2    ...    Enn

In the matrix above, the values A1, A2, ..., An characterize each of the decision alternatives; the values C1, C2, ..., Cn each of the possible scenarios; and the values E11, E12, ..., E1n, E21, E22, ..., E2n, ..., En1, En2, ..., Enn the specific level of decision effectiveness corresponding to a given alternative in a given scenario.

The decision matrix described above represents one of its two types, known as the "payoff matrix", since it uses a measure of effectiveness. A decision matrix of another type, known as the "risk matrix", can also be constructed; in it, an indicator of financial losses replaces the effectiveness indicator for the various combinations of decision alternatives and scenarios.

Based on the specified matrix, the best of the alternative solutions is calculated according to the selected criterion. The methodology for this calculation is differentiated for risk conditions and conditions of uncertainty.

I. Decision making under risk is based on the fact that each possible scenario can be assigned a certain probability of occurrence. This makes it possible to weight each of the specific effectiveness values of the individual alternatives by its probability and, on this basis, to obtain an integral indicator of the level of risk for each decision alternative. Comparing this integral indicator across the alternatives makes it possible to select for implementation the one that leads to the chosen goal (the given performance indicator) with the lowest level of risk.

Estimates of the probability of the individual scenarios occurring can be obtained by expert methods.

Based on the decision matrix built under risk conditions, taking into account the probability of individual situations, the integral level of risk is calculated for each of the decision-making alternatives.
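As a minimal sketch of this step, assume the integral risk level is taken as the probability-weighted standard deviation of each alternative's outcomes around their expected value (a common choice; the source does not specify the measure, and the matrix values below are invented):

```python
# Sketch: integral risk level per alternative under risk conditions.
# Rows: alternatives A1..A3; columns: scenarios C1..C3. Values invented.

def expected_value(outcomes, probs):
    """Probability-weighted mean of the scenario outcomes."""
    return sum(e * p for e, p in zip(outcomes, probs))

def risk_level(outcomes, probs):
    """Standard deviation of outcomes, weighted by scenario probabilities."""
    mean = expected_value(outcomes, probs)
    var = sum(p * (e - mean) ** 2 for e, p in zip(outcomes, probs))
    return var ** 0.5

matrix = [
    [100, 60, 20],   # A1: high spread -> high risk
    [ 80, 70, 50],   # A2: narrow spread -> low risk
    [ 90, 65, 30],   # A3
]
probs = [0.3, 0.5, 0.2]  # expert estimates of scenario probabilities

for name, row in zip(("A1", "A2", "A3"), matrix):
    print(name, round(expected_value(row, probs), 1), round(risk_level(row, probs), 1))
```

Here A2 has almost the same expected value as A1 but a far smaller dispersion, so under this criterion A2 would be preferred.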

II. Decision making under uncertainty is based on the fact that the probabilities of the various scenarios are unknown to the subject making the risky decision. In this case, when choosing among the decision alternatives, the subject is guided, on the one hand, by his risk preference and, on the other, by an appropriate criterion for choosing among all the alternatives in the "decision matrix" he has compiled.

The main criteria used in decision making under uncertainty are presented below.

1. Wald's criterion ("maximin" criterion)

2. The "maximax" criterion

3. Hurwitz criterion (criterion of "optimism-pessimism" or "alpha criterion")

4. Savage criterion (the "minimax regret" criterion)

1. Wald's criterion (the "maximin" criterion) assumes that from all the options in the "decision matrix" one chooses the alternative whose most unfavorable scenario (the one minimizing effectiveness) yields the largest of the minimum values (i.e., the effectiveness value that is the best of the worst, or the maximum of the minimums).

The Wald ("maximin") criterion is used in choosing risky decisions under uncertainty, as a rule, by a subject who is averse to risk or views the possible situations as a pessimist.

2. The "maximax" criterion assumes that from all the options in the "decision matrix" one chooses the alternative whose most favorable scenario (the one maximizing effectiveness) yields the largest of the maximum values (i.e., the effectiveness value that is the best of the best, or the maximum of the maximums).

The "maximax" criterion is used in choosing risky decisions under uncertainty, as a rule, by subjects who are inclined to risk or view the possible situations as optimists.

3. The Hurwitz criterion (the "optimism-pessimism" or "alpha" criterion) allows the choice of a risky decision under uncertainty to be guided by an averaged measure of effectiveness lying in the range between the "maximax" and "maximin" values (the two values are connected by a convex linear combination). The optimal decision alternative according to the Hurwitz criterion is determined by the following formula:

Ai = a * EMAXi + (1 - a) * EMINi,

where Ai is the weighted average effectiveness according to the Hurwitz criterion for a specific alternative;

a is the alpha coefficient, chosen in the range from 0 to 1 to reflect risk preference (values approaching zero are typical of a risk-averse subject; a value of 0.5 of a risk-neutral subject; values approaching one of a risk-seeking subject);

EMAXi is the maximum effectiveness value for the given alternative;

EMINi is the minimum effectiveness value for the given alternative.

The Hurwitz criterion is used in choosing risky decisions under uncertainty by subjects who want to reflect their specific degree of risk preference as accurately as possible by setting the value of the alpha coefficient.

4. The Savage criterion (the "minimax regret" criterion) assumes that from all the options in the "decision matrix" one chooses the alternative that minimizes the size of the maximum loss across the possible decisions. Under this criterion, the "decision matrix" is transformed into a "loss matrix" (one variant of the "risk matrix"), in which loss values for the various scenarios are entered in place of the effectiveness values.

The Savage criterion is used in choosing risky decisions under uncertainty, as a rule, by subjects who are averse to risk.

Let us now consider the mathematical foundations of decision making under uncertainty.

Essence and sources of uncertainty.

Uncertainty is a property of an object expressed in its vagueness, indistinctness, and lack of grounding, which leaves the decision maker insufficiently able to perceive, understand, and determine its present and future state.

Risk is a possible danger, an action taken at random that requires, on the one hand, courage in the hope of a happy outcome and, on the other, account of a mathematically justified degree of risk.

Decision-making practice is characterized by the set of conditions and circumstances (the situation) that create particular relationships, conditions, and positions in the decision-making system. Taking into account the quantitative and qualitative characteristics of the information available to the decision maker, we can distinguish decisions made under the following conditions:

certainty (reliability);

uncertainty (unreliability);

risk (probabilistic certainty).

Under conditions of certainty, decision makers can determine the possible decision alternatives quite accurately. In practice, however, it is difficult to assess all the factors that shape the conditions of decision making, so situations of complete certainty are rare.

The sources of uncertainty about the expected conditions of an enterprise's development may include the behavior of competitors, the organization's personnel, technical and technological processes, and market changes. These conditions can be classified as socio-political, administrative-legislative, industrial, commercial, and financial. The conditions that create uncertainty are thus the effects of factors from the organization's external and internal environment.

A decision is made under conditions of uncertainty when it is impossible to estimate the probabilities of the potential outcomes. This is typically the case when the factors to be considered are so new and complex that sufficient relevant information about them cannot be obtained; as a result, the likelihood of a particular outcome cannot be predicted with enough confidence. Uncertainty is characteristic of some decisions that have to be made in rapidly changing circumstances, and the socio-cultural, political, and knowledge-intensive environments have the highest potential for it. Defense-ministry decisions to develop exceptionally sophisticated new weapons, for example, are often uncertain at the outset: no one knows how the weapon will be used, whether it will be used at all, or what weapons the enemy may field. The ministry therefore often cannot determine whether a new weapon will actually be effective by the time it reaches the army, which may take, say, five years. In practice, however, very few management decisions have to be made under conditions of complete uncertainty.

When faced with uncertainty, a manager has two main options. The first is to try to obtain additional relevant information and analyze the problem again; this often reduces its novelty and complexity. The manager combines the additional information and analysis with accumulated experience, judgment, or intuition to assign a subjective or perceived probability to the set of outcomes.

The second option is to act strictly on past experience, judgment, or intuition and make an assumption about the probability of events. Time and information constraints are of essential importance in making managerial decisions.

In a situation of risk, it is possible, using the theory of probability, to calculate the probability of a particular change in the environment; in a situation of uncertainty, the probability values ​​cannot be obtained.

Uncertainty manifests itself in the impossibility of determining the probability of the occurrence of various states of the external environment due to their unlimited number and lack of assessment methods. Uncertainty is taken into account in various ways.

Rules and criteria for decision-making under conditions of uncertainty.

Here are some general criteria for the rational choice of solutions from the set of possible ones. The criteria are based on the analysis of the matrix of possible environmental states and decision alternatives.

The matrix shown in Table 1 contains: Ai, the alternatives, i.e., the options for action, one of which must be selected; Sj, the possible states of the environment; and aij, the matrix element denoting the payoff (e.g., the value of capital) obtained by alternative i under environment state j.

Table 1. Decision matrix

Different rules and criteria are used to select the optimal strategy in a situation of uncertainty.

Maximin rule (Wald criterion).

In accordance with this rule, from the alternatives one chooses the one that has the largest value of the indicator under the most unfavorable state of the environment. To this end, the minimum value of the indicator is fixed in each row of the matrix, and the maximum of these minima is then selected. The alternative a* with the maximum of all the minimum values is given priority.

The decision maker in this case is minimally prepared for risk, assuming the maximum negative development of the state of the environment and taking into account the least favorable development for each alternative.

According to the Wald criterion, decision makers choose the strategy that guarantees the maximum value of the worst payoff (the maximin criterion).

Maximax rule.

In accordance with this rule, the alternative with the highest achievable value of the estimated indicator is selected; the decision maker does not take into account the risk of adverse environmental changes. The alternative is found by the formula:

a* = maxi maxj Пij

Using this rule, the maximum value in each row is determined and the largest of these is chosen.

A big drawback of the maximax and maximin rules is the use of only one scenario for each alternative when making a decision.

Minimax rule (Savage criterion).

Unlike maximin, minimax is focused on minimizing not so much losses as regret over lost profit. The rule permits reasonable risk for the sake of additional profit. The Savage criterion is calculated by the formula:

a* = mini [ maxj ( maxi Xij - Xij ) ]

where the inner maxi Xij is the best result of column j (taken over the alternatives), maxj enumerates the columns, and the outer mini enumerates the rows.

The calculation of the minimax consists of four stages:

  • 1) The best result of each column is found separately, that is, the maximum Xij (the best market reaction).
  • 2) The deviation of each element from the best result of its column is determined, that is, maxi Xij - Xij. These results form the matrix of deviations (regrets), since its elements are the profits lost through unsuccessful decisions caused by a mistaken assessment of the market's reaction.
  • 3) For each row of the regret matrix, the maximum value is found.
  • 4) The decision chosen is the one whose maximum regret is smaller than that of the others.

Hurwitz's rule.

According to this rule, the maximax and maximin rules are combined by weighting the maximum and minimum values of each alternative. It is also called the optimism-pessimism rule. The optimal alternative is calculated using the formula:

a* = maxi [ (1 - α) minj Пij + α maxj Пij ]

where α is the coefficient of optimism, 0 ≤ α ≤ 1. When α = 1 the alternative is chosen by the maximax rule; when α = 0, by the maximin rule. Allowing for aversion to risk, it is reasonable to set α = 0.3. The highest value of the weighted target indicator determines the required alternative.

The Hurwitz rule thus takes into account more essential information than the maximin and maximax rules do.
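The four rules above can be sketched on a single payoff matrix. The values below are invented for illustration; rows are alternatives, columns are states of the environment:

```python
# Illustrative sketch: Wald (maximin), maximax, Hurwitz and Savage
# (minimax regret) criteria applied to one payoff matrix.
# Rows = alternatives A1..A3, columns = environment states S1..S3.

payoffs = [
    [30, 70, 10],   # A1
    [40, 50, 45],   # A2: safe, narrow range
    [90, 25,  5],   # A3: high upside, low downside
]

def wald(m):
    """Maximin: index of the alternative with the best worst case."""
    return max(range(len(m)), key=lambda i: min(m[i]))

def maximax(m):
    """Index of the alternative with the best best case."""
    return max(range(len(m)), key=lambda i: max(m[i]))

def hurwitz(m, alpha):
    """Weighted mix of best and worst case; alpha = optimism coefficient."""
    score = lambda row: alpha * max(row) + (1 - alpha) * min(row)
    return max(range(len(m)), key=lambda i: score(m[i]))

def savage(m):
    """Minimax regret: regret = column best minus the cell's payoff."""
    col_best = [max(row[j] for row in m) for j in range(len(m[0]))]
    regret = [[col_best[j] - row[j] for j in range(len(row))] for row in m]
    return min(range(len(m)), key=lambda i: max(regret[i]))

print(wald(payoffs), maximax(payoffs), hurwitz(payoffs, 0.3), savage(payoffs))
```

On this matrix the pessimistic Wald rule and the moderately cautious Hurwitz rule (α = 0.3) pick the safe A2, the optimistic maximax rule picks A3 for its 90, and Savage's regret rule also favors A3, whose worst regret is smallest.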

Thus, when making a managerial decision, it is in general necessary to:

predict future conditions, such as demand levels;

develop a list of possible alternatives;

evaluate the payback of all alternatives;

determine the probability of each condition;

evaluate alternatives according to the chosen decision criterion.

The direct application of criteria in making a managerial decision under conditions of uncertainty is considered in the practical part of this work.


Kahneman D., Slovic P., Tversky A. Decision Making under Uncertainty: Rules and Biases

I had wanted this book for a long time... I first learned of the work of Nobel laureate Daniel Kahneman from Nassim Taleb's Fooled by Randomness. Taleb quotes Kahneman extensively and with relish, not only there but also in his other books (The Black Swan, On the Secrets of Resilience). I also found numerous references to Kahneman in Evgeny Ksenchuk's Systems Thinking: The Limits of Mental Models and a Systemic Vision of the World and in Leonard Mlodinow's The Drunkard's Walk: How Randomness Rules Our Lives. Unfortunately, I could not find Kahneman's book on paper, so I "had" to buy an e-reader and download Kahneman from the Internet... and believe me, I did not regret it for a minute...

D. Kahneman, P. Slovic, A. Tversky. Decision Making under Uncertainty: Rules and Biases. Kharkov: "Humanitarian Center" Publishing House, Institute of Applied Psychology, 2005. 632 pp.

The book deals with the peculiarities of people's thinking and behavior when assessing and predicting uncertain events. As it convincingly shows, when making decisions under uncertainty people are usually wrong, sometimes quite significantly, even if they have studied probability theory and statistics. These errors follow certain psychological patterns that researchers have identified and experimentally substantiated.

Since the incorporation of Bayesian ideas into psychological research, psychologists have for the first time been offered a coherent and well-articulated model of optimal behavior under uncertainty against which human decision making can be compared. The conformity of decision making to normative models has become one of the main research paradigms in the field of judgment under uncertainty.

Part I. Introduction

Chapter 1. Decision Making under Uncertainty: Rules and Biases

How do people estimate the probability of an uncertain event or the value of an uncertain quantity? They rely on a limited number of heuristic principles that reduce the complex problems of estimating probabilities and predicting values to simpler judgmental operations. These heuristics are very useful, but sometimes they lead to serious and systematic errors.

The subjective assessment of probability is similar to the subjective assessment of physical quantities such as distance or size.

Representativeness. What is the probability that process B will lead to event A? In answering, people usually rely on the representativeness heuristic, in which the probability is determined by the degree to which A is representative of B, that is, the degree to which A resembles B. Consider the description of a man by his former neighbor: "Steve is very reserved and shy, always ready to help me, but too little interested in other people and in reality in general. He is very meek and tidy, loves order, and has a passion for detail." How do people assess the probability of Steve's profession (for example, farmer, salesman, airline pilot, librarian, or physician)?

In the representativeness heuristic, the probability that Steve is a librarian, for example, is assessed by the degree to which he is representative of, or conforms to, the stereotype of a librarian. This approach to estimating probability leads to serious errors, because similarity, or representativeness, is not affected by several factors that should influence the estimate of probability.

Insensitivity to the prior probability of the outcome. One of the factors that does not affect representativeness but does significantly affect probability is the prior probability, or base-rate frequency, of the outcomes. In Steve's case, for example, the fact that there are many more farmers than librarians in the population should enter into any reasonable estimate of the probability that Steve is a librarian rather than a farmer. Taking base-rate frequency into account, however, does not really change Steve's conformity to the librarian/farmer stereotype. If people estimate probability by representativeness, they will therefore neglect prior probabilities.

This hypothesis was tested in an experiment in which prior probabilities were varied. Subjects were shown brief descriptions of several people, allegedly drawn at random from a group of 100 professionals consisting of engineers and lawyers. For each description, subjects were asked to assess the probability that it belonged to an engineer rather than a lawyer. In one experimental condition, subjects were told that the group consisted of 70 engineers and 30 lawyers; in the other, of 30 engineers and 70 lawyers. The odds that any given description belongs to an engineer rather than a lawyer should be higher in the first condition, where engineers are the majority, than in the second, where lawyers are. Specifically, by Bayes' rule, the ratio of these odds across the two conditions should be (0.7/0.3)^2, or about 5.44, for every description. In gross violation of Bayes' rule, the subjects produced essentially the same probability estimates in both conditions. Apparently, they judged the likelihood that a description belonged to an engineer rather than a lawyer by the degree to which it was representative of the two stereotypes, with little if any regard for the prior probabilities of the categories.
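The Bayes'-rule arithmetic behind this experiment can be reproduced directly. The likelihood ratio below is an arbitrary placeholder for some description, since it cancels out of the comparison between the two conditions:

```python
# Sketch (not from the source): the odds comparison in the engineer/lawyer
# experiment. Posterior odds = likelihood ratio * prior odds; the likelihood
# ratio of a given description is unknown but identical in both conditions.

likelihood_ratio = 2.0  # arbitrary placeholder for some description

# Condition 1: 70 engineers, 30 lawyers; condition 2: 30 engineers, 70 lawyers.
odds_cond1 = likelihood_ratio * (70 / 30)
odds_cond2 = likelihood_ratio * (30 / 70)

# Whatever the description, the two conditions should differ by the
# factor (0.7/0.3)**2, about 5.44 -- which the subjects ignored.
print(round(odds_cond1 / odds_cond2, 2))
```

Because the description's likelihood ratio cancels, the 5.44 factor holds for every description, which is why identical answers in the two conditions violate Bayes' rule.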

Insensitivity to sample size. People usually apply the representativeness heuristic: they estimate the probability of an outcome in a sample by the extent to which the outcome resembles the corresponding population parameter. Since the similarity of a sample statistic to the population parameter does not depend on the size of the sample, a probability judged by representativeness will be essentially independent of sample size. By contrast, according to sampling theory, the larger the sample, the smaller the expected deviation from the mean. This fundamental notion of statistics is evidently not part of people's intuition.

Imagine an urn filled with balls, 2/3 of one color and 1/3 of another. One person draws 5 balls from the urn and finds that 4 are red and 1 is white. Another person draws 20 balls and finds that 12 are red and 8 are white. Which of the two should be more confident in saying that the urn contains 2/3 red balls and 1/3 white balls, rather than the reverse? In this example, the correct answer is posterior odds of 8 to 1 for the sample of 5 balls and 16 to 1 for the sample of 20 (Fig. 1). Yet most people feel that the first sample provides much stronger support for the hypothesis that the urn is mostly red, because the proportion of red balls is higher in the first sample. This again shows that intuitive estimates are dominated by the sample proportion rather than by the sample size, which plays the decisive role in determining the real posterior odds.

Fig. 1. Probabilities in the balls problem (for the formulas, see the "Balls" sheet of the accompanying Excel file)
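The posterior odds quoted for the two samples follow from Bayes' rule with equal priors on the two hypotheses (mostly red vs. mostly white), in which case the posterior odds equal the likelihood ratio. A minimal sketch:

```python
# Sketch: posterior odds for the urn problem, assuming equal priors on
# the hypotheses "2/3 red" and "2/3 white".

def posterior_odds(red, white, p=2/3):
    """Odds of 'mostly red' vs 'mostly white' after drawing the sample."""
    likelihood_red = p ** red * (1 - p) ** white        # if urn is 2/3 red
    likelihood_white = (1 - p) ** red * p ** white      # if urn is 2/3 white
    return likelihood_red / likelihood_white

print(round(posterior_odds(4, 1)))    # sample of 5:  odds 8 to 1
print(round(posterior_odds(12, 8)))   # sample of 20: odds 16 to 1
```

The binomial coefficients cancel in the ratio, which reduces to 2^(red - white); the larger sample's 4-ball surplus of red over white beats the smaller sample's 3-ball surplus, despite its less extreme proportion.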

Misconceptions of chance. People expect that a sequence of events generated by a random process will represent the essential characteristics of that process even when the sequence is short. In considering coin tosses, for example, people regard the sequence H-T-H-T-T-H as more likely than H-H-H-T-T-T, which does not appear random, and also as more likely than H-H-H-H-T-H, which does not reflect the fairness of the coin. Thus, people expect the essential characteristics of the process to be represented not only globally, i.e., in the complete sequence, but also locally, in each of its parts. A locally representative sequence, however, deviates systematically from chance expectation: it contains too many alternations and too few runs.

Another consequence of the belief in representativeness is the well-known gambler's fallacy. After observing a long run of red on the roulette wheel, for example, most people erroneously believe that black is now due, because an occurrence of black would complete a more representative sequence than yet another red. Chance is commonly viewed as a self-correcting process in which a deviation in one direction induces a deviation in the opposite direction to restore the balance. In fact, deviations are not corrected but merely diluted as the random process unfolds.

Research has shown a strong belief in what may be called the law of small numbers, according to which even small samples are highly representative of the populations from which they are drawn. The results reflected the expectation that a hypothesis that is valid for the entire population will show up as a statistically significant result in a sample, with sample size being irrelevant. As a consequence, researchers put too much faith in the results of small samples and grossly overestimate the replicability of such results. In actual research, this bias leads to inadequate sampling and to overinterpretation of findings.

Insensitivity to forecast reliability. People are sometimes required to make numerical predictions, such as the future price of a stock, the demand for a product, or the outcome of a football game. Such predictions are often made by representativeness. For example, suppose someone is given a description of a company and asked to predict its future profit. If the description is very favorable, very high profits will seem most representative of it; if it is mediocre, an ordinary course of events will seem most representative. How favorable a description is, however, does not depend on its reliability or on the degree to which it permits accurate prediction. Hence, if people predict solely on the basis of the favorableness of the description, their predictions will be insensitive to its reliability and to the expected accuracy of the prediction. This mode of judgment violates normative statistical theory, in which the extremeness and range of predictions depend on predictability: when predictability is zero, the same prediction should be made in all cases.

Illusion of validity. People are quite confident in predicting that a person is a librarian when given a personality description that fits the librarian stereotype, even if it is sparse, unreliable, or outdated. The unreasonable confidence that results from a good match between the predicted outcome and the input data can be called the illusion of validity.

Misconceptions of regression. Suppose a large group of children was tested using two similar versions of an ability test. If one selects ten children from among those who did best on one of the two versions, their performance on the second version will usually prove disappointing. These observations illustrate a general phenomenon known as regression to the mean, discovered by Galton more than 100 years ago. In everyday life we all encounter many instances of regression to the mean, for example when comparing the heights of fathers and sons. Nevertheless, people do not develop correct intuitions about it. First, they do not expect regression in many contexts where it is bound to occur. Second, when they do recognize that a regression has occurred, they often invent spurious causal explanations for it.

Failure to recognize the meaning of regression can be detrimental. When discussing training flights, experienced instructors noted that praise for an exceptionally soft landing is usually followed by a worse landing on the next attempt, while harsh criticism after a hard landing is usually followed by an improvement in the next attempt. The instructors concluded that verbal rewards are detrimental to learning while reprimands are beneficial, contrary to accepted psychological doctrine. This conclusion is invalid due to the presence of regression to the mean. Thus, failure to understand the regression effect leads to overestimating the effectiveness of punishment and underestimating the effectiveness of rewards.
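Regression to the mean in the two-test setting is easy to demonstrate by simulation. The score model below (stable ability plus independent test noise, with arbitrary parameters) is my assumption, not the book's:

```python
# Illustrative simulation (not from the book): regression to the mean.
# Each child's score = stable ability + independent test noise, so the
# top scorers on version 1 are partly "lucky" and fall back on version 2.

import random

random.seed(42)

N = 1000
ability = [random.gauss(100, 10) for _ in range(N)]
test1 = [a + random.gauss(0, 10) for a in ability]
test2 = [a + random.gauss(0, 10) for a in ability]

# Select the ten best performers on the first version of the test.
top10 = sorted(range(N), key=lambda i: test1[i], reverse=True)[:10]

mean1 = sum(test1[i] for i in top10) / 10
mean2 = sum(test2[i] for i in top10) / 10
print(round(mean1, 1), round(mean2, 1))  # the second mean is lower
```

No praise or blame intervenes between the two tests; the drop is produced entirely by selecting on a noisy score, which is the instructors' inferential error in the flight-training example.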

Availability. People rate the frequency of a class or the likelihood of events based on the ease with which they recall examples of cases or events. When the size of a class is estimated based on the availability of its elements, a class whose elements are easily retrieved from memory will appear more numerous than a class of the same size, but whose elements are less accessible and less easily recalled.

Subjects were read a list of famous people of both sexes and were then asked to judge whether the list contained more names of men or of women. Different lists were presented to different groups of subjects; in some, the men were more famous than the women, and in others the women were more famous than the men. For each list, the subjects erroneously judged the sex that had the more famous personalities to be the more numerous.

The ability to imagine images plays an important role in assessing the probabilities of real life situations. The risk involved in a dangerous expedition, for example, is assessed by mentally re-enacting contingencies that the expedition does not have sufficient equipment to overcome. If many of these difficulties are vividly depicted, the expedition may seem extremely dangerous, although the ease with which disasters are imagined does not necessarily reflect their actual likelihood. Conversely, if the potential danger is hard to imagine, or simply does not come to mind, the risk associated with any event may be grossly underestimated.

illusory relationship. Long experience has taught us that, in general, elements of large classes are remembered better and faster than elements of less frequent classes; that more probable events are easier to imagine than unlikely ones; and that associative links between events are strengthened when events often occur simultaneously. As a result, a person has at his disposal a procedure ( availability heuristic) to estimate class size. The probability of an event, or the frequency with which events can occur simultaneously, is measured by the ease with which the corresponding mental processes of recall, recall, or association can be performed. However, these estimation procedures systematically lead to errors.

Adjustment and "binding" (anchoring). In many situations, people make estimates based on an initial value. Two groups of students high school evaluated, for 5 seconds, the value of the numerical expression that was written on the board. One group evaluated the value of the expression 8x7x6x5x4x3x2x1, while the other group evaluated the value of the expression 1x2x3x4x5x6x7x8. The average score for the ascending sequence was 512, while the average score for the descending sequence was 2250. The correct answer is 40,320 for both sequences.

Bias in the evaluation of complex events is especially significant in the context of planning. The successful completion of a business venture, such as the development of a new product, is usually complex: in order for the enterprise to succeed, each event in a series must occur. Even if each of these events is highly likely, the overall success rate can be quite low if the number of events is large. The general tendency to overestimate the likelihood of conjunctive 3 events leads to unreasonable optimism in estimating the likelihood that the plan will succeed, or that the project will be completed on time. Conversely, disjunctive 4 event structures are commonly encountered in risk assessment. complex system such as nuclear reactor or the human body, will be damaged if any of its essential components fail. Even when the probability of failure in each component is small, the probability of failure of the entire system can be high if many components are involved. Because of the "tie-in" bias, people tend to underestimate the likelihood of being denied complex systems. Thus, the binding bias can sometimes depend on the structure of the event. The structure of an event or phenomenon similar to a chain of links leads to an overestimation of the probability of this event, the structure of an event similar to a funnel, consisting of disjunctive links, leads to an underestimation of the probability of an event.

"Binding" when estimating the subjective probability distribution. In decision analysis, experts are often required to express their opinion on a quantity. For example, an expert may be asked to select a number, X 90, such that the subjective probability that this number will be higher than the Dow Jones average value is 0.90.

An expert is considered properly calibrated in a certain set of problems if only 2% of the correct values ​​of the estimated values ​​are below the given values. Thus, the true values ​​must strictly fall within the interval between X 01 and X 99 in 98% of the problems.

Confidence in heuristics and the prevalence of stereotypes are characteristic not only of ordinary people. Experienced researchers are also prone to the same biases - when they think intuitively. The inability of people to deduce such fundamental statistical rules as regression to the mean or the effect of sample size is surprising. While we all encounter numerous situations throughout our lives to which these rules can apply, very few discover the principles of sampling and regression from experience on their own. Statistical principles are not learned on the basis of everyday experience.

PartIIRepresentativeness

Daniel Kahneman (born March 5, 1934, Tel Aviv) is an Israeli-American psychologist, one of the founders of behavioral economics and behavioral finance, fields that combine economics and cognitive science to explain the irrationality of people's attitudes toward risk in decision making and in managing their behavior.

He is known for his work, with Amos Tversky and others, on establishing a cognitive basis for common human errors in the use of heuristics, and for developing prospect theory. He won the Nobel Memorial Prize in Economics in 2002 "for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty" (shared with Vernon Smith), even though his research was conducted as a psychologist rather than as an economist.

Kahneman was born in Tel Aviv, spent his childhood in Paris, and moved to Palestine in 1946. He received a bachelor's degree in mathematics and psychology from the Hebrew University of Jerusalem in 1954, after which he served in the Israel Defense Forces, mainly in its psychology department. The unit in which he served was engaged in the selection and testing of recruits, and Kahneman designed its personality assessment interview.

After his discharge from the army, Kahneman returned to the Hebrew University, where he took courses in logic and the philosophy of science. In 1958 he moved to the United States of America and received his Ph.D. in psychology from the University of California, Berkeley in 1961.

Beginning in 1969, he collaborated with Amos Tversky, who, at Kahneman's invitation, lectured at the Hebrew University on estimating the probability of events.

He currently works at Princeton University, as well as at the Hebrew University, and serves on the editorial board of the journal Economics and Philosophy. Kahneman has never claimed that he alone created psychological economics; he has pointed out that everything he achieved in this field, he and Tversky achieved together with their co-authors Richard Thaler and Jack Knetsch.

Kahneman is married to Anne Treisman, a renowned researcher of attention and memory.

Books (2)

Making Decisions Under Uncertainty

Decision making in uncertainty: Rules and biases.

Decision Making under Uncertainty: Rules and Biases is a fundamental work on the psychology of decision making.

References to individual works by these authors are quite common in the academic literature, but a complete collection of these articles in Russian is published here for the first time. The release of this book is certainly an important event for professionals in management, strategic planning, decision making, and consumer behavior.

The book will interest specialists in management, economics, and psychology, both theorists and practitioners, who deal with such a complex and interesting area of human activity as decision making.


Kahneman D., Slovic P., Tversky A. Decision Making under Uncertainty: Rules and Biases

I had been craving this book for a long time. I first learned about Nobel laureate Daniel Kahneman's work from Nassim Taleb's book Fooled by Randomness. Taleb quotes Kahneman extensively and with relish, and, as I found out later, not only in that book but also in his others (The Black Swan, On the Secrets of Stability). I also found numerous references to Kahneman in Evgeny Ksenchuk's Systems Thinking: Limits of Mental Models and a Systemic Vision of the World, and in Leonard Mlodinow's The Drunkard's Walk: How Randomness Rules Our Lives. Unfortunately, I could not find Kahneman's book in paper form, so I "had" to settle for an electronic copy downloaded from the Internet. And believe me, I did not regret it for a single minute.

D. Kahneman, P. Slovic, A. Tversky. Decision Making under Uncertainty: Rules and Biases. Kharkiv: Institute of Applied Psychology "Humanitarian Center" Publishing House, p.

The book deals with the peculiarities of people's thinking and behavior in assessing and predicting uncertain events. As the book convincingly shows, when making decisions under uncertainty, people are usually wrong, sometimes quite significantly, even if they have studied probability theory and statistics. These errors follow certain psychological patterns that researchers have identified and supported with solid experimental evidence.

Since Bayesian ideas were incorporated into psychological research, psychologists have for the first time been offered a coherent and well-articulated model of optimal behavior under uncertainty against which human decision making can be compared. The conformity of decision making to normative models has become one of the main research paradigms in the field of judgment under uncertainty.

Part I. Introduction

Chapter 1. Decision Making under Uncertainty: Rules and Biases

How do people estimate the probability of an uncertain event or the value of an uncertain quantity? They rely on a limited number of heuristic [1] principles that reduce the complex tasks of estimating probabilities and predicting values to simpler judgment operations. Heuristics are very useful, but sometimes they lead to serious and systematic errors.

[1] A heuristic is knowledge gained through experience accumulated in some activity or in solving practical problems. Keep this meaning firmly in mind, as "heuristic" is perhaps the most frequently used word in the book.

The subjective assessment of probability is similar to the subjective assessment of physical quantities such as distance or size.

Representativeness. What is the probability that process B will lead to event A? In answering, people usually rely on the representativeness heuristic, in which the probability is determined by the degree to which A is representative of B, that is, the degree to which A resembles B. Consider the description of a man by his former neighbor: "Steve is very withdrawn and shy, always ready to help me, but too little interested in other people and in reality in general. He is very meek and tidy, loves order, and has a passion for detail." How do people assess the probability of Steve's profession (for example, farmer, salesman, airline pilot, librarian, or doctor)? Under the representativeness heuristic, the probability that Steve is, say, a librarian is judged by the degree to which he is representative of, or conforms to, the stereotype of a librarian. This approach to estimating probability leads to serious errors, because similarity, or representativeness, is not influenced by the factors that should influence judgments of probability.

Insensitivity to the prior probability of outcomes. One of the factors that has no effect on representativeness but a significant effect on probability is the prior (a priori) probability, or base-rate frequency, of the outcomes. In Steve's case, for example, the fact that there are many more farmers than librarians in the population should enter into any reasonable estimate of the probability that Steve is a librarian rather than a farmer. Base-rate frequencies, however, have no bearing on Steve's conformity to the stereotype of librarians and farmers. If people estimate probability by representativeness, then they will neglect prior probabilities.
This hypothesis was tested in an experiment in which prior probabilities were manipulated. Subjects were shown brief personality descriptions of several people, allegedly sampled at random from a group of 100 professional engineers and lawyers. The subjects were asked to assess, for each description, the probability that it belonged to an engineer rather than a lawyer. In one experimental condition, subjects were told that the group from which the descriptions were drawn consisted of 70 engineers and 30 lawyers; in the other condition, that it consisted of 30 engineers and 70 lawyers. The odds that any particular description belongs to an engineer rather than a lawyer should be higher in the first condition, where engineers are in the majority, than in the second, where lawyers are in the majority. Specifically, it follows from Bayes' rule that the ratio of these odds should be (0.7/0.3)² = 5.44 for each description. In sharp violation of Bayes' rule, the subjects in the two conditions produced essentially the same probability estimates. Apparently, the participants evaluated the likelihood that a particular description belonged to an engineer rather than a lawyer by the degree to which the description was representative of the two stereotypes, with little or no regard for the prior probabilities of the categories.

Insensitivity to sample size. People usually apply the representativeness heuristic: they assess the probability of obtaining a particular result in a sample by the degree to which that result resembles the corresponding population parameter. But the similarity of a sample statistic to the population parameter does not depend on the size of the sample. Consequently, if probability is judged by representativeness, the judged probability of a sample statistic will be essentially independent of sample size. By contrast, according to sampling theory, the larger the sample, the smaller the expected deviation from the mean.
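Returning to the engineer/lawyer experiment, the Bayes'-rule odds ratio quoted above can be checked with a short script (Python is used here purely for illustration):

```python
# Ratio of posterior odds between the 70/30 and 30/70 conditions.
# By Bayes' rule, posterior odds = likelihood ratio * prior odds.
# The likelihood ratio for a given description is the same in both
# conditions, so the ratio of the two posterior odds reduces to the
# ratio of the prior odds: (0.7/0.3) / (0.3/0.7) = (0.7/0.3)**2.

odds_70_30 = 0.7 / 0.3   # prior odds of "engineer" when engineers are 70 of 100
odds_30_70 = 0.3 / 0.7   # prior odds of "engineer" when engineers are 30 of 100

ratio = odds_70_30 / odds_30_70
print(round(ratio, 2))   # -> 5.44
```

A normatively calibrated subject would therefore give answers about 5.44 times more extreme (in odds) in the first condition than in the second; the subjects gave essentially identical answers.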
This fundamental notion of statistics is evidently not part of people's intuition. Imagine a basket filled with balls, 2/3 of one color and 1/3 of another. One person draws 5 balls from the basket and finds that 4 of them are red and 1 is white. Another person draws 20 balls and finds that 12 of them are red and 8 are white. Which of the two should feel more confident that the basket contains 2/3 red balls and 1/3 white balls, rather than the reverse? The correct answer is that the posterior odds are 8 to 1 for the sample of 5 balls and 16 to 1 for the sample of 20 balls (Fig. 1). Most people, however, feel that the first sample provides much stronger support for the hypothesis that the basket is predominantly red, because the proportion of red balls is larger in the first sample than in the second. This shows again that intuitive judgments are dominated by the sample proportion rather than by the sample size, which plays a decisive role in determining the actual posterior odds.

Fig. 1. Probabilities in the problem with the balls (for the formulas, see the "Balls" sheet of the Excel file)

Misconceptions of chance. People assume that a sequence of events generated by a random process will represent the essential characteristics of that process even when the sequence is short. In considering tosses of a coin for heads (H) or tails (T), for example, people regard the sequence H-T-H-T-T-H as more likely than the sequence H-H-H-T-T-T, which does not appear random, and also as more likely than the sequence H-H-H-H-T-H, which does not reflect the fairness of the coin. Thus, people expect the essential characteristics of the process to be represented not only globally, i.e. in the entire sequence, but also locally, in each of its parts. A locally representative sequence, however, deviates systematically from chance expectation: it contains too many alternations and too few runs. [2]

Another consequence of the belief in representativeness is the well-known gambler's fallacy. After observing a long run of red on the roulette wheel, for example, most people mistakenly believe that black is now due, because a black would complete a more representative sequence than yet another red. Chance is commonly viewed as a self-correcting process in which a deviation in one direction induces a deviation in the opposite direction to restore the equilibrium. In fact, deviations are not corrected; they are merely "diluted" as the random process unfolds.

Experienced researchers themselves have shown a strong belief in what may be called the law of small numbers, according to which even small samples are highly representative of the populations from which they are drawn. Their responses reflected the expectation that a hypothesis that is valid for the entire population will show up as a statistically significant result in a sample, with sample size irrelevant. As a consequence, researchers put too much faith in the results of small samples and grossly overestimate the replicability of such results. In actual research, this bias leads to the selection of samples of inadequate size and to overinterpretation of findings.

Insensitivity to predictability. People are sometimes called upon to make numerical predictions, such as the future price of a stock, the demand for a product, or the outcome of a football game. Such predictions are made by representativeness. For example, suppose someone is given a description of a company and is asked to predict its future profit. If the description is very favorable, very high profits will appear most representative of that description; if the description is mediocre, an ordinary performance will appear most representative. The degree to which a description is favorable does not depend on its reliability or on the degree to which it permits accurate prediction. Hence, if people predict solely in terms of the favorableness of the description, their predictions will be insensitive to the reliability of the evidence and to the expected accuracy of the prediction. This mode of judgment violates normative statistical theory, in which the extremeness and range of predictions depend on predictability. When predictability is nil, the same prediction should be made in all cases.

[2] If you toss a coin 1,000 times, how many runs of 10 heads will occur on average? That's right, about one: the expected number of such events is approximately 1000 / 2¹⁰ ≈ 0.98. If interested, you can study the model on the "Coin" sheet of the Excel file.
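The "Balls" and "Coin" models mentioned above accompany the text as an Excel file; a minimal Python reconstruction of the same arithmetic might look like this (the formulas, not the spreadsheet, are what the source provides):

```python
# Posterior odds for "the basket is 2/3 red" vs "the basket is 2/3 white",
# assuming equal prior odds. The likelihood ratio is
#   ((2/3)**red * (1/3)**white) / ((1/3)**red * (2/3)**white)
# which simplifies to 2**(red - white).

def posterior_odds(red: int, white: int) -> int:
    return 2 ** (red - white)

print(posterior_odds(4, 1))    # sample of 5 balls  -> 8  (8 to 1)
print(posterior_odds(12, 8))   # sample of 20 balls -> 16 (16 to 1)

# Rough expected number of runs of 10 heads in 1,000 coin tosses,
# using the estimate from footnote [2]:
print(1000 / 2**10)            # -> 0.9765625, i.e. about one
```

Note how the larger sample yields the stronger evidence (16 to 1) even though its proportion of red balls is smaller.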

The illusion of validity. People express great confidence in the prediction that a person is a librarian when given a personality description that matches the librarian stereotype, even if the description is scanty, unreliable, or outdated. The unwarranted confidence produced by a good fit between the predicted outcome and the input information may be called the illusion of validity.

Misconceptions of regression. Suppose a large group of children is examined on two equivalent versions of an aptitude test. If one selects ten children from among those who did best on one of the two versions, he will usually find their performance on the second version disappointing. These observations illustrate a general phenomenon known as regression toward the mean, which was discovered by Galton more than 100 years ago. In the normal course of life we encounter many instances of regression toward the mean, for example when comparing the heights of fathers and sons. Nevertheless, people do not develop correct intuitions about it. First, they do not expect regression in many contexts where it is bound to occur. Second, when they recognize that a regression has occurred, they often invent spurious causal explanations for it.

Failure to recognize the import of regression can have pernicious consequences. In a discussion of training flights, experienced instructors noted that praise for an exceptionally smooth landing is typically followed by a poorer landing on the next try, while harsh criticism after a rough landing is usually followed by an improvement on the next try. The instructors concluded that verbal rewards are detrimental to learning while reprimands are beneficial, contrary to accepted psychological doctrine. This conclusion is unwarranted because of the presence of regression toward the mean. Thus, failure to understand the effect of regression leads one to overestimate the effectiveness of punishment and to underestimate the effectiveness of reward.

Availability. People assess the frequency of a class or the probability of an event by the ease with which instances or occurrences can be brought to mind. When the size of a class is judged by the availability of its instances, a class whose instances are easily retrieved from memory will appear more numerous than a class of equal size whose instances are less accessible and less easily recalled.

Subjects heard a list of well-known personalities of both sexes and were subsequently asked to judge whether the list contained more names of men than of women. Different lists were presented to different groups of subjects. In some lists the men were more famous than the women, and in others the women were more famous than the men. For each of the lists, the subjects erroneously judged that the class (in this case, the sex) that had the more famous personalities was the more numerous.

The ability to imagine scenarios plays an important role in the evaluation of probabilities in real-life situations. The risk involved in a dangerous expedition, for example, is evaluated by mentally rehearsing contingencies with which the expedition is not equipped to cope. If many such difficulties are vividly portrayed, the expedition can be made to appear exceedingly dangerous, although the ease with which disasters are imagined need not reflect their actual likelihood. Conversely, if a potential danger is hard to conceive of, or simply does not come to mind, the risk of an event may be grossly underestimated.

Illusory correlation. Lifelong experience has taught us that, in general, instances of large classes are recalled better and faster than instances of less frequent classes; that likely occurrences are easier to imagine than unlikely ones; and that the associative connections between events are strengthened when the events frequently co-occur. As a result, a person has at his disposal a procedure (the availability heuristic) for estimating the size of a class, the probability of an event, or the frequency with which events co-occur, by the ease with which the relevant mental operations of retrieval, construction, or association can be performed. However, these estimation procedures systematically lead to errors.

Adjustment and anchoring. In many situations, people make estimates by starting from an initial value. Two groups of high school students estimated, within 5 seconds, the value of a numerical expression written on the blackboard. One group estimated the product 8×7×6×5×4×3×2×1, while the other group estimated the product 1×2×3×4×5×6×7×8. The average estimate for the ascending sequence was 512, while the average estimate for the descending sequence was 2,250. The correct answer, 40,320, is the same for both sequences.

Biases in the evaluation of compound events are particularly significant in the context of planning. The successful completion of an undertaking, such as the development of a new product, typically has a conjunctive character: for the undertaking to succeed, each of a series of events must occur. Even when each of these events is very likely, the overall probability of success can be quite low if the number of events is large. The general tendency to overestimate the probability of conjunctive [3] events leads to unwarranted optimism in evaluating the likelihood that a plan will succeed or that a project will be completed on time. Conversely, disjunctive [4] structures are typically encountered in the evaluation of risks. A complex system, such as a nuclear reactor or the human body, will malfunction if any of its essential components fails. Even when the probability of failure of each component is small, the probability of failure of the whole system can be high if many components are involved. Because of anchoring, people tend to underestimate the probability of failure in complex systems. Thus, the direction of the anchoring bias can depend on the structure of the event: a chain-like structure of conjunctive links leads to overestimation of an event's probability, while a funnel-like structure of disjunctive links leads to underestimation.
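The conjunctive/disjunctive asymmetry is easy to quantify. In this sketch the step count and per-step probability (10 steps at 0.95 each) are arbitrary illustrative choices, not figures from the book:

```python
# Conjunctive event: a plan succeeds only if all n steps succeed.
# Disjunctive event: a system fails if any one of n components fails.
# n = 10 and p = 0.95 are illustrative values only.

n, p = 10, 0.95

p_plan_succeeds = p ** n        # conjunctive: all steps must succeed
p_system_fails = 1 - p ** n     # disjunctive: at least one component fails

print(round(p_plan_succeeds, 3))   # -> 0.599
print(round(p_system_fails, 3))    # -> 0.401
```

Ten steps that are each 95% certain leave the overall plan with barely better-than-even odds, and a system of ten such components fails about 40% of the time, which is exactly the pattern the text describes people misjudging in both directions.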
Anchoring in the assessment of subjective probability distributions. In decision analysis, experts are often asked to express their beliefs about a quantity. For example, an expert may be asked to select a number X90 such that his subjective probability that this number will be higher than the value of the Dow Jones average is 0.90. An expert is considered properly calibrated on a given set of problems if, say, only 2% of the true values of the assessed quantities fall outside his stated bounds; that is, the true values should fall between X01 and X99 in 98% of the problems.

Reliance on heuristics and the prevalence of biases are not restricted to laymen. Experienced researchers are prone to the same biases when they think intuitively. The failure of people to infer from lifelong experience such fundamental statistical rules as regression toward the mean or the effect of sample size is striking. Although we all encounter, throughout our lives, numerous situations to which these rules could apply, very few people discover the principles of sampling and regression on their own. Statistical principles are not learned from everyday experience.

[3] A conjunctive judgment consists of several simple judgments connected by the logical connective "and": for a conjunctive event to occur, all of its component events must occur.

[4] A disjunctive judgment consists of several simple judgments connected by the logical connective "or": for a disjunctive event to occur, at least one of its component events must occur.

Part II. Representativeness

Chapter 2. Belief in the Law of Small Numbers

Suppose you have run an experiment on 20 subjects and obtained a significant result. You now have grounds for running the experiment on an additional group of 10 subjects. What do you think the probability is that the results would be significant if the test were run separately on this group? Most psychologists have an exaggerated belief in the likelihood of successfully replicating an obtained result. The questions addressed in this part of the book concern the sources of such confidence and its consequences for the conduct of scientific research. Our thesis is that people have strong intuitions about random sampling; that these intuitions are fundamentally wrong; that they are shared by naive subjects and trained scientists alike; and that their application in the course of scientific research has unfortunate consequences.

We submit for discussion the thesis that people regard a sample randomly drawn from a population as highly representative, that is, similar to the population in all essential characteristics. Consequently, they expect any two samples drawn from a particular population to be more similar to one another and to the population than sampling theory predicts, at least for small samples.

The heart of the gambler's fallacy is a misconception about the fairness of the laws of chance, and the error is not restricted to gamblers. Consider the following example. The mean IQ of the population of eighth graders is 100. You have selected a random sample of 50 children for a study of educational achievement. The first child tested has an IQ of 150. What do you expect the mean IQ to be for the whole sample? The correct answer is 101. A surprisingly large number of people believe that the expected mean IQ for the sample is still 100. This expectation can be justified only by the notion that a random process is self-correcting. Statements such as "errors cancel each other out" reflect people's image of an active self-correcting process in random events. Some familiar processes in nature do obey such laws: a deviation from a stable equilibrium produces a force that restores the equilibrium. The laws of chance, by contrast, do not work that way: deviations are not canceled as sampling proceeds, they are merely diluted.

Thus far, we have attempted to describe two related kinds of biases in judgments of odds.
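The expected sample mean in the IQ example above can be verified directly: the one observed child contributes 150, and the remaining 49 children are expected to average the population mean of 100.

```python
# Expected mean IQ of a sample of 50, given that the first child
# scored 150 and the remaining 49 are expected to average 100
# (the population mean). The high first score is not "canceled out";
# it is merely diluted across the sample.

expected_mean = (150 + 49 * 100) / 50
print(expected_mean)   # -> 101.0
```

Believing the answer is 100 amounts to expecting the next 49 children to average slightly below 100 to "compensate", which is precisely the self-correction fallacy the text describes.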
We proposed a representativeness hypothesis, according to which people believe samples to be very similar to one another and to the populations from which they are drawn. We also suggested that people believe sampling to be a self-correcting process. The two beliefs lead to the same consequences. The law of large numbers guarantees that very large samples will indeed be highly representative of the population from which they are drawn. People's intuitions about random sampling appear to obey the law of small numbers, which asserts that the law of large numbers applies to small numbers as well.

A believer in the law of small numbers conducts his scientific work roughly as follows: he gambles his research hypotheses on small samples without realizing that the odds against him are unreasonably high; he overestimates statistical power; and he rarely attributes a deviation of results from expectations to sampling variability, because he finds an "explanation" for any discrepancy. Edwards has argued that people fail to extract sufficient information or certainty from probabilistic data; our respondents, in accordance with the representativeness hypothesis, tend to extract more certainty from the data than the data actually contain.

What, then, can be done? Can the belief in the law of small numbers be abolished, or at least controlled? An obvious precaution is computation. A believer in the law of small numbers has wrong intuitions about significance levels, power, and confidence intervals. Significance levels are usually computed and reported, but power and confidence intervals are not. Explicit computations of power, in relation to some reasonable hypothesis, should be carried out before a study is conducted. Such computations often lead to the realization that there is no point in running the study unless, for example, the sample size is quadrupled.
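A power computation of the kind recommended above can be sketched with a normal approximation for a two-sample comparison. The effect size (d = 0.5) and group sizes are arbitrary illustrative choices, not figures from the book:

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power_two_sample(d: float, n_per_group: int) -> float:
    """Approximate power of a two-sided two-sample z-test at alpha = 0.05
    for standardized effect size d with n_per_group subjects per group
    (normal approximation; the tiny contribution of the far tail is ignored)."""
    z_alpha = 1.959964                          # critical z for two-sided 0.05
    z_effect = d * math.sqrt(n_per_group / 2.0) # expected z of the test statistic
    return normal_cdf(z_effect - z_alpha)

print(round(power_two_sample(0.5, 20), 2))  # small study: well under 50% power
print(round(power_two_sample(0.5, 80), 2))  # quadrupled sample: adequate power
```

With 20 subjects per group a medium-sized effect is more likely to be missed than found; quadrupling the sample halves the standard error and raises power to a respectable level, which is exactly the sort of calculation the authors say should precede a study.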
We refuse to believe that a serious researcher will knowingly accept a 0.5 risk that his well-founded research hypothesis will never be confirmed.

Chapter 3. Subjective Probability: Assessing Representativeness

We use the term "subjective probability" to denote any estimate of the probability of an event given by a subject, or inferred from his behavior. These estimates are not assumed to satisfy any axioms or consistency requirements.

We use the term "objective probability" to denote numerical values computed, on the basis of stated assumptions, according to the laws of the probability calculus. Of course, this terminology does not correspond to any particular philosophical view of probability.

Subjective probability plays an important role in our lives. Perhaps the most general conclusion from numerous studies is that people do not follow the principles of probability theory in judging the likelihood of uncertain events. This conclusion can hardly be considered surprising, because many of the laws of chance are neither intuitively obvious nor easy to apply. Less obvious, however, is the fact that the deviations of subjective from objective probability appear to be reliable, systematic, and difficult to eliminate. Apparently, people replace the laws of chance with heuristics, whose estimates are sometimes reasonable but very often are not.

In this book we examine in detail one such heuristic, called representativeness. Event A is judged more probable than event B whenever A appears more representative than B. In other words, the ordering of events by subjective probability coincides with their ordering by representativeness.

Similarity of sample to population. The notion of representativeness is best explained by examples. All the families with six children in a city were surveyed. In 72 families the boys (B) and girls (G) were born in the order G B G B B G. In how many families do you estimate the birth order was B G B B B B? The two birth sequences are about equally likely, but most people will surely agree that they are not equally representative. The determinant of representativeness described here is the preservation in the sample of the minority-majority ratio of the population.
We expect a sample that maintains this relationship to be rated as more likely than a sample that is (objectively) just as likely to occur, but where the relationship is violated. reflection of chance. For an uncertain event to be representative, it is not sufficient that it be similar to its original population. The event must also reflect the properties of the indeterminate process that gave rise to it, that is, it must appear to be random. The main characteristic of apparent randomness is the absence of systematic patterns. For example, an ordered sequence of coin flips is not representative. People view chance as unpredictable but essentially fair. They expect even short sequences of coin tosses to contain relatively equal numbers of heads and tails. In general, a representative sample is one in which the essential characteristics of the original population are represented as a whole, not only in the full sample, but also locally in each of its parts. This belief, we hypothesize, underlies the fallacies of intuition about randomness, which is presented in a wide variety of contexts. Sample distribution. When a sample is described in terms of a single statistic, such as the mean, the extent to which it is representative of the population is determined by the similarity of that statistic to the corresponding population parameter. Since the sample size does not reflect any specific feature of the original population, it is not associated with representativeness. Thus, an event in which more than 600 boys are found in a sample of 1000 infants, for example, is as representative as the discovery of more than 60 boys in a sample of 100 babies. Therefore, these two events would be assessed as equally likely, although the latter, in fact, is much more likely. Misconceptions about the role of standard size often appear in daily life. 
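The 1,000-versus-100 example can be checked directly. A minimal sketch, assuming a fair 50/50 sex ratio and independent births, so that the number of boys is binomial:

```python
from math import comb

def tail_prob(n: int, k: int, p: float = 0.5) -> float:
    """P(X > k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k + 1, n + 1))

# More than 60 boys out of 100 is far more probable
# than more than 600 boys out of 1,000.
small = tail_prob(100, 60)    # ~0.018
large = tail_prob(1000, 600)  # ~1e-10
print(small, large)
```

The same 60% proportion is a mild fluctuation in a sample of 100 but an astronomically unlikely one in a sample of 1,000, which is exactly the distinction the representativeness heuristic misses.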
On the one hand, people often take percentage results seriously without attending to the number of observations, which may be ridiculously small. On the other hand, people often remain skeptical in the face of overwhelming evidence from a large sample. The effect of sample size does not disappear despite knowledge of the correct rule and extensive training in statistics. One view holds that a person generally follows Bayes' rule but is unable to appreciate the full impact of evidence and is therefore conservative. We believe that the normative Bayesian approach to the analysis and modeling of subjective probability can be of significant value, but that in his evaluation of evidence man is apparently not a conservative Bayesian: he is not a Bayesian at all.

Chapter 4. On the psychology of prediction

In forecasting and decision making under uncertainty, people do not tend to determine the probability of an outcome or to resort to the statistical theory of prediction. Instead they rely on a limited number of heuristics, which sometimes yield correct judgments and sometimes entail serious and systematic errors. We consider the role of one of these heuristics, representativeness, in intuitive prediction. Given certain data (e.g., a brief description of a person), relevant outcomes (e.g., occupation or level of achievement) can be ranked by the degree to which they are representative of those data. We argue that people predict by representativeness: they select or order outcomes by the degree to which the outcomes reflect salient features of the input data. In many situations representative outcomes are indeed more likely than others. But this is not always so, because a number of factors (e.g., the prior probabilities of outcomes and the reliability of the data) affect the likelihood of outcomes but not their representativeness. Since people do not take these factors into account, their intuitive predictions violate the statistical rules of prediction systematically and substantially.

Category prediction: baseline, similarity, and probability. Three types of information are relevant to statistical prediction: (a) prior or background information (e.g., base rates of fields of specialization among university graduates); (b) specific information about the case at hand (e.g., a description of the personality of Tom W.); (c) the expected accuracy of the prediction (e.g., the prior probability of a correct answer). A fundamental rule of statistical prediction states that expected accuracy governs the relative weight given to specific and to prior information. As expected accuracy decreases, predictions should become more regressive, that is, closer to predictions based on prior information alone. In the case of Tom W., expected accuracy was low, and subjects should have relied on the prior probabilities. Instead they predicted by representativeness: they ranked outcomes by their similarity to the specific information, without regard to the priors.

Evidence: prior probabilities versus information about the individual. The following study is a more direct test of the hypothesis that intuitive predictions depend on representativeness and are relatively independent of prior probability. Subjects were read the following story: a panel of psychologists interviewed and administered personality tests to 30 engineers and 70 lawyers, all successful in their respective fields. On the basis of this information, brief personality descriptions were written of the 30 engineers and 70 lawyers. In your questionnaire you will find five descriptions, chosen at random from the 100 available. For each description, please indicate the probability (from 0 to 100) that the person described is an engineer. Subjects in a second group received identical instructions except for the prior probabilities: they were told that of the 100 people studied, 70 were engineers and 30 were lawyers.
Subjects in both groups were given the same descriptions. After the five descriptions, subjects encountered a null description: suppose you have no information whatsoever about a person chosen at random from the sample. The results were plotted (Fig. 2), each point corresponding to one personality description. The x-axis gives the probability of classifying a description as an engineer when the stated condition was 30% engineers in the sample; the y-axis gives the same probability when the stated condition was 70% engineers. By Bayes' rule, all the points should lie on the convex solid curve. In fact only the empty square, corresponding to the null description, lies on that line: given no description, subjects judged the probability to be 70% under the high prior and 30% under the low prior. In the remaining five cases the points lie close to the diagonal of the square (equal probabilities). For example, for the description corresponding to point A in Fig. 2, subjects estimated the probability of being an engineer at 5% under both conditions (30% and 70% prior probability).

Fig. 2. Median judged probability (of being an engineer) for five descriptions (one point per description) and for the null description (square symbol) under high and low prior probabilities. The curved solid line shows how the judgments should look according to Bayes' rule.

Thus the prior probability was ignored when information about the individual was available. Subjects applied their knowledge of the priors only when given no description at all. The strength of this effect is shown by responses to the following description: Dick is a 30-year-old man, married, with no children yet; a man of high ability and high motivation, he promises to be quite successful in his field and is well liked by his colleagues. This description was constructed to be totally uninformative about Dick's profession. Subjects in the two groups agreed: the median estimates were 50% (point B) in both conditions. The contrast between responses to this description and to the null description clarifies the situation. Evidently, people respond differently when they receive no description and when they receive a worthless one: in the first case the prior probability is used; in the second it is ignored. One of the basic principles of statistical prediction is that the prior probability, which summarizes our knowledge of the problem before we receive a specific description, remains relevant even after such a description is obtained.
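The continued relevance of the prior is easy to make concrete: the posterior odds are the prior odds multiplied by the likelihood ratio of the description. A minimal sketch; the likelihood ratio of 4:1 for a stereotypically "engineer-like" description is an assumed illustrative value, not a figure from the study:

```python
def posterior_engineer(prior_engineer: float, likelihood_ratio: float) -> float:
    """Posterior P(engineer | description) from the prior and the
    likelihood ratio P(description | engineer) / P(description | lawyer)."""
    prior_odds = prior_engineer / (1 - prior_engineer)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# The same description, under the two priors used in the experiment:
low = posterior_engineer(0.30, 4.0)   # ~0.63
high = posterior_engineer(0.70, 4.0)  # ~0.90
print(low, high)
```

A subject who ignores the prior gives the same answer in both conditions; a Bayesian answer for this description differs by almost 30 percentage points between the two groups.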
Bayes' rule translates this qualitative principle into a multiplicative relation between prior odds and the likelihood ratio. Our subjects failed to combine prior probability with specific evidence: once given a description, they relied on it no matter how uninformative or unreliable it was. The failure to appreciate the relevance of the prior once specific evidence is given is perhaps one of the most significant departures of intuition from the normative theory of prediction.

Numerical prediction. Suppose you are told that a counseling psychologist has described a first-year student as intelligent, self-confident, well-read, hard-working, and inquisitive. Consider two types of question that might be asked about this description. (A) Evaluation: what is your impression of the student's scholastic ability on the basis of this description? What percentage of descriptions of freshmen would impress you more? (B) Prediction: what grade-point average do you predict this student will obtain? What percentage of freshmen will obtain a higher grade-point average? There is an important difference between the two questions. In the first you evaluate the input; in the second you predict an outcome. Since there is more uncertainty in the second question than in the first, your prediction should be more regressive than your evaluation; that is, the percentage you give as a prediction should be closer to 50% than the percentage you give as an evaluation. The representativeness hypothesis, by contrast, states that prediction and evaluation should coincide. Several studies tested this hypothesis; the comparisons showed no significant difference in variability between the evaluation and prediction groups.

Prediction or translation. People predict by selecting the most representative outcome. The main indicator of representativeness in numerical prediction is the orderliness or internal consistency of the input. The more consistent the input, the more representative the predicted value will appear and the more confident the prediction will be. Internal variability or inconsistency in the input has been found to lower confidence in predictions. The belief that ordered profiles permit greater predictability than disordered ones proves impossible to overcome. It is worth noting, however, that this belief is incompatible with the commonly applied multivariate prediction model (the normal linear model), in which the expected accuracy of prediction is independent of within-profile variability.

Ideas about regression. Regression effects are all around us. In life, the most outstanding fathers have mediocre sons, brilliant wives have duller husbands, the ill-adjusted tend to adjust, and the fortunate eventually meet with misfortune. Despite these ubiquitous facts, people do not acquire a proper notion of regression.
First, people do not expect regression in many contexts where it is bound to occur. Second, as any teacher of statistics will attest, a proper notion of regression is extremely hard to acquire. Third, when people observe regression, they typically invent spurious dynamic explanations for it. What makes the concept of regression counterintuitive, hard to acquire and to apply? We argue that a major source of difficulty is that regression effects typically violate the intuition that the predicted outcome should be maximally representative of the input information. The expectation that every significant act of behavior is highly representative of the actor may explain why laymen and psychologists alike are perpetually surprised by the modest correlations among seemingly interchangeable measures of honesty, risk taking, aggression, and dependency.

A test problem: a randomly chosen person has an IQ of 140. Assume that an IQ score is the sum of a "true" score and a random error of measurement. Give the upper and lower 95% confidence limits for this person's true IQ; that is, name an upper bound such that you are 95% sure the true IQ is in fact lower, and a lower bound such that you are 95% sure the true IQ is in fact higher. In this problem subjects were asked to regard the observed IQ as the sum of a "true" IQ and an error component. Since the observed score is far above average, it is more probable that the error component is positive and that this person will score lower on subsequent tests. When a regression effect is observed, it is usually taken for a systematic change that demands an explanation of its own. Indeed, many spurious explanations of regression effects have been offered in the social sciences; dynamic principles have been invoked to explain why a business that is highly successful at one time tends to deteriorate thereafter.
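Classical test theory makes the expected regression explicit: with observed score = true score + error, the best estimate of the true score shrinks the deviation from the population mean by the test's reliability. A minimal sketch; the mean of 100 and reliability of 0.8 are assumed illustrative values:

```python
def true_score_estimate(observed: float, mean: float = 100.0,
                        reliability: float = 0.8) -> float:
    """Expected true score given an observed score: shrink the
    deviation from the mean by the test's reliability."""
    return mean + reliability * (observed - mean)

# An observed IQ of 140 most likely overstates the true score,
# so a 95% interval should be centered below 140, not on it.
print(true_score_estimate(140))  # 132.0
```

The asymmetry the problem probes is exactly this: the interval around an extreme observed score should be shifted toward the mean, not placed symmetrically around the observation.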
Some of these explanations would not have been offered had their authors realized that, for two variables of equal variability, the following two statements are logically equivalent: (a) Y is regressive with respect to X; (b) the correlation between Y and X is less than one. Explaining regression is therefore tantamount to explaining why a correlation is less than one.

Flight instructors had adopted a policy of consistent positive reinforcement recommended by psychologists: they verbally rewarded every successful execution of a flight maneuver. After some experience with this training approach, the instructors claimed that, contrary to psychological doctrine, high praise for the clean execution of difficult maneuvers typically leads to poorer performance on the next attempt. What should the psychologist reply? Regression is unavoidable in flight maneuvers because performance is not perfectly reliable and improvement from one attempt to the next is slow. Hence pilots who perform exceptionally well on one trial are likely to do worse on the next, regardless of how the instructors respond to the initial success. The experienced flight instructors had in fact detected the regression, but attributed it to the harmful effect of praise.

Chapter 5. Studies of representativeness

Maya Bar-Hillel, Daniel Kahneman, and Amos Tversky proposed that in estimating the likelihood of uncertain events people often resort to heuristics, or rules of thumb, that correlate weakly if at all with the variables that actually determine an event's likelihood. One such heuristic is representativeness, defined as a subjective judgment of the extent to which the event in question "is similar in essential properties to its parent population" or "reflects the salient features of the process by which it is generated." Reliance on the representativeness of an instance as a measure of its likelihood can produce two kinds of bias in judgment. First, it can give undue weight to variables that affect an event's representativeness but not its likelihood. Second, it can diminish the weight of variables that are essential to the event's probability but unrelated to its representativeness.

Consider two closed containers, each holding a mixture of red and green beads. The containers differ in size: the small one holds 10 beads, the large one 100. The percentage of red and green beads is the same in both. Sampling proceeds as follows: you blindly draw a bead from a container, note its color, and return it; you shuffle the beads, draw blindly again, and again note the color. Altogether you draw 9 times from the small container and 15 times from the large one. In which case do you have the better chance of guessing the dominant color? Given this sampling procedure (with replacement), the number of beads in the containers is entirely irrelevant from a normative standpoint, and subjects should unequivocally have preferred the larger sample of 15 draws. Instead, 72 of 110 subjects chose the smaller sample of 9 draws. This can be explained only by the fact that the ratio of sample size to population size is 90% in the latter case and only 15% in the former.

Chapter 6. Representativeness and judgments based on it

Several years ago we presented an analysis of judgment under uncertainty that related subjective probabilities and intuitive predictions to expectations and impressions of representativeness. Two distinct hypotheses entered that conception: (i) people expect samples to be similar to their parent population and to reflect the randomness of the sampling process; (ii) people often rely on representativeness as a heuristic for judgment and prediction. Representativeness is a relation between a process or model M and some instance or event X associated with that model. Like similarity, representativeness can be assessed empirically, for example by asking people to judge which of two events, X1 or X2, is more representative of some model M, or whether an event X is more representative of M1 or of M2.

The relation of representativeness can be defined for (1) a value and a distribution, (2) an event and a category, (3) a sample and a population, and (4) an effect and a cause. If reliance on representativeness leads to bias, why do people use it as a basis for predictions and estimates? First, representativeness appears to be readily accessible and easy to evaluate: it is easier for us to judge the representativeness of an event relative to a class than to assess its conditional probability. Second, probable events are usually more representative than improbable ones; for example, a sample that resembles the population is more probable than an atypical sample of the same size. Third, the belief that samples are generally representative of their parent populations leads people to overestimate the correlation between frequency and representativeness. Reliance on representativeness, however, leads to predictable errors of judgment, because representativeness has a logic of its own, which differs from the logic of probability.

A significant difference between probability and representativeness arises in the evaluation of compound events. Suppose we are given some information about a person (say, a brief personality sketch) and consider various attributes or combinations of attributes this person might possess: occupation, inclinations, or political sympathies. One of the basic laws of probability is that specification can only reduce probability. Thus the probability that a given person is both a Republican and an artist must be smaller than the probability that the person is an artist. But the requirement that P(A and B) ≤ P(B), which may be called the conjunction rule, does not apply to similarity or representativeness. A blue square, for example, can be more similar to a blue circle than to a circle, and a person can resemble our image of a Republican artist more than our image of a Republican. Since the similarity of an object to a target can be increased by adding to the target features the object possesses, similarity or representativeness can be increased by specifying the target.

People judge the likelihood of events by the degree to which those events are representative of a relevant model or process. Since the representativeness of an event can be increased by specification, a compound target can be judged more probable than one of its own components. The finding that a conjunction often appears more probable than one of its components can have far-reaching consequences. There is no reason to believe that the judgments of political analysts, jurors, judges, and physicians are immune to the conjunction effect, and it is likely to be especially pernicious in attempts to predict the future by assessing the probabilities of particular scenarios. As if gazing into a crystal ball, politicians, futurologists, and laymen alike seek an image of the future that best represents their model of how the present develops. This search leads to the construction of detailed scenarios that are internally coherent and highly representative of our model of the world. Such scenarios often appear more probable than less detailed forecasts, which are in fact more probable. As a scenario becomes more detailed, its probability can only decrease steadily, while its representativeness, and hence its apparent probability, may increase. Reliance on representativeness is, in our view, a primary reason for the unwarranted preference for detailed scenarios and for the illusory sense of insight that such constructions so often provide. Since human judgment is inseparable from the pressing problems of our lives, the conflict between the intuitive conception of probability and the logical structure of the concept demands urgent resolution.
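The conjunction rule can be verified by simple counting. A minimal sketch over a small synthetic population; the counts are assumed illustrative values:

```python
# Each person is (is_republican, is_artist) in a toy population.
population = [
    (True, True),   # one Republican artist
    (True, False), (True, False), (True, False),
    (False, True), (False, True),
    (False, False), (False, False), (False, False), (False, False),
]

n = len(population)
p_artist = sum(1 for rep, art in population if art) / n
p_rep_and_artist = sum(1 for rep, art in population if rep and art) / n

# The conjunction can never be more probable than either component:
print(p_rep_and_artist <= p_artist)  # True
```

Every Republican artist is, by definition, already counted among the artists, which is why adding detail to a scenario can only remove cases, never add them.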
Part III. Causality and Attribution

Chapter 7. Popular induction: information is not necessarily informative

Even in the domain of gambling, where people have at least some rudimentary grasp of how to handle probabilities, they can display remarkable blindness and bias. Outside such situations, people may be entirely unable to see the relevance of such "simple" probabilistic information as a base rate. Not knowing how to combine base-rate information properly with information about the target case, people often simply ignore the base rate altogether. It seems to us, however, that another principle may also be at work. Base-rate or consensus information is by its nature pallid, remote, and abstract; information about the target case, by contrast, is vivid, salient, and concrete. This hypothesis is not new. In 1927 Bertrand Russell proposed that "popular induction depends upon the emotional interest of the instances, not upon their number." In our studies of the effects of consensus information, a bare presentation of numbers of instances was pitted against instances of emotional interest, and in accordance with Russell's hypothesis, emotional interest prevailed in every case. We suggest that concrete, emotionally interesting information has greater power to generate inferences; abstract information is poorer in potential connections to the associative network through which scenarios are reached.

Russell's hypothesis has several important implications for action in everyday life. As an illustration, consider a simple example. Suppose you wish to buy a new car and, for the sake of economy and longevity, have decided to buy one of the solid Swedish mid-range cars, a Volvo or a Saab. As a prudent buyer, you consult a consumer service, which tells you that expert studies rate the Volvo mechanically superior and that owners report better durability. Armed with this information, you decide to contact your Volvo dealer before the week is out. Meanwhile, at a party, you mention your intention to an acquaintance, whose reaction gives you pause: "A Volvo! You must be joking. My brother-in-law had a Volvo. First that fancy fuel-injection computer went haywire. 250 bucks. Then he had trouble with the rear axle and had to replace it. Then the transmission and the clutch. Three years later the car was sold for parts." The logical status of this information is that the several hundred Volvo owners surveyed by the consumer service have been increased by one, and the mean frequency of repairs has shifted by an iota along three or four dimensions. Yet anyone who claims he would disregard the opinion of a chance interlocutor is either insincere or does not know himself at all.

Chapter 8. Causal schemas in decision making under uncertainty

Michotte's work vividly demonstrated the tendency to interpret sequences of events in terms of causal relations, even when one is fully aware that the relation between the events is incidental and the imputed causality illusory. We examine judgments of the conditional probability P(X|D) of some target event X given some evidence or data D. In the normative theory of conditional probability, the distinctions among the possible relations of D to X are immaterial, and the impact of data depends solely on their informativeness. We suggest, by contrast, that the psychological impact of data depends on their role in a causal schema. In particular, we hypothesize that causal data have greater impact than other data of equal informativeness, and that in the presence of data that evoke a causal schema, incidental data that do not fit the schema are given little or no weight.

Causal and diagnostic reasoning. People may be expected to infer effects from causes with greater confidence than causes from effects, even when effect and cause in fact provide exactly the same information about each other. In one set of questions we asked subjects to compare the two conditional probabilities P(Y|X) and P(X|Y) for a pair of events X and Y such that (1) X is naturally viewed as a cause of Y, and (2) P(X) = P(Y), that is, the marginal probabilities of the two events are equal. The latter condition implies that P(Y|X) = P(X|Y). We predicted that most subjects would judge the causal relation stronger than the diagnostic one and would erroneously assert that P(Y|X) > P(X|Y).
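That equal marginals force P(Y|X) = P(X|Y) follows from both conditionals sharing the numerator P(X and Y). A minimal numeric sketch with an assumed illustrative joint distribution:

```python
# Joint distribution over (X, Y), chosen so that the
# marginals P(X) and P(Y) are equal.
p = {
    (True, True): 0.30,
    (True, False): 0.20,
    (False, True): 0.20,
    (False, False): 0.30,
}

p_x = p[(True, True)] + p[(True, False)]   # P(X) = 0.5
p_y = p[(True, True)] + p[(False, True)]   # P(Y) = 0.5
p_y_given_x = p[(True, True)] / p_x        # 0.6
p_x_given_y = p[(True, True)] / p_y        # 0.6

# Equal marginals make the two conditionals coincide, so inference
# from cause to effect is no more certain than from effect to cause.
print(p_y_given_x == p_x_given_y)  # True
```

Any intuition that the causal direction is "stronger" here contradicts the arithmetic: both conditionals are the same ratio.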



FEDERAL STATE BUDGETARY EDUCATIONAL INSTITUTION OF HIGHER PROFESSIONAL EDUCATION "Chelyabinsk State Academy of Culture and Art" Department of Informatics PROBABILITY THEORY

MINISTRY OF EDUCATION AND SCIENCE OF THE RUSSIAN FEDERATION FEDERAL AGENCY FOR EDUCATION STATE EDUCATIONAL INSTITUTION OF HIGHER PROFESSIONAL EDUCATION NOVOSIBIRSK STATE

The main provisions of the theory of probability An event is called random with respect to certain conditions, which, under the implementation of these conditions, can either occur or not occur. Probability theory has

Glossary Variational series grouped statistical series Variation - fluctuation, diversity, variability of the value of a feature in units of the population. Probability is a numerical measure of objective possibility

Annotation to curriculum Algebra Subject Algebra Level of education - Basic general education Regulatory and methodological 1. Federal state educational standard materials of the main

« Information Technology processing of statistical data" Moscow 2012 BASIC PROVISIONS OF MATHEMATICAL STATISTICS Statistical variables Variables are quantities that can be measured, controlled

VERIFICATION OF STATISTICAL HYPOTHESES The concept of a statistical hypothesis A statistical hypothesis is an assumption about the type of distribution or about the values ​​of unknown parameters of the population, which can

Department of Mathematics and Informatics PROBABILITY THEORY AND MATHEMATICAL STATISTICS Training and metodology complex for HPE students studying with the use of distance technologies Module 3 MATHEMATICAL

Lecture 0.3. Correlation coefficient In an econometric study, the question of the presence or absence of a relationship between the analyzed variables is solved using the methods of correlation analysis. Only

STATISTICAL HYPOTHESIS IN ECONOMETRIC STUDIES Morozova N.N. Financial University under the Government Russian Federation, Smolensk, Russia STATISTICAL HYPOTHESIS IN ECONOMETRIC STUDIES Morozova

Topic 8. Sociological and marketing in ensuring the management process in the social sphere. Social forecasting. The main functions of research in the social sphere. The main goals and objectives of sociological

Correlation From Wikipedia, the free encyclopedia Correlation is a statistical relationship between two or more random variables (or variables that can be

MULTICOLLINEARITY OF MULTIPLE REGRESSION MODELS Multicollinearity is a serious problem in building multiple regression models based on the least squares method (LSM).

Statistical Hypothesis Testing 37 6. SIGNIFICANCE CRITERIA AND HYPOTHESIS TESTING 6.. Introduction

BULLETIN OF TOMSK STATE UNIVERSITY 2009 Philosophy. Sociology. Political Science 4(8) IS EXISTENCE A PREDICATE? 1 I don't quite understand the meaning this issue. Mr. Neil says that existence

SPSS is a software product designed to complete all steps statistical analysis: from browsing data, creating tables and calculating descriptive statistics to applying complex

Econometric Modeling Lab 6 Residual Analysis. Heteroskedasticity Table of contents Properties of residuals... 3 1st Gauss-Markov condition: Е(ε i) = 0 for all observations... 3 Task 1.

Explanatory note In accordance with the letter of the Ministry of Defense of the Russian Federation 03-93 in / 13-03 dated September 23, 2003 on the teaching of combinatorics, statistics and probability theory in the main general school the teaching of probabilistic-statistical

Lecture 6. Methods for measuring the closeness of a pair correlation relationship Signs can be presented in quantitative, ordinal and nominal scales. Depending on the scale on which the signs are presented,

Empathy, penetration into his subjective world, empathy, and it is also higher in people of middle adulthood. PECULIARITIES OF PERCEPTION OF INFORMATION ABOUT YOURSELF: BARNUM EFFECT Shportko M.I., 4th year student