When people use normative models in decision-making, they often consider themselves to be procedurally rational and associate their errors only with violations of value rationality. This is why optimization models built on the basis of Expected Utility Theory (EUT) are widely used when we judge business rationality [1]. We are so accustomed to these models that, when expanding their application to a wider range of problems, we often forget that processing (instrumental) rationality in decision-making becomes the determining factor when a problem is considered under conditions of uncertainty and therefore becomes poorly structured. Attempts to establish procedural rationality, which is information-based and energy-based, within the framework of value rationality lead to our behavior becoming irrational within the framework of the mathematical model used, which in turn loses its predictive accuracy.
Max Weber [2] coined the terms “instrumental rationality” and “value rationality” for two different kinds of human reasoning. Social action, like all types of action, may be: (1) instrumentally rational—that is, determined by expectations about the behavior of other human beings and of objects in the environment; these expectations are used as “conditions” or “means” for the attainment of the individual’s own rationally pursued and calculated ends;
© The Editor(s) (if applicable) and The Author(s), under exclusive license
to Springer Nature Switzerland AG 2021
H. Ayaz and U. Asgher (Eds.): AHFE 2020, AISC 1201, pp. 199–206, 2021.
https://doi.org/10.1007/978-3-030-51041-1_27
200 A. M. Yemelyanov and I. S. Bedny
(2) value-rational—in other words, the action is determined by a conscious belief in its
value, which is based on some ethical, aesthetic, belief-driven, or other form of
behavior, independent of its potential for success. Weber emphasized that it is highly
unusual to find only one of these orientations, since combinations are usually the norm.
Robert Nozick [3] also accepted Weber’s idea of two kinds of rationality. However, he indicated the dominant role of instrumental rationality, highlighting its prominence as “the means-ends connection” and “the efficient and effective achieving of goals,” though he did accept the traditional suggestion that instrumental rationality is incomplete because it is value-free.
By this logic, there are two kinds of human reasoning (rationality): instrumental
and value, where the leading role belongs to instrumental rationality. In essence,
instrumental rationality relates to achieving a goal, while value rationality is tied to the
goal’s particular value (epistemic, moral, ethical, etc.). Value rationality is determined
by a wholly conscious belief in the value of the goal based on some ethical, aesthetic,
belief-driven, or other form of behavior or consideration, independent of its potential
for success. Instrumental rationality can be either conscious or unconscious.
Behavior that is rational under one theory may be irrational under another. Furthermore, no one model of rationality can possibly fit
all contexts. Thus, under the conditions of applying EUT, the instrumental rationality of
decision-making is already predetermined by the choice of the weighted formula, while
value rationality (which depends on the context) determines the rationality of the entire
decision. This is true only for well-defined, quantitatively-formulated problems,
including problems associated with economic risk, in which the goal of the choice and
the criteria of success are determined in advance. For the majority of ill-defined problems, in which there is pronounced qualitative uncertainty, the rationality of choice is determined by both its evaluative and its processing (instrumental) parts, and instrumental rationality plays an integral role. Therefore, we agree that no one model of value rationality can fit all contexts, but we disagree with the statement that “no one model of rationality can fit all contexts.” Rationality of this kind is determined by the instrumental rationality of the self-regulation model, which is inherently limited by an individual’s capabilities as well as the conditions of the problem they are solving.
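The point that EUT predetermines the instrumental side of a decision can be made concrete with a small sketch. In the code below, the weighted expected-utility formula is fixed for every problem, and only the utility function (the value-rational part) varies; all names and numbers are illustrative and do not come from the paper.

```python
# Hedged sketch: under EUT, the instrumental machinery (the weighted formula
# and maximization rule) is fixed in advance; only the utility function,
# which carries the decision-maker's values, changes with context.

def expected_utility(lottery, utility):
    """lottery: list of (probability, outcome) pairs."""
    return sum(p * utility(x) for p, x in lottery)

def best_option(options, utility):
    """Choose the alternative that maximizes expected utility."""
    return max(options, key=lambda name: expected_utility(options[name], utility))

# A well-defined problem: a sure gain vs. a risky gain of equal expected value.
options = {
    "sure":  [(1.0, 100)],
    "risky": [(0.5, 0), (0.5, 200)],
}

risk_averse  = lambda x: x ** 0.5   # concave utility
risk_seeking = lambda x: x ** 2     # convex utility

print(best_option(options, risk_averse))   # sure
print(best_option(options, risk_seeking))  # risky
```

Note that the choice flips with the utility function alone, while the instrumental formula never changes—which is exactly why this machinery fits only well-defined, quantitatively formulated problems.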
A rational choice should not be affected by how the problem is described [8]. The authors explained the fallacy of such a choice in terms of the general properties of people’s attitudes toward risk: people are expected to show a risk-seeking preference when faced with negatively framed problems and risk aversion when presented with positively framed ones. In reality, in the analysis of the Asian disease problem, participants made decisions in two
different problems with two different goals: Problem 1 (positive framing) has one
specific goal – “save all 600 lives” and Problem 2 (negative framing) has another
specific goal – “do not allow any living patient out of 600 to die.” Considering this,
program B’s motivation was to “save lives,” which is primarily associated with the medicinal effects of treatment meant to combat a disease, while program D’s motivation was to “prevent loss,” which is often associated with prophylactic measures to prevent illness. This conclusion is also supported by the fact that information regarding survivors is typically presented in programs concerning treatment, while information regarding the deceased is generally found in programs related to preventive measures. It should be noted that prospect theory is a rational theory for decision-making under risk. Its instrumental part uses linear optimization to find the best decision in well-defined problems.
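For readers unfamiliar with how prospect theory accounts for the framing reversal, the following is a minimal sketch using the standard Tversky–Kahneman value function with commonly cited parameter estimates (α = β = 0.88, λ = 2.25); these parameters are illustrative, and probability weighting is omitted for simplicity.

```python
# Hedged sketch of the prospect-theory value function applied to the two
# framings of the "Asian disease" problem. Parameters are the commonly cited
# estimates, used purely for illustration; probability weighting is omitted.

ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

def v(x):
    """Value function: concave for gains, steeper and convex for losses."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** BETA

def prospect_value(prospect):
    """prospect: list of (probability, outcome) pairs; outcomes framed as
    gains (lives saved) or losses (lives lost)."""
    return sum(p * v(x) for p, x in prospect)

# Positive framing (gains: lives saved)
A = [(1.0, 200)]                 # 200 saved for sure
B = [(1/3, 600), (2/3, 0)]       # 1/3 chance all 600 are saved

# Negative framing (losses: lives lost)
C = [(1.0, -400)]                # 400 die for sure
D = [(2/3, -600), (1/3, 0)]      # 2/3 chance all 600 die

print(prospect_value(A) > prospect_value(B))  # True: risk aversion for gains
print(prospect_value(D) > prospect_value(C))  # True: risk seeking for losses
```

The concave gain branch makes the sure program A preferable in the positive frame, while loss aversion and the convex loss branch make the gamble D preferable in the negative frame—reproducing the majority choices that prospect theory labels inconsistent.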
Herbert Simon outlined the idea of the theory of bounded rationality. This theory aims
to describe how individuals actually make decisions in situations of uncertainty in
which “the conditions for rationality are not met” [9, p. 377]. Even many psychologists
have come to believe that bounded rationality is the study of deviation from rationality
[10, p. 297]. Simon proposed satisficing (formed from the words satisfactory and
sufficing) as a general alternative to optimizing, also using the term to refer to a specific
decision-making heuristic—the satisficing heuristic. Satisficing can deal with uncertainty—that is, with ill-defined situations in which not all alternatives and consequences can be foreseen and well defined. Bounded rationality is a descriptive theory, which posits that rationality should respect the epistemological, environmental, and computational constraints of human brains. Additionally, rational behavior relies on a satisficing process (finding a good-enough solution) as opposed to the EUT maximizing approach. The heuristic approach to decision-making is the mechanism for implementing bounded rationality. For example, a simple fast-and-frugal tree using readily available clinical cues outperformed a 50-variable multivariable logistic model in deciding whether to admit a patient with chest pain to a coronary care unit [11]. G. Gigerenzer and his collaborators have theoretically and experimentally
shown that many cognitive fallacies are better understood as adaptive responses to a
world of uncertainty—such as the conjunction fallacy, the base rate fallacy, and
overconfidence [12].
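The structure of a fast-and-frugal tree is easy to sketch: cues are checked in a fixed order, and each cue offers an immediate exit (a decision) on one of its branches, so no weighting or optimization is involved. The cue names below are hypothetical stand-ins, not the actual predictors used in [11].

```python
# Hedged sketch of a fast-and-frugal tree: each cue is checked in order, and
# every cue has an exit (an immediate decision) on one branch. Cue names are
# hypothetical illustrations, not the real clinical predictors from [11].

def admit_to_ccu(patient):
    """Return True (admit to coronary care) or False (regular bed)
    after inspecting at most three cues."""
    if patient["st_segment_change"]:       # first cue: exit on "yes"
        return True
    if not patient["chest_pain_primary"]:  # second cue: exit on "no"
        return False
    # final cue: any additional risk factor decides
    return patient["other_risk_factors"] > 0

# Usage: one lookup per cue; a classification always falls out quickly.
print(admit_to_ccu({"st_segment_change": False,
                    "chest_pain_primary": True,
                    "other_risk_factors": 0}))  # False
```

The frugality is the point: the tree ignores most available information, yet such lexicographic rules can generalize better than a heavily parameterized model when data are scarce and the environment is uncertain.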
We present a self-regulation model of decision-making under uncertainty which
implements limited (bounded) rationality. The main features of this model are core
shaping factors that determine instrumental and value rationality and the rules that
regulate coordination between them within the process of self-regulation.
Instrumental and Value Rationality of the Self-Regulation Model of Decision-Making 203
defined problems. In the self-regulation process, the leading role belongs to instrumental rationality. With the help of feedback and feedforward controls, the mechanism of instrumental rationality guides the mechanism of value rationality in collecting external and internal data to create the level of motivation needed for choosing the satisficing alternative. The important role of instrumental rationality consists not only of processing external data from the environment, but also of eliciting internal data from long-term memory [13]. The rules of self-regulation, together with the rules of motivation, determine the process of self-regulation in decision-making under uncertainty [16].
In conclusion, we will briefly demonstrate how the self-regulation model produces a rational (satisficing) solution for the “Asian disease” problem that was analyzed earlier with the help of prospect theory, which produced an irrational decision. The description of the programs in terms of the core factors of instrumental and value rationality looks like this. First of all, the decision-making here is related to two different problems with two different and uncertain goals and conditions: Problem 1: make a choice between programs A and B with the goal “save all 600 lives” (positive framing). Problem 2: make a choice between programs C and D with the goal “do not allow any living patient out of 600 to die” (negative framing).
Problems 1 and 2 have the same information-based components of positive (iS+) and negative (iS-) significance and positive (iD+) and negative (iD-) components of difficulty. However, there is a difference in the energy-based components of positive (eS+) and negative (eS-) significance and positive (eD+) and negative (eD-) components of difficulty. For Problem 1, this determines the higher level of motivation for choosing program A (72%) as opposed to program B (28%), and for Problem 2, the higher level of motivation for choosing program D (78%) as opposed to program C (22%).
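The excerpt reports the resulting motivation levels but not the formula that aggregates the significance and difficulty components (iS+, iS-, iD+, iD-, eS+, eS-, eD+, eD-), so the sketch below takes the reported levels as given inputs and shows only the final step of the self-regulation model as described here: choosing the alternative with the highest motivation as the satisficing one.

```python
# Hedged sketch: the aggregation producing these motivation levels is not
# specified in this excerpt, so the reported levels are treated as inputs.
# Only the final satisficing selection rule is illustrated.

def satisficing_choice(motivation):
    """motivation: dict mapping alternative -> motivation level (0..1).
    Return the alternative with the highest motivation."""
    return max(motivation, key=motivation.get)

problem_1 = {"A": 0.72, "B": 0.28}  # positive framing: "save all 600 lives"
problem_2 = {"C": 0.22, "D": 0.78}  # negative framing: "let no one die"

print(satisficing_choice(problem_1))  # A
print(satisficing_choice(problem_2))  # D
```

On this reading, choosing A in Problem 1 and D in Problem 2 follows from two different motivation profiles for two different problems, rather than from an inconsistency within a single problem.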
Therefore, the self-regulation model of decision-making provides a rational base for
decisions that people have made for two “Asian disease” problems. Their instrumental
and value rationality were bounded due to the uncertain nature of these problems’ goals
and conditions. Operating only with value rationality, prospect theory turns out to be an
instrumentally poor choice for problems in the face of uncertainty.
References
1. Bell, D.E., Raiffa, H., Tversky, A.: Descriptive, normative, and prescriptive interactions in
decision making. In: Bell, D.E., Raiffa, H., Tversky, A. (eds.) Decision Making: Descriptive,
Normative, and Prescriptive Interactions, pp. 9–30. Cambridge University Press, Cambridge
(1988)
2. Weber, M.: Economy and Society. In: Roth, G., Wittich, C. (eds.) University of California
Press, Berkeley (1978)
3. Nozick, R.: The Nature of Rationality. Princeton University Press, Princeton (1993)
4. Djulbegovic, B., Elqayam, S.: Many faces of rationality: implications of the great rationality
debate for clinical decision-making. J. Eval. Clin. Pract. 23, 915–922 (2017)
5. Kahneman, D., Tversky, A.: Prospect theory: an analysis of decision under risk.
Econometrica 47(2), 263–291 (1979)
6. Kahneman, D., Tversky, A.: Choices, values, and frames. Am. Psychol. 39(4), 341–350
(1984)