
DYNAMIC INCENTIVE MODEL CONSTRUCTED IN ACCORDANCE WITH REQUIREMENTS FOR SUSTAINABLE BUSINESS DEVELOPMENT

Abstract
Different types of management structures used in enterprises are considered. Such structures are traditionally divided into three types according to the number and types of control relations. A basic dynamic incentive model with the simplest management structure, consisting of one principal and one agent, is presented. The basic dynamic incentive model builds on the static model, whose solution has already been found. To reduce the number of experiments, the method of qualitatively representative scenarios is used to find optimal inputs. Experimental results for the basic dynamic model are presented. The simulation results allow us to determine the solution of the problem in the dynamic setting and provide a foundation for future research on other, more complex control structures.

1 Introduction
At the present time, the application of various incentive models is essential for successful business. Sustainable business development answers the question of how business can coexist with the preservation of the environment, which in current reality is a central problem for all major nations, since future human wellbeing depends on it. Simulation, that is, a simplified view of reality, helps to select optimal mechanisms for solving pressing problems.
Simulation at the level of a managed system requires creating a management model. A simple input-output model of such a system, consisting of a regulating member (the center) and a regulated subject (the agent), is pictured in Figure 1 [1].

Fig. 1 - In-out model (the regulating member applies management to the regulated subject, which is affected by external interferences; the output is the managed system state)

Game theory is often used to formalize the interaction of subjects. From the control point of view, game models in which agents make decisions not simultaneously but sequentially are of great interest. Notably, if a regulating member and regulated subjects exist, then the principal defines the rules of the game first, and afterwards the subjects make decisions based on these rules. Such games are called hierarchical; inherently, a hierarchical game is one with a fixed sequence of moves. The simplest hierarchical game model is pictured in Figure 2: a two-person game in which the first player (making the first move) is the center (the regulating member) and the second one is the agent [2].

Fig. 2 - Basic structure "center-agent"

The structure can be complicated further, but there exists a common description technology for game-theoretic management problems in the various structures.
2 Dynamic incentive model
The basic incentive model is such a hierarchical game. We state the incentive model dynamically, in accordance with the requirements for sustainable development, in the following form:
$$F(s,u,x)=\sum_{t=1}^{T}\bigl[H(u_t,x_t)-s(u_t,x_t)\bigr]\to\max_{s(u,x)};\qquad(1)$$

$$s:[0,\infty)\times[0,\infty)\to[0,\infty);\qquad(2)$$

$$f(s,u,x)=\sum_{t=1}^{T}\bigl[s(u_t,x_t)-c(u_t)\bigr]\to\max_{u};\qquad(3)$$

$$u_t\ge 0;\qquad(4)$$

$$x_{t+1}=x_t+g(u_t,x_t),\quad x_0=x^0,\quad t=0,1,\dots,T-1;\qquad(5)$$

$$x_T=x^{*}.\qquad(6)$$
Compared to the static model, a state variable $x$ is appended here as a function of discrete time $t$; it describes the managed dynamic system, and $T$ is the analysis period. The agent's action is denoted by $u$. The dynamics of the state variable are assigned by equation (5) with initial data $x^0$. There is also the sustainable development condition (6), which means implementation of the optimal plan $x^{*}$ by the end of the analysis period.
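For concreteness, the model statement (1)-(6) can be rendered as a short simulation loop. The Python sketch below is our own illustration (all names are ours, not the paper's); it takes the functions H, s, c, g and an action sequence u as inputs and returns both payoffs and the terminal state.

```python
from typing import Callable, Sequence

def simulate(H: Callable, s: Callable, c: Callable, g: Callable,
             u: Sequence[float], x0: float):
    """Run the dynamics of Eq. (5) and accumulate the center's payoff F
    of Eq. (1) and the agent's payoff f of Eq. (3) over T = len(u) steps."""
    x, F, f = x0, 0.0, 0.0
    for u_t in u:                       # t = 0, 1, ..., T-1
        F += H(u_t, x) - s(u_t, x)      # center: income minus incentive paid out
        f += s(u_t, x) - c(u_t)         # agent: incentive received minus costs
        x = x + g(u_t, x)               # state transition, Eq. (5)
    return F, f, x                      # x is the terminal state, cf. Eq. (6)
```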
The optimal plan is defined by solving the optimal control problem
$$\sum_{t=1}^{T}\bigl[H(u_t,x_t)-c(u_t)\bigr]\to\max,\qquad u_t\ge 0.\qquad(7)$$
By condition, the optimal plan $x^{*}$ is constant at every point of time. The controlling mechanism is similar to the static model mechanism:

$$s(u_t,u^{*})=\begin{cases}\delta+\sum_{\tau=t-1}^{t}c(u_\tau), & u_t=u^{*},\\ 0, & \text{otherwise},\end{cases}\qquad t=1,2,\dots,T;\qquad(13)$$
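A minimal Python sketch of this mechanism follows. It reflects our reading of (13), in which the cost sum runs over τ = t−1 and t, and the exact-equality test u_t = u* is replaced by a numeric tolerance (an implementation choice, not part of the model).

```python
def incentive(u_t: float, u_prev: float, u_star: float,
              c, delta: float, tol: float = 1e-9) -> float:
    """Incentive mechanism of Eq. (13): pay the accumulated costs plus the
    motivational raise delta only when the agent fulfils the plan u*."""
    if abs(u_t - u_star) <= tol:            # plan fulfilled: u_t = u*
        return delta + c(u_prev) + c(u_t)   # sum of c(u_tau) for tau = t-1, t
    return 0.0                              # otherwise the agent gets nothing
```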

As basic functions we have taken the following:

$$H(u,x)=a\sqrt{u}-k\,\lvert x-x^{*}\rvert,\qquad c(u)=bu^{2},\qquad g(u,x)=p\sqrt{u}-mx.\qquad(15)$$
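The sketch below shows how these functions can be instantiated and how problem (7) can be solved by a simple grid search over constant actions, consistent with the constant optimal plan. All parameter values here are our own illustrative picks from Table 1 below, not the paper's reference configuration.

```python
import math

def make_functions(a, k, b, p, m, x_star):
    """Basic functions of Eq. (15)."""
    H = lambda u, x: a * math.sqrt(u) - k * abs(x - x_star)   # center's income
    c = lambda u: b * u ** 2                                  # agent's costs
    g = lambda u, x: p * math.sqrt(u) - m * x                 # state increment
    return H, c, g

def optimal_constant_plan(H, c, g, x0, T, grid):
    """Grid search for a constant action u* maximizing Eq. (7)."""
    best_u, best_val = None, -math.inf
    for u_star in grid:
        x, val = x0, 0.0
        for _ in range(T):
            val += H(u_star, x) - c(u_star)
            x = x + g(u_star, x)
        if val > best_val:
            best_u, best_val = u_star, val
    return best_u, best_val

# illustrative run; the coefficient combination is our own choice
H, c, g = make_functions(a=1.5, k=0.5, b=0.5, p=0.5, m=0.5, x_star=1.0)
u_star, val = optimal_constant_plan(H, c, g, x0=0.0, T=10,
                                    grid=[i / 100 for i in range(401)])
print(u_star, val)
```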

3 Results
Using the qualitatively representative scenarios method, we conducted a series of simulation runs that helped verify the hypothesis that the control mechanism (similar to the static model mechanism) is optimal [3].
We chose specific variable values with regard to their meaning and function in the model. The variable values used in the experiments are listed in Table 1.

Table 1 – Variables description

Notation | Description | Values in model
a | Coefficient for converting the agent's efforts into the center's funds. | 0, 1.5, 15, 100
k | Coefficient for converting the difference between the planned and actual values of the state variable into funds. | 0, 0.5, 1, 2, 15
b | Coefficient for converting the agent's efforts into compensation funds. | 0, 0.5, 1, 1.5, 2
delta | Motivational raise. | 0, 0.1, 0.3, 1
m | Coefficient of the state variable's discrete loss. | 0, 0.5, 1, 2
p | Coefficient for converting efforts into the state variable's value. | 0, 0.5, 1, 2

The model works incorrectly when all coefficients equal 1. As can be seen in Figure 3, with a = 1.5 and delta = 0 the agent's objective function equals 0, since with b = 1 and delta = 0 the values of the cost and cost-compensation functions are equal. That is why the agent makes no effort.

Fig. 3 – Experiment with a = 1.5 and delta = 0
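A quick numeric check of this effect (our own illustration, using the static-style compensation s = delta + c(u) that the argument appeals to): whatever effort is compensated, the agent's per-step payoff reduces to delta, so delta = 0 leaves the agent with no reason to exert effort.

```python
# agent's per-step payoff when the plan is fulfilled:
#   s - c = (delta + c(u)) - c(u) = delta
c = lambda u: 1.0 * u ** 2                 # cost function with b = 1
for delta in (0.0, 0.1):
    u = 0.8                                # an arbitrary effort level (our choice)
    print(f"delta={delta}: agent step payoff = {(delta + c(u)) - c(u)}")
# delta=0.0 -> 0.0, so effort brings the agent nothing; delta=0.1 -> 0.1
```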

With delta = 0.1 and p = 0, the value of the center's objective function increased. Moreover, the agent's objective function is minimal, as can be seen in Figure 4. Nevertheless, the model works incorrectly, since the planned value of the state variable equals zero.

Fig. 4 – Experiment with delta = 0.1 and p = 0

The center's objective function increased as the value of p decreased. At the same time, if b = 0, the agent would still exert effort, as can be seen in Figure 5: although the compensation function equals zero, the reward function equals the motivational raise, whose size must be greater than zero.
Fig. 5 – Experiment with b = 0 and p = 0.5

With decreasing k, the center's objective function increased. The main properties of the model are unaltered by changes in m. The center's objective function increased with increasing a. As can be seen in Figure 6, with a = 100 and k = 0.5 the center's objective function increased manyfold.

Fig. 6 – Experiment with a = 100 and k = 0.5

From the conducted experiments we can derive optimal values for every coefficient. The optimal values are listed in Table 2.

Table 2 – Optimal variables values

Notation | Optimal value
a | a → max
k | k → min; k > 0
b | b → min; b ≥ 0
delta | delta → min; delta > 0
m | m ∈ [0; 1]
p | p → min; p > 0

4 Conclusions
With such coefficient values, the optimal control problem is solved. Furthermore, it can be said that the control mechanism for the dynamic model is optimal, similarly to the static model mechanism. Also, nothing varies on a change of the control aim, since the optimal value of the state variable is achieved in literally one simulation step. In a follow-up study, we can compare different rules with the control mechanism, vary the basic functions, and try alternative model coefficients, then clarify with the aid of simulation experiments which exact alternative is preferable. For this purpose we can continue to rely on the method of qualitatively representative scenarios for simulation modeling.
References

1. Burkov V.N., Korgin N.A., Novikov D.A. Introduction to the Theory of Control of Organizational Systems / Ed. by D.A. Novikov, Corresponding Member of RAS. Moscow: Librokom, 2009. 264 p. (in Russian)
2. Novikov D.A. Theory of Control of Organizational Systems. Moscow, 2007. 584 p. (in Russian)
3. Ougolnitsky G.A., Usov A.B. Computer Simulations as a Solution Method for Differential Games // Computer Simulations: Advances in Research and Applications / Eds. M.D. Pfeffer and E. Bachmaier. N.Y.: Nova Science Publishers, 2018. P. 63-106.
