Abstract
Different types of management structures used in enterprises are considered. Such structures are traditionally divided into three types according to the number and kinds of control relations. A basic dynamic incentive model with the simplest management structure, consisting of one principal and one agent, is presented. The dynamic incentive model builds on the static model, whose solution has already been found. To reduce the number of experiments, the method of qualitatively representative scenarios is used to find optimal inputs. Experimental results for the basic dynamic model are presented. The simulation results allow us to solve the problem in its dynamic form and provide a foundation for future research on other, more complex control structures.
1 Introduction
In the current period, applying different incentive models is essential for successful business. Sustainable business development answers the question of how business and preservation of the environment can coexist, which in today's reality is a major problem for all nations, since future human well-being depends on it. Simulation, as a simplified view of reality, helps to select optimal mechanisms for solving pressing problems.
Simulation at the level of a managed system requires building a management model. A simple input-output system model, consisting of a regulating member (the center) and a regulated subject (the agent), is pictured in figure 1 [1].
Fig. 1 – Input-output model of a managed system (management, regulated subject, external interferences)
Game theory is often used to formalize situations of interaction between subjects. From the control point of view, game models in which agents make decisions not simultaneously but sequentially are of particular interest. Notably, if a regulating member and regulated subjects exist, then the principal defines the rules of the game first, and afterwards the subjects make decisions based on these rules. Such games are called hierarchical. Essentially, a hierarchical game is one with a fixed sequence of moves. The simplest hierarchical game model, pictured in figure 2, is a two-person game in which the first player (making the first move) is the center (regulating member) and the second is the agent [2].
Fig. 2 - Basic structure "center-agent"
The structure can be complicated further, but there exists a common description technique for game-theoretic management problems in different structures.
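As an illustrative sketch (not taken from the paper), the center-agent interaction of Fig. 2 can be modeled as a two-stage leader-follower (Stackelberg) game: the center announces an incentive rule, then the agent best-responds to it. All functional forms, grids, and candidate rules below are assumptions made for illustration only.

```python
# Sketch of a two-stage center-agent (Stackelberg) game.
# All functional forms here are illustrative assumptions.

def agent_payoff(s, u, cost):
    """Agent's payoff: reward minus effort cost."""
    return s(u) - cost(u)

def best_response(s, cost, actions):
    """Agent picks the action maximizing its payoff under rule s."""
    return max(actions, key=lambda u: agent_payoff(s, u, cost))

def solve_stackelberg(H, cost, actions, rules):
    """Center picks, among candidate incentive rules, the one that
    maximizes its own payoff given the agent's best response."""
    best = None
    for s in rules:
        u = best_response(s, cost, actions)
        value = H(u) - s(u)  # center's income minus payment to agent
        if best is None or value > best[0]:
            best = (value, s, u)
    return best

# Example: concave income, linear cost, three candidate linear rules.
actions = [i / 10 for i in range(21)]          # effort grid 0.0 .. 2.0
H = lambda u: 4 * u - u ** 2                   # center's income (assumed)
cost = lambda u: u                             # agent's cost (assumed)
rules = [lambda u, r=r: r * u for r in (0.5, 1.0, 1.5)]
value, rule, u_star = solve_stackelberg(H, cost, actions, rules)
```

Under these assumed forms only the steepest rule makes effort strictly profitable for the agent, so the center selects it even though it pays more per unit of effort.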
2 Dynamic incentive model
The basic incentive model is such a hierarchical game. In accordance with the requirements of sustainable development, we state the incentive model dynamically in the following form:
F(s, u, x) = \sum_{t=1}^{T} [H(u_t, x_t) - s(u_t, x_t)] \to \max_{s(\cdot)}; \quad (1)

s : [0, \infty) \times [0, \infty) \to [0, \infty); \quad (2)

f(s, u, x) = \sum_{t=1}^{T} [s(u_t, x_t) - c(u_t)] \to \max_{u}; \quad (3)

u_t \ge 0; \quad (4)

x_{t+1} = x_t + g(u_t, x_t), \quad x_0 = x^0, \quad t = 0, 1, \dots, T-1; \quad (5)

x_T = x^*. \quad (6)
Compared to the static model, a state variable x is added here as a function of the discrete time t; it describes the managed dynamic system, and T is the analysis period. The agent's action is denoted by u. The dynamics of the state variable are given by equation (5) with initial data x^0. There is also the sustainable development condition (6), which requires implementation of the optimal plan x^* by the end of the analysis period.
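A minimal simulation sketch of system (1)-(6) is given below. The concrete forms of H, c, g, and s, as well as all parameter values, are illustrative assumptions; the paper's actual functional forms are not reproduced here.

```python
# Simulating the dynamic incentive model (1)-(6).
# H, c, g, s and all parameter values are illustrative assumptions.

def simulate(u_seq, s, H, c, g, x0):
    """Roll the state forward by (5) and accumulate the payoffs
    of the principal (1) and the agent (3)."""
    x = x0
    F = 0.0   # principal's total payoff, eq. (1)
    f = 0.0   # agent's total payoff, eq. (3)
    for u in u_seq:
        F += H(u, x) - s(u, x)
        f += s(u, x) - c(u)
        x = x + g(u, x)   # state transition, eq. (5)
    return F, f, x        # x is the terminal state x_T

# Assumed forms: linear income, quadratic cost, damped state growth.
H = lambda u, x: 2.0 * u + 0.5 * x
c = lambda u: u ** 2
s = lambda u, x: c(u) + 0.1      # compensate costs plus a small bonus
g = lambda u, x: u - 0.2 * x     # assumed state dynamics

F, f, xT = simulate([1.0, 1.0, 1.0], s, H, c, g, x0=0.0)
```

The terminal state xT can then be compared against the plan x^* of condition (6) to check whether a given action sequence is admissible.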
The optimal plan is determined by solving the corresponding optimal control problem.
By condition, the optimal plan x^* is constant at every point in time. The controlling mechanism is similar to the static model mechanism:

s(u_t, u^*) = \begin{cases} \delta + c(u_t), & u_t = u^*, \\ 0, & \text{otherwise}, \end{cases} \quad t = 1, 2, \dots, T; \quad (13)
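The compensatory rule (13) can be sketched as follows; the value of the bonus δ and the cost function are illustrative assumptions:

```python
# Sketch of the compensatory incentive rule (13):
# pay the agent its cost plus a bonus delta only when the plan is met.

def make_incentive(u_plan, cost, delta):
    """Return the incentive rule s(u_t) for a planned action u_plan."""
    def s(u_t):
        if u_t == u_plan:
            return delta + cost(u_t)   # full cost compensation + bonus
        return 0.0                     # any deviation earns nothing
    return s

cost = lambda u: u ** 2                # assumed quadratic cost
s = make_incentive(u_plan=1.0, cost=cost, delta=0.1)

reward_on_plan = s(1.0)
reward_off_plan = s(0.5)
```

With δ > 0 the agent strictly prefers fulfilling the plan, since any deviation forfeits both the cost compensation and the bonus.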
3 Results
Using the method of qualitatively representative scenarios, we have conducted a series of simulation runs that helped verify the hypothesis that the control mechanism (similar to the static model mechanism) is optimal [3].
We have chosen specific variable values with regard to their meaning and function in the model. The variable values used in the experiments are listed in table 1.
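The method of qualitatively representative scenarios replaces an exhaustive parameter sweep with a small set of qualitatively distinct parameter combinations. A sketch of such a scenario loop is shown below; the coefficient names a, b, and delta follow the text, but the representative levels and the stand-in run function are assumptions.

```python
# Sketch of a run over qualitatively representative scenarios:
# only a few qualitatively distinct parameter combinations are
# simulated instead of a full grid.

from itertools import product

# Assumed representative levels for each coefficient (illustrative).
scenarios = {
    "a": [1.0, 1.5],       # income scale
    "b": [0.0, 1.0],       # cost scale
    "delta": [0.0, 0.1],   # motivational bonus
}

def run_model(a, b, delta, T=10):
    """Stand-in for one simulation run; returns the principal's payoff.
    The real model would roll equations (1)-(6) forward in time."""
    u = 1.0                              # assumed constant effort
    return sum((a - b) * u + delta for _ in range(T))

results = {
    combo: run_model(*combo)
    for combo in product(*scenarios.values())
}
best = max(results, key=results.get)
```

Each combination here plays the role of one "scenario"; comparing the resulting payoffs identifies the qualitatively best region of the parameter space without simulating every point.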
The model works incorrectly when all coefficients are equal to 1. As can be seen in figure 3, with a = 1.5 and delta = 0 the agent's objective function equals 0, since with b = 1 and delta = 0 the values of the cost and cost-compensation functions coincide. That is why the agent makes no effort.
With delta = 0.1 and p = 0, the value of the center's objective function increased. Moreover, the agent's objective function is minimal, as can be seen in figure 4. Still, the model works incorrectly, since the planned value of the state variable equals zero.
The center's objective function increased as the value of p decreased. At the same time, if b = 0, the agent would still exert effort, as can be seen in figure 5: despite the compensation function being zero, the reward function equals the motivational bonus, whose size must be greater than zero.
Fig. 5 – Experiment with b = 0 and p = 0.5
As k decreased, the center's objective function increased. The main model properties do not change when m is varied. The center's objective function increased with increasing a. As can be seen in figure 6, with a = 100 and k = 0.5 the center's objective function increased many times over.
From the conducted experiments we can derive optimal values for every coefficient. The optimal values are listed in table 2.
4 Conclusions
With these coefficient values, the optimal control problem is solved. Furthermore, it can be said that the control mechanism for the dynamic model is optimal, similarly to the static model mechanism. Also, nothing changes when the control aim changes, since the optimal value of the state variable is achieved in literally one simulation step. In a follow-up study we can compare different rules with the control mechanism, vary the basic functions, and try alternative model coefficients, clarifying with simulation experiments which alternative is preferable. For this purpose we can continue to rely on the method of qualitatively representative scenarios for simulation.
References