
2016 International Conference on Smart City and Systems Engineering

Neural network method for solving convex programming problem with linear
inequality constraints
Yang Hongmei
Department of Mathematics, Changji University, Changji, 831100, China;
20813524@qq.com

Abstract-The paper mainly considers the convex programming problem with linear inequality constraints. First, we derive the dual problem of the original problem; then, according to the relationship between the original problem and the dual problem, namely nonlinear convex programming theory, we obtain an energy function for solving the convex programming problem with linear inequality constraints. We then construct a neural network model for solving this problem, and further prove that each optimal solution of the problem and its dual is an equilibrium point of the neural network, and that the solution of the neural network model is uniformly asymptotically stable. It can thus be seen that the model's structure is simple and its scale is small.

Keywords-linear inequality constraints; convex programming; energy function; neural network

I. INTRODUCTION

In many of today's areas of science and technology, especially optimal control, signal processing, pattern recognition, etc., we often run into optimization problems. For these problems, traditional numerical methods are not very effective, mainly because their computation time depends on the dimension and structure of the problem and on the complexity of the algorithm. In recent years, neural network models have been used more and more widely to solve optimization problems, with good results [1-3]. The adaptability and parallelism of neural network models can improve computing speed. Since Tank and Hopfield first put forward a neural network model for solving linear programming in 1985, the approach has attracted great interest from many researchers, and many other neural network models have been produced, such as the dual neural network, the feedback neural network, the projection neural network, etc. When solving problems, we hope that the proposed neural network has a simple structure and good properties. Based on the above, this paper presents a neural network model for solving the convex programming problem with linear inequality constraints, and the stability of the model is proved.

II. STRUCTURE OF THE NEURAL NETWORK

Consider the following convex programming problem with linear inequality constraints [4]:

  min f(x)
  s.t. Ax ≤ b, x ≥ 0                                            (1)

where A is an m × n matrix, b ∈ R^m, x ∈ R^n, and f(x) is a convex function on an open convex set.

In order to solve problem (1), we always assume that problem (1) has a solution and that it is strongly consistent, that is to say, there exists x' ∈ R^n such that

  Ax' < b, x' > 0.

Clearly, it is easy to obtain the dual problem of problem (1), namely:

  max_{x,λ} L(x, λ) = f(x) + λ^T (Ax − b) = L(z)
  s.t. ∇_x L(x, λ) = ∇f(x) + A^T λ ≥ 0,
       λ ≥ 0, x ≥ 0                                             (2)

where λ = (λ_1, λ_2, ..., λ_m)^T and z = (x^T, λ^T)^T.

According to the theory of nonlinear convex programming, we can easily obtain the following conclusions:

Theorem 1 Assuming that x* ∈ R^n is the optimal solution of problem (1), then there exists λ* ∈ R^m such that z* = (x*^T, λ*^T)^T is the optimal solution of problem (2).

Theorem 2 Assuming that φ(x) = Ax − b, φ(x) ∈ R^m, φ(x) ∈ C^1, then

  φ(x) ≤ 0  if and only if  (1/2) φ(x)^T [φ(x) + |φ(x)|] = 0,

where (1/2) φ(x)^T [φ(x) + |φ(x)|] ∈ C^1 and x = (x_1, x_2, ..., x_n)^T.

Theorem 3 x* and z* = (x*^T, λ*^T)^T are the optimal solutions of problems (1) and (2), respectively, if and only if

  λ*^T φ(x*) = 0,
  ∇_x L(x*, λ*) = 0,
  φ(x*) ≤ 0,
  λ* ≥ 0,
  x* ≥ 0.
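As a concrete illustration of the system in Theorem 3, the sketch below checks its conditions at the known optimum of a small hypothetical instance of problem (1); the quadratic f, the data A and b, and the point (x*, λ*) are all assumptions made for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical instance of problem (1): min f(x) s.t. Ax <= b, x >= 0,
# with f(x) = (x1 - 1)^2 + (x2 - 1)^2 (convex), A = [1 1], b = [1].
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

def grad_f(x):
    return 2.0 * (x - 1.0)

# Known optimum of this instance: x* = (0.5, 0.5) with multiplier lam* = 1.
x_star = np.array([0.5, 0.5])
lam_star = np.array([1.0])

phi = A @ x_star - b                       # phi(x*) = Ax* - b
grad_L = grad_f(x_star) + A.T @ lam_star   # grad_x L(x*, lam*)

# Theorem 3's system: complementarity, stationarity, feasibility, sign conditions.
assert abs(float(lam_star @ phi)) < 1e-12  # lam*^T phi(x*) = 0
assert np.allclose(grad_L, 0.0)            # grad_x L(x*, lam*) = 0
assert np.all(phi <= 1e-12)                # phi(x*) <= 0
assert np.all(lam_star >= 0) and np.all(x_star >= 0)
print("Theorem 3's system holds at (x*, lam*)")
```

Any other pair (x, λ) failing one of these conditions cannot be optimal for this instance.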


978-1-5090-5530-2/16 $31.00 © 2016 IEEE
DOI 10.1109/ICSCSE.2016.100
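The penalty term in Theorem 2 can be sanity-checked numerically; a minimal sketch (the test vectors are arbitrary assumptions, |·| is the componentwise absolute value):

```python
import numpy as np

# Theorem 2's penalty term, written as in the text: (1/2) phi^T [phi + |phi|].
def penalty(phi):
    return 0.5 * phi @ (phi + np.abs(phi))

# Componentwise, (1/2) phi_i (phi_i + |phi_i|) = max(phi_i, 0)^2, so the term
# vanishes exactly when phi <= 0, i.e. when Ax - b <= 0 is satisfied.
feasible = np.array([-1.0, -0.5, 0.0])
infeasible = np.array([-1.0, 0.3, 0.0])

assert penalty(feasible) == 0.0
assert penalty(infeasible) > 0.0
assert np.isclose(penalty(infeasible), np.sum(np.maximum(infeasible, 0.0) ** 2))
print("penalty vanishes iff phi <= 0")
```

The max(·, 0)² form also makes the C¹ claim in Theorem 2 plausible: the derivative 2·max(phi_i, 0) is continuous across phi_i = 0.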
Proof: Problem (1)'s Lagrange function is

  L(x, λ) = f(x) + λ^T φ(x),

defined on C = R^n × R₊^m. Because problem (1) is convex and satisfies the strong consistency condition, x* is the optimal solution of problem (1) if and only if there exists λ* ∈ R₊^m such that z* = (x*^T, λ*^T)^T is a saddle point of L(x, λ) in C, that is,

  L(x*, λ) ≤ L(x*, λ*) ≤ L(x, λ*),  for all (x, λ) ∈ C.

From L(x*, λ) ≤ L(x*, λ*) we have

  f(x*) + λ^T (Ax* − b) ≤ f(x*) + λ*^T (Ax* − b),

that is, (λ − λ*)^T (Ax* − b) ≤ 0. By the arbitrariness of λ ∈ R₊^m,

  λ* ≥ 0,  φ(x*) ≤ 0,  λ*^T φ(x*) = 0.

Combining this with problem (2) gives ∇_x L(x*, λ*) = 0. This completes the proof.

Therefore, the energy function for solving problem (1) is defined as follows [5]:

  E(z) = E(x, λ) = (1/2) ‖∇_x L(x, λ)‖² + (1/2) (λ^T φ(x))²
         + (1/2) φ(x)^T [φ(x) + |φ(x)|] + (1/2) λ^T [λ − |λ|],

where z = (x^T, λ^T)^T ∈ R^(n+m).

Theorem 4 z* = (x*^T, λ*^T)^T is a zero point of E(z) if and only if x* and z* = (x*^T, λ*^T)^T are the optimal solutions of problems (1) and (2), respectively.

Proof: "⇒" If x* and z* = (x*^T, λ*^T)^T are the optimal solutions of problems (1) and (2), respectively, then by the optimality conditions of Theorem 3,

  λ*^T φ(x*) = 0,  ∇_x L(x*, λ*) = 0,  Ax* − b ≤ 0,  λ* ≥ 0,

and hence, by Theorem 2,

  (1/2) φ(x*)^T [φ(x*) + |φ(x*)|] = 0,  (1/2) λ*^T [λ* − |λ*|] = 0.

Substituting these into E gives E(z*) = 0, that is to say, z* is a zero point of E(z). "⇐" Clearly established. This completes the proof.

According to the above, the neural network for solving problem (1) can be defined as

  dz/dt = −∇E(z),

namely

  dx/dt = −∇_x E(z) = −[∇²_xx L(x, λ) ∇_x L(x, λ) + (λ^T φ(x)) A^T λ + A^T (φ(x) + |φ(x)|)],
  dλ/dt = −∇_λ E(z) = −[A ∇_x L(x, λ) + (λ^T φ(x)) φ(x) + (λ − |λ|)].        (3)
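The dynamics (3) can be integrated with a simple forward-Euler scheme. The sketch below is a minimal illustration on an assumed instance of problem (1) (quadratic f(x) = ‖x − 1‖², so ∇f = 2(x − 1) and ∇²_xx L = 2I; A = [1 1], b = [1], with known optimum x* = (0.5, 0.5), λ* = 1); the step size and iteration count are arbitrary choices, not from the paper.

```python
import numpy as np

# Assumed instance of problem (1): f(x) = ||x - 1||^2, A = [1 1], b = [1].
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

def energy_grad(x, lam):
    """Gradient of the energy function E(z) from the text, for this quadratic f."""
    phi = A @ x - b
    grad_L = 2.0 * (x - 1.0) + A.T @ lam        # grad_x L(x, lam)
    gx = (2.0 * grad_L                          # Hess_xx L . grad_x L
          + (lam @ phi) * (A.T @ lam)           # from (lam^T phi)^2 / 2
          + A.T @ (phi + np.abs(phi)))          # from the phi-penalty term
    glam = (A @ grad_L
            + (lam @ phi) * phi
            + (lam - np.abs(lam)))              # from the lam-penalty term
    return gx, glam

# Forward-Euler discretization of dz/dt = -grad E(z), started at the origin.
x, lam = np.zeros(2), np.zeros(1)
h = 0.01
for _ in range(20000):
    gx, glam = energy_grad(x, lam)
    x, lam = x - h * gx, lam - h * glam

print(x, lam)  # the trajectory approaches x* = (0.5, 0.5), lam* = 1
```

Consistent with Theorems 4 and 5, the state drifts toward the zero point of E, where all four penalty terms vanish.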
It is known that the initial value problem of this differential equation has a unique solution.

Theorem 5 Assume that the optimal solution set of problems (1) and (2) is

  Ω* = { z ∈ R^(n+m) : x and (x^T, λ^T)^T are the optimal solutions of problems (1) and (2), respectively },

and that the equilibrium point set of the neural network (3) is

  Ω = { z ∈ R^(n+m) : ∇E(z) = 0 }.

Then Ω* ⊆ Ω; that is, each optimal solution of problems (1) and (2) is an equilibrium point of the neural network (3).

Proof: If z* = (x*^T, λ*^T)^T ∈ Ω*, then by the conclusions of Theorems 1 and 3, together with Theorem 4, E(z*) = 0; since E(z) ≥ 0, z* is a minimum point of E, so ∇E(z*) = 0 and z* ∈ Ω. Namely Ω* ⊆ Ω.

Theorem 6 If problem (1) has an optimal solution, then the unique equilibrium point of the neural network (3) is uniformly asymptotically stable.

III. CONCLUSIONS

ACKNOWLEDGEMENTS
This research is supported by the Natural Science Foundation of Xinjiang Province (Grant No. 2016D01C004).

REFERENCES
[1] A. Bouzerdoum and T. R. Pattison, "Neural network for quadratic optimization with bound constraints," IEEE Trans. on Neural Networks, 1993, 4(2): 293-304.
[2] Xia Yousheng, "Neural networks for solving extended linear programming problems," IEEE Trans. on Neural Networks, 1997, 8(3): 803-806.
[3] Xia Yousheng, "A new neural network for solving linear programming problems," IEEE Trans. on Neural Networks, 1996, 7(6): 268-275.
[4] Zhang Tao, et al., "Solving linear convex programming problems with inequality constraints by a potential reduction interior-point algorithm," Journal of Chengdu University, 2013, 1(32): 36-41. (in Chinese)
[5] Yang Hongmei, "A neural network algorithm for solving semi-infinite multiobjective programming," Science, Technology and Engineering, 2011, 11(33): 8286-8288. (in Chinese)
