
Gradient Vector:

The gradient of a scalar function is a vector function. The magnitude and direction of the gradient vector are as follows.

• Magnitude: the maximum rate of change of the scalar function at that point.
• Direction: along the direction of the highest rate of change of the scalar function.

Example: Heat flows along the negative temperature gradient.
Gradient Vector in 3D (Three-Dimensional) Space:

$$\nabla \phi(x, y, z) = \frac{\partial \phi}{\partial x}\,\hat{i} + \frac{\partial \phi}{\partial y}\,\hat{j} + \frac{\partial \phi}{\partial z}\,\hat{k}$$

Theorem: The direction of the gradient vector of a scalar at any point on the scalar surface is perpendicular to the scalar surface at that point.
Proof:
Let us consider two points $P(x, y, z)$ and $Q(x+dx, y+dy, z+dz)$ on the scalar surface $\phi(x, y, z) = 0$.

Position vector of the point $P(x, y, z)$:
$$\vec{r} = x\,\hat{i} + y\,\hat{j} + z\,\hat{k}$$

Position vector of the point $Q(x+dx, y+dy, z+dz)$:
$$\vec{r} + d\vec{r} = (x+dx)\,\hat{i} + (y+dy)\,\hat{j} + (z+dz)\,\hat{k}$$

$$\vec{PQ} = d\vec{r} = dx\,\hat{i} + dy\,\hat{j} + dz\,\hat{k} \quad \text{(tangential to the surface)}$$

$P(x, y, z)$ and $Q(x+dx, y+dy, z+dz)$ are two close points, as $dx$, $dy$ and $dz$ are infinitesimally small increments in $x$, $y$ and $z$ respectively. The direction of $d\vec{r}$ (i.e., $\vec{PQ}$) is thus tangential to the scalar surface $\phi(x, y, z) = 0$ at the point $P(x, y, z)$.
For an arbitrary scalar function, $\phi$ is constant on the scalar surface:
$$\phi(x, y, z) = 0 \;\Rightarrow\; d\phi = 0$$

Again,
$$d\phi = \frac{\partial \phi}{\partial x}dx + \frac{\partial \phi}{\partial y}dy + \frac{\partial \phi}{\partial z}dz = \left( \frac{\partial \phi}{\partial x}\,\hat{i} + \frac{\partial \phi}{\partial y}\,\hat{j} + \frac{\partial \phi}{\partial z}\,\hat{k} \right) \cdot \left( dx\,\hat{i} + dy\,\hat{j} + dz\,\hat{k} \right) = \nabla\phi \cdot d\vec{r}$$

Hence, $d\phi = \nabla\phi \cdot d\vec{r} = 0$
$$\Rightarrow \nabla\phi \perp d\vec{r} \quad \text{(tangential to the scalar surface } \phi(x, y, z) = 0\text{)}$$

Hence, the gradient vector of a scalar $\phi(x, y, z)$ at any point $P(x, y, z)$ on the scalar surface is perpendicular to the scalar surface at that point.
(QED)
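To make the theorem concrete, here is a minimal numerical sketch (Python with NumPy; the unit sphere $\phi = x^2 + y^2 + z^2 - 1 = 0$ is an assumed example surface). It takes two nearby points on the sphere, forms the chord $d\vec{r}$, and checks that $\nabla\phi \cdot d\vec{r} \approx 0$:

import numpy as np

# Assumed example surface: phi(x, y, z) = x^2 + y^2 + z^2 - 1 = 0 (unit sphere)
def grad_phi(p):
    return 2.0 * p  # gradient of x^2 + y^2 + z^2 - 1

def on_sphere(theta, psi):
    # Parametrise the unit sphere so both points lie exactly on phi = 0
    return np.array([np.sin(theta) * np.cos(psi),
                     np.sin(theta) * np.sin(psi),
                     np.cos(theta)])

eps = 1e-5
P = on_sphere(0.7, 1.2)
Q = on_sphere(0.7 + eps, 1.2 + eps)
dr = Q - P                       # nearly tangential chord PQ

print(np.dot(grad_phi(P), dr))   # ~0: the gradient is perpendicular to the surface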
Conditions for Maximum and Minimum
Value of the Inner Product:
$$\vec{A} \cdot \vec{B} = |\vec{A}|\,|\vec{B}| \cos\theta_{A,B}, \quad \text{where } -1 \le \cos\theta_{A,B} \le 1$$

Condition for Maximum Inner Product:

$$(\vec{A} \cdot \vec{B})_{\max} = |\vec{A}|\,|\vec{B}| \quad \text{when } \theta_{A,B} = 0^{\circ},$$

i.e., when $\vec{A} \parallel \vec{B}$,
i.e., when $\vec{B} = t\vec{A}$, where $t$ = some positive scalar constant.

Condition for Minimum Inner Product:

$$(\vec{A} \cdot \vec{B})_{\min} = -|\vec{A}|\,|\vec{B}| \quad \text{when } \theta_{A,B} = 180^{\circ},$$

i.e., when $\vec{A}$ and $\vec{B}$ are parallel and opposite (antiparallel),
i.e., when $\vec{B} = -t\vec{A}$, where $t$ = some positive scalar constant.
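A quick NumPy check of these two bounds (the vector A is an arbitrary illustrative value):

import numpy as np

A = np.array([3.0, 4.0, 0.0])                        # arbitrary illustrative vector, |A| = 5
print(np.dot(A, 2.0 * A))                            # B = tA (t > 0):  +|A||B| = 50, the maximum
print(np.dot(A, -2.0 * A))                           # B = -tA (t > 0): -|A||B| = -50, the minimum
print(np.linalg.norm(A) * np.linalg.norm(2.0 * A))   # |A||B| = 50, for comparison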
Total Differential of a Function:
An arbitrary scalar function of $n$ variables:
$$\phi = \phi(x_1, x_2, x_3, \ldots, x_n) = \phi(X), \quad \text{where } X = [x_1, x_2, x_3, \ldots, x_n]^T \in \mathbb{R}^n$$

The gradient of the scalar function is given by

$$\nabla_X \phi(X) = \left[ \frac{\partial \phi}{\partial x_1}, \frac{\partial \phi}{\partial x_2}, \frac{\partial \phi}{\partial x_3}, \ldots, \frac{\partial \phi}{\partial x_n} \right]^T$$

The total differential of the function $\phi(x_1, x_2, x_3, \ldots, x_n)$ is given by

$$d\phi = \frac{\partial \phi}{\partial x_1}dx_1 + \frac{\partial \phi}{\partial x_2}dx_2 + \frac{\partial \phi}{\partial x_3}dx_3 + \ldots + \frac{\partial \phi}{\partial x_n}dx_n$$

$$= \left[ \frac{\partial \phi}{\partial x_1}, \frac{\partial \phi}{\partial x_2}, \frac{\partial \phi}{\partial x_3}, \ldots, \frac{\partial \phi}{\partial x_n} \right] \cdot \begin{bmatrix} dx_1 \\ dx_2 \\ dx_3 \\ \vdots \\ dx_n \end{bmatrix}$$

$$= \left[ \nabla_X \phi(X) \right]^T dX, \quad \text{where } dX = [dx_1, dx_2, dx_3, \ldots, dx_n]^T \in \mathbb{R}^n$$

Here $d\phi$ is the infinitesimally small change in $\phi(x_1, x_2, x_3, \ldots, x_n) = \phi(X)$.
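As a sanity check, the following minimal Python sketch (NumPy; the function $\phi(X) = x_1^2 + 3x_2 x_3$ is an assumed example) compares the first-order prediction $[\nabla_X \phi(X)]^T dX$ against the actual change in $\phi$ for a small step $dX$:

import numpy as np

# Assumed example function: phi(X) = x1^2 + 3*x2*x3
def phi(X):
    return X[0]**2 + 3.0 * X[1] * X[2]

def grad_phi(X):
    # [d(phi)/dx1, d(phi)/dx2, d(phi)/dx3]^T = [2*x1, 3*x3, 3*x2]^T
    return np.array([2.0 * X[0], 3.0 * X[2], 3.0 * X[1]])

X  = np.array([1.0, 2.0, -1.0])
dX = np.array([1e-4, -2e-4, 3e-4])       # small step

d_phi_linear = grad_phi(X) @ dX          # [grad phi]^T dX
d_phi_actual = phi(X + dX) - phi(X)      # actual change in phi
print(d_phi_linear, d_phi_actual)        # agree to first order in dX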


Iterative Maximisation of a Function
(Gradient Ascent Method)
$$d\phi = \left[ \nabla_X \phi(X) \right]^T dX$$

In order to maximise $\phi(X)$, the change $d\phi$ [the infinitesimally small change in $\phi(X)$] must be as large an increment as possible at every step. To achieve this, the change in $X$, that is $dX$ [the infinitesimally small change in $X$], must be along $\nabla_X \phi(X)$ [the gradient of $\phi(X)$].

$$\therefore dX \parallel \nabla_X \phi(X)$$
$$\text{i.e., } dX = \eta \, \nabla_X \phi(X), \quad \text{where } \eta = \text{rate of learning}$$

This procedure is called the Gradient Ascent Method.
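A minimal gradient-ascent sketch in Python (NumPy; the concave objective $\phi(X) = -\|X - c\|^2$ with $c = [1, 2]^T$ and the learning rate $\eta = 0.1$ are assumed purely for illustration):

import numpy as np

c = np.array([1.0, 2.0])                 # assumed maximiser of phi

def phi(X):
    return -np.sum((X - c)**2)           # concave; maximum value 0 at X = c

def grad_phi(X):
    return -2.0 * (X - c)

eta = 0.1                                # rate of learning (assumed)
X = np.zeros(2)                          # arbitrary starting point
for _ in range(100):
    X = X + eta * grad_phi(X)            # update step: dX = eta * grad phi(X)

print(X, phi(X))                         # X -> [1, 2], phi -> 0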

Iterative Minimisation of a Function
(Gradient Descent Method)

$$d\phi = \left[ \nabla_X \phi(X) \right]^T dX$$

In order to minimise $\phi(X)$, the change $d\phi$ [the infinitesimally small change in $\phi(X)$] must be as large a decrement as possible at every step. To achieve this, the change in $X$, that is $dX$ [the infinitesimally small change in $X$], must be along the opposite direction of $\nabla_X \phi(X)$ [the gradient of $\phi(X)$].

$$\therefore dX \text{ must be parallel and opposite to } \nabla_X \phi(X)$$
$$\text{i.e., } dX = -\eta \, \nabla_X \phi(X), \quad \text{where } \eta = \text{rate of learning}$$

This procedure is called the Gradient Descent Method.
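The ascent sketch above becomes gradient descent by flipping the sign of the update (here minimising the convex $\phi(X) = \|X - c\|^2$, again an assumed example):

import numpy as np

c = np.array([1.0, 2.0])                 # assumed minimiser of phi

def grad_phi(X):
    return 2.0 * (X - c)                 # gradient of phi(X) = ||X - c||^2

eta = 0.1                                # rate of learning (assumed)
X = np.zeros(2)                          # arbitrary starting point
for _ in range(100):
    X = X - eta * grad_phi(X)            # update step: dX = -eta * grad phi(X)

print(X)                                 # X -> [1, 2], the minimiser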
