
Name: Areej Fatima

Roll#: MPHIL-20-10

Assignment topic: Generalization

Submitted to: Dr. Imran Javaid

Date: 03-11-2020

Department: CASPAM

Consider a mathematical idea and associate with it two well-known generalizations.

 Generalization
Generalizing is the process of "seeing through the particular": not dwelling on the
particularities but rather stressing relationships. Whenever we stress some features, we
consequently ignore others, and this is how generalizing comes about.
A generalization is a form of abstraction whereby common properties of specific instances are
formulated as general concepts or claims. Generalizations posit the existence of a domain
or set of elements, as well as one or more common characteristics shared by those elements.

 Mathematical generalization
In mathematics, generalization can be both a process and a product. When one looks at specific
instances, notices a pattern, and uses inductive reasoning to conjecture a statement covering all
such instances, one is generalizing. The symbolic, verbal, or visual statement of the pattern in
that conjecture may itself be called a generalization (the product).
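For instance, noticing that 1 = 1², 1 + 3 = 2², and 1 + 3 + 5 = 3² suggests the general claim (a
standard illustration added here, not part of the original text):

\[ 1 + 3 + 5 + \dots + (2n-1) = \sum_{k=1}^{n} (2k-1) = n^{2} \quad \text{for every natural number } n. \]

The passage from the observed cases to the statement about every n is the process of generalizing;
the formula itself is the product.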

 Example#1
A metric space, in mathematics (especially topology), is an abstract set equipped with a distance
function, called a metric, that specifies a nonnegative distance between any two of its points in
such a way that the following properties hold:

 the distance from the first point to the second equals zero if and only if the points are the
same
 the distance from the first point to the second equals the distance from the second to the
first, and
 the sum of the distance from the first point to the second and the distance from the second
point to a third exceeds or equals the distance from the first to the third.

The last of these properties is called the triangle inequality. The usual distance function on
the real number line is a metric, as is the usual distance function in Euclidean n-dimensional
space. Thus, the notion of a metric generalizes the usual notion of distance.
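Written out formally (a standard formulation, added here for reference), a metric on a set X is a
function d: X × X → [0, ∞) such that, for all points x, y, z in X:

\[ d(x,y) = 0 \iff x = y, \qquad d(x,y) = d(y,x), \qquad d(x,z) \le d(x,y) + d(y,z). \]

The Euclidean distance \( d(x,y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2} \) on n-dimensional space
satisfies all three properties, so ordinary distance is one particular instance of this general
notion.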

 Example#2

In short, the Jacobian matrix is a generalization of the gradient for vector-valued functions.

Recall that the gradient is the vector of partial derivatives of a scalar-valued multi-variable
function. So, consider a multi-variable function of the form f: X1 × X2 × ⋯ × XN → Y. The output of
this function is f(x1, x2, …, xN) = y, where xi ∈ Xi for i = 1, …, N and y ∈ Y, and its gradient is
∇f = [∂f/∂x1, …, ∂f/∂xN] ∈ R^N.

A vector-valued function is a function whose output is a vector, i.e. a function of the form
f: X → Y1 × Y2 × ⋯ × YM. The output of this function is a vector f(x) = [y1, y2, …, yM], where
x ∈ X and yi ∈ Yi for i = 1, …, M. You can also view a vector-valued function f as a vector of
scalar-valued functions [f1, f2, …, fM], where fi: X → Yi for all i.

We can also have multi-variable vector-valued functions, i.e. functions of the form
f: X1 × X2 × ⋯ × XN → Y1 × Y2 × ⋯ × YM. The Jacobian matrix of such a function contains one partial
derivative ∂fi/∂xj for each combination of output fi and input xj; in the usual convention it is an
M × N matrix whose i-th row is the gradient of the component function fi.
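Explicitly (written out here for reference, in the standard row-per-output convention), the
Jacobian is

\[
J_f =
\begin{bmatrix}
\dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_N} \\
\vdots & \ddots & \vdots \\
\dfrac{\partial f_M}{\partial x_1} & \cdots & \dfrac{\partial f_M}{\partial x_N}
\end{bmatrix}.
\]

When M = 1 there is a single row, and the Jacobian reduces to the gradient; this is precisely the
sense in which it generalizes the gradient.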

If we want to optimize a multi-variable vector-valued function, we can make use of the Jacobian in
much the same way that we make use of the gradient in the case of scalar-valued multi-variable
functions.
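As a rough computational illustration (not part of the original text), the following Python sketch
approximates the Jacobian of a small vector-valued function by forward finite differences; the
example function f and the step size h are illustrative assumptions, not anything prescribed above.

import numpy as np

def numerical_jacobian(f, x, h=1e-6):
    # Approximate the M x N Jacobian of f at x by forward finite differences.
    x = np.asarray(x, dtype=float)
    y0 = np.asarray(f(x), dtype=float)
    J = np.zeros((y0.size, x.size))
    for j in range(x.size):
        x_step = x.copy()
        x_step[j] += h
        # Column j holds the partial derivatives with respect to x_j.
        J[:, j] = (np.asarray(f(x_step), dtype=float) - y0) / h
    return J

# Example: f(x1, x2) = (x1^2 * x2, 5*x1 + sin(x2)); its exact Jacobian is
# [[2*x1*x2, x1^2], [5, cos(x2)]], which the approximation should match closely.
f = lambda x: np.array([x[0] ** 2 * x[1], 5 * x[0] + np.sin(x[1])])
print(numerical_jacobian(f, [1.0, 2.0]))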
