Why Math?
Linear algebra, probability, and calculus are the 'languages' in which machine
learning is formulated. Learning these topics contributes to a deeper
understanding of the underlying algorithmic mechanics and allows the
development of new algorithms.
The core data structures behind deep learning are scalars, vectors, matrices,
and tensors. Let's solve the basic linear algebra problems programmatically
using these.
Scalars
Scalars are single numbers and are an example of a 0th-order tensor. The
notation x ∈ ℝ states that x is a scalar belonging to the set of real-valued
numbers, ℝ.
A few of Python's built-in scalar types are int, float, complex, bytes, and str.
NumPy, a Python library, adds 24 new fundamental data types to describe
different kinds of scalars. For information regarding data types, refer to the
NumPy documentation.
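A minimal sketch of scalar types and arithmetic that produces the outputs shown below; the particular values a = 5 and b = 7.5 are illustrative, chosen to match those outputs:

```python
# Two example scalars: an int and a float.
a = 5
b = 7.5

print(type(a))  # <class 'int'>
print(type(b))  # <class 'float'>
print(a + b)    # 12.5
print(a - b)    # -2.5
print(a * b)    # 37.5
print(a / b)    # 0.6666666666666666
```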
<class 'int'>
<class 'float'>
12.5
-2.5
37.5
0.6666666666666666
The following code snippet checks whether a given variable is a scalar.
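One way to perform this check is NumPy's np.isscalar; the specific inputs below are illustrative, chosen to match the outputs that follow:

```python
import numpy as np

print(np.isscalar(3.1))    # a float is a scalar -> True
print(np.isscalar([3.1]))  # a list is not a scalar -> False
print(np.isscalar(False))  # a bool counts as a scalar -> True
```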
True
False
True
Vectors
Vectors are ordered arrays of single numbers and are an example of a 1st-order
tensor. Vectors are members of objects known as vector spaces. A vector
space can be thought of as the entire collection of all possible vectors of a
particular length (or dimension). The three-dimensional real-valued vector
space, denoted by ℝ^3, is often used to represent our real-world notion of
three-dimensional space mathematically.
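A short sketch of vectors in ℝ^3 represented as NumPy arrays; the particular values are illustrative:

```python
import numpy as np

# Two vectors in R^3, represented as 1st-order NumPy arrays.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

print(x + y)         # element-wise addition: [5. 7. 9.]
print(3 * x)         # scalar multiplication: [3. 6. 9.]
print(np.dot(x, y))  # dot product: 32.0
```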
Matrices
Matrices are rectangular arrays consisting of numbers and are an example of
2nd-order tensors. If m and n are positive integers, that is, m, n ∈ ℕ, then an
m×n matrix contains m·n numbers, arranged in m rows and n columns.
$ python
>>> import numpy as np
>>> x = np.matrix([[1, 2], [2, 3]])
>>> # Finding the mean along axis 0 (column means) of the matrix x.
>>> a = x.mean(0)
>>> a
matrix([[1.5, 2.5]])
>>> # Finding the mean along axis 1 (row means) of the matrix x.
>>> z = x.mean(1)
>>> z
matrix([[1.5],
        [2.5]])
>>> z.shape
(2, 1)
>>> y = x - z
>>> y
matrix([[-0.5,  0.5],
        [-0.5,  0.5]])
>>> print(type(z))
<class 'numpy.matrixlib.defmatrix.matrix'>
Matrix Addition
Matrices can be added to scalars, vectors and other matrices. Each of these
operations has a precise definition. These techniques are used frequently in
machine learning and deep learning so it is worth familiarising yourself with
them.
Matrix-Matrix Addition
C = A + B (Shape of A and B should be equal)
The shape method returns the shape of a matrix, and add takes two matrices
as arguments and returns their sum. If the shapes of the matrices are not the
same, it throws an error saying addition is not possible.
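A sketch of such an add with an explicit shape check; the function name and error message here are illustrative, not from a particular library:

```python
import numpy as np

def add(A, B):
    """Element-wise sum of two matrices of equal shape."""
    if A.shape != B.shape:
        raise ValueError("addition not possible: shapes differ")
    return A + B

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(add(A, B))  # [[ 6  8]
                  #  [10 12]]
```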
Matrix-Scalar Addition
Adds the given scalar to all the elements in the given matrix.
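In NumPy this is a broadcast operation; a minimal illustration with an arbitrary matrix and scalar:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
print(A + 10)  # the scalar 10 is added to every element:
               # [[11 12]
               #  [13 14]]
```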
Matrix Scalar Multiplication
Multiplies every element of the given matrix by the given scalar.
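Again a broadcast in NumPy; a minimal illustration with an arbitrary matrix and scalar:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
print(3 * A)  # every element multiplied by 3:
              # [[ 3  6]
              #  [ 9 12]]
```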
Matrix Multiplication
A of shape (m × n) multiplied by B of shape (n × p) gives C of shape (m × p)
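A sketch of this shape rule using NumPy's @ operator; the sizes m = 2, n = 3, p = 2 and the entries are chosen purely for illustration:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])   # shape (2, 3)
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])      # shape (3, 2)

C = A @ B                   # shape (2, 2)
print(C.shape)              # (2, 2)
print(C)                    # [[ 4  5]
                            #  [10 11]]
```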
Matrix Transpose
With transposition you can convert a row vector to a column vector and vice
versa:
A = [aij]m×n
Aᵀ = [aji]n×m
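In NumPy, transposition is the .T attribute; a minimal illustration with an arbitrary 2×3 matrix:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])   # shape (2, 3)
print(A.T)                  # rows and columns swapped:
                            # [[1 4]
                            #  [2 5]
                            #  [3 6]]
print(A.T.shape)            # (3, 2)
```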
Tensors
The more general entity of a tensor encapsulates the scalar, vector and the
matrix. It is sometimes necessary — both in the physical sciences and machine
learning — to make use of tensors with order that exceeds two.
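A 3rd-order tensor can be built as a NumPy array with three axes; the shape (2, 3, 4) below is chosen arbitrarily for illustration:

```python
import numpy as np

# A 3rd-order tensor: two (3, 4) matrices stacked along a new axis.
T = np.arange(24).reshape(2, 3, 4)

print(T.ndim)   # 3 -> the order of the tensor
print(T.shape)  # (2, 3, 4)
print(T[0])     # the first (3, 4) slice of the tensor
```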
Closing Notes
Thanks for reading. If you found this story helpful, please click the clap button
below to spread the love.
Special Thanks to Samhita Alla for her contributions towards the article.