
CS1FC16 MEMORY STRUCTURES

Memory model
Von Neumann memory is a linear strip of cells, indexed from 0. In theory it is a random-access
memory – meaning that any location can be accessed in constant time. In practice this is physically
impossible – signals are limited by the speed of light, so the further away a memory cell is, the longer
it takes to access.
The speed of light is 3 × 10⁸ m/s.
A 3 GHz clock performs 3 × 10⁹ operations per second.
During one clock cycle, light travels 10⁻¹ m = 10 cm.
The speed of a signal in copper is between 65% and 95% of the speed of light – it's variable.
The speed of a signal in fiber optics is 65% of the speed of light – it's constant.
If the propagation speed is constant, data can be sent using frequency modulation – varying the
frequency of the signal rather than its travel speed.
The speed of a signal in silicon is 20% of the speed of light – during one cycle of a 3 GHz clock it
travels 2 cm. If a chip is no bigger than that, the CPU can access any data on it in one clock cycle.
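The distances quoted above can be checked with a quick calculation (a sketch; the constants come straight from the notes):

```python
# Distance a signal travels during one cycle of a 3 GHz clock.
SPEED_OF_LIGHT = 3e8   # m/s
CLOCK_HZ = 3e9         # clock cycles per second

cycle_time = 1 / CLOCK_HZ                      # seconds per cycle
distance_light = SPEED_OF_LIGHT * cycle_time   # in vacuum: 0.1 m = 10 cm

# In silicon, signals propagate at roughly 20% of the speed of light.
distance_silicon = 0.20 * distance_light       # about 0.02 m = 2 cm

assert abs(distance_light - 0.10) < 1e-9
assert abs(distance_silicon - 0.02) < 1e-9
```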

Transactions between the CPU and its registers run at up to the CPU clock speed – they are on the
same chip. Computers also have cache memory that is much faster to access than main memory,
often arranged in several levels, each holding more data but sitting further from the CPU.
Copying a program into the cache allows it to be accessed faster, as all its instructions are nearby.

A von Neumann computer stalls until each memory request has been satisfied. A computer typically
spends 90% of its time stalled – the faster the CPU clock, the larger the proportion of time spent stalled.

Smaller programs run faster than big ones – a big block of memory covers a large distance from the first
to the last index.

Strings
A string is a linear sequence of characters. The compiler always keeps a pointer to the start of the
string. To mark the end of a string, it can either store a null value there or keep pointers to both the
start and the end. The latter approach is more secure – the control information is isolated from the
data, whereas a null terminator can be overwritten, so the program no longer recognizes where the
string ends.
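The difference can be sketched with a byte buffer standing in for raw memory (illustrative values, not from the notes):

```python
# Two ways to mark the end of a string, using a bytearray as "memory".
buf = bytearray(b"hello\x00world\x00")

def null_terminated(buf):
    """Read bytes until the first null value."""
    return bytes(buf[:buf.index(0)])

assert null_terminated(buf) == b"hello"

# Overwrite the null terminator: the end of the string is lost and the
# read runs on into whatever data follows.
buf[5] = ord("X")
assert null_terminated(buf) == b"helloXworld"

# Keeping the end out-of-band (here as a start and a length) is safer:
# the control information cannot be clobbered by writes to the data.
start, length = 0, 5
assert bytes(buf[start:start + length]) == b"hello"
```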

Vector
A vector is a linear sequence of objects – for example pointers to other structures. The compiler always
has a pointer to keep track of the start of the vector, but it may or may not keep track of the end of the
vector.

Matrix
A matrix is a two-dimensional table of data.
A matrix can be encoded as a column vector whose elements are pointers to row vectors.
It can also be encoded as a row vector – with elements pointing to column vectors.
Alternatively, a matrix can be encoded as a single strip with the elements laid out in row order – this
saves memory but makes access slower.
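The strip encoding can be sketched as follows (Python, illustrative names): element (i, j) lives at offset i * cols + j.

```python
# A 2 x 3 matrix stored as one flat strip in row-major order.
cols = 3
strip = [1, 2, 3,
         4, 5, 6]

def get(strip, cols, i, j):
    # Skip i complete rows of `cols` elements, then step j along the row.
    return strip[i * cols + j]

assert get(strip, cols, 0, 0) == 1
assert get(strip, cols, 1, 2) == 6
```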
Being able to access a matrix in different ways allows optimizing certain operations:
Transpose: there is no need to move any elements – only the order of access changes.
The matrix multiplication algorithm runs fastest when the first matrix is accessed by rows and the
second by columns.
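Transpose by access order can be sketched on a row-major strip (illustrative names): element (i, j) of the transpose is simply read as element (j, i) of the original, with no data moved.

```python
# 2 x 3 matrix in a row-major strip.
cols = 3
strip = [1, 2, 3,
         4, 5, 6]

def get(strip, cols, i, j):
    return strip[i * cols + j]

def get_t(strip, cols, i, j):
    # Element (i, j) of the transpose is element (j, i) of the original.
    return get(strip, cols, j, i)

# The 3 x 2 transpose is [[1, 4], [2, 5], [3, 6]].
assert get_t(strip, cols, 2, 1) == 6
assert get_t(strip, cols, 0, 1) == 4
```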

Matrix identity
If AI = A = IA then I is the identity matrix. Identity matrices are square matrices.
Matrix power
If A² exists, then A is a square matrix.
A¹ = A, A⁰ = I

Associativity
Matrix addition and multiplication are associative: A + (B + C) = (A + B) + C, A(BC) = (AB)C
Commutativity
Matrix addition is commutative: A + B = B + A
Matrix multiplication is not commutative – the order in which you multiply the matrices matters.
Distributivity
Matrix algebra is distributive:
A(B + C) = AB + AC
Example:
Let A be a 1000 x 4 matrix, and let B and C be 4 x 4 matrices.
The multiplication A(BC) is faster than (AB)C – BC is only a 4 x 4 matrix, whereas AB is a 1000 x 4
matrix. In the (AB)C case you end up producing two matrices of size 1000 x 4.
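The difference can be counted directly: multiplying a p x q matrix by a q x r matrix costs p·q·r scalar multiplications, so the two bracketings give very different totals.

```python
def mult_cost(p, q, r):
    """Scalar multiplications needed to multiply a (p x q) by a (q x r)."""
    return p * q * r

# A is 1000 x 4, B is 4 x 4, C is 4 x 4.
# (AB)C: form the big 1000 x 4 product AB first, then multiply by C.
cost_ab_c = mult_cost(1000, 4, 4) + mult_cost(1000, 4, 4)

# A(BC): form the small 4 x 4 product BC first, then one big product.
cost_a_bc = mult_cost(4, 4, 4) + mult_cost(1000, 4, 4)

assert cost_ab_c == 32000
assert cost_a_bc == 16064
assert cost_a_bc < cost_ab_c
```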

Zero-one matrix
All of the elements of a zero-one matrix are either 0 or 1. They can be used for many purposes,
including performing Boolean algebra.
Operations on zero-one matrices:
 Matrix join – obtained by performing the OR operation on corresponding elements; replacing +
with OR in matrix addition computes the join.
 Matrix meet – obtained by performing the AND operation on corresponding elements; replacing +
with AND in matrix addition computes the meet.
 Product – obtained by replacing + with OR and x with AND in matrix multiplication.
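The three operations can be sketched on small zero-one matrices (illustrative values; `|` is OR, `&` is AND):

```python
A = [[1, 0],
     [0, 1]]
B = [[1, 1],
     [0, 0]]

def join(A, B):
    # Elementwise OR: + in matrix addition replaced by OR.
    return [[a | b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def meet(A, B):
    # Elementwise AND: + in matrix addition replaced by AND.
    return [[a & b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def product(A, B):
    # Matrix multiplication with + replaced by OR and x replaced by AND.
    n, m, p = len(A), len(B), len(B[0])
    return [[int(any(A[i][k] & B[k][j] for k in range(m)))
             for j in range(p)] for i in range(n)]

assert join(A, B) == [[1, 1], [0, 1]]
assert meet(A, B) == [[1, 0], [0, 0]]
assert product(A, B) == [[1, 1], [0, 0]]   # A is the identity here
```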
Array
Arrays are tables of any non-negative integral dimension. The elements are identified by their array
indices.
A 0-dimensional array is a single element; a 1-dimensional array is a sequence of elements. A matrix
can be represented as a 2-dimensional array.

Flags
Software often contains many flags that control what a program does.
They can be implemented as:
Conditionals – not very efficient, because the decisions sit inside the loops.
Pointers to functions – shifting the decisions from inside the loop to program setup, which is
more efficient – they move the complexity from repeated runs to initialization.
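The two implementations can be sketched like this (Python, where a first-class function plays the role of a function pointer; names are illustrative):

```python
# Flag handled by a conditional inside the loop: the test is
# re-evaluated on every iteration.
def process_conditional(data, verbose):
    out = []
    for x in data:
        if verbose:
            out.append(f"value={x}")
        else:
            out.append(x)
    return out

# Flag resolved once at setup: pick the function, then run a tight loop
# with no decision inside it.
def process_dispatch(data, verbose):
    step = (lambda x: f"value={x}") if verbose else (lambda x: x)
    return [step(x) for x in data]

assert process_dispatch([1, 2], False) == process_conditional([1, 2], False)
assert process_dispatch([1, 2], True) == ["value=1", "value=2"]
```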
