
Shared Memory Multiprocessors

• Beyond the instruction-level concurrency used
in vector and multiple-issue processors, there
are processing ensembles that consist of 'n'
identical processors that share a common
memory.
• Multiprocessors are usually designed for at
least one of two reasons:
– Fault Tolerance
– Program Speed-up

Shared Memory Multiprocessors


• Fault Tolerant Systems: n identical
processors ensure that the failure of one processor
does not affect the ability of the multiprocessor
to continue with program execution.
• These multiprocessors are called high
availability or high integrity systems.
• These systems may not provide any speed-up
over a single-processor system.
Shared Memory Multiprocessors
• Program Speed-up: Most multiprocessors are
designed with the main objective of improving
program speed-up over that of a single processor.
• Yet fault tolerance is still an issue, as no design
for speed-up ought to come at the expense of
fault tolerance.
• It is generally not acceptable for the whole
multiprocessor system to fail if any one of its
processors fails.

Shared Memory Multiprocessors


• Basic Issues: Three basic issues are
associated with the design of multiprocessor
systems:
– Partitioning
– Scheduling of tasks
– Communication and synchronization
Partitioning
• This is the process of dividing a program into
tasks, each of which can be assigned to an
individual processor for execution.
• The partitioning process occurs at compile time.
• The goal of the partitioning process is to uncover
the maximum amount of parallelism possible within
certain obvious machine limitations.
• Program overhead, the added time a task takes
to be loaded into a processor, defines the size of
the minimum task produced by partitioning a program.

Partitioning
• The program overhead time, which is
configuration and scheduling dependent, limits
the maximum degree of parallelism among
executing subtasks.
• If the amount of parallelism is increased by
using finer and finer grain task sizes, the amount
of overhead time increases accordingly (see the
sketch below).
• If the available parallelism exceeds the known
number of processors, or several shorter tasks
share the same instruction/data working set,
clustering is used to group subtasks into a
single assignable task.
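
A minimal sketch of this coarse-grain partitioning, assuming a POSIX-threads shared-memory system; the task count, the chunk boundaries, and the sum_chunk helper are illustrative choices, not taken from the text. The work is split into a few coarse tasks so that the per-task overhead of creating and loading a thread is amortized over many iterations:

/* Static partitioning sketch: an array sum split into NTASKS
 * coarse-grain chunks, one per processor (pthreads assumed). */
#include <pthread.h>
#include <stdio.h>

#define N      1000000
#define NTASKS 4                 /* coarse grain: one task per processor */

static double data[N];
static double partial[NTASKS];

static void *sum_chunk(void *arg) {
    long t  = (long)arg;
    long lo = t * (N / NTASKS);
    long hi = (t == NTASKS - 1) ? N : lo + N / NTASKS;
    double s = 0.0;
    for (long i = lo; i < hi; i++)
        s += data[i];
    partial[t] = s;              /* each task writes its own slot: no sync needed */
    return NULL;
}

int main(void) {
    pthread_t tid[NTASKS];
    for (long i = 0; i < N; i++) data[i] = 1.0;
    for (long t = 0; t < NTASKS; t++)
        pthread_create(&tid[t], NULL, sum_chunk, (void *)t);
    double total = 0.0;
    for (long t = 0; t < NTASKS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];
    }
    printf("sum = %f\n", total); /* expect 1000000.0 */
    return 0;
}

Shrinking the chunks would expose finer-grain parallelism but pay the task-startup overhead more often, which is exactly the trade-off described above.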
Partitioning
• The detection of parallelism is done by one of
three methods:
– Explicit statement of concurrency in a high-level
language: programmers delineate the boundaries
among tasks that can be executed in parallel.
– Programmer hints in source statements, which
compilers can use or ignore (see the sketch after
this list).
– Implicit parallelism: sophisticated compilers can
detect parallelism in normal serial code and
transform the program code for execution on
multiprocessors.
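
As a concrete illustration of the programmer-hint method, here is a minimal C sketch using an OpenMP pragma; it assumes a compiler with OpenMP support (e.g. gcc -fopenmp), while a compiler without such support simply ignores the hint and emits correct serial code:

/* Programmer-hint sketch: the pragma marks the loop iterations as
 * independent; the compiler may parallelize or ignore the hint. */
#include <stdio.h>

#define N 1000

int main(void) {
    double a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }

    #pragma omp parallel for     /* hint: iterations are independent */
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[N-1] = %f\n", c[N - 1]);  /* 3 * (N - 1) with or without threads */
    return 0;
}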

Scheduling
• Scheduling is done both statically (at
compile time) and dynamically (at run
time).
• Static scheduling alone is not sufficient to
ensure optimum speed-up or even fault
tolerance.
• Processor availability is difficult to
predict and may vary from run to run.
• Run-time scheduling has the advantage of
handling changing system environments
and program structures.
Scheduling
• Run-time overhead is the prime disadvantage of
run-time scheduling.
• It is desirable from a fault-tolerance point of
view that run-time scheduling can be initiated by
any processor, and that the process itself is
distributed across all available processors.
• The major overheads in run-time
scheduling include:
– Information gathering (about the dynamic program
state and the state of the system)
– Scheduling itself

Scheduling
– Dynamic execution control: dynamic clustering or
process creation at run time.
– Dynamic data management: assignment of
tasks to processors in such a way as to
minimize the memory overhead and the
delay in accessing the data.
– Overhead is primarily a function of the following
two program characteristics (illustrated by the
sketch below):
• Program Dynamicity
• Granularity: the size of the basic program subtask
to be scheduled
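
A minimal sketch of dynamic scheduling on a shared-memory machine, assuming POSIX threads plus C11 atomics; the worker function, chunk size, and thread count are illustrative assumptions. A shared counter serves as a trivial work queue: each processor claims the next unclaimed chunk at run time, so a faster or less loaded processor automatically takes more work, and the per-chunk atomic update is the scheduling overhead whose relative cost is set by the granularity (CHUNK):

/* Dynamic (run-time) scheduling sketch: a shared atomic counter acts
 * as the work queue; threads self-schedule chunks until none remain. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define N       1000000
#define CHUNK   1000             /* granularity of one schedulable task */
#define NTHREAD 4

static double data[N];
static _Atomic long next_chunk = 0;   /* shared "queue head" */
static double partial[NTHREAD];

static void *worker(void *arg) {
    long t = (long)arg;
    double s = 0.0;
    for (;;) {
        long c  = atomic_fetch_add(&next_chunk, 1);  /* claim a chunk */
        long lo = c * CHUNK;
        if (lo >= N) break;                          /* no work left */
        long hi = (lo + CHUNK < N) ? lo + CHUNK : N;
        for (long i = lo; i < hi; i++) s += data[i];
    }
    partial[t] = s;
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREAD];
    for (long i = 0; i < N; i++) data[i] = 1.0;
    for (long t = 0; t < NTHREAD; t++)
        pthread_create(&tid[t], NULL, worker, (void *)t);
    double total = 0.0;
    for (long t = 0; t < NTHREAD; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];
    }
    printf("sum = %f\n", total);  /* expect 1000000.0 */
    return 0;
}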
Scheduling
• The following run-time scheduling techniques
are arranged according to complexity:
– System Load Balancing
– Load Balancing
– Clustering
– Scheduling with Compiler Assistance
– Static Scheduling / Custom Scheduling
