
Bachelor of Technology in Computer Science & Engineering

Assignment

Department of Computer Science


I.F.T.M. University, Moradabad
Submitted by
Pooja
B.Tech (CS) 4th year
Roll No:- 16118022
ASSIGNMENT - 1 (Questions)

1. What is Vector Processor? Classify the Vector Processor with block diagram and vector
processing methods.

2. Define the Flynn’s classification with features and neat diagram.

3. Write Short Notes On:-


a. Control Parallelism
b. Data Parallelism
c. Multiprocessor
d. Multicomputer

4. Describe Shared Memory Multiprocessor with block diagram and its applications also.

5. What is Array Processor? Classify the Array Processor with block diagram and vector
processing methods.

6. Describe Shared Memory Multiprocessor with block diagram and its applications also.
Q. 1: What is Vector Processor? Classify the Vector Processor with block diagram and Vector processing
methods.

Ans: A vector processor is a central processing unit that can work on an entire vector in one instruction. The
instruction to the processor is in the form of one complete vector instead of its individual elements. Vector
processors are used because they reduce the fetch and decode bandwidth, since fewer instructions must
be fetched. A vector processor is also known as an array processor.
Vector Processor Classification:
Based on where the operands are retrieved in a vector processor, pipelined vector
computers are classified into two architectural configurations:
1.Memory to memory architecture –
In memory to memory architecture, source operands, intermediate and final results are
retrieved (read) directly from the main memory. For memory to memory vector
instructions, the information of the base address, the offset, the increment, and the
vector length must be specified in order to enable streams of data transfers between the
main memory and pipelines. The processors like TI-ASC, CDC STAR-100, and Cyber-
205 have vector instructions in memory to memory formats. The main points about
memory to memory architecture are:
o There is no limitation of size
o Speed is comparatively slow in this architecture

2.Register to register architecture –


In register to register architecture, operands and results are retrieved indirectly from the
main memory through the use of a large number of vector registers or scalar registers.
The processors like Cray-1 and the Fujitsu VP-200 use vector instructions in register to
register formats. The main points about register to register architecture are:
o Register to register architecture has limited size.
o Speed is very high as compared to the memory to memory architecture.
o The hardware cost is high in this architecture.

 A block diagram of a modern multiple pipeline vector computer is shown below:

Vector processing is the process of using vectors to store a large number of variables
for high-intensity data processing - weather forecasting, human genome mapping and
GIS data are some examples. A vector processor is a computer CPU with parallel
processors that has the capability for vector processing.
Q. 2: Define the Flynn’s classification with features and neat diagram.
Ans: Flynn proposed a classification for the organization of a computer system by the
number of instructions and data items that are manipulated simultaneously. The
sequence of instructions read from memory constitutes an instruction stream. The
operations performed on the data in the processor constitute a data stream. Flynn's
taxonomy is a classification of computer architectures, proposed by Michael J. Flynn in
1966. The classification system has stuck, and it has been used as a tool in the design of modern
processors and their functionalities. Since the rise of multiprocessing central processing
units (CPUs), a multiprogramming context has evolved as an extension of the classification
system.

Flynn's classification divides computers into four major groups that are:
Single instruction stream, single data stream (SISD): A sequential computer which
exploits no parallelism in either the instruction or data streams. Single control unit
(CU) fetches single instruction stream (IS) from memory. The CU then generates
appropriate control signals to direct single processing element (PE) to operate on single
data stream (DS) i.e., one operation at a time.
Examples of SISD architecture are the traditional uniprocessor machines like older
personal computers (PCs; by 2010, many PCs had multiple cores) and mainframe
computers.

Single instruction stream, multiple data stream (SIMD): A single instruction operates
on multiple different data streams. Instructions can be executed sequentially, such as by
pipelining, or in parallel by multiple functional units.
Single instruction, multiple threads (SIMT) is an execution model used in parallel
computing where single instruction, multiple data (SIMD) is combined with
multithreading. This is not a distinct classification in Flynn's taxonomy, where it would
be a subset of SIMD. Nvidia commonly uses the term in its marketing materials and
technical documents where it argues for the novelty of Nvidia architecture.
Multiple instruction stream, single data stream (MISD): Multiple instructions
operate on one data stream. This is an uncommon architecture which is generally used
for fault tolerance. Heterogeneous systems operate on the same data stream and must
agree on the result. Examples include the Space Shuttle flight control computer.

Multiple instruction stream, multiple data stream (MIMD): Multiple autonomous
processors simultaneously executing different instructions on different data. MIMD
architectures include multi-core superscalar processors, and distributed systems, using
either one shared memory space or a distributed memory space.
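The MIMD idea can be sketched with Python threads (an illustrative toy, not a hardware model; the functions and input strings are invented for illustration): autonomous workers run different instructions on different data at the same time.

```python
from concurrent.futures import ThreadPoolExecutor

def word_count(text):
    # One "instruction stream": count words.
    return len(text.split())

def char_count(text):
    # A different "instruction stream": count characters.
    return len(text)

# MIMD style: different instructions operate on different data concurrently.
with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(word_count, "multiple instruction streams")
    f2 = pool.submit(char_count, "multiple data streams")
    print(f1.result(), f2.result())  # 3 21
```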
Q No. 3: Write Short Notes On:
A. Control Parallelism:- Control parallelism (also known as function parallelism or
task parallelism) is a form of parallelization of computer code across multiple
processors in parallel computing environments. Task parallelism focuses on
distributing tasks—concurrently performed by processes or threads—across different
processors. In contrast to data parallelism which involves running the same task on
different components of data, task parallelism is distinguished by running many
different tasks at the same time on the same data. A common type of task parallelism is
pipelining which consists of moving a single set of data through a series of separate
tasks where each task can execute independently of the others.
In a multiprocessor system, task parallelism is achieved when each processor executes
a different thread (or process) on the same or different data. The threads may execute
the same or different code. In the general case, different execution threads
communicate with one another as they work, but this is not a requirement.
Communication usually takes place by passing data from one thread to the next as part
of a workflow.
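The pipelining example above can be sketched in Python (the stage names and data are invented for illustration): each stage is a separate task that could run on its own processor, passing data to the next stage through a queue.

```python
import queue
import threading

def stage1(inbox, outbox):
    # Task 1: normalize raw records, then hand them to the next stage.
    while True:
        item = inbox.get()
        if item is None:          # sentinel: end of stream
            outbox.put(None)
            return
        outbox.put(item.strip().lower())

def stage2(inbox, results):
    # Task 2, a *different* task running concurrently: tokenize.
    while True:
        item = inbox.get()
        if item is None:
            return
        results.append(item.split())

q1, q2, results = queue.Queue(), queue.Queue(), []
t1 = threading.Thread(target=stage1, args=(q1, q2))
t2 = threading.Thread(target=stage2, args=(q2, results))
t1.start(); t2.start()
for record in ["  Hello World ", "Task  Parallelism"]:
    q1.put(record)
q1.put(None)
t1.join(); t2.join()
print(results)  # [['hello', 'world'], ['task', 'parallelism']]
```

While stage 2 tokenizes one record, stage 1 can already be normalizing the next, which is exactly how a pipeline overlaps work.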

B. Data Parallelism: Data parallelism is parallelization across multiple processors in
parallel computing environments. It focuses on distributing the data across different
nodes, which operate on the data in parallel. It can be applied on regular data
structures like arrays and matrices by working on each element in parallel. It contrasts
to task parallelism as another form of parallelism.

A data parallel job on an array of n elements can be divided equally among all the
processors. Let us assume we want to sum all the elements of the given array and the
time for a single addition operation is Ta time units. In the case of sequential
execution, the time taken by the process will be n×Ta time units as it sums up all the
elements of an array. On the other hand, if we execute this job as a data parallel job on
4 processors the time taken would reduce to (n/4)×Ta + merging overhead time units.
Parallel execution results in a speedup of 4 over sequential execution. One important
thing to note is that the locality of data references plays an important part in evaluating
the performance of a data parallel programming model. Locality of data depends on the
memory accesses performed by the program as well as the size of the cache.
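The n×Ta argument above can be made concrete with a small sketch (the chunking scheme is one of several possible; the chunks here run sequentially, whereas on real hardware each would run on its own processor): split the array among 4 workers, sum each chunk, then merge the partial sums.

```python
def parallel_sum(data, workers=4):
    # Divide the n elements into equal chunks, one per worker.
    chunk = (len(data) + workers - 1) // workers
    # Each worker sums its own chunk: roughly (n/workers) * Ta each.
    partials = [sum(data[i * chunk:(i + 1) * chunk]) for i in range(workers)]
    # Merging overhead: combine the partial results.
    return sum(partials)

data = list(range(1, 101))   # n = 100 elements
print(parallel_sum(data))    # 5050, same result as sequential sum(data)
```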

C. Multiprocessor: A multiprocessor is a computer system that uses two or more central
processing units (CPUs). The term also refers to the ability of a system
to support more than one processor or the ability to allocate tasks between them. There
are many variations on this basic theme, and the definition of multiprocessor can vary
with context, mostly as a function of how CPUs are defined (multiple cores on one die,
multiple dies in one package, multiple packages in one system unit, etc.).
According to some on-line dictionaries, a multiprocessor is a computer system having
two or more processing units (multiple processors), each sharing main memory and
peripherals, in order to simultaneously process programs. A 2009 textbook defined a
multiprocessor system similarly, but noted that the processors may share "some or all
of the system’s memory and I/O facilities"; it also gave tightly coupled system as a
synonymous term.
D. Multicomputer: Multicomputers are distributed memory MIMD architectures.
The following diagram shows a conceptual model of a multicomputer –

Multicomputers are message-passing machines which apply the packet switching method to
exchange data. Here, each processor has a private memory, but no global address space,
as a processor can access only its own local memory. So, communication is not
transparent: here programmers have to explicitly put communication primitives in their
code.

Having no globally accessible memory is a drawback of multicomputers. This can be
solved by using the following two schemes −
o Virtual Shared Memory (VSM)
o Shared Virtual Memory (SVM)
In these schemes, the application programmer assumes a big shared memory which is
globally addressable. If required, the memory references made by applications are
translated into the message-passing paradigm.

Virtual Shared Memory (VSM)


VSM is a hardware implementation. The virtual memory system of the Operating
System is transparently implemented on top of VSM, so the operating system thinks it is
running on a machine with shared memory.

Shared Virtual Memory (SVM)


SVM is a software implementation at the Operating System level with hardware support
from the Memory Management Unit (MMU) of the processor. Here, the unit of sharing is
Operating System memory pages.
If a processor addresses a particular memory location, the MMU determines whether the
memory page associated with the memory access is in the local memory or not. If the page
is not in the memory, in a normal computer system it is swapped in from the disk by the
Operating System.
But, in SVM, the Operating System fetches the page from the remote node which owns
that particular page.
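The page lookup described above can be simulated in a few lines of Python (all names and the page size are invented for illustration; real SVM is implemented at the OS/MMU level, not in application code):

```python
PAGE_SIZE = 4096  # assumed page size for the sketch

class Node:
    def __init__(self, owned_pages):
        self.local_pages = dict(owned_pages)  # page number -> page data

    def fetch_remote(self, page_no, owner):
        # SVM behaviour: fetch the page from the node that owns it,
        # instead of swapping it in from disk.
        self.local_pages[page_no] = owner.local_pages[page_no]

    def read(self, address, owner):
        page_no = address // PAGE_SIZE
        if page_no not in self.local_pages:   # "page fault": page not local
            self.fetch_remote(page_no, owner)
        return self.local_pages[page_no]

remote = Node({5: "page-5-data"})
local = Node({})
print(local.read(5 * PAGE_SIZE + 100, remote))  # page-5-data
```

After the first access, page 5 is cached locally, so later reads of the same page need no remote fetch.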
4.) Describe Shared Memory Multiprocessor with block diagram and its applications
also

Ans: A shared-memory multiprocessor is an architecture consisting of a modest number
of processors, all of which have direct (hardware) access to all the main memory in the
system. This permits any of the system processors to access data that any of the other
processors has created or will use. The key to this form of multiprocessor architecture is
the interconnection network that directly connects all the processors to the memories.
This is complicated by the need to retain cache coherence across all caches of all
processors in the system.

Shared-memory multiprocessors are differentiated by the relative time to access the
common memory blocks by their processors. An SMP is a system architecture in which all
the processors can access each memory block in the same amount of time. This capability
is often referred to as “UMA” or uniform memory access. SMPs are controlled by a single
operating system across all the processor cores and a network, such as a bus or crossbar,
that gives direct access to the multiple memory banks. Access times can still vary, as
contention between two or more processors for any single memory bank will delay access
times of one or more processors. But all processors still have the same chance and equal access.
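The shared-address-space idea can be sketched with Python threads (a software analogy only; the lock stands in for the hardware's handling of contention and coherence mentioned above):

```python
import threading

# One common memory location, visible to all "processors" (threads).
shared_memory = {"counter": 0}
lock = threading.Lock()

def processor(n_updates):
    for _ in range(n_updates):
        # Contention for the same memory location delays access,
        # but every processor has equal access to it.
        with lock:
            shared_memory["counter"] += 1

threads = [threading.Thread(target=processor, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_memory["counter"])  # 4000
```

Because all workers read and write the same address space, no explicit message passing is needed, which is the defining contrast with the multicomputer model above.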
5. What is Array Processor? Classify the Array Processor with block diagram and
vector processing methods.

Ans: An array processor is a central processing unit (CPU) that implements an instruction set
containing instructions that operate on one-dimensional arrays of data called vectors, in contrast
to scalar processors, whose instructions operate on single data items. Vector processors can
greatly improve performance on certain workloads, notably numerical simulation and similar
tasks. Vector machines appeared in the early 1970s and dominated supercomputer design
through the 1970s into the 1990s, notably the various Cray platforms. The rapid fall in the
price-to-performance ratio of conventional microprocessor designs led to the vector
supercomputer's demise in the later 1990s.
As of 2016 most commodity CPUs implement architectures that feature instructions for a form
of vector processing on multiple (vectorized) data sets. Common examples include Intel x86's
MMX, SSE and AVX instructions, AMD's 3DNow! extensions, Sparc's VIS extension,
PowerPC's AltiVec and MIPS' MSA. Vector processing techniques also operate in video-game
console hardware and in graphics accelerators. In 2000, IBM, Toshiba and Sony collaborated to
create the Cell processor.

Types of Array Processor


An array processor performs computations on large arrays of data. There are two types of
array processors: the attached array processor and the SIMD array processor. These are
explained as follows.

1.Attached Array Processor :


To improve the performance of the host computer in numerical computational tasks, an
auxiliary processor is attached to it.
An attached array processor has two interfaces:
o An input/output interface to a common processor.
o An interface with a local memory.
Here, the local memory is connected to the main memory. The host computer is a
general-purpose computer, and the attached processor is a back-end machine driven by the host computer.
The array processor is connected through an I/O controller to the computer, and the computer
treats it as an external interface.

2.SIMD Array Processor :


This is a computer with multiple processing units operating in parallel. Both types of array
processors manipulate vectors, but their internal organization is different.
6:- Describe Shared Memory Multiprocessor with block diagram and its applications also.

Ans.

Shared Memory Multiprocessor:-


A shared-memory multiprocessor is a computer system composed of multiple independent processors that
execute different instruction streams. Using Flynn’s classification, an SMP is a multiple-instruction
multiple-data (MIMD) architecture. The processors share a common memory address space and
communicate with each other via memory. A multiprocessor includes some number of processors with local
caches, all interconnected with each other and with common memory via an interconnection network.

Shared Memory Multiprocessor applications :-

 These systems are able to perform multiple-instructions-on-multiple-data (MIMD) programming.
 This type of architecture allows parallel processing.
 Communication between processors is fast and simple, since all of them share a single address space.
