
RESEARCH PAPER ON

TOPOLOGICAL PROPERTIES OF
MULTIPROCESSOR INTERCONNECTION
NETWORK
ABSTRACT:
Modern computer systems are multiprocessor or multicomputer machines. Their effectiveness depends on sensible strategies for managing the work they execute: fast execution of a parallel application is feasible only if its components are suitably ordered in time and space. The main focus of researchers today is to increase the computing power of systems massively by exploiting parallelism, which has become a necessity in designing high-performance computer systems. In hardware, parallelism refers to providing multiple, simultaneously active processors (nodes); in software, it means structuring a program as a set of largely independent subtasks (a load). Research is active both in developing new multiprocessor architectures and in scheduling the partitioned program so as to attain high performance.

INTRODUCTION:
The increase in computer speed, and with it the ability to solve larger and more significant problems, is an everlasting challenge for designers. As computer systems grow more complex and their speed increases, further problems must be overcome to raise the speed even more, and the demand for "capacity" appears to grow faster still. The difficulties follow from physical phenomena at the foundations of computer device technology. Parallel computing is an important research area in Computer Science; it raises many issues that do not arise on a serial machine, such as parallel algorithms, dividing a program into subtasks, synchronizing and coordinating communication, and scheduling the subtasks onto the processors. To increase speed, either the yield of these devices must be higher or their scale must be smaller. What is more, to minimize the number of defective circuits on one semiconductor piece, the chips are reduced in size. The growing complexity of processors therefore results in an increased density of power dissipation, which cannot grow indefinitely. Moreover, since the wavelength of light limits lithography methods, further shrinking becomes slower and more expensive than in recent years. Hence, it appears that unless new ways are found to overcome the prevailing technological problems, the development of processors will become slower and prohibitively expensive. A solution to this problem is to exploit the potential concurrency in the execution of independent program fragments; in other words, parallelism of computation is the answer.
Parallel computing represents the next generation of computing and has influenced numerous scientific, engineering, and business applications. Scheduling a set of dependent or independent tasks for parallel execution on multiple processors is a crucial computational problem: a large problem or application is divided into a set of subtasks, and these subtasks are distributed among the processors. Allocating subtasks to processors and determining their order of execution is termed task scheduling. Task scheduling is NP-complete in the general case and in several restricted cases. It has been applied in many settings, from scientific to engineering problems. Figure 1 shows the transformation from an application program to a task schedule: the application program is split into several subtasks represented by a Directed Acyclic Graph (DAG), and these subtasks are allocated to the multiple processors while maintaining the precedence constraints among them.
PARALLEL COMPUTER SYSTEMS:
A. HARDWARE:
It is common to begin a description of parallel systems by classifying the kinds of parallelism and of parallel machines. A useful view of parallelism distinguishes data parallelism from code parallelism. Another view of the classification considers the granularity of the parallelism. Classical computers execute instructions in the order set by the sequence in the program code; this approach is called the control-driven, or von Neumann, design.
Control-driven computers are divided into four classes: SISD (single instruction stream, single data stream), SIMD (single instruction stream, multiple data streams), MISD (multiple instruction streams, single data stream), and MIMD (multiple instruction streams, multiple data streams). A multicomputer consists of a group of processors with local memories, interconnected by some network. We refer to a processor together with its local memory and network interface as a processing element (PE). Tightly-coupled computers can be further differentiated by the kind of interconnection. In this work, we limit interconnection types to bus(es) and point-to-point networks (also referred to as single-stage networks). In multistage networks, PEs are connected by several layers of switches, and the inner-layer switches have no PEs attached; such networks are divided here into trees and multistage cube networks.

B. SOFTWARE:
In many common applications (programs), considerable potential parallelism can be found. Thus, programs can often be executed via several simultaneous threads (the corresponding notions at the application level being a thread and a task). Computer systems must offer support for implementing an application's parallelism and for the scheduling problems this exposes. Parallel operating systems evolve from previously existing systems, and many ideas are "naturally" inherited. Based on the acceptable response time, two load types are distinguished in single-processor systems: terminal (or interactive) load and batch load. Since batch tasks are submitted to the computer system well before their actual execution begins, deterministic scheduling algorithms can be applied. For the terminal load, which needs an immediate response, access to processors is granted based on FCFS, Round-Robin, multi-level priority queues, etc.
MULTIPROCESSOR INTERCONNECTION NETWORK:
A. NETWORK CHARACTERISTICS:
• Operation Modes: There are two basic modes of network operation: synchronous and asynchronous. In the synchronous mode, the network is centrally supervised and connection paths are established simultaneously. In the asynchronous mode, connection paths are set up or disconnected on an individual basis. The asynchronous mode of operation is also suitable for a multiprocessor system.
• Switching Techniques: There are three basic switching techniques: circuit switching, packet switching, and wormhole switching.
• Circuit Switching: Sets up the switches and associated ports and establishes a fixed path between an input-output pair; it is efficient for large transmissions.
• Packet Switching: Refers to a method in which messages between any two terminals are broken into several shorter, fixed-length packets that are routed independently to their destination using store-and-forward procedures. Compared with circuit switching, packet switching is efficient for shorter and more frequent transmissions (a simple latency model is sketched after this list).
• Routing Techniques: Routing is the strategy of establishing communication paths and resolving conflicts. Three basic routing techniques are considered: centralized, distributed, and adaptive. In the distributed scheme, decisions are made locally, based on the current conditions. In the adaptive scheme, information about the network is collected globally, but routing decisions are still made locally.
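To make the comparison between circuit and packet switching concrete, the following Python sketch uses a simplified, textbook-style latency model; it is an illustration under assumed parameters (setup time, link bandwidth, packet and header sizes, hop count), not a model or measurement from this paper.

```python
# Simplified latency model: circuit switching vs. store-and-forward packet switching.
# All parameters are illustrative assumptions, not values from the paper.

def circuit_latency(msg_bytes, hops, setup_per_hop, bandwidth):
    """Pay a one-time path-setup cost across all hops, then stream the whole message."""
    return hops * setup_per_hop + msg_bytes / bandwidth

def packet_latency(msg_bytes, hops, packet_bytes, header_bytes, route_delay, bandwidth):
    """Store-and-forward: each packet carries a header and is fully received,
    routed, and retransmitted at every hop; successive packets pipeline behind it."""
    packets = -(-msg_bytes // packet_bytes)                    # ceiling division
    per_hop = (packet_bytes + header_bytes) / bandwidth + route_delay
    return (hops + packets - 1) * per_hop

if __name__ == "__main__":
    hops, bw = 4, 1e9                                          # 4 links, 1 GB/s per link (assumed)
    for msg in (512, 64 * 1024, 8 * 1024 * 1024):              # short, medium, long messages
        c = circuit_latency(msg, hops, setup_per_hop=2e-6, bandwidth=bw)
        p = packet_latency(msg, hops, packet_bytes=1024,
                           header_bytes=32, route_delay=5e-7, bandwidth=bw)
        print(f"{msg:>9} B   circuit {c*1e6:10.1f} us   packet {p*1e6:10.1f} us")
```

Under these assumptions the short message favours packet switching (no path-setup cost), while the long message favours circuit switching (no per-hop header and routing overhead), which is the trade-off stated above.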

B. INTERCONNECTION NETWORK TOPOLOGY CHARACTERISTICS:
• Number of Nodes: The number of nodes influences the complexity of the system and hence its cost.
• Degree of Node: The number of links required at each node. It determines the complexity of the network and should therefore be as low as possible.
• Diameter: A measure of the maximum inter-node distance in the network. It determines the distances involved in communication and thus the performance of the multiprocessor system (both degree and diameter are computed in the sketch after this list).
• Extensibility: The property that allows large-sized systems to be built out of small ones with minimum modification of the node configurations.
• Fault Tolerance: If one or more components fail in a multiprocessor network, it should continue to work by adapting. In centralized routing, a central controller makes all the logic decisions required to set up communication paths; this scheme, like the distributed one, is suitable for small to medium scale systems.
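The node degree and diameter defined above can be computed mechanically for any topology given as an adjacency list. The Python sketch below is a minimal illustration (not code from the paper); the 8-node ring and the helper names are assumptions chosen for the example.

```python
from collections import deque

def node_degrees(adj):
    """Degree of each node = number of links attached to it."""
    return {v: len(neigh) for v, neigh in adj.items()}

def diameter(adj):
    """Maximum over all node pairs of the shortest-path (hop) distance,
    found by a breadth-first search from every node."""
    best = 0
    for src in adj:
        dist, queue = {src: 0}, deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        best = max(best, max(dist.values()))
    return best

if __name__ == "__main__":
    n = 8
    ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}   # 8-node ring
    print("degrees :", node_degrees(ring))                     # every node has degree 2
    print("diameter:", diameter(ring))                         # floor(8/2) = 4 hops
```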

INTERCONNECTION NETWORK TOPOLOGY:


Topologies of static networks can be classified according to the number of dimensions required for the layout. For illustration, 1-dimensional, 2-dimensional, and 3-dimensional examples are shown below.
1-dimensional interconnection network: (a) Linear array.

2-dimensional interconnection networks: (b) Ring, (c) Mesh, (d) Star, (e) Tree, (f) Systolic array.

[Figure: examples of static interconnection topologies (a)-(f).]
The 1-dimensional interconnection network is the linear array, used in pipeline designs. 2-dimensional topologies include the ring, star, tree, mesh, and systolic array. 3-dimensional topologies include the 3-cube, cube-connected cycles, and so on. (A short sketch constructing several of these topologies follows.)
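As an illustration of how the listed static topologies differ only in their connection pattern, the following Python sketch (not from the paper; function names and sizes are assumptions) builds adjacency lists for a linear array, a ring, a star, and a 2-dimensional mesh. These structures can be fed to the degree/diameter helpers sketched earlier.

```python
def linear_array(n):
    """1-D linear array: node i is linked to i-1 and i+1 (no wrap-around)."""
    return {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}

def ring(n):
    """Ring: linear array plus a wrap-around link between node 0 and node n-1."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def star(n):
    """Star: node 0 is the centre, linked to every other node."""
    adj = {0: list(range(1, n))}
    adj.update({i: [0] for i in range(1, n)})
    return adj

def mesh2d(rows, cols):
    """2-D mesh: node (r, c) is linked to its up/down/left/right neighbours."""
    adj = {}
    for r in range(rows):
        for c in range(cols):
            neigh = []
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    neigh.append((rr, cc))
            adj[(r, c)] = neigh
    return adj

if __name__ == "__main__":
    print(linear_array(4))    # {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(ring(4))            # every node has degree 2
    print(star(5))            # centre has degree 4, leaves degree 1
    print(len(mesh2d(3, 3)))  # 9 nodes
```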

TOPOLOGICAL PROPERTIES:
Network              | No. of Links  | Node Degree | Diameter      | Bisection Width
---------------------|---------------|-------------|---------------|----------------
Star                 | N-1           | N-1         | 2             | N-1
Completely Connected | N(N-1)/2      | N-1         | 1             | (N/2)^2
Binary Tree          | N-1           | 3           | 2(log2N - 1)  | 1
Illiac Mesh          | 2N            | 4           | √N - 1        | 2√N
Hypercube            | (N log2N)/2   | log2N       | log2N         | N/2
De Bruijn            | 2N-1          | 4           | log2N         | N/log2N
LET                  | N+2           | 4           | N             | 2
LEC                  | (N/2)^2 + 3   | 4           | N             | N
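Some rows of the table can be cross-checked by construction. The Python sketch below (illustrative, not from the paper; the dimension and helper names are assumptions) builds an n-dimensional hypercube with N = 2^n nodes and counts its links and node degree; the diameter log2N and bisection width N/2 are quoted from the table rather than computed, and the diameter could be verified with the BFS helper sketched earlier.

```python
import math

def hypercube(n_dims):
    """n-dimensional hypercube: nodes are bit strings; two nodes are linked
    iff their labels differ in exactly one bit."""
    n = 1 << n_dims                                  # N = 2**n_dims nodes
    return {v: [v ^ (1 << b) for b in range(n_dims)] for v in range(n)}

def count_links(adj):
    """Each undirected link appears in two adjacency lists, so halve the sum."""
    return sum(len(neigh) for neigh in adj.values()) // 2

if __name__ == "__main__":
    n_dims = 4                                       # N = 16
    cube = hypercube(n_dims)
    N = len(cube)
    print("nodes      :", N)
    print("links      :", count_links(cube), "(table: N*log2N/2 =", N * n_dims // 2, ")")
    print("node degree:", len(cube[0]), "(table: log2N =", int(math.log2(N)), ")")
    # Table also lists diameter = log2N and bisection width = N/2 for the hypercube.
```
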
MULTIPROCESSOR SPEED UP:
Designers of uniprocessor systems measure performance in terms of speed; similarly, multiprocessor architects measure performance in terms of speedup.

The speedup typically refers to how much faster a program runs on a system with n processors than on a system with a single processor of the same kind.

Linear speedup: the execution time of the program on the n-processor system would be 1/n of its execution time on a one-processor system.
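In symbols (a restatement of the definition above, with T(1) the single-processor execution time and T(n) the execution time on n processors of the same kind):

```latex
S(n) = \frac{T(1)}{T(n)}, \qquad
\text{linear speedup:}\quad T(n) = \frac{T(1)}{n} \;\Longleftrightarrow\; S(n) = n .
```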
When the number of processors is small, the system achieves near-linear speedup; as the number of processors increases, the speedup curve diverges from the ideal, eventually flattening out or even decreasing.
There are limitations on multiprocessor speedup arising from:
• interprocessor communication,
• synchronization,
• load balancing.
Whenever one processor computes a value that is needed by the fraction of the program running on another processor, that value must be communicated to the processors that require it, and this takes time. On a uniprocessor system, the entire program runs on one processor, so no time is lost to interprocessor communication. It is also often necessary to synchronize the processors, to ensure that they have all completed some phase of the program before any processor begins working on the next phase.
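The effect of these overheads can be illustrated with a toy model that is an assumption for illustration only, not a result from this paper: suppose the computation divides perfectly across n processors, but each additional processor adds a fixed communication/synchronization cost. The Python sketch below then shows the speedup curve flattening and eventually declining as n grows, as described above.

```python
def speedup(n_procs, t_serial=100.0, t_comm=0.2):
    """Toy model (illustrative constants): perfectly divisible work of t_serial
    seconds, plus a communication/synchronization overhead of t_comm seconds
    for every processor beyond the first."""
    t_parallel = t_serial / n_procs + t_comm * (n_procs - 1)
    return t_serial / t_parallel

if __name__ == "__main__":
    for n in (1, 2, 4, 8, 16, 32, 64, 128):
        print(f"n = {n:4d}   speedup = {speedup(n):6.2f}   (ideal: {n})")
```

With these assumed constants the speedup peaks at a few dozen processors and then falls, because the overhead term grows with n while the useful work per processor shrinks.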
CONCLUSION:
In this paper, we have considered selected multiprocessor architecture topologies and their characteristics. The 3-dimensional designs provide better performance compared with the alternative designs. Multiprocessor tasks require several processors simultaneously, which allows task parallelism to be expressed at a high level of abstraction. This model permits finding straightforward solutions to problems that remain intractable in other settings. Moreover, the notion of a divisible task allows a computer-architecture context, often highly generalized, to be introduced so that problems become manageable within the classical approach. Some limitations remain with respect to communication overhead and processing overhead.

