
Latency

Put simply, latency is the time interval between giving an input to a system and receiving its output. Latency arises from the intermediate handling that computers perform: although it may seem that one system connects to another directly, the signal or data actually follows a complete route of intermediate hops before reaching its final destination.
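
As a rough illustration, the sketch below (Python, standard library only) times how long it takes to open a TCP connection to a host. The host name is just a placeholder, and the measured value mixes network propagation with handshake overhead, so treat it as an approximation of latency rather than a precise measurement.

```python
import socket
import time

def measure_latency(host: str, port: int = 80) -> float:
    """Return the time (in seconds) needed to open a TCP connection to host:port.

    The connection handshake is a rough proxy for the latency between
    this machine and the destination.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # connection established; we only care about the elapsed time
    return time.perf_counter() - start

# Example with a placeholder host name; prints latency in milliseconds
print(f"latency: {measure_latency('example.com') * 1000:.1f} ms")
```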

Types of Latency

 Interrupt Latency: Interrupt Latency is the time it takes a computer to act on an interrupt signal.

 Fiber Optic Latency: Fiber Optic Latency is the latency that comes from traveling
some distance through fiber optic cable.

 Internet Latency: Internet Latency depends largely on the distance the data must travel between source and destination.

 WAN Latency: WAN Latency is the delay that occurs when a resource is requested from a server, another computer, or any other remote location.

 Audio Latency: Audio Latency can be easily stated as the delay between the creation
of the sound and the hearing of the sound.

 Operational Latency: Operational Latency is the time taken by operations when they are performed one after another in a workflow.

 Mechanical Latency: Mechanical Latency is the delay between a mechanical device receiving a command and producing its output.

 Computer and OS Latency: Computer and OS Latency is the delay between input and output introduced by the computer hardware and the operating system.

What Causes Internet Latency?

 Transmission Medium: The material/nature of the medium through which data or a signal is transmitted affects the latency.

 Low Memory Space: Low memory space makes it difficult for the OS to satisfy RAM requirements, which adds delay.

 Propagation: The amount of time a signal takes to travel from its source to its destination.

 Multiple Routers: As discussed earlier, data follows a full traceroute, hopping from one router to the next, and each hop adds to the latency.
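
Propagation delay in particular can be estimated from distance alone. The sketch below assumes light travels through fiber at roughly 200,000 km/s (about two-thirds of its speed in vacuum) and computes the one-way lower bound that distance imposes, before any router or queuing delay is added.

```python
SPEED_IN_FIBER_KM_PER_S = 200_000  # approximate speed of light in fiber

def propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds for a given fiber distance."""
    return distance_km / SPEED_IN_FIBER_KM_PER_S * 1000

# Example: a 6,000 km trans-Atlantic route needs at least ~30 ms one way.
print(propagation_delay_ms(6_000))
```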
Grain Size

Grain size (granularity) is a measure of how much computation is involved in a process. Grain size is determined by counting the number of instructions in a program segment. The following types of grain size have been recognized.

1) Fine Grain: This type includes roughly fewer than 20 instructions.

2) Medium Grain: This type includes roughly fewer than 500 instructions.

3) Coarse Grain: This type includes roughly 1,000 or more instructions.
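
A small sketch applying these thresholds. Note that the ranges above leave the span between 500 and 999 instructions unnamed, so the helper flags that case explicitly.

```python
def classify_grain(instruction_count: int) -> str:
    """Classify a program segment by the grain-size thresholds given above."""
    if instruction_count < 20:
        return "fine grain"
    elif instruction_count < 500:
        return "medium grain"
    elif instruction_count >= 1000:
        return "coarse grain"
    else:
        return "between medium and coarse grain"  # the notes leave 500-999 unspecified

for count in (10, 300, 5000):
    print(count, "->", classify_grain(count))
```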

1) Instruction Level: This is the lowest level, and the degree of parallelism is highest here. Fine grain size is used at the statement or instruction level, since only a few instructions make up a grain. The fine grain size may vary with the type of program; for scientific applications, for example, the instruction-level grain size may be larger. Because the highest degree of parallelism can be achieved at this level, the overhead for the programmer is also greater.

2) Loop Level: This is the next level of parallelism, where iterative loop instructions can be parallelized. Fine grain size is used at this level as well. Simple loops in a program are easy to parallelize, whereas recursive loops are difficult. This kind of parallelism can be achieved by compilers, as sketched below.
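
A minimal sketch of loop-level parallelism in Python: because each iteration of the loop body is independent, the iterations can be distributed across worker processes. Here concurrent.futures is used by hand as a stand-in for what a parallelizing compiler would do automatically.

```python
from concurrent.futures import ProcessPoolExecutor

def body(i: int) -> int:
    """One iteration of a simple loop, independent of every other iteration."""
    return i * i

if __name__ == "__main__":
    # Sequential version of the loop ...
    sequential = [body(i) for i in range(8)]

    # ... and the same loop parallelised across processes. Because the
    # iterations are independent, they can run in any order or concurrently.
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(body, range(8)))

    assert sequential == parallel
```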

3) Subprogram or Procedure Level: This level consists of subroutines, subprograms, or procedures. Medium grain size is used at this level, with up to several thousand instructions in a procedure. Multiprogramming is applied at this level. Parallelism at this level has been exploited by programmers rather than by compilers; compiler-driven parallelism has not been achieved at the medium and coarse grain sizes.

4) Program Level: This is the highest level, consisting of independent programs executed in parallel. Coarse grain size is used at this level, with tens of thousands of instructions per program. Time sharing is achieved at this level, and the parallelism is exploited by the operating system, as illustrated below.
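
A minimal illustration of program-level parallelism: two independent programs are launched as separate OS processes and the operating system schedules them across the available processors. The script names job_a.py and job_b.py are hypothetical placeholders.

```python
import subprocess

# Two independent programs (hypothetical scripts) started as separate OS
# processes. The operating system time-shares or distributes them across
# processors; the programs do not need to cooperate with each other.
jobs = [
    subprocess.Popen(["python", "job_a.py"]),  # hypothetical script name
    subprocess.Popen(["python", "job_b.py"]),  # hypothetical script name
]
for job in jobs:
    job.wait()  # collect exit codes once both programs finish
```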

A multiprocessor is a computer with multiple processors in one unit. At various stages of solving a problem, the processors of a multiprocessor system can interact and cooperate. The processors communicate with one another by passing messages or by sharing a common memory, as in the sketch below.
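
A small Python sketch of these two communication styles, using the multiprocessing module as a software stand-in for hardware processors: one worker communicates by passing a message through a queue, the other by updating a value held in shared memory.

```python
from multiprocessing import Process, Queue, Value

def via_message(q):
    # Message passing: put a value on a queue for another process to read.
    q.put("hello from a worker process")

def via_shared_memory(counter):
    # Shared memory: update a value visible to all processes, under a lock.
    with counter.get_lock():
        counter.value += 1

if __name__ == "__main__":
    q = Queue()
    counter = Value("i", 0)  # shared integer, initial value 0
    workers = [Process(target=via_message, args=(q,)),
               Process(target=via_shared_memory, args=(counter,))]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(q.get(), "| shared counter =", counter.value)
```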

Types of Multiprocessors

The following are the types of multiprocessors.

Symmetric Multiprocessors

Each processor in these systems runs an identical copy of the operating system and communicates with the others. There is no master-slave relationship between the processors; they are all peers.

The Encore version of Unix for the Multimax Computer is a symmetric multiprocessing
system.

Asymmetric Multiprocessors

In an asymmetric system, each processor is allocated a specific task. A master processor is in charge of giving instructions to all of the other processors, so an asymmetric multiprocessor system has a master-slave relationship.

Asymmetric multiprocessors were the only type of multiprocessor available before the advent of symmetric multiprocessors. They are also the more affordable alternative today.

Advantages of Multiprocessor Systems

Here is the list of the potential advantages of multiprocessor systems.

More Reliable Systems

Even if one processor fails in a multiprocessor system, the system does not come to a halt. The ability to keep working, at reduced capacity, in the case of hardware failure is known as graceful degradation. If one of the five processors in a multiprocessor system fails, the remaining four continue to work; the machine slows down rather than stopping altogether.

Increased Throughput

As several processors work together, the system’s throughput, i.e. the number of processes completed per unit of time, increases. In the ideal case, throughput grows by a factor of N with N processors, although scheduling overhead and contention for shared resources keep real systems somewhat below that, as the sketch below illustrates.
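
A toy calculation of that ideal, linear scaling; it gives only an upper bound, not a measurement of any real system.

```python
def ideal_throughput(single_processor_rate: float, n_processors: int) -> float:
    """Processes completed per unit time, assuming perfect (linear) scaling."""
    return single_processor_rate * n_processors

# With a single processor handling 100 jobs/second, perfect scaling would give:
for n in (1, 2, 4, 8):
    print(n, "processors ->", ideal_throughput(100.0, n), "jobs/second (ideal)")
```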

More Economical Systems

Since multiprocessor systems share data storage, peripheral devices, power supply, and other
resources, they are less expensive in the long run than single-processor systems. If several
processes share data, it is preferable to schedule them on multiprocessor systems with shared
data rather than separate computer systems with different copies of the data.

Characteristics of Multiprocessor

The following are the important characteristics of multiprocessors.

1. Parallel Processing: This requires the use of many processors at the same time. These processors are designed to perform a particular task using a single architecture. The processors are generally identical and operate together to create the impression that each user is the only one using the system, when in reality many users may be sharing it simultaneously.

2. Distributed Computing: In addition to parallel computing, this form of processing requires the use of a network of processors. Each processor in the network can be thought of as a standalone computer capable of solving problems. The processors can be diverse, and each is typically assigned a separate job.

3. Supercomputing: This entails using the fastest machines to tackle large, computationally demanding problems. Supercomputers were traditionally vector computers, but today most combine vector and parallel processing.

4. Pipelining: This is a method that divides a task into multiple subtasks that must be completed in a specified order. Each subtask is handled by a dedicated functional unit. The units are connected in series, and they all work at the same time (see the sketch after this list).

5. Vector Computing: This is a method in which a single operation is applied to an entire vector (array) of data elements rather than to one element at a time, so one instruction processes many data items.

6. Systolic: This is similar to pipelining, but the units are not organised linearly. Systolic steps are typically small and numerous, and they are performed in lockstep. This approach is most common in specialised hardware such as image or signal processors.
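
A minimal pipelining sketch for item 4 above, assuming two stages connected by queues: each stage runs in its own thread, all stages work at the same time, and items flow through the stages in order.

```python
import threading
import queue

# A two-stage pipeline: stage 1 squares each item, stage 2 adds one.
# Each stage is a separate worker; stages run concurrently and hand
# results to the next stage through a queue.

def stage(inbox, outbox, func):
    while True:
        item = inbox.get()
        if item is None:          # sentinel: no more work
            outbox.put(None)
            break
        outbox.put(func(item))

if __name__ == "__main__":
    q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
    threading.Thread(target=stage, args=(q1, q2, lambda x: x * x)).start()
    threading.Thread(target=stage, args=(q2, q3, lambda x: x + 1)).start()

    for x in [1, 2, 3, 4]:
        q1.put(x)
    q1.put(None)

    result = []
    while (item := q3.get()) is not None:
        result.append(item)
    print(result)  # [2, 5, 10, 17]
```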

Program Flow Mechanism

Conventional computers are founded on a control flow mechanism, in which the order of program execution is explicitly stated in the user program. Data flow computers exhibit a high degree of parallelism at the fine-grain instruction level, while reduction computers are based on a demand-driven mechanism that initiates an operation only when its result is demanded by other computations.

Data flow and control flow computers − There are broadly two kinds of computers. Control flow computers are conventional computers based on the von Neumann machine; they carry out instructions under program flow control. Data flow computers, in contrast, execute instructions based on the availability of data.

Control Flow Computers − Control flow computers use shared memory to hold program instructions and data objects. Variables in shared memory are updated by many instructions. The execution of one instruction can produce side effects on other instructions because memory is shared. In some cases, these side effects prevent parallel processing from taking place. A uniprocessor computer is inherently sequential because of its control-driven mechanism.

Data Flow Computers − In a data flow computer, the execution of an instruction is driven by data availability rather than being guided by a program counter. In this model, an instruction is ready for execution as soon as its operands become available.

The instructions in a data-driven program are not ordered in any particular way. Rather than being stored in shared memory, data is held directly inside the instructions.

Computational results are passed directly between instructions. The data generated by an instruction is replicated into multiple copies and forwarded directly to all instructions that need it.

This data-driven scheme requires no shared memory, no program counter, and no control sequencer. It does, however, require a special mechanism to detect data availability, match data tokens with the instructions that need them, and enable the chain reaction of asynchronous instruction execution.

Control flow refers to the path the execution takes in a program, and sequential
programming that focuses on explicit control flow using control structures like loops or
conditional statements is called imperative programming. In an imperative model, data may
follow the control flow, but the main question is about the order of execution.

Dataflow abstracts over explicit control flow by placing the emphasis on the routing and transformation of data and is part of the declarative programming paradigm. In a dataflow model, control follows data and computations are executed implicitly based on data availability, as in the toy example below.
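
A toy data-driven node in Python: the operation fires as soon as both of its operands have arrived, with no program counter deciding when it runs. The class and operand names here are illustrative only.

```python
# A toy data-driven node: the multiply "instruction" fires as soon as both
# of its operands are available, regardless of the order they arrive in.

class DataflowNode:
    def __init__(self, operation):
        self.operation = operation
        self.operands = {}

    def receive(self, name, value):
        self.operands[name] = value
        if len(self.operands) == 2:   # both tokens present: fire the operation
            return self.operation(self.operands["a"], self.operands["b"])
        return None                   # still waiting for data

multiply = DataflowNode(lambda a, b: a * b)
print(multiply.receive("b", 7))   # None -- only one operand has arrived
print(multiply.receive("a", 6))   # 42   -- the second operand triggers execution
```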

Concurrency control refers to the use of explicit mechanisms like locks to synchronize
interdependent concurrent computations. It is a matter of emphasis – control flow schedules
data movement, or data movement implies transfer of control.
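
A minimal example of such explicit concurrency control, using a Python threading.Lock to serialise updates to a shared counter.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:        # explicit synchronisation of the shared variable
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 40000, because the lock prevents lost updates
```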

SIMD

SIMD stands for 'Single Instruction and Multiple Data Stream'. It represents an
organization that includes many processing units under the supervision of a common control
unit.

All processors receive the same instruction from the control unit but operate on different
items of data.

The shared memory unit must contain multiple modules so that it can communicate with all
the processors simultaneously.
SIMD is mainly dedicated to array processing machines. However, vector processors can also
be seen as a part of this group.
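
As a software illustration of the SIMD idea, the sketch below uses NumPy (an assumed dependency): one logical operation is applied to whole arrays of data, which NumPy typically lowers to the CPU's vector instructions.

```python
import numpy as np

# SIMD in spirit: a single operation applied to many data elements at once.
a = np.arange(8)     # [0, 1, 2, ..., 7]
b = np.full(8, 3)    # eight copies of the value 3

print(a * b)         # one multiply operation over eight data elements
```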
