Name: M Shahzaib Ali

ID# F2018266382

Section: V5
Shared Memory Systems

Shared memory is one of the most basic models of interprocess communication: it is a region of memory that is shared between two or more processes.

Normally, each process has its own address space, so if a process wants to pass information from its address space to another process, it can only do so through IPC (inter-process communication) mechanisms. As we already know, communication can take place between related or unrelated processes.

Usually, related processes communicate using pipes or named pipes. Unrelated processes (meaning one process running in one terminal and another process running in another terminal) can communicate using named pipes or through the popular IPC techniques of shared memory and message queues.
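
As a concrete illustration, here is a minimal sketch of shared memory IPC in C, assuming a POSIX system. The segment name "/demo_shm" is a hypothetical choice, and error checking is omitted for brevity.

/* Minimal sketch of POSIX shared memory between a parent and child process.
   Compile with: gcc shm_demo.c -lrt */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    const char *name = "/demo_shm";   /* hypothetical segment name */
    const size_t size = 4096;

    /* Create the shared memory object and set its size. */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    ftruncate(fd, size);

    /* Map it into this process's address space. */
    char *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    if (fork() == 0) {                /* child: writes into the shared region */
        strcpy(ptr, "hello from the child process");
        return 0;
    }
    wait(NULL);                       /* parent: waits, then reads the region */
    printf("parent read: %s\n", ptr);

    munmap(ptr, size);
    shm_unlink(name);                 /* remove the shared memory object */
    return 0;
}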

Distributed Memory Systems

In a distributed memory system, each processor has access only to its local memory, and data is exchanged over a network using a message-passing system. We take several multicore computers and connect them using a network into a distributed memory structure, much as different offices keep in touch by telephone. With a fast enough network, this strategy can scale to millions of CPU cores and beyond.
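
As a sketch of what such message passing might look like, the C program below uses MPI (an assumption here; the text does not name a particular library): one process sends an integer over the network and another receives it.

/* Minimal sketch of message passing on a distributed memory system.
   Compile with: mpicc mpi_demo.c
   Run with:     mpirun -np 2 ./a.out */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I? */

    if (rank == 0) {
        int value = 42;                    /* data to send over the network */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int value;
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}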

Shared memory systems are difficult to build but easy to use, which makes them convenient for laptops and desktops. Distributed memory systems, which combine many shared memory computers, each with its own operating system and memory, are easy to build but very difficult to use. However, this is the only design that can be used to build a modern supercomputer.

Task Parallelism

This refers to the execution of several different tasks on multiple computing cores at the same time. The various tasks may operate on the same data or on different data, and the computation is done asynchronously. The speedup is lower, since each CPU runs a different thread or process on the same or a different piece of data. The degree of parallelism is proportional to the number of independent tasks. Load balancing depends on the availability of the hardware and on scheduling algorithms such as static and dynamic scheduling.
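
A minimal sketch of task parallelism, assuming POSIX threads: two different tasks (a sum and a maximum) run concurrently over the same data.

/* Two distinct tasks running on different cores over the same array.
   Compile with: gcc task_demo.c -lpthread */
#include <pthread.h>
#include <stdio.h>

#define N 8
static int data[N] = {3, 1, 4, 1, 5, 9, 2, 6};  /* sample data */
static long sum;
static int max;

/* Task A: compute the sum of the array. */
static void *sum_task(void *arg) {
    for (int i = 0; i < N; i++) sum += data[i];
    return NULL;
}

/* Task B: find the maximum element. */
static void *max_task(void *arg) {
    max = data[0];
    for (int i = 1; i < N; i++) if (data[i] > max) max = data[i];
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, sum_task, NULL);  /* different code ...   */
    pthread_create(&b, NULL, max_task, NULL);  /* ... on the same data */
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("sum = %ld, max = %d\n", sum, max);
    return 0;
}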

Data Parallelism

Data parallelism refers to performing the same operation on multiple computing cores at the same time. In data parallelism the same operation is performed on different subsets of the same data, and the computation is done synchronously. The speedup is higher, since there is only one thread of execution operating on all the data sets. The degree of parallelism is proportional to the size of the input. It is designed for optimal load balancing in a multiprocessor system.
Example

Consider the case of summing the contents of an array of size N. On a single-core system, one thread would simply sum the elements [0] … [N − 1]. On a dual-core system, however, thread A, running on core 0, could sum the elements [0] … [N/2 − 1], while thread B, running on core 1, could sum the elements [N/2] … [N − 1]. The two threads would thus run in parallel on different computing cores.
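
A minimal sketch of this two-thread sum, assuming POSIX threads; the array size and contents are arbitrary sample values chosen for illustration.

/* Thread A sums elements [0 .. N/2 - 1]; thread B sums [N/2 .. N - 1].
   Each thread writes its own partial sum, so no locking is needed.
   Compile with: gcc data_demo.c -lpthread */
#include <pthread.h>
#include <stdio.h>

#define N 1000
static int data[N];
static long partial[2];              /* one partial sum per thread */

static void *sum_range(void *arg) {
    long id = (long)arg;             /* 0 -> first half, 1 -> second half */
    int lo = id * (N / 2);
    int hi = (id == 0) ? N / 2 : N;
    for (int i = lo; i < hi; i++)
        partial[id] += data[i];
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++) data[i] = 1;   /* sample data: all ones */

    pthread_t a, b;
    pthread_create(&a, NULL, sum_range, (void *)0L);
    pthread_create(&b, NULL, sum_range, (void *)1L);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    printf("total = %ld\n", partial[0] + partial[1]);  /* prints 1000 */
    return 0;
}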