
1) Explain the need for ever-increasing performance with an example

Ans::
The need for ever-increasing performance is driven by the demand for faster and
more powerful technology that can handle increasingly complex tasks and process
larger amounts of data. This is particularly important in fields such as artificial
intelligence, scientific research, and gaming.

For example, in the field of artificial intelligence, neural networks require significant
computing power to analyze vast amounts of data and make accurate predictions. As
the complexity of these networks increases, so does the need for more powerful
hardware to process the data.

In scientific research, simulations and modeling require enormous amounts of computing power to accurately represent real-world phenomena. For example, climate models require powerful supercomputers to simulate weather patterns and make predictions about climate change.

In the gaming industry, players expect high-quality graphics and immersive gameplay experiences that can only be achieved with ever-increasing processing power. As game developers create more detailed environments and more sophisticated artificial intelligence, the need for high-performance hardware becomes increasingly important.

Overall, the need for ever-increasing performance is driven by the demand for
technology that can handle increasingly complex tasks and process larger amounts
of data, which is essential for progress in fields such as artificial intelligence,
scientific research, and gaming.
2) Explain the need for writing parallel programs with an appropriate example.

Ans::

Writing parallel programs is essential to take advantage of modern hardware and increase the performance of software applications. Parallel programming involves breaking down a large task into smaller, independent subtasks that can be executed concurrently on multiple processing units, such as multiple cores of a CPU or multiple GPUs.

Here's an example to explain the need for writing parallel programs:

Suppose you are developing video editing software that needs to apply several image filters (e.g., brightness, contrast, saturation) to each frame of a high-resolution video. Applying these filters is a computationally intensive task and may take a long time for the entire video, especially if done sequentially on a single core.

By writing a parallel program, you can distribute the workload of applying the
filters across multiple processing units, thereby reducing the processing time
significantly. For example, you could assign each processing unit to handle a
different section of the video and apply the filters concurrently. This would allow
you to complete the video processing in a fraction of the time it would take to do it
sequentially on a single core.
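As a rough illustration, here is a minimal sketch in C using POSIX threads of how the frames could be divided among threads. The frame buffer, frame count, and brightness filter are hypothetical stand-ins for a real video pipeline:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_THREADS 4
#define NUM_FRAMES  100
#define FRAME_SIZE  (640 * 360)   /* pixels per frame (hypothetical) */

unsigned char *frames;  /* NUM_FRAMES * FRAME_SIZE grayscale pixels */

typedef struct { int first_frame, last_frame; } chunk_t;

/* Apply a simple brightness filter to one contiguous chunk of frames. */
void *apply_filters(void *arg) {
    chunk_t *c = arg;
    for (int f = c->first_frame; f < c->last_frame; f++)
        for (long p = 0; p < FRAME_SIZE; p++) {
            long i = (long)f * FRAME_SIZE + p;
            int v = frames[i] + 20;                  /* brighten */
            frames[i] = (unsigned char)(v > 255 ? 255 : v); /* clamp */
        }
    return NULL;
}

int main(void) {
    frames = calloc((size_t)NUM_FRAMES * FRAME_SIZE, 1);
    if (!frames) return 1;

    pthread_t threads[NUM_THREADS];
    chunk_t chunks[NUM_THREADS];
    int per = NUM_FRAMES / NUM_THREADS;

    /* Each thread filters its own section of the video concurrently. */
    for (int t = 0; t < NUM_THREADS; t++) {
        chunks[t].first_frame = t * per;
        chunks[t].last_frame  = (t == NUM_THREADS - 1) ? NUM_FRAMES
                                                       : (t + 1) * per;
        pthread_create(&threads[t], NULL, apply_filters, &chunks[t]);
    }
    for (int t = 0; t < NUM_THREADS; t++)
        pthread_join(threads[t], NULL);

    printf("filtered %d frames using %d threads\n", NUM_FRAMES, NUM_THREADS);
    free(frames);
    return 0;
}

Because each thread works on its own disjoint range of frames, no synchronization is needed while filtering, and the work scales naturally with the number of cores.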

Another example where parallel programming is essential is in scientific simulations, such as weather forecasting or fluid dynamics. These simulations require the processing of vast amounts of data and can take a significant amount of time to run. By writing parallel programs, scientists can divide the simulation into smaller, independent subtasks and run them concurrently on multiple processing units, reducing the time required for the simulation.

Overall, the need for parallel programming arises from the increasing demand for
high-performance computing and the availability of hardware with multiple
processing units. By breaking down complex tasks into smaller subtasks and
running them concurrently, parallel programming can significantly improve the
performance of software applications.
3) Write a short note on cluster computing with a neat architecture diagram.

Ans:
Cluster computing is a type of parallel computing that divides a complex problem
into smaller pieces and distributes them across a network of computers to be solved
simultaneously. It is a form of distributed computing in which a group of
interconnected processors (nodes) combine their resources to perform a specific
task.

A cluster computer architecture typically consists of multiple nodes connected together via high-speed networks such as Ethernet, InfiniBand, or Myrinet. Each node is a self-contained computer, often referred to as a server, with its own processors, memory, and storage. The nodes are grouped together to form a cluster, and the cluster is managed by cluster management software that enables the nodes to communicate and coordinate with each other.
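Programs for clusters are commonly written with a message-passing library such as MPI (this sketch assumes an implementation like MPICH or Open MPI is installed). One copy of the program runs on each node, and the copies identify themselves through the communicator:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);                /* join the parallel job */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    printf("node %d of %d reporting\n", rank, size);

    MPI_Finalize();                        /* leave the parallel job */
    return 0;
}

Compiled with mpicc and launched with, for example, mpirun -np 4 ./a.out, four cooperating copies would start across the cluster's nodes.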

Benefits of Cluster Computing:

Cluster computing offers a wide array of benefits. Some of these include the
following.

Cost-Effectiveness – Compared with mainframe systems, cluster computing is considered to be much more cost-effective, offering comparable performance at a far lower cost.

Processing Speed – The processing speed of a cluster is comparable to that of mainframe systems and other supercomputers.

Expandability – Scalability and expandability are another set of advantages that cluster computing offers: any number of additional resources and systems can be added to the existing computing network.

Increased Resource Availability – Availability plays a vital role in cluster computing systems. If a connected active node fails, its work can be passed on to other active nodes, ensuring high availability.

Improved Flexibility – A cluster can be upgraded to superior specifications and extended by adding newer nodes to the existing system.

Disadvantages of Cluster Computing:

1. High cost:

Cluster computing is not very cost-effective when high-end hardware and a complex design are required.

2. Problems in fault finding:

When something goes wrong, it is difficult to identify which component has the fault.

3. More space is needed:

Infrastructure requirements grow, as more servers are needed to manage and monitor the cluster.

Applications of Cluster Computing:

Solving various complex computational problems.
Applications in aerodynamics, astrophysics, and data mining.
Weather forecasting.
Image rendering.

[Diagram: Architecture of a cluster computing system]


4) Difference between cluster computing and grid computing
Ans::
Cluster computing connects a group of tightly coupled, usually homogeneous computers at a single location through a fast local network. The nodes work together as a single system, typically run the same operating system, and are managed centrally by one organization. Grid computing, in contrast, connects loosely coupled, heterogeneous resources that are geographically distributed and often owned by different organizations. Each node in a grid is managed independently, nodes may run different operating systems, and they contribute their resources to a shared task only when available.
5) Explain with a neat diagram the Von Neumann architecture and its bottleneck.
Ans::
Von Neumann Architecture is a computer architecture developed by John von
Neumann. It is the most common model of computer architecture and is still used
in modern computers. It consists of four main components: a CPU, memory, an
input/output (I/O) device, and a control unit.

The CPU is the central processing unit, which is responsible for executing
instructions and processing data. It contains an arithmetic logic unit (ALU) and a
set of registers. The memory stores the instructions and data for the CPU. It can be
divided into two components: main memory, which is the primary storage for
programs and data, and secondary storage, which is used for long-term storage.
The input/output device is used to provide a means for the user to communicate
with the computer. The control unit is responsible for coordinating the activities of
the other components.

The bottleneck in the Von Neumann architecture is the speed at which data can be transferred between the CPU and the memory. Instructions and data travel over a single bus connecting the CPU and memory, so the CPU must wait for each transfer to complete before it can continue processing. This limit on throughput, known as the von Neumann bottleneck, restricts how much data can be processed at any given time and is a major cause of performance issues.
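To make the stored-program idea concrete, here is a toy sketch in C of a Von Neumann-style machine with an invented instruction set: instructions and data live in the same memory array, and every instruction fetch and operand access goes through that one memory, which is exactly where the bottleneck arises.

#include <stdio.h>

/* A toy stored-program machine: instructions and data share one
 * memory array, and every step is a trip through the same memory. */
enum { HALT = 0, LOAD = 1, ADD = 2, STORE = 3 };

int main(void) {
    /* Cells 0..7 hold the program, cells 8.. hold the data. */
    int memory[16] = {
        LOAD,  8,   /* acc = memory[8]         */
        ADD,   9,   /* acc += memory[9]        */
        STORE, 10,  /* memory[10] = acc        */
        HALT,  0,
        5, 7, 0     /* data: 5, 7, result slot */
    };
    int pc = 0, acc = 0;

    for (;;) {
        int op  = memory[pc];      /* instruction fetch (memory access) */
        int arg = memory[pc + 1];  /* operand fetch     (memory access) */
        pc += 2;
        if (op == LOAD)       acc = memory[arg];   /* data access */
        else if (op == ADD)   acc += memory[arg];  /* data access */
        else if (op == STORE) memory[arg] = acc;   /* data access */
        else break;                                /* HALT */
    }
    printf("result: %d\n", memory[10]);  /* prints 12 */
    return 0;
}

Note that every instruction costs memory accesses just to be fetched, before its data is even touched; in a real machine this traffic is what saturates the CPU-memory bus.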
6) Explain in detail the concepts of process, multitasking, and threads.
Ans::
Process:
A process is an instance of a computer program that is being executed by a
computer processor. It contains the program code, data being operated on, and
other resources such as open files and network connections. A process is created
when a program is loaded into memory or when a new instance of an existing
program is started. Each process has its own memory address space and one or
more threads of execution.

Multitasking:
Multitasking is the ability of an operating system to run multiple programs or
threads concurrently. In a multitasking system, multiple threads of execution are
interleaved on a single processor so that each thread gets some time to execute its
instructions. This allows multiple programs or processes to run at the same time,
improving the overall system performance.

Threads:
A thread is a lightweight process that allows multiple tasks to be executed
concurrently within a single process. A thread is a unit of execution within a
process, and each process can have multiple threads. A thread shares the same
address space and resources with other threads in the same process. Threads can be
used to improve the performance of a process by allowing different tasks to be
executed in parallel. Threads also allow for synchronization between tasks, as they
can communicate with each other using shared variables.
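A minimal sketch in C with POSIX threads illustrates these ideas: two threads run concurrently inside one process and, because they share the process's address space, both see and update the same global variable, with a mutex providing the synchronization mentioned above:

#include <pthread.h>
#include <stdio.h>

int shared = 0;                 /* lives in the process's address space */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    long id = (long)arg;
    pthread_mutex_lock(&lock);  /* synchronize access to shared data */
    shared += 1;
    printf("thread %ld sees shared = %d\n", id, shared);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t[2];
    for (long i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    printf("final shared = %d\n", shared);  /* prints 2 */
    return 0;
}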

7) What is a page fault? When does it occur, and what is its effect on computational performance?
Ans::

A page fault is an error that occurs when a program tries to access a memory page that is not currently loaded into main memory. This can happen when the requested page has been swapped out to secondary storage, or when it has not yet been mapped into the process's address space. When a page fault occurs, the system must retrieve the page from secondary memory, such as a hard drive, and load it into main memory before the instruction can be executed.
Page faults can have a significant impact on computational performance, as they add overhead to the system. When a page fault occurs, the system must take time to retrieve and load the page, delaying the instruction being executed. This can cause applications to run slower and reduce overall system performance.
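On Linux, this overhead can be observed directly. The sketch below (assuming a POSIX system that reports fault counts via getrusage) allocates a large buffer and counts the minor page faults triggered when its pages are first touched:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>

/* Number of minor page faults this process has incurred so far. */
static long minor_faults(void) {
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

int main(void) {
    size_t bytes = 64 * 1024 * 1024;   /* 64 MiB */
    char *buf = malloc(bytes);
    if (!buf) return 1;

    long before = minor_faults();
    memset(buf, 1, bytes);             /* first touch maps in each page */
    long after = minor_faults();

    printf("minor page faults caused: %ld\n", after - before);
    free(buf);
    return 0;
}

Major faults (the ru_majflt counter), which must be serviced from disk, are far more expensive and are the main source of the slowdown described above.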

8) Explain the following terms in detail:

1. Static and dynamic threads
2. Nondeterminism and its effect on computational performance

Ans::

Static threads are threads whose number is fixed when the program starts; the number of threads does not change during the life of the program. This makes the code easier to debug and optimize, since it is always known how many threads are running.

Dynamic threads are threads that are created and destroyed as the
program runs. This makes it possible to scale the program to adapt
to changing workloads, but can make it more difficult to debug and
optimize the code.
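A brief sketch in C with POSIX threads contrasts the two styles; the tasks here are hypothetical placeholders for real work:

#include <pthread.h>
#include <stdio.h>

/* A hypothetical unit of work. */
void *do_task(void *arg) {
    printf("handling task %ld\n", (long)arg);
    return NULL;
}

int main(void) {
    /* Static style: a fixed set of threads is created at startup and
     * the count never changes while the program runs. */
    pthread_t fixed[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&fixed[i], NULL, do_task, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(fixed[i], NULL);

    /* Dynamic style: a thread is created when a task arrives and is
     * destroyed (joined) as soon as the task completes. */
    for (long task = 100; task < 103; task++) {
        pthread_t t;
        pthread_create(&t, NULL, do_task, (void *)task);
        pthread_join(t, NULL);
    }
    return 0;
}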

Nondeterminism occurs when a program can produce different results or behave differently from one run to the next, even with the same input. In parallel programs this typically happens because the operating system schedules threads differently on each run, so the order in which threads access shared data varies. Nondeterminism makes programs harder to debug, since errors such as race conditions may appear only intermittently, and it can hurt performance, because the synchronization (e.g., locks) needed to control access to shared data forces threads to wait and adds overhead.
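A classic demonstration, sketched in C with POSIX threads: two threads increment a shared counter without synchronization, and because their read-modify-write operations interleave differently on each run, the final value varies from run to run:

#include <pthread.h>
#include <stdio.h>

#define N 1000000
long counter = 0;   /* shared, deliberately unprotected */

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < N; i++)
        counter++;  /* racy read-modify-write on shared data */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but the race usually loses updates. */
    printf("counter = %ld\n", counter);
    return 0;
}

Compiled without optimization (e.g., gcc -O0 -pthread) and run several times, the program typically prints a different, wrong value each time, which is precisely the nondeterminism described above; adding a mutex around the increment restores determinism at the cost of synchronization overhead.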
