
{Parallel and Distributed Computing}

Introduction to parallel
& distributed systems
Lecture 1

Arfan Shahzad
{ arfanskp@gmail.com }
The Scenario

• Online classes due to the coronavirus pandemic

• Primary Source: MS Teams

• Responsibility is the key to success


Weeks | Contents | Activities
1 | Introduction to parallel and distributed systems; Scalability issues, Amdahl's law |
2 | Flynn taxonomy; Multithreading, superscalar processors, Intel's hyper-threading |
3 | Shared memory architecture; Processor-to-memory connection strategies | Assignment-1
4 | Distributed memory architecture; Routing mechanisms | Quiz-1
5 | Introduction to threads, Thread models; POSIX threads API |
6 | Programming with Pthreads; Matrix multiplication; Decomposition techniques | Assignment-2
7 | Shared memory programming with OpenMP; OpenMP work-sharing constructs, Reduction clause | Quiz-2
8 | Numerical integration: PI program |
9 | MID TERM |
Weeks | Contents | Activities
10 | Introduction to distributed systems; Network topologies |
11 | Distributed system architectures; Client-server, Peer-to-peer; Challenges of distributed systems | Assignment-3
12 | Hadoop ecosystem; MPI introduction | Quiz-3
13 | MPI practice sessions; Spark introduction |
14 | Cluster computing using Google Cloud; Spark practice lab | Assignment-4
15 | Spark architecture; Programming distributed machines | Quiz-4
16 | Data centers introduction and design; Review |
17 | FINAL TERM |
The Agenda

1. Introduction to Parallel Computing

2. Introduction to Distributed Computing


Computing

• Computing is the process of completing a given goal-oriented task by using computer technology.

• Computing may include the design and development of software and hardware systems for a broad range of purposes, often involving the structuring, processing and managing of any kind of information.
Parallel

• In mathematics, parallel means two lines that never intersect — think of an equal sign.
Parallel Computing Systems

• Parallel computing is the simultaneous execution of a single task (split up and adapted) on multiple processors in order to obtain results faster.

• The idea is based on the fact that the process of solving a problem can usually be divided into smaller tasks (divide and conquer), which may be carried out simultaneously with some coordination, as the sketch below shows.
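
A minimal sketch (not part of the original slides) of this divide-and-conquer idea in C with POSIX threads, which the course covers later: an array sum is split into two halves, each half is summed by its own thread, and the partial results are combined at the end. The data values and the two-way split are illustrative choices.

/* Divide and conquer with POSIX threads: each thread sums half of the array. */
#include <pthread.h>
#include <stdio.h>

#define N 8
static int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};

struct chunk { int start; int end; long sum; };

/* Worker: sum the sub-range [start, end) assigned to this thread. */
static void *partial_sum(void *arg)
{
    struct chunk *c = (struct chunk *)arg;
    c->sum = 0;
    for (int i = c->start; i < c->end; i++)
        c->sum += data[i];
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    struct chunk a = {0, N / 2, 0};   /* first half  */
    struct chunk b = {N / 2, N, 0};   /* second half */

    pthread_create(&t1, NULL, partial_sum, &a);
    pthread_create(&t2, NULL, partial_sum, &b);

    /* Coordination: wait for both parts before combining them. */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("total = %ld\n", a.sum + b.sum);   /* prints 36 */
    return 0;
}

Compile with, for example, gcc -pthread.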
Parallel Computing cont…

• The term parallel computing architecture is sometimes used for a computer with more than one processor (from a few to thousands) available for processing.

• Recent multicore processors (chips with more than one processor core) are commercial examples that bring parallel computing to the desktop, as the sketch below illustrates.
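
A minimal sketch (not from the slides), assuming an OpenMP-capable C compiler such as gcc with -fopenmp: on a multicore desktop, the parallel region below typically starts one thread per core, and each thread reports its own identity.

#include <omp.h>
#include <stdio.h>

int main(void)
{
    /* Number of processor cores the OpenMP runtime can see. */
    printf("cores available: %d\n", omp_get_num_procs());

    #pragma omp parallel   /* typically one thread per core by default */
    {
        printf("hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}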
Parallel Computing cont…
Multi-core & Multi-processor
Distributed Systems

• We define a distributed system as one in which hardware or software components located at networked computers communicate and coordinate their actions only by passing messages.

• This simple definition covers the entire range of systems in which networked computers can usefully be deployed.

• The education system in the current scenario (online classes) is an everyday example; a small message-passing sketch follows below.
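
A minimal sketch (not part of the slides) of coordination purely by message passing, using MPI, which the course introduces in week 12: process 0 sends a value to process 1, possibly running on a different networked machine, and process 1 receives it. Compile with mpicc and run with mpirun -np 2; the value 42 is an arbitrary example.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I? */

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* message to process 1 */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                          /* message from process 0 */
        printf("process 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}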


Distributed Systems cont…
