Life cycle
Daemon and Worker Threads
We can create child threads from the main thread. The main thread is the last thread
to finish execution because it performs various shutdown operations
"The typical difference is that threads (of the same process) run in a shared
memory space, while processes run in separate memory spaces"
the local variables, method arguments, and method calls are stored on the stack
objects are stored on the heap memory and live as long as they are referenced
from somewhere in the application
EVERY THREAD HAS ITS OWN STACK MEMORY BUT ALL THREADS
SHARE THE HEAP MEMORY (SHARED MEMORY SPACE)
as long as a thread owns an intrinsic lock, no other thread can acquire the same lock:
the other thread will block when it attempts to acquire it
object-level locking:

synchronized(this) {
    counter++;
}

class-level locking:

synchronized(SomeClass.class) {
    counter++;
}
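The two blocks above can be put into a complete class. A minimal sketch (the class and field names are illustrative, not from the slides):

```java
public class SynchronizedCounter {

    private int counter = 0;

    // object-level lock: only one thread at a time can enter
    // any synchronized (this) block of the same instance
    public void increment() {
        synchronized (this) {
            counter++;
        }
    }

    public int get() {
        synchronized (this) {
            return counter;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SynchronizedCounter c = new SynchronizedCounter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) c.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // always 20000 thanks to the lock
    }
}
```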
wait() notify()
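wait() and notify() coordinate threads through an object's intrinsic lock: wait() releases the lock and suspends the caller, notify() wakes one waiting thread. A minimal handoff sketch (class and flag names are illustrative):

```java
public class WaitNotifyDemo {

    private final Object lock = new Object();
    private boolean ready = false;

    // called by the producer thread
    public void produce() {
        synchronized (lock) {
            ready = true;
            lock.notify();            // wake up a thread waiting on 'lock'
        }
    }

    // called by the consumer thread
    public void consume() throws InterruptedException {
        synchronized (lock) {
            while (!ready) {          // loop guards against spurious wake-ups
                lock.wait();          // releases the lock while waiting
            }
        }
    }

    public boolean isReady() {
        synchronized (lock) { return ready; }
    }

    public static void main(String[] args) throws InterruptedException {
        WaitNotifyDemo demo = new WaitNotifyDemo();
        Thread consumer = new Thread(() -> {
            try {
                demo.consume();
                System.out.println("consumer proceeded");
            } catch (InterruptedException ignored) { }
        });
        consumer.start();
        demo.produce();
        consumer.join();
    }
}
```

Note that both wait() and notify() must be called while holding the lock of the object they are invoked on, otherwise an IllegalMonitorStateException is thrown.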
[Diagram (animation frames): thread #1 and thread #2 both read counter = 0 and
execute counter = counter + 1; after both writes the result is counter = 1
instead of 2 (a lost update)]
new
runnable
running
waiting
dead
Threads go through various stages: a thread is born, it is started, it runs,
and finally it dies
[Plot: running time as a function of the number of threads]
MULTITHREADING
DEADLOCK AND LIVELOCK
Deadlock
“Deadlock occurs when two or more threads wait forever for a lock
or resource held by another of the threads”
Deadlock happens when two processes, each within its own transaction, update
two rows of information but in the opposite order.
For example: process A updates row 1 then row 2 in the exact timeframe in which
process B updates row 2 then row 1.
2.) we should make sure that each thread acquires the locks in the same order
to avoid any cyclic dependency in lock acquisition
3.) livelock can be handled with the methods above and some randomness
~ threads retry acquiring the locks at random intervals
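The lock-ordering idea can be sketched in a few lines. In this illustrative class (names are mine, not from the slides), every code path acquires lockA before lockB, so no cyclic wait can form:

```java
public class OrderedLocks {

    private final Object lockA = new Object();
    private final Object lockB = new Object();
    private int shared = 0;

    // Every thread takes lockA first, then lockB. With a fixed global
    // lock order, two threads can never each hold one lock while waiting
    // for the other, so the cycle required for deadlock cannot form.
    public void update() {
        synchronized (lockA) {
            synchronized (lockB) {
                shared++;
            }
        }
    }

    public int get() {
        synchronized (lockA) {
            synchronized (lockB) {
                return shared;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        OrderedLocks demo = new OrderedLocks();
        Runnable task = () -> { for (int i = 0; i < 1000; i++) demo.update(); };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(demo.get()); // 2000, and no deadlock is possible
    }
}
```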
Livelock
[Diagram: thread #1 and thread #2 running at the same time on separate cores
("parallel execution") vs. interleaved on a single core
("multithreaded execution" with time-slicing)]
Volatile Keyword
Every read of a volatile variable is served from main memory (RAM),
not from the CPU cache
usually variables are cached for performance reasons
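The typical use of volatile is a stop flag shared between threads: volatile guarantees that the writer's update becomes visible to the reader. A minimal sketch (class and method names are illustrative):

```java
public class VolatileFlag {

    // Without volatile, the worker thread might keep reading a stale
    // cached 'true' and never see the writer's update.
    private volatile boolean running = true;

    public void stop() {
        running = false;          // immediately visible to other threads
    }

    public boolean isRunning() {
        return running;
    }

    public void work() {
        while (running) {
            // busy work; every read of 'running' goes to main memory
        }
    }

    public static void main(String[] args) throws InterruptedException {
        VolatileFlag flag = new VolatileFlag();
        Thread worker = new Thread(flag::work);
        worker.start();
        flag.stop();
        worker.join();            // terminates: the worker sees running == false
    }
}
```

Keep in mind that volatile only gives visibility, not atomicity: a compound operation like counter = counter + 1 still needs synchronization.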
[Diagram (animation frames): thread 1 and thread 2 each read the shared
counter (0) and execute counter = counter + 1; the final value is 1 instead
of 2, because counter = counter + 1 is not an atomic operation]
[Diagram: thread #1 and thread #2 each holding the lock of one of object1 and
object2 while waiting for the other]
MULTITHREADING
STUDENT LIBRARY SIMULATION
[Diagram: 8 books (b0–b7) and 4 students (s0–s3)]
MULTITHREADING
DINING PHILOSOPHERS PROBLEM
Dining Philosophers Problem
it was formulated by Dijkstra in 1965
5 philosophers sit at a table and there are 5 forks (chopsticks)
the philosophers can eat and think
a philosopher can eat only when holding both the left and the right chopstick
a chopstick can be held by only one philosopher at a given time
the problem: how to create a concurrent algorithm such that no philosopher
will starve? (so the aim is to avoid deadlocks)
Dining Philosophers Problem
[Diagram: philosophers p0–p4 seated around a table with chopsticks c0–c4
between them]
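One standard deadlock-free solution is to apply lock ordering to the chopsticks: every philosopher picks up the lower-numbered chopstick first. A sketch of that approach (the class layout and loop counts are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class DiningPhilosophers {

    private static final int N = 5;
    private static final ReentrantLock[] chopsticks = new ReentrantLock[N];
    static {
        for (int i = 0; i < N; i++) chopsticks[i] = new ReentrantLock();
    }

    // Philosopher 'id' needs chopstick id (left) and (id + 1) % N (right).
    // Acquiring the lower-numbered one first imposes a global lock order,
    // which breaks the cyclic wait and therefore avoids deadlock.
    public static void eat(int id) {
        int first = Math.min(id, (id + 1) % N);
        int second = Math.max(id, (id + 1) % N);
        chopsticks[first].lock();
        try {
            chopsticks[second].lock();
            try {
                // philosopher 'id' eats here
            } finally {
                chopsticks[second].unlock();
            }
        } finally {
            chopsticks[first].unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] philosophers = new Thread[N];
        for (int i = 0; i < N; i++) {
            final int id = i;
            philosophers[i] = new Thread(() -> {
                for (int round = 0; round < 1000; round++) eat(id);
            });
            philosophers[i].start();
        }
        for (Thread p : philosophers) p.join(); // always finishes: no deadlock
    }
}
```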
MULTITHREADING
Locks and synchronization
Locks and Synchronized Blocks
A reentrant lock has the same basic behavior as we have seen for
synchronized blocks (with some extended features)
for example, with reentrant locks we can get the number of threads
waiting for the given lock
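A minimal ReentrantLock sketch (the counter is illustrative); note the lock()/unlock() pair with the unlock in a finally block, and the extended queue-inspection feature:

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockDemo {

    private final ReentrantLock lock = new ReentrantLock();
    private int counter = 0;

    public void increment() {
        lock.lock();            // blocks, like entering a synchronized block
        try {
            counter++;
        } finally {
            lock.unlock();      // always release in finally
        }
    }

    public int getCounter() {
        lock.lock();
        try {
            return counter;
        } finally {
            lock.unlock();
        }
    }

    // extended feature not available with synchronized blocks:
    // an estimate of how many threads are queued waiting for this lock
    public int waitingThreads() {
        return lock.getQueueLength();
    }
}
```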
for i in nums
    total = total + nums[i]

5 2 8 1 11 17 9 3
Parallel sum with multiple processors or a multicore processor
we can assign a task to every processor
~ parallel computing

5 2 8 1 11 17 9 3
thread #1 sums the first half: sum1 = 16
thread #2 sums the second half: sum2 = 40
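The two-thread split above can be sketched directly with plain threads (class and variable names are illustrative):

```java
public class ParallelSum {

    public static long sum(int[] nums) throws InterruptedException {
        int mid = nums.length / 2;
        long[] partial = new long[2];   // one slot per worker thread

        Thread t1 = new Thread(() -> {
            for (int i = 0; i < mid; i++) partial[0] += nums[i];
        });
        Thread t2 = new Thread(() -> {
            for (int i = mid; i < nums.length; i++) partial[1] += nums[i];
        });

        t1.start(); t2.start();
        t1.join(); t2.join();   // join() makes the partial sums visible
                                // to the main thread (happens-before)
        return partial[0] + partial[1];
    }

    public static void main(String[] args) throws InterruptedException {
        int[] nums = {5, 2, 8, 1, 11, 17, 9, 3};
        System.out.println(sum(nums)); // 16 + 40 = 56
    }
}
```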
MULTITHREADING
FORK-JOIN FRAMEWORK
What is the fork-join framework?
ForkJoinPool
So ForkJoinPool creates a fixed number of threads, usually the number of CPU cores
These threads execute the tasks, but if a thread has no task, it can "steal" a task
from busier threads
~ tasks are distributed across all threads in the thread pool !!!
fork: splits the given task into smaller subtasks that can be
executed in a parallel manner
join: the split subtasks are executed, and after all of them have finished,
their results are merged into one result

[Diagram: a task recursively forked into subtasks, which are executed and
then joined back into a single result]
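The fork/join steps map onto a RecursiveTask: below is a sketch of the parallel sum from earlier, rewritten for ForkJoinPool (the threshold value is an arbitrary choice for illustration):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {

    private static final int THRESHOLD = 2;  // below this, sum sequentially
    private final int[] nums;
    private final int lo, hi;

    public SumTask(int[] nums, int lo, int hi) {
        this.nums = nums;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {           // small enough: no more forking
            long total = 0;
            for (int i = lo; i < hi; i++) total += nums[i];
            return total;
        }
        int mid = (lo + hi) / 2;
        SumTask left = new SumTask(nums, lo, mid);
        SumTask right = new SumTask(nums, mid, hi);
        left.fork();                          // FORK: run left half asynchronously
        long rightResult = right.compute();   // compute right half in this thread
        return left.join() + rightResult;     // JOIN: wait for left, merge results
    }

    public static long parallelSum(int[] nums) {
        return new ForkJoinPool().invoke(new SumTask(nums, 0, nums.length));
    }

    public static void main(String[] args) {
        int[] nums = {5, 2, 8, 1, 11, 17, 9, 3};
        System.out.println(parallelSum(nums)); // 56
    }
}
```

Computing the right half in the current thread instead of forking both halves avoids wasting the calling thread while it would otherwise just wait.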
[Diagram: execution alternating between parallel phases (subtasks running
concurrently) and sequential phases]
Semaphores and Mutexes
invented by Dijkstra back in 1962
semaphores are simple variables (or abstract data types) that are used for
controlling access to a common resource
it is an important concept in operating systems as well
1.) a semaphore tracks only how many resources are free; it does not keep
track of which of the resources are free
2.) the semaphore count may serve as a useful trigger for a number of
different actions (e.g., in web servers)
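Java ships a counting semaphore in java.util.concurrent. A sketch of limiting access to a pool of resources (the class name and permit count of 3 are illustrative):

```java
import java.util.concurrent.Semaphore;

public class ResourcePool {

    // The semaphore counts free slots: it tracks how many are free,
    // not which ones are free.
    private final Semaphore permits = new Semaphore(3);

    public void useResource() throws InterruptedException {
        permits.acquire();          // blocks if all 3 permits are taken
        try {
            // use one of the pooled resources
        } finally {
            permits.release();      // give the permit back
        }
    }

    public int freeSlots() {
        return permits.availablePermits();
    }
}
```

A semaphore initialized with a single permit behaves much like a mutex, except that any thread may release it, not just the one that acquired it.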
MUTEX
a mutex is a locking mechanism
These threads need memory, and the CPU will spend too much time on context
switching when the threads are swapped
1.) SingleThreadExecutor
2.) FixedThreadPool(n)
3.) CachedThreadPool
The number of threads is not bounded: if all the threads are busy
executing tasks and a new task comes in, the pool will create
and add a new thread to the executor.
4.) ScheduledExecutor
executorService.execute() ~ takes a Runnable and returns nothing (fire-and-forget)
executorService.submit() ~ returns a Future, so the task's result can be retrieved
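The difference between the two methods in one sketch (the pool size and the computed value are arbitrary illustrations):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorDemo {

    public static int run() throws Exception {
        ExecutorService executorService = Executors.newFixedThreadPool(2);

        // execute(): fire-and-forget, takes a Runnable, returns void
        executorService.execute(() -> System.out.println("running a task"));

        // submit(): takes a Callable (or Runnable) and returns a Future
        Future<Integer> future = executorService.submit(() -> 21 + 21);
        int result = future.get();   // blocks until the task completes

        executorService.shutdown();  // accept no new tasks, finish queued ones
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());   // 42
    }
}
```

Remember to call shutdown() (or shutdownNow()); otherwise the pool's threads keep the JVM alive.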