
ADVANCED OPERATING SYSTEMS

UNIT II: DISTRIBUTED OPERATING SYSTEMS

Distributed Operating Systems: Issues – Communication Primitives –
Lamport's Logical Clocks – Deadlock Handling Strategies – Issues in
Deadlock Detection and Resolution – Distributed File Systems – Design
Issues – Case Studies – The Sun Network File System – Coda.

Distributed operating system:


 A distributed operating system (DOS) is an important type of
operating system.
 Distributed systems use multiple central processors to serve
multiple real-time applications and users.
 As a result, data processing jobs are distributed between the
processors.
 A distributed operating system is used to share data and files.
 Distributed applications run on multiple computers linked by a
communication network.
 The user can handle data from different locations.
 Processors in a DOS communicate with each other through
communication lines such as high-speed buses.
Buses:
 A bus is a mechanism that transfers data between components inside a
computer.
Real-time example of DOS: a server connected to several client nodes.

Types of Distributed Operating systems:


1. Client-Server System
2. Three-tier
3. N-tier
4. Peer-to-peer

1. Client server system:


 This type of system requires the client to request a resource,
after which the server gives the requested resource.
2. Three-tier:
 The information about the client is saved in the intermediate
tier rather than in the client, which simplifies development.
 This type of architecture is most commonly used in online
applications.

3. N-tier:
 N-tier system is used when the server application needs to
forward requests to additional enterprise services on the
network.
4. Peer-to-Peer Systems:
 The nodes play an important role in these systems.
 This type of system contains nodes that are equal
participants in data sharing.
 Furthermore, all the tasks are equally divided between all the
nodes.
 These nodes interact with each other as required and share
resources.
 To accomplish this, a network is needed.

Design Issues of Distributed Operating System


 The following are some of the major design issues of distributed
systems:

1. Heterogeneity:
 A distributed system spans different networks, hardware, operating
systems, and programming languages. Protocols are set up to handle
this heterogeneity.
o Middleware: a software layer that provides a programming
abstraction as well as masking the heterogeneity.
o Mobile code: code that can be sent from one computer to
another and run at the destination.
2. Transparency:
 Transparency ensures that the distributed system is perceived by
users and application programmers as a single entity rather than as a
collection of cooperating autonomous systems.
 The user should be unaware of where the services are located and the
transfer from a local machine to a remote one should be transparent.
3. Openness:
 The openness of the distributed system is determined primarily by the
degree to which new resource-sharing services can be made available
for use by a variety of client programs.
 Open systems are characterized by the fact that their key interfaces
are published.

4. Concurrency:
 There is a possibility that several clients will attempt to access a shared
resource at the same time.
 Any object that represents a shared resource in a distributed system
must ensure that it operates correctly in a concurrent environment.
5. Security:
 The security of an information system has three components:
confidentiality, integrity, and availability.
 Encryption protects shared resources and keeps sensitive information
secret when it is transmitted.
6. Scalability:
 The system should remain efficient even with a significant increase in
the number of users and resources connected.
7. Failure Handling (Resilience to Failure):
 When faults occur in hardware or software, programs may produce
incorrect results or may stop before completing the intended
computation, so corrective measures should be implemented to handle
such cases.
 Failure handling is difficult in distributed systems because failures are
partial, i.e., some components fail while others continue to function.

Communication Primitives:

 The message-send and message-receive communication primitives are
invoked through Send() and Receive(), respectively.
 They are high-level constructs that help the program use the
underlying communication network.
 Two types of communication models:
1. Message passing
2. Remote Procedure Calls (RPC)

1. Message Passing Method:


 Two basic communication primitives
1. SEND - Message & its Destination
2. RECEIVE - Source of message & Buffer for storing the message
 Client-Server Computation Model
o Client sends Message to server and waits
o Server replies after computation

 Blocking vs. Non-Blocking Primitives

 Non-Blocking
o The SEND primitive returns control to the user process as soon
as the message has been copied from the user buffer to the kernel
buffer.
 Advantage:
o Programs have maximum flexibility in performing computation
and communication in any order.
 Drawback:
o Programming becomes tricky and difficult.

 Blocking
o The SEND primitive does not return control to the user process
until the message has been sent or an acknowledgement has been
received.
 Advantage:
o Program behavior is predictable.
 Drawback:
o Lack of flexibility in programming
 Synchronous vs. Asynchronous Primitives
 Synchronous
o SEND primitive is blocked until corresponding RECEIVE
primitive is executed at the target computer.
o Also known as rendezvous.
 Asynchronous
o Messages are buffered.
o SEND primitive does not block even if there is no corresponding
execution of the RECEIVE primitive.
o The corresponding RECEIVE primitive can be either blocking or
non-blocking.
o Drawback: message buffering is more complex, as it involves
creating, managing, and destroying buffers.
 Details to be handled in Message Passing
o Pairing of Response with Requests
o Data Representation
o Sender should know the address of Remote machine
o Communication and System failures
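To make the SEND and RECEIVE primitives concrete, here is a minimal sketch
using Python's standard socket module; the host, port, and message contents
are assumptions chosen for the illustration, and the blocking recv() call plays
the role of a blocking RECEIVE.

    import socket

    HOST, PORT = "localhost", 5000       # assumed values for illustration

    def server():
        # RECEIVE side: blocks until a client connects and sends a message.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.bind((HOST, PORT))
            s.listen(1)
            conn, _ = s.accept()
            with conn:
                request = conn.recv(1024)            # blocking RECEIVE
                conn.sendall(b"reply:" + request)    # SEND the computed reply

    def client():
        # SEND side: sends a request and blocks waiting for the server's reply.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.connect((HOST, PORT))
            s.sendall(b"request-data")               # SEND
            reply = s.recv(1024)                     # blocking RECEIVE of the reply
            print(reply.decode())

Running server() in one process and client() in another reproduces the
client-server computation model described above: the client blocks until the
server's reply arrives.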

2. Remote procedure call (RPC)


 RPC is a protocol that one program can use to request a service from a
program located in another computer in a network without having to
understand network details.
 RPC uses the client/server model.
 The requesting program is a client and the service-providing program is
the server.
 The main idea of an RPC is to allow a local computer (client) to
remotely call procedures on a remote computer (server).
 RPC is an interaction between a client and a server.
 The client invokes a procedure on the server.
 The server executes the procedure and passes the result back to the
client.
 The calling process is suspended and proceeds only after getting the
result from the server.
 RPC design issues
o Structure
o Binding
o Parameter and result passing
o Error handling, semantics, and correctness
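As a minimal illustration of this client/server RPC interaction, the sketch
below uses Python's standard xmlrpc library; the procedure name add, the host,
and the port are assumptions chosen for the example.

    # --- server side: registers a procedure that clients can call remotely ---
    from xmlrpc.server import SimpleXMLRPCServer

    def add(a, b):
        # The remote procedure executed on the server.
        return a + b

    def run_server():
        server = SimpleXMLRPCServer(("localhost", 8000))   # assumed host/port
        server.register_function(add, "add")
        server.serve_forever()

    # --- client side: invokes the remote procedure as if it were local ---
    import xmlrpc.client

    def run_client():
        proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
        result = proxy.add(2, 3)   # caller is suspended until the result returns
        print(result)              # -> 5

The client is suspended inside proxy.add() until the server sends back the
result, matching the RPC semantics listed above.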

Lamport's Logical Clock:
 Lamport's logical clock was created by Leslie Lamport.
 It is a procedure to determine the order of the events occurring in a
distributed system.
 A Lamport logical clock is a numerical software counter value
maintained in each process.
 Conceptually, this logical clock can be thought of as a clock that only
has meaning in relation to the messages moving between processes.
 When a process receives a message, it resynchronizes its logical clock
with that of the sender.

Algorithm:
 Happened-before relation (->): a -> b means 'a' happened before 'b'.
 Logical clock conditions:
o [C1]: If 'a' and 'b' are events in the same process Pi and 'a'
happened before 'b', then Ci(a) < Ci(b), where Ci is the logical
clock of process Pi.
o [C2]: If 'a' is the sending of a message by process Pi and 'b' is the
receipt of that message by process Pj, then Ci(a) < Cj(b).
o These conditions hold because time cannot run backwards.
 Event-counting example:
o Three processes P1, P2, P3, with events a, b, c, ...
o Each system maintains a local event counter.
o The systems occasionally communicate with each other.
o If a and b are events in the same process and a occurs before b,
then a -> b is true.

Lamport's algorithm:
 Each message carries a timestamp of the sender's clock.
 When a message arrives:
if receiver's clock < message timestamp
set receiver's clock to (message timestamp + 1)
else do nothing
 The clock must be advanced between any two events in the same
process.

The algorithm for sending:

 time = time + 1;
 timestamp = time;
 send(message, timestamp);

The algorithm for receiving:

 (message, timestamp) = receive();
 time = max(timestamp, time) + 1;
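A minimal sketch of these rules in Python is shown below; the class and method
names are chosen for illustration and are not part of Lamport's original
presentation.

    class LamportClock:
        """A per-process logical clock following Lamport's rules."""

        def __init__(self):
            self.time = 0

        def tick(self):
            # Advance the clock between any two local events.
            self.time += 1
            return self.time

        def send(self, message):
            # Sending rule: increment the clock, then attach the timestamp.
            self.time += 1
            return (message, self.time)

        def receive(self, message, timestamp):
            # Receiving rule: take the maximum of the local time and the
            # sender's timestamp, then advance by one.
            self.time = max(self.time, timestamp) + 1
            return message

For example, if a receiver's clock reads 4 and a message arrives with
timestamp 10, receive() sets the receiver's clock to 11.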

Example: Lamport's logical clocks
 Three processes, each with its own clock.
 The clocks tick at different rates.

Deadlock handling strategies

Deadlock:
 Deadlock is a situation where a set of processes are blocked because each
process is holding a resource and waiting for another resource acquired
by some other process.

Deadlock Characteristics:
 A deadlock has the following four characteristics (necessary conditions):
1. Mutual Exclusion
2. Hold and Wait
3. No preemption
4. Circular Wait


1. Mutual Exclusion:
 Only one process can use a resource at any given time, i.e., at least
one resource is non-sharable.
 If another process requests that resource, the request must be
postponed until the resource is released.

2. Hold and Wait:
 A process is holding at least one resource and is waiting to acquire
additional resources that are currently held by other processes.

3. No Preemption:
 Resources cannot be preempted; a resource can be released only
voluntarily by the process holding it, after that process has completed
its task.

4. Circular Wait:
 A set of processes are waiting for each other in a circular fashion. For
example, consider a set of processes {P0, P1, P2, P3} such that P0
depends on P1, P1 depends on P2, P2 depends on P3, and P3 depends
on P0.
 This creates a circular relation between these processes, and they
wait forever without being executed.
 The circular-wait condition implies the hold-and-wait condition;
hence, the four conditions are not entirely independent but are
interconnected.
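As an illustration of how the four conditions arise together, here is a
minimal Python sketch in which two threads acquire two locks in opposite
orders; the lock and thread names are chosen for the example, and running it
will simply hang, which is exactly the deadlock being demonstrated.

    import threading, time

    lock_a, lock_b = threading.Lock(), threading.Lock()

    def p0():
        with lock_a:              # P0 holds A (mutual exclusion) ...
            time.sleep(0.1)
            with lock_b:          # ... and waits for B (hold and wait)
                pass

    def p1():
        with lock_b:              # P1 holds B ...
            time.sleep(0.1)
            with lock_a:          # ... and waits for A -> circular wait
                pass

    # Neither lock can be preempted, so once both threads reach their inner
    # acquire, all four conditions hold and neither thread ever finishes.
    t0 = threading.Thread(target=p0)
    t1 = threading.Thread(target=p1)
    t0.start(); t1.start()
    t0.join(); t1.join()          # blocks forever: the threads are deadlocked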

Method for handling Deadlock:


1. Deadlock Ignorance
2. Deadlock prevention
3. Deadlock Avoidance
4. Deadlock detection and recovery

1. Deadlock Ignorance:
 In the deadlock ignorance method, the OS acts as if deadlock never
occurs and completely ignores it, even if a deadlock does occur.
 This method applies only if deadlocks occur very rarely.
 The algorithm is very simple: "If a deadlock occurs, simply reboot the
system and act as if the deadlock never occurred."

2. Deadlock prevention:
 The possibility of deadlock is excluded before making requests, by
eliminating one of the necessary conditions for deadlock.
 Example: Only allowing traffic from one direction, will exclude the
possibility of blocking the road.
 The operating system takes steps to prevent deadlocks from occurring by
ensuring that the system is always in a safe state, where deadlocks
cannot occur.

 Some ways of prevention are as follows
o Preempting resources:
• Take the resources from the process and assign them to other
processes.
o Rollback:
• When a resource is taken away from a process, roll the process
back and restart it.
o Aborting:
• Aborting the deadlocked processes.
o Sharable resource:
• If the resource is sharable, all processes will get all resources, and
a deadlock situation won’t come.
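One common prevention technique, which attacks the circular-wait condition
directly, is to impose a fixed global ordering on resources and always acquire
them in that order. The minimal Python sketch below illustrates the idea; the
ordering table and helper names are assumptions for the example, and this is
one possible prevention strategy rather than the only one.

    import threading

    # Give every lock a fixed global rank and always acquire in rank order;
    # this makes a circular wait impossible.
    lock_a, lock_b = threading.Lock(), threading.Lock()
    LOCK_ORDER = {id(lock_a): 0, id(lock_b): 1}

    def acquire_in_order(*locks):
        for lock in sorted(locks, key=lambda l: LOCK_ORDER[id(l)]):
            lock.acquire()

    def release_all(*locks):
        for lock in locks:
            lock.release()

    def worker():
        acquire_in_order(lock_a, lock_b)   # every process uses the same order
        try:
            pass                           # use both resources here
        finally:
            release_all(lock_a, lock_b)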
3. Deadlock Avoidance:
 The Operating system runs an algorithm on requests to check for a safe
state. Any request that may result in a deadlock is not granted.
 Example: Checking each car and not allowing any car that can block
the road. If there is already traffic on the road, then a car coming from
the opposite direction can cause blockage.
 In deadlock avoidance, the request for any resource will be granted if
the resulting state of the system doesn’t cause deadlock in the system.
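A classic avoidance algorithm of this kind is the Banker's algorithm, which
grants a request only if the resulting state is still safe. The sketch below
shows a minimal safe-state check in Python; the matrix and vector
representation of allocation, need, and available resources is an assumption
chosen for the illustration.

    def is_safe(available, allocation, need):
        """Return True if the system is in a safe state (Banker's-style check).

        available:  list of free units per resource type
        allocation: allocation[i] = units currently held by process i
        need:       need[i] = units process i may still request
        """
        work = list(available)
        finish = [False] * len(allocation)
        progressed = True
        while progressed:
            progressed = False
            for i in range(len(allocation)):
                if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
                    # Process i can run to completion and release what it holds.
                    work = [w + a for w, a in zip(work, allocation[i])]
                    finish[i] = True
                    progressed = True
        return all(finish)

A resource request is granted only if pretending to allocate it still leaves
is_safe() returning True; otherwise the requesting process is made to wait.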
4. Deadlock Detection and Recovery:
 Deadlock detection and recovery is the process of detecting and
resolving deadlocks in an operating system.
 A deadlock occurs when two or more processes are blocked, waiting for
each other to release the resources they need.
 This can lead to a system-wide stall, where no process can make
progress.
DEADLOCK DETECTION:
 Detecting deadlocks is one of the most important steps in preventing
them.
 A deadlock can happen anytime when two or more processes are trying
to acquire a resource, and each process is waiting for other processes
to release the resource.
 The deadlock can be detected in the resource-allocation graph as
shown in fig below.

 Deadlock detection checks whether there is a cycle in the Resource
Allocation Graph; if there is a cycle and each resource in the cycle has
only a single instance, then the processes in the cycle are in a
deadlock state.
 So always remember: detecting deadlocks is one of the most important
steps in handling them.
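As an illustration of this cycle check, here is a minimal depth-first-search
sketch over a wait-for graph (the single-instance form of the resource
allocation graph); the dictionary representation of the graph is an assumption
for the example.

    def has_deadlock(wait_for):
        """Detect a cycle in a wait-for graph.

        wait_for maps each process to the set of processes it is waiting on,
        e.g. {"P0": {"P1"}, "P1": {"P2"}, "P2": {"P0"}} is a deadlock.
        """
        WHITE, GREY, BLACK = 0, 1, 2      # unvisited / on current path / done
        color = {p: WHITE for p in wait_for}

        def visit(p):
            color[p] = GREY
            for q in wait_for.get(p, ()):
                if color.get(q, WHITE) == GREY:              # back edge => cycle
                    return True
                if color.get(q, WHITE) == WHITE and visit(q):
                    return True
            color[p] = BLACK
            return False

        return any(color[p] == WHITE and visit(p) for p in list(wait_for))

For instance, has_deadlock({"P0": {"P1"}, "P1": {"P2"}, "P2": {"P0"}}) returns
True, and removing any one edge breaks the cycle so it returns False.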

DEADLOCK RECOVERY:
 A traditional operating system such as Windows doesn't deal with
deadlock recovery, as it is a time- and space-consuming process.
 Real-time operating systems use deadlock recovery.
1. Killing the process:
 Killing all the processes involved in the deadlock.
 Killing process one by one.
 After killing each process check for deadlock again and keep
repeating the process till the system recovers from deadlock.
 Killing all the processes one by one helps a system to break circular
wait conditions.
2. Resource Preemption:
 Resources are preempted from the processes involved in the
deadlock, and the preempted resources are allocated to other
processes so that the system can recover from the deadlock. In this
case, the preempted processes run the risk of starvation.
3. Concurrency Control:
 Concurrency control mechanisms are used to prevent data
inconsistencies in systems with multiple concurrent processes.
 These mechanisms ensure that concurrent processes do not access
the same data at the same time, which can lead to inconsistencies
and errors.
 Deadlocks can occur in concurrent systems when two or more
processes are blocked, waiting for each other to release the
resources they need.
 This can result in a system-wide stall, where no process can make
progress. Concurrency control mechanisms can help prevent
deadlocks by managing access to shared resources and ensuring
that concurrent processes do not interfere with each other.

Advantages of Deadlock Detection and Recovery in Operating Systems:

1. Improved System Stability:


 Deadlocks can cause system-wide stalls, and detecting and
resolving deadlocks can help to improve the stability of the system.
2. Better Resource Utilization:
 By detecting and resolving deadlocks, the operating system can
ensure that resources are efficiently utilized and that the system
remains responsive to user requests.
3. Better System Design:
 Deadlock detection and recovery algorithms can provide insight into
the behavior of the system and the relationships between processes
and resources, helping to inform and improve the design of the
system.

Disadvantages of Deadlock Detection and Recovery in Operating Systems:

1. Performance Overhead:
 Deadlock detection and recovery algorithms can introduce a
significant overhead in terms of performance, as the system must
regularly check for deadlocks and take appropriate action to resolve
them.
2. Complexity:
 Deadlock detection and recovery algorithms can be complex to
implement, especially if they use advanced techniques such as the
Resource Allocation Graph or Time stamping.
3. False Positives and Negatives:
 Deadlock detection algorithms are not perfect and may produce
false positives or negatives, indicating the presence of deadlocks
when they do not exist or failing to detect deadlocks that do exist.
4. Risk of Data Loss:
 In some cases, recovery algorithms may require rolling back the
state of one or more processes, leading to data loss or corruption.

DISTRIBUTED FILE SYSTEM


 Distributed file systems (DFS) play a crucial role in operating
systems (OS) designed for distributed computing environments.
 They provide a means for managing files and directories across
multiple nodes in a network.
 Here are a few examples of distributed file systems in various
operating systems:

 Network File System (NFS):


o NFS is a distributed file system protocol developed by Sun
Microsystems (now Oracle).
o It allows a user on a client computer to access files over a
network in a manner similar to how local storage is accessed.
o NFS is widely used in UNIX-based systems and has been
implemented for various operating systems including Linux,
macOS, and some versions of Microsoft Windows.
 Andrew File System (AFS):
o AFS is a distributed file system that was developed at Carnegie
Mellon University.
o It provides location-independent file access, meaning users can
access files regardless of their physical location.
o AFS has features like scalability, security, and data caching to
improve performance.
o It has been used in academic and research environments.

 Google File System (GFS):
o GFS is a distributed file system developed by Google for its own
internal use.
o It is designed to handle large amounts of data across multiple
servers.
o GFS provides high reliability, scalability, and performance, and
it is optimized for handling large files.
o While GFS itself is not an operating system, it is closely tied to
the distributed computing infrastructure used by Google,
forming a fundamental part of its distributed systems.
 Hadoop Distributed File System (HDFS):
o HDFS is a distributed file system that is part of the Apache
Hadoop project.
o It is designed to store large datasets reliably and to stream
those datasets at high bandwidth to user applications.
o HDFS is highly fault-tolerant and is optimized for use with
MapReduce, a programming model for processing large
datasets.
 Windows Distributed File System (DFS):
o DFS is a distributed file system available in Windows Server
operating systems.
o It allows administrators to group shared folders located on
different servers into one or more logically structured
namespaces.
o DFS provides fault tolerance and load balancing, and it
simplifies the management of distributed file shares in Windows
environments.
 These examples illustrate how distributed file systems are
integrated into various operating systems to provide distributed
storage solutions for different use cases and environments.

DISTRIBUTED FILE SYSTEM


 A Distributed File System [DFS] is a file system that is distributed on
various file servers and locations.
 It permits programs to access and store remote data in the same way
as local files.
 It allows network users to share information and files in a regulated and
permitted manner.
 The servers have complete control over the data and provide users
access control.
A DFS service has two components:
1. Location Transparency
2. Redundancy

1. Location Transparency:
 Location transparency is achieved through the namespace component.
 There are four types of transparency:
1. STRUCTURE TRANSPARENCY
 The client does not need to be aware of the number or location of
file servers and storage devices. For structure transparency,
multiple file servers must be provided for adaptability and
performance.
2. NAMING TRANSPARENCY
 There should be no hint of the file's location in the file's name.
When a file is transferred from one node to another, the file name
should not change.
3. ACCESS TRANSPARENCY
 Local and remote files must be accessible in the same way. The
file system must automatically locate the accessed file and deliver
it to the client.
4. REPLICATION TRANSPARENCY
 When a file is replicated across various nodes, the copies of the file
and their locations must be hidden from one node to the next.
WORKING OF DFS
 There are two ways in which a DFS namespace can be implemented:
1. Standalone DFS namespace
2. Domain-based DFS namespace
1. Standalone DFS namespace
 It does not use Active Directory and only permits DFS roots that exist
on the local system.
 A standalone DFS namespace can only be accessed on the system that
created it. It offers no fault tolerance and cannot be linked to other
DFS namespaces.
2. Domain-based DFS namespace
 It stores the DFS configuration in Active Directory and creates the
namespace root at a domain-name DFS root or an FQDN DFS root.
ADVANTAGES OF DFS
 It allows users to access and store data from multiple locations.
 It helps to improve access time, network efficiency, and the availability
of files.
 It permits data to be shared remotely.
 It improves the ability to scale the amount of data and to exchange
data.
DISADVANTAGES OF DFS
 In a DFS, the database connection is complicated.
 In a DFS, database handling is also more complex than in a single-user
system.
 If all nodes try to transfer data simultaneously, there is a chance of
overloading.
 There is a possibility that messages and data may be lost in the
network while moving from one node to another.

Case studies of Distributed System

1. Google's Kubernetes:

 Background:
o Kubernetes is an open-source container orchestration platform
initially developed by Google and now maintained by the Cloud
Native Computing Foundation (CNCF).
 Key Features:
o Kubernetes provides automated deployment, scaling, and
management of containerized applications. It abstracts away the
underlying infrastructure and allows users to define application
requirements declaratively.
 Challenges:
o Kubernetes needed to address the challenges of managing
distributed applications across a dynamic and heterogeneous
environment efficiently. This involved designing robust scheduling,
networking, and service discovery mechanisms.

 Applications:
o Kubernetes is widely used for deploying and managing microservices-
based applications, providing scalability, resilience, and portability
across different cloud and on-premises environments.

2. Apache Spark:

 Background:
o Apache Spark is an open-source distributed computing framework
designed for processing large-scale datasets in parallel.
 Key Features:
o Spark provides an in-memory computing engine that supports a
variety of workloads, including batch processing, real-time stream
processing, machine learning, and graph analytics.
 Challenges:
o Spark needed to address the challenges of efficiently distributing
computation across a cluster of machines while providing fault
tolerance and high throughput. This involved designing distributed
data structures, task scheduling, and fault recovery mechanisms.
 Applications:
o Spark is widely used in big data analytics for processing and
analyzing large datasets, powering applications such as data
warehousing, ETL (Extract, Transform, Load) pipelines, and real-time
analytics.
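As a small illustration of the Spark programming model described above, the
sketch below runs a word count with the PySpark API; the input path is a
placeholder and a local Spark installation is assumed.

    from pyspark.sql import SparkSession

    # Start (or reuse) a local Spark session.
    spark = SparkSession.builder.appName("WordCount").getOrCreate()

    # Read lines of text, split them into words, and count each word in parallel.
    lines = spark.read.text("input.txt").rdd.map(lambda row: row[0])   # placeholder path
    counts = (lines.flatMap(lambda line: line.split())
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))

    for word, n in counts.collect():
        print(word, n)

    spark.stop()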

3. Apache Hadoop:

 Background:
o Apache Hadoop is an open-source distributed computing framework
for storing and processing large datasets across clusters of
commodity hardware.
 Key Features:
o Hadoop consists of two main components: Hadoop Distributed File
System (HDFS) for distributed storage and MapReduce for
distributed processing. It provides fault tolerance, scalability, and
high throughput.
 Challenges:
o Hadoop needed to address the challenges of efficiently distributing
data and computation across a cluster of machines while providing
fault tolerance and scalability. This involved designing distributed file
storage, data replication, and parallel processing frameworks.
 Applications:
o Hadoop is widely used for batch processing of large datasets in
various domains, including web search, e-commerce, social media,
and bioinformatics.
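To make the MapReduce model concrete, here is a minimal word-count mapper and
reducer in Python, written in the style commonly used with Hadoop Streaming;
the map/reduce command-line convention used here is an assumption for the
example, and the same logic can be tested locally by piping text through the
script (with a sort step between the two phases).

    import sys

    def mapper(stream):
        # Map phase: emit one "word<TAB>1" line per word of input.
        for line in stream:
            for word in line.split():
                print(f"{word}\t1")

    def reducer(stream):
        # Reduce phase: input arrives sorted by word; sum the counts per word.
        current, total = None, 0
        for line in stream:
            word, count = line.rsplit("\t", 1)
            if word != current:
                if current is not None:
                    print(f"{current}\t{total}")
                current, total = word, 0
            total += int(count)
        if current is not None:
            print(f"{current}\t{total}")

    if __name__ == "__main__":
        # The same script is run once as the mapper and once as the reducer,
        # selected here by a command-line argument ("map" or "reduce").
        (mapper if sys.argv[1] == "map" else reducer)(sys.stdin)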

The Sun Network File System-Coda:

1. Sun Network File System (NFS):


 NFS is a distributed file system protocol developed by Sun Microsystems
(now Oracle). It allows remote clients to access files over a network as if
they were stored locally.
 NFS operates on a client-server model, where a central NFS server
exports directories that can be mounted by NFS clients. Clients can
perform file operations (e.g., read, write, create, delete) on these remote
files.
 NFS is widely used in UNIX and UNIX-like operating systems for sharing
files and resources across networks. It provides a simple and
transparent mechanism for accessing shared files, making it suitable for
a wide range of applications.

2. Coda:
 Coda is a distributed file system developed at Carnegie Mellon
University. It is designed to provide transparent access to files across a
network of computers, even in the presence of disconnections and
network partitions.
 Coda employs a client-server architecture with disconnected operation
support. Clients cache copies of files locally and can continue working
with them even when disconnected from the network. Changes made
locally are later synchronized with the server.
 Coda's design emphasizes scalability, fault tolerance, and performance
optimization, making it suitable for distributed computing environments
where users need access to files regardless of network connectivity.
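The following is a purely conceptual Python sketch of the cache-and-reintegrate
idea behind disconnected operation; it is not Coda's actual implementation,
and the server object with read() and write() methods is an assumption for the
illustration.

    class DisconnectedCache:
        """Conceptual sketch of Coda-style disconnected operation."""

        def __init__(self, server):
            self.server = server      # assumed object offering read()/write()
            self.cache = {}           # locally cached file contents
            self.pending = []         # updates logged while disconnected
            self.connected = True

        def read(self, path):
            if self.connected:
                self.cache[path] = self.server.read(path)
            return self.cache[path]   # served from the cache when disconnected

        def write(self, path, data):
            self.cache[path] = data
            if self.connected:
                self.server.write(path, data)
            else:
                self.pending.append((path, data))   # log for later reintegration

        def reconnect(self):
            # Reintegration: replay the logged updates against the server.
            self.connected = True
            for path, data in self.pending:
                self.server.write(path, data)
            self.pending.clear()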


In summary, while both NFS and Coda are distributed file systems, they
have different architectures, features, and use cases. NFS is widely used for
network file sharing in traditional client-server environments, while Coda is
designed for distributed computing environments with support for
disconnected operation and transparent access to files across networks.

THE SUN NETWORK FILE SYSTEM (NFS)

 This file system was developed by Sun Microsystems, so it is called
Sun NFS.
 A network file system is a network abstraction over a file system that
allows a remote client to access it over a network in a similar way to a
local file system.
 The Network File System is a client-server application.
 With it, a user can view, store, and update files on a remote computer.
 NFS example:
o If you were using a computer linked to a second computer via NFS,
you could access files on the second computer as if they resided in
a directory on the first computer.

NFS Model

NFS Architecture:

Coda file system:
 Coda is a distributed file system developed as a research project at
CMU. It was designed for mobile clients that disconnect as their
machines move.
 To make disconnections transparent, each client keeps a cached copy of
remote files obtained while connected to the server.
 The Coda distributed file system is a state-of-the-art experimental file
system developed in the group of M. Satyanarayanan at Carnegie
Mellon University.
History of Coda
 Coda was developed at Carnegie Mellon University, starting in 1987, in
the research group of M. Satyanarayanan.
 It descends from an earlier version of the Andrew File System (AFS-2)
and extends it with support for disconnected operation and server
replication.
Importance of Coda
 Coda showed that a distributed file system can remain usable even when
clients are disconnected from the servers, which made it influential for
later work on mobile and weakly connected computing.
The Advantages of a distributed system
 Resource sharing: Distributed systems allow multiple users and
programmers to share resources. Computing resources, such as
processing power, memory, and storage, can be efficiently utilized and
shared across the system, resulting in resource allocation optimization.
Disadvantages of Coda
 Coda cannot handle highly concurrent, fine-granularity data access.
 Coda has very simple conflict handling: during a conflict the first
update always goes through and subsequent updates are lost or rolled
back.

Dr. S.Sivakumar, Assistant Professor, PG Department of Computer Science, C.Mutlur, Chidambaram.
