ON
DISTRIBUTED FILE SYSTEM
INSTITUTE OF ENGINEERING AND TECHNOLOGY BUNDELKHAND UNIVERSITY
Submitted By:
Abhishek Gaur, Akanksha Singh, Akanksha Singh Maurya, Arjun Yadav, Saumya Katiyar
CHAPTER 12
DISTRIBUTED FILE SYSTEM
INTRODUCTION
Performance transparency:
Client programs should continue to perform well while the load on the
service varies within a specified range.
Scaling transparency:
Increases in the size of storage and in network size should be
transparent.
CONCURRENCY PROPERTIES:
REPLICATION PROPERTIES:
EFFICIENCY:
The goal for a distributed file system is usually performance
comparable to that of a local file system.
FILE SERVICE ARCHITECTURE
A file service architecture has three divisions of
responsibilities:
Flat file service: Concerned with implementing operations on the
contents of files. UFIDs (Unique File Identifiers) are used to refer
to files. UFIDs also differentiate between directories and files.
Directory Service: For mapping between text names of
files and their UFIDs.
Client Module: A client module runs in each client
computer. It integrates and extends operations of flat file service
and directory service under a single interface.
FLAT FILE SERVICE OPERATIONS
DIRECTORY SERVICE OPERATIONS
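To illustrate the division of responsibility between the two services, here is a minimal in-memory sketch in Python. The class and method names are illustrative only; in a real system these would be RPC operations (Read, Write, Create, Lookup, AddName, and so on), not local method calls.

```python
import uuid


class FlatFileService:
    """Sketch of a flat file service: files are addressed only by
    UFID; all naming is left to the directory service."""

    def __init__(self):
        self._files = {}  # UFID -> file contents

    def create(self):
        ufid = uuid.uuid4().hex          # stand-in for a real UFID
        self._files[ufid] = bytearray()
        return ufid

    def write(self, ufid, offset, data):
        f = self._files[ufid]
        f[offset:offset + len(data)] = data

    def read(self, ufid, offset, n):
        return bytes(self._files[ufid][offset:offset + n])

    def delete(self, ufid):
        del self._files[ufid]


class DirectoryService:
    """Sketch of a directory service: maps text names to UFIDs."""

    def __init__(self):
        self._names = {}  # name -> UFID

    def add_name(self, name, ufid):
        self._names[name] = ufid

    def lookup(self, name):
        return self._names[name]

    def un_name(self, name):
        del self._names[name]
```

A client module would combine the two, e.g. resolve `"notes.txt"` via `lookup()` and then call `read()` on the returned UFID, presenting the pair as one file-system interface.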
FILE GROUPS
A collection of files that can be located on any server or moved
between servers while maintaining the same names. A file cannot
change its group.
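Because a file never changes its group, the group identifier can be embedded in the UFID itself: locating a file then only requires a mapping from group to server, and moving a whole group between servers updates that one table without invalidating any UFID. A sketch under that assumption (field names and the server table are hypothetical):

```python
from collections import namedtuple

# Hypothetical UFID layout: the file-group identifier is part of
# the UFID, so the responsible server can be found from the UFID alone.
UFID = namedtuple("UFID", ["group_id", "file_number"])

# Hypothetical group-location table; moving a file group between
# servers changes only this mapping, never the UFIDs themselves.
group_location = {7: "server1.example.org"}


def server_for(ufid):
    """Resolve which server currently holds the file's group."""
    return group_location[ufid.group_id]
```

For example, after reassigning `group_location[7]` to a new server, every existing UFID in group 7 resolves to the new location unchanged.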
- Similar to a UNIX file system
Note: The file system mounted at /usr/students in the client is actually the
subtree located at /export/people in Server 1; the file system mounted at
/usr/staff in the client is actually the subtree located at /nfs/users in Server 2.
AUTOMOUNTER
NFS client catches attempts to access 'empty' mount points
and routes them to the Automounter
- Automounter has a table of mount points and multiple
candidate servers for each
- it sends a probe message to each candidate server and then
uses the mount service to mount the file system at the first
server to respond
Keeps the mount table small
Provides a simple form of replication for read-only file
systems
E.g. if there are several servers with identical copies of /usr/lib,
then each server has a chance of being mounted at some clients.
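The probe-and-mount policy above can be sketched as follows. This is a simplified sequential version (the real automounter probes candidates concurrently and takes the first reply); `probe` and `mount` are hypothetical callbacks standing in for the NFS probe message and the mount-service call.

```python
# Automounter table: mount point -> candidate servers holding
# identical (read-only) copies of the file system.
mount_table = {
    "/usr/lib": ["server1", "server2", "server3"],
}


def automount(path, probe, mount):
    """Mount `path` from the first candidate server that responds
    to a probe; raise if no candidate answers."""
    for server in mount_table[path]:
        if probe(server):                 # first responder wins
            return mount(server, path)
    raise OSError("no candidate server responded for " + path)
```

For a client where `server1` is unreachable, the probe fails over to `server2`, giving the simple read-only replication described above.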
KERBERIZED NFS
Kerberos protocol is too costly to apply on each file access request
Kerberos is used in the mount service:
- to authenticate the user's identity
- User's UserID and GroupID are stored at the server with the
client's IP address.
For each file request:
- The UserID and GroupID sent must match those stored at the
server
- IP addresses must also match
This approach has some problems:
- can't accommodate multiple users sharing the same client computer
- all remote filestores must be mounted each time a user logs in
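The per-request check reduces to a table lookup, which is why it is so much cheaper than running Kerberos on every access. A sketch (field names are illustrative; in the real protocol the registration happens only after Kerberos has authenticated the user at mount time):

```python
# Mount-time registrations: client IP -> (UserID, GroupID),
# recorded after Kerberos authentication in the mount service.
mount_registrations = {}


def register_mount(client_ip, uid, gid):
    """Record the authenticated credentials for this client."""
    mount_registrations[client_ip] = (uid, gid)


def check_request(client_ip, uid, gid):
    """Per-file-request check: credentials AND IP must match
    what was stored at mount time."""
    return mount_registrations.get(client_ip) == (uid, gid)
```

The sketch also exhibits the first problem listed above: a second user on the same client IP fails the check, because only one (UserID, GroupID) pair is stored per client.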
NFS OPTIMIZATION - SERVER CACHING
Similar to UNIX file caching for local files:
- pages (blocks) from disk are held in a main memory buffer
cache until the space is required for newer pages. Read-ahead
and delayed-write optimizations.
- For local files, writes are deferred to next sync event (30
second intervals)
-Works well in local context, where files are always accessed
through the local cache, but in the remote case it doesn't offer
necessary synchronization guarantees to clients.
NFS OPTIMIZATION - SERVER CACHING (CONT.)
- delayed commit - pages are held in the cache until a commit()
call is received for the relevant file. This is the default mode used
by NFS v3 clients. A commit() is issued by the client whenever a
file is closed.
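The delayed-commit behaviour can be sketched as a two-level store: writes land in a volatile memory cache with no disk I/O, and only commit() moves a file's dirty pages to stable storage. The class below is an illustration of that policy, not the NFS server implementation.

```python
class ServerCache:
    """Sketch of delayed commit: writes stay in a volatile cache
    until commit() flushes them to stable storage."""

    def __init__(self):
        self.cache = {}   # (ufid, block_no) -> data, volatile
        self.disk = {}    # stable storage

    def write(self, ufid, block_no, data):
        # No disk I/O here: the write is only buffered.
        self.cache[(ufid, block_no)] = data

    def commit(self, ufid):
        # Issued when the client closes the file: flush all of
        # this file's buffered blocks to stable storage.
        for key in [k for k in self.cache if k[0] == ufid]:
            self.disk[key] = self.cache.pop(key)
```

The trade-off is visible in the sketch: between write() and commit(), the data exists only in server memory, so a server crash in that window loses it.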
NFS OPTIMIZATION - CLIENT CACHING
Server caching does nothing to reduce RPC traffic between client
and server
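NFS clients therefore cache blocks themselves and validate each cache entry with the standard timestamp freshness condition (as commonly described for NFS): an entry is presumed valid if it was validated recently, and otherwise a getattr RPC is needed to compare modification times with the server.

```python
def cache_entry_valid(T, Tc, t, Tm_client, Tm_server):
    """Standard NFS client-cache validity condition.

    T         - current time
    Tc        - time the entry was last validated
    t         - freshness interval (typically a few seconds for files)
    Tm_client - file's last-modification time as recorded by the client
    Tm_server - file's last-modification time at the server

    The first test, (T - Tc) < t, needs no server contact at all;
    only when it fails must the client issue a getattr RPC to
    obtain Tm_server for the second test.
    """
    return (T - Tc) < t or Tm_client == Tm_server
```

Choosing t trades consistency against RPC traffic: a larger freshness interval means fewer getattr calls but a longer window in which stale data may be read.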
How does AFS gain control when an open or close system call
referring to a file in the shared file space is issued by a client?
How does AFS ensure that the cached copies of files are up-to-
date when files may be updated by several clients?
DISTRIBUTION OF PROCESSES IN THE ANDREW FILE SYSTEM
AFS is implemented as two software components that run
as UNIX processes: Vice (the server) and Venus (the client manager).
- NFS enhancements
- AFS enhancements
- Improvements in storage organization
- New design approaches
NFS ENHANCEMENTS