
Homework type/no: _4

Course code: _CSE301_______

Course instructor: _Miss Amandeep Kaur_

Course tutor: __

Date of allotment: _4/11/2010______

Date of submission: 16/11/2010__

Student roll no: __A18__

Section no: __RD2802__

Declaration: I declare that this assignment is my individual work. I have not copied it from any other student’s work or from any other source except where due acknowledgment is made explicitly in the text, nor has it been written for me by another person.

Student’s signature: VARUN KUMAR

Evaluator’s comments:

Marks obtained _____________________ out of ________________________

Content of homework should start from this page only


PART-A

Question 1: What variations are possible on the two-phase locking protocol?

ANS 1: In databases and transaction processing (transaction management), two-phase locking (2PL) is a concurrency control locking protocol, or mechanism, that guarantees serializability. It is also the name of the resulting class (set) of transaction schedules. The protocol utilizes locks that, during a transaction's life, block other transactions from accessing data accessed by the transaction. Locks are applied and removed in two phases:
1. Expanding phase: locks are acquired and no locks are released; the number of locks can only increase.
2. Shrinking phase: locks are released and no locks are acquired.
Common variations are strict 2PL (a transaction holds its exclusive (write) locks until it commits or aborts), rigorous 2PL (a transaction holds all its locks until it commits or aborts), and conservative (static) 2PL (a transaction acquires all its locks before it begins executing).

Question 2: "Thomas write rule modifies the time-stamp ordering protocol." Do you agree? Justify your answer.

ANS 2: Yes, I agree that the Thomas write rule modifies the time-stamp ordering protocol. According to the Thomas write rule, if a more recent transaction has already written the value of an object, then a less recent transaction does not need to have its change written: its obsolete write is simply discarded instead of forcing a rollback. For example, assuming that the timestamp of T1 is less than that of T2, and T2 has already written the object, T1's write is discarded.
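The write test of timestamp ordering with the Thomas write rule applied can be sketched in Python. This is a minimal illustration with made-up names (`DataItem`, `write`), not a full scheduler:

```python
# Sketch of the timestamp-ordering write test with the Thomas write rule.
# Names and record layout are illustrative assumptions, not from any library.

class DataItem:
    def __init__(self):
        self.read_ts = 0    # largest timestamp of any transaction that read the item
        self.write_ts = 0   # largest timestamp of any transaction that wrote the item
        self.value = None

def write(item, ts, value):
    """Attempt a write by the transaction with timestamp `ts`.
    Returns 'rollback', 'ignored' (Thomas write rule), or 'written'."""
    if ts < item.read_ts:
        # A younger transaction has already read the item: the write is too late.
        return "rollback"
    if ts < item.write_ts:
        # A younger transaction has already written the item. Plain timestamp
        # ordering would roll back here; the Thomas write rule instead
        # discards this obsolete write and lets the transaction continue.
        return "ignored"
    item.write_ts = ts
    item.value = value
    return "written"

# T1 (ts=1) tries to write after T2 (ts=2) has already written:
a = DataItem()
print(write(a, 2, "from T2"))   # written
print(write(a, 1, "from T1"))   # ignored (Thomas write rule)
print(a.value)                  # from T2
```

The only change from plain timestamp ordering is the second branch: "rollback" becomes "ignored", which is exactly the modification the answer describes.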

Question 3: "In databases, is there a possibility of deadlocks?" If yes, why? In how many ways can deadlocks be handled?

ANS 3: Yes, there is a possibility of deadlocks in databases. A deadlock is a situation wherein two or more competing actions are each waiting for the other to finish, and thus neither ever does. Deadlock is a common problem in multiprocessing, where many processes share a specific type of mutually exclusive resource known as a software lock or soft lock. Computers intended for the time-sharing and/or real-time markets are often equipped with a hardware lock (or hard lock) which guarantees exclusive access to processes, forcing serialized access. Deadlocks are particularly troubling because there is no general solution to avoid (soft) deadlocks.

Following are the ways by which deadlocks can be handled:
1. PREVENTION
2. AVOIDANCE
3. DETECTION

Prevention
Removing the mutual exclusion condition means that no process may have exclusive access to a resource. This proves impossible for resources that cannot be spooled, and even with spooled resources deadlock could still occur. Algorithms that avoid mutual exclusion are called non-blocking synchronization algorithms.
The "hold and wait" condition may be removed by requiring processes to request all the resources they will need before starting up (or before embarking upon a particular set of operations). However, this advance knowledge is frequently difficult to satisfy and, in any case, is an inefficient use of resources. Another way is to require processes to release all their resources before requesting all the resources they will need. This too is often impractical. (Such algorithms, such as serializing tokens, are known as the all-or-none algorithms.)
A "no preemption" (lockout) condition may also be difficult or impossible to avoid, as a process has to be able to have a resource for a certain amount of time, or the processing outcome may be inconsistent or thrashing may occur. However, inability to enforce preemption may interfere with a priority algorithm. (Note: preemption of a "locked out" resource generally implies a rollback, and is to be avoided, since it is very costly in overhead.) Algorithms that allow preemption include lock-free and wait-free algorithms and optimistic concurrency control.
The circular wait condition may be removed with algorithms that avoid circular waits, such as "disable interrupts during critical sections" and "use a hierarchy to determine a partial ordering of resources" (where no obvious hierarchy exists, even the memory address of resources has been used to determine ordering), as in Dijkstra's solution.

Avoidance
Deadlock can be avoided if certain information about processes is available in advance of resource allocation. For every resource request, the system sees whether granting the request will mean that the system will enter an unsafe state, meaning a state that could result in deadlock. The system then only grants requests that will lead to safe states. In order for the system to be able to figure out whether the next state will be safe or unsafe, it must know in advance at any time the number and type of all resources in existence, available, and requested. One known algorithm that is used for deadlock avoidance is the Banker's algorithm, which requires resource usage limits to be known in advance. However, for many systems it is impossible to know in advance what every process will request. This means that deadlock avoidance is often impossible.

Detection
Detecting the possibility of a deadlock before it occurs is much more difficult and is, in fact, generally undecidable, because the halting problem can be rephrased as a deadlock scenario. However, in specific environments, using specific means of locking resources, deadlock detection may be decidable. In the general case, it is not possible to distinguish between algorithms that are merely waiting for a very unlikely set of circumstances to occur and algorithms that will never finish because of deadlock. Deadlock detection techniques include, but are not limited to, model checking. This approach constructs a finite state model on which it performs a progress analysis and finds all possible terminal sets in the model. These then each represent a deadlock.
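The safety check at the heart of the Banker's algorithm can be sketched in Python. This is a minimal illustration for a single resource type (function name and data layout are assumptions for the example):

```python
# Sketch of the Banker's algorithm safety check (single resource type).
# `available` is the free unit count; each process i has a maximum claim
# maximum[i] and a current allocation allocation[i]. A state is safe if
# every process can finish in SOME order using only what is currently
# free plus what earlier finishers release.

def is_safe(available, allocation, maximum):
    need = [m - a for m, a in zip(maximum, allocation)]
    free = available
    finished = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finished):
            if not done and need[i] <= free:
                free += allocation[i]   # process i finishes, releasing its units
                finished[i] = True
                progressed = True
    return all(finished)                # unsafe if someone can never be satisfied

# 3 free units; processes hold (1, 1, 2) out of maximum claims (4, 2, 5):
print(is_safe(3, [1, 1, 2], [4, 2, 5]))   # True: a safe completion order exists
print(is_safe(1, [1, 1, 2], [4, 2, 5]))   # False: after P1 finishes, no remaining need fits
```

The avoiding system would run a check like this for every request, granting it only when the resulting state is still safe.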

PART-B

Question 4: Compare deferred and immediate database modifications with the help of an example.

ANS 4:
DEFERRED DATABASE MODIFICATION: It ensures transaction atomicity by recording all database modifications in the log, but deferring the execution of all write operations of a transaction until the transaction partially commits.
Log record operations:
<Ti, start>: before Ti starts its execution, this record is written to the log.
<Ti, Xj, V2>: a write operation by Ti results in the writing of a new record, holding the new value V2 of data item Xj, to the log.
<Ti, commit>: when Ti partially commits, this record is written to the log.
If the system crashes or the transaction aborts, the information on the log is simply ignored.
Example: transactions T0 and T1 (T0 executes before T1):
T0: read(A); A := A - 50; write(A); read(B); B := B + 50; write(B)
T1: read(C); C := C - 100; write(C)
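The deferred-modification recovery rule (redo the writes of committed transactions, ignore everything else) can be sketched in Python. Record tuples and initial values here are illustrative assumptions:

```python
# Sketch of deferred-database-modification recovery: a write is installed
# only if its transaction's <Ti, commit> record reached the log before the
# crash; writes of uncommitted transactions are simply ignored.

def recover_deferred(log, db):
    committed = {t for op, t, *rest in log if op == "commit"}
    for op, t, *rest in log:
        if op == "write" and t in committed:
            item, new_value = rest
            db[item] = new_value        # redo: install the logged new value
    return db

log = [
    ("start", "T0"), ("write", "T0", "A", 950), ("write", "T0", "B", 2050),
    ("commit", "T0"),
    ("start", "T1"), ("write", "T1", "C", 600),   # T1 never committed
]
db = {"A": 1000, "B": 2000, "C": 700}
print(recover_deferred(log, db))   # {'A': 950, 'B': 2050, 'C': 700}
```

Because no data item is touched before commit, no old values are needed in the log and no undo is ever performed; this is the key contrast with immediate modification below.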

IMMEDIATE DATABASE MODIFICATION: It allows database modifications to be output to the database while the transaction is still in the active state, i.e., the value of a data item may be changed in the database before the transaction partially commits.
Log record format:
<Ti, Xj, Vold, Vnew>: a write by Ti records both the old value Vold and the new value Vnew of data item Xj.
If the system crashes or the transaction aborts, the old-value field of the log records is used to restore the data items.
Example (the result of the execution of T1 and T2):
Log:                      Database:
<T1, start>
<T1, A, 1000, 900>        A = 900
<T1, B, 2000, 2100>       B = 2100
<T1, commit>
<T2, start>
<T2, C, 3000, 2800>       C = 2800
<T2, commit>

Question 5: Assume that the Railway reservation system is implemented using an RDBMS. What are the concurrency control measures one has to take, in order to avoid concurrency related problems in the above system? How can deadlock be avoided in this system?

ANS 5: The railway lines are divided into several tracks, and each track can be occupied by only one vehicle at a time. Let us consider the set V = {v1, ..., vNV} that collects all the vehicles (i.e., trains, single engines, etc.) travelling over these lines. Each station is described by a resource ri, for i = 1, ..., NS, where NS is the number of stations. Since each station is composed of one or more tracks, a finite capacity C(ri) >= 1 is assigned to each station resource ri. Moreover, each track of the railway network is viewed as a resource that vehicles can acquire, denoted by ri for i = NS + 1, ..., NS + NT, where NT is the overall number of tracks. Each track ri has unit capacity, i.e., C(ri) = 1 for all i = NS + 1, ..., NS + NT, so each track can accommodate only one train at a time. Concurrency related problems are avoided by having each vehicle lock the track or station resource before entering it and release it on leaving; deadlock can then be avoided by a deadlock-avoidance policy over these resources, such as the Banker's algorithm, granting a request only when the resulting allocation state is safe.

Question 6: "Shadow paging uses the concept of paging scheme (in operating system)." Do you agree? Justify your answer.

ANS 6: Yes, shadow paging uses the concept of a paging scheme. The paging is very similar to the paging schemes used by the operating system for memory management. Shadow paging is an alternative to log-based recovery techniques, which has both advantages and disadvantages: it may require fewer disk accesses, but it is hard to extend paging to allow multiple concurrent transactions.
The scheme keeps a pointer to the shadow page table at a fixed (known) location on disk. To commit a transaction:
1. Flush all modified pages in main memory to disk.
2. Output the current page table to disk.
3. Make the current page table the new shadow page table: simply update the pointer to point to the current page table on disk. Once the pointer to the shadow page table has been written, the transaction is committed.
No recovery is needed after a crash: new transactions can start right away, using the shadow page table. Pages not pointed to from the current/shadow page table should be freed (garbage collected).
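The commit sequence above amounts to a copy-on-write page table plus one atomic pointer switch. A minimal in-memory sketch (class and method names are invented for illustration; real systems do this with disk pages):

```python
# Sketch of shadow paging: a transaction updates a copy of the page table
# (copy-on-write); commit is a single atomic update of the root pointer,
# after which the superseded shadow pages become garbage.

class ShadowPagedDB:
    def __init__(self, pages):
        self.shadow = dict(pages)          # page table the fixed root pointer targets
        self.current = None

    def begin(self):
        self.current = dict(self.shadow)   # copy the page table, not the pages

    def write(self, page_no, data):
        self.current[page_no] = data       # only the current table sees the change

    def commit(self):
        # Steps 1-2 (flushing pages and the current table) are implicit here;
        # step 3 is the atomic pointer switch that commits the transaction.
        self.shadow = self.current
        self.current = None

    def crash_and_recover(self):
        # No redo/undo pass is needed: the shadow table is still consistent.
        self.current = None
        return self.shadow

db = ShadowPagedDB({1: "old-1", 2: "old-2"})
db.begin()
db.write(2, "new-2")
print(db.crash_and_recover())   # {1: 'old-1', 2: 'old-2'}  (crash before commit: unchanged)
db.begin()
db.write(2, "new-2")
db.commit()
print(db.shadow)                # {1: 'old-1', 2: 'new-2'}
```

The sketch also shows the stated disadvantage: since there is exactly one root pointer, only one committing transaction can swap it at a time, which is why extending shadow paging to many concurrent transactions is hard.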