ASSIGNMENT NO: 4

DBMS

SUBMITTED BY: NAME: KUMAR ASHISH ROLL: 25 SEC: C1801

SUBMITTED TO: CHAVI MAM

PART-A

Question 1: Why is there a need for concurrency protocols when we have the serializability concept?

Answer: In concurrent execution, several transactions run at the same time, which increases the number of transactions completed per unit of time, i.e., the throughput of the system. Waiting (delay) time may also reduce, and we can execute a short transaction before a long one. Serializability only tells us whether a given schedule is equivalent to some serial schedule; on its own it does not give us a way to produce such schedules while transactions execute concurrently. This is the reason why concurrency control protocols are preferred over relying on the serializability concept alone.

Question 2: "Thomas write rule modifies the time-stamp ordering protocol". Do you agree? Justify your answer.

Answer: Yes, it modifies the time-stamp ordering protocol. Suppose Ti comes before Tj, so TS(Ti) < TS(Tj), and when Ti attempts to write data item Q we find that Q has already been written by Tj, i.e., TS(Ti) < W-timestamp(Q). Then Ti is attempting to write an obsolete value of Q. Under the basic time-stamp ordering protocol the write(Q) by Ti would be rejected and Ti rolled back. Under the Thomas write rule, rather than rolling back Ti as the time-stamp ordering protocol would have done, this write operation can simply be ignored: any transaction Tl with TS(Tl) > TS(Tj) must read the value of Q written by Tj rather than the value written by Ti, and any transaction Tk with TS(Tk) < TS(Tj) that tries to read Q would be rolled back anyway and so never needs Ti's value. Otherwise this protocol is the same as the time-stamp ordering protocol, so it lets more transactions succeed instead of rolling them back.
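As a rough illustration of the rule above, the write check could be sketched as below (this is only my sketch; the names DataItem, r_ts, w_ts and thomas_write are assumed for illustration and are not part of any standard API):

# Minimal sketch of the Thomas write rule. Each data item Q is assumed to
# carry a read timestamp r_ts and a write timestamp w_ts.

class RollbackTransaction(Exception):
    """Raised when the writing transaction must be rolled back and restarted."""

class DataItem:
    def __init__(self, value):
        self.value = value
        self.r_ts = 0   # largest timestamp of any transaction that read Q
        self.w_ts = 0   # largest timestamp of any transaction that wrote Q

def thomas_write(ts_ti, q, new_value):
    """Attempt write(Q) by transaction Ti whose timestamp is ts_ti."""
    if ts_ti < q.r_ts:
        # A younger transaction has already read Q, so the value Ti wants to
        # produce was needed earlier; Ti must be rolled back.
        raise RollbackTransaction()
    if ts_ti < q.w_ts:
        # Ti is writing an obsolete value of Q: a younger transaction has
        # already written it. The basic time-stamp ordering protocol would
        # roll Ti back; the Thomas write rule simply ignores the write.
        return
    # Otherwise behave exactly like the time-stamp ordering protocol.
    q.value = new_value
    q.w_ts = ts_ti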

Question 3: Why do we term the validation based protocol an optimistic protocol? Explain the protocol with concurrent transactions.

Answer: It is called optimistic concurrency control because each transaction executes fully in the hope that all will go well during validation. Execution of a transaction Ti is done in three phases:

1. Read and execution phase: transaction Ti reads from the database but writes only to temporary local variables.
2. Validation phase: transaction Ti performs a "validation test" to determine whether its local writes can be applied to the database without violating serializability.
3. Write phase: if Ti is validated, the updates are applied to the database; otherwise, Ti is rolled back.

The three phases of concurrently executing transactions can be interleaved, but each transaction must go through the three phases in that order. This protocol is useful and gives a greater degree of concurrency when the probability of conflicts is low, because the serializability order is not pre-decided and relatively few transactions have to be rolled back.

Example with two concurrent transactions (one possible interleaving):

T1                T2
read(a)
                  read(a)
                  a := a + 455
read(b)
                  read(b)
                  b := b - 590
(validate)
display(a + b)
                  (validate)
                  write(a)
                  write(b)
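The validation test itself can be sketched roughly as follows (my own sketch; the field names read_set, write_set, start, validation and finish are assumed here, and timestamps are taken to be assigned at validation time):

# Minimal sketch of the validation test in the optimistic protocol.

class Txn:
    def __init__(self):
        self.read_set = set()    # items read during the read phase
        self.write_set = set()   # items written locally during the read phase
        self.start = None        # time the read phase started
        self.validation = None   # time the validation phase started
        self.finish = None       # time the write phase finished

def validate(tj, older_txns):
    """Return True if Tj passes validation against every Ti that
    validated before it."""
    for ti in older_txns:
        # Case 1: Ti finished its write phase before Tj even started,
        # so no conflict is possible.
        if ti.finish is not None and ti.finish < tj.start:
            continue
        # Case 2: Ti finished writing before Tj validates, and nothing
        # Ti wrote was read by Tj, so the schedule is still serializable.
        if (ti.finish is not None and ti.finish < tj.validation
                and not (ti.write_set & tj.read_set)):
            continue
        return False   # otherwise Tj fails validation and is rolled back
    return True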

PART-B

Question 4: Do we need recovery in a database management system? Justify your answer with the techniques which you will use.

Answer: Recovery is an essential part of a database management system, without which data safety cannot be assured. We have lots of valuable data stored; if there were no recovery mechanism, a failure could leave the database inconsistent and we might lose that data. So yes, we need recovery in our database. We generally use two techniques for this purpose:

1. Log-based recovery
2. Shadow paging

Log-based recovery: we maintain a log of our transactions, with the help of which we can recover the database after a failure. It is performed in two ways:

a) Deferred log-based recovery: we maintain the log, and only after the transaction is partially committed are its writes applied to the database; if a failure occurs before that, the database is left unaltered.
b) Immediate log-based recovery: the log is maintained and writes are applied to the database immediately, but the log also keeps the old value of each item; if a failure occurs, the old values are used to recover.
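A very small sketch of the immediate-modification style is given below (my own illustration with an in-memory "database" and log, not a real DBMS API); in the deferred style the database update inside log_write would simply be postponed until after commit:

# Minimal sketch of log-based recovery with old and new values in the log.

database = {}
log = []   # records: ("write", txn, item, old, new) or ("commit", txn)

def log_write(txn, item, new_value):
    log.append(("write", txn, item, database.get(item), new_value))
    database[item] = new_value          # immediate modification

def log_commit(txn):
    log.append(("commit", txn))

def recover():
    """Undo every uncommitted transaction, then redo every committed one."""
    committed = {rec[1] for rec in log if rec[0] == "commit"}
    # Undo phase: scan backwards, restoring old values written by losers.
    for rec in reversed(log):
        if rec[0] == "write" and rec[1] not in committed:
            _, _, item, old, _ = rec
            if old is None:
                database.pop(item, None)
            else:
                database[item] = old
    # Redo phase: scan forwards, re-applying new values written by winners.
    for rec in log:
        if rec[0] == "write" and rec[1] in committed:
            _, _, item, _, new = rec
            database[item] = new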

Shadow paging: this recovery scheme does not require the use of a log. The idea is to maintain two page tables during the life of a transaction: the current page table and the shadow page table. Each table entry points to a page on the disk. When the transaction starts, both tables are identical. The shadow page table is never changed during the life of the transaction, while the current page table is updated with each write operation. When the transaction is committed, the shadow page table entry becomes a copy of the current page table entry and the disk block with the old data is released. The scheme is very similar to the paging schemes used by the operating system for memory management. If the shadow page table is stored in nonvolatile memory and a system crash occurs, the shadow page table is simply copied back to the current page table. This guarantees that the shadow page table points to the database pages corresponding to the state of the database prior to any transaction that was active at the time of the crash, making aborts automatic.

Question 5: Assume that the Railway reservation system is implemented using an RDBMS. What concurrency control measures does one have to take in order to avoid concurrency-related problems in this system? How can deadlock be avoided in this system?

Answer: Among the various concurrency control protocols, we can use the multiple granularity locking protocol to avoid inconsistency in the Railway reservation system: when a seat is being booked, locks are applied at different levels, and the lock will not allow another person to book a seat which is already booked. For deadlock, the simplest way is for each transaction to lock all the data it needs before it starts executing, requesting the locks in a fixed order; this prevents a cyclic wait and so avoids deadlock in the system.
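The deadlock-avoidance idea can be sketched very simply at the seat level (my own simplified sketch; a real reservation system and the full multiple granularity protocol would also lock coarser levels such as the train or coach, and the seat names here are made up):

# Minimal sketch: every booking takes all its seat locks up front, always in
# the same sorted order, so two bookings can never wait on each other in a cycle.

import threading

seat_locks = {seat: threading.Lock() for seat in ["C1-12", "C1-13", "S2-40"]}
booked = {}   # seat -> passenger who holds it

def book_seats(passenger, seats):
    ordered = sorted(seats)                 # fixed global lock order
    for seat in ordered:
        seat_locks[seat].acquire()          # lock the data before executing
    try:
        if any(seat in booked for seat in ordered):
            return False                    # some seat is already booked
        for seat in ordered:
            booked[seat] = passenger
        return True
    finally:
        for seat in reversed(ordered):
            seat_locks[seat].release()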

Question 6: "Shadow paging uses the concept of the paging scheme (in operating system)". Do you agree? Justify your answer.

Answer: Yes. In an operating system, paging divides memory into two kinds of fixed-size units:

a) Frames (blocks of physical memory)
b) Pages (blocks of logical memory)

and a page table maps pages to frames; this is where the shadow paging mechanism is inherited from. Shadow paging likewise divides the database into fixed-size blocks called pages and maintains two page tables:

a) Current page table
b) Shadow page table

Each page table entry points to a page (block) on the disk, just as an operating system page table entry points to a frame. By this we can very well say that shadow paging uses the concept of the paging scheme in the operating system.
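The two-page-table mechanism can be sketched as below (my own illustration with in-memory dictionaries standing in for the disk and the page tables; a real implementation works on disk blocks):

# Minimal sketch of shadow paging: the shadow table is frozen at transaction
# start, the current table is updated copy-on-write, and commit swaps them.

import copy

disk = {0: "page-A v1", 1: "page-B v1"}        # block number -> contents
next_block = 2

shadow_table = {0: 0, 1: 1}                    # page number -> disk block
current_table = copy.deepcopy(shadow_table)    # identical when the txn starts

def write_page(page_no, new_contents):
    """Put the new version in a fresh block and repoint only the current
    page table; the shadow page table is never touched."""
    global next_block
    disk[next_block] = new_contents
    current_table[page_no] = next_block
    next_block += 1

def commit():
    """On commit the current table becomes the new shadow table; the blocks
    holding the old data could now be released."""
    global shadow_table
    shadow_table = copy.deepcopy(current_table)

def recover_after_crash():
    """After a crash, discard the current table: the shadow table still points
    at the pre-transaction state, so aborting is automatic."""
    global current_table
    current_table = copy.deepcopy(shadow_table)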