
Homework Title / No. : _______HOMEWORK 3__________ Course Code : __301_______

Course Instructor : ______Sheena__Pahuja______________ Course Tutor (if applicable) :

Date of Allotment : ___________ Date of submission : _____________

Student’s Roll No.____RTB012A04_____________ Section No. : ______TB012________

I declare that this assignment is my individual work. I have not copied from any
other student’s work or from any other source except where due
acknowledgment is made explicitly in the text, nor has any part been written for
me by another person.

Student’s Signature : __VEZOTO__

Evaluator’s comments:

Marks obtained : ___________ out of ______________________

Content of Homework should start from this page only:

Homework 3 CAP301: Database Management System

Part A

Q1: Illustrate the concept of deadlock. What can be the possible preventive measures for deadlock handling?

Deadlocking occurs when two user processes hold locks on separate objects and each process is trying to acquire a lock on the object that the other process holds. When this happens, SQL Server identifies the problem and ends the deadlock by automatically choosing one process and aborting the other, allowing the surviving process to continue. The aborted transaction is rolled back and an error message is sent to the user of the aborted process. Generally, the transaction that requires the least amount of overhead to roll back is the one that is aborted.

Most well-designed applications, after receiving a deadlock message, will resubmit the aborted transaction, which most likely can now run successfully. If the application has not been written to trap deadlock errors and automatically resubmit the aborted transaction, users may become confused about what is happening when they receive deadlock error messages on their computers. As you might imagine, this process, if it happens often on your server, can drag down performance; deadlocks use up SQL Server's resources, especially CPU power, wasting them unnecessarily.

Here are some tips on how to avoid deadlocking on your SQL Server:
• Ensure the database design is properly normalized.
• Have the application access server objects in the same order each time.
• During transactions, don't allow any user input. Collect it before the transaction begins.
• Avoid cursors.
• Keep transactions as short as possible. One way to accomplish this is to reduce the number of round trips between your application and SQL Server by using stored procedures or keeping transactions within a single batch. Another way of reducing the time a transaction takes to complete is to make sure you are not performing the same reads over and over again: if your application does need to read the same data more than once, cache it by storing it in a variable or an array, and then re-read it from there, not from SQL Server.
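The advice to trap deadlock errors and resubmit the aborted transaction can be sketched as a retry loop. The code below is a minimal, hypothetical illustration in Python: `DeadlockError` and the `transfer` callable stand in for a real driver call and for SQL Server error 1205 ("chosen as deadlock victim"), which a database library would surface as an exception.

```python
class DeadlockError(Exception):
    """Stands in for SQL Server error 1205 (deadlock victim)."""

def run_with_deadlock_retry(transaction, max_retries=3):
    """Resubmit a transaction that was aborted as a deadlock victim.

    `transaction` is a callable performing one complete unit of work;
    it is safe to re-run because the aborted attempt was rolled back.
    """
    for attempt in range(1, max_retries + 1):
        try:
            return transaction()
        except DeadlockError:
            if attempt == max_retries:
                raise  # give up and surface the error to the caller

# Demo: a transaction that is chosen as the deadlock victim once,
# then succeeds on resubmission.
attempts = []

def transfer():
    attempts.append(1)
    if len(attempts) < 2:
        raise DeadlockError("transaction was deadlock victim")
    return "committed"

print(run_with_deadlock_retry(transfer))  # committed
```

The user never sees the deadlock message: the first attempt is rolled back by the server, and the resubmitted attempt runs to completion.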

• Reduce lock time. Try to develop your application so that it grabs locks at the latest possible time and then releases them at the very earliest time.
• If appropriate, reduce lock escalation by using the ROWLOCK or PAGLOCK hints.
• Consider using the NOLOCK hint to prevent locking if the data being locked is not modified often.
• If appropriate, use as low an isolation level as possible for the user connection running the transaction.
• Consider using bound connections.

Q2: Why do we emphasize conflict serializability rather than view serializability? Give various reasons in support of your answer.

View-serializability of a schedule is defined by equivalence to a serial schedule (no overlapping transactions) with the same transactions, such that respective transactions in the two schedules read and write the same data values ("view" the same data values).

Conflict-serializability is defined by equivalence to a serial schedule (no overlapping transactions) with the same transactions, such that both schedules have the same sets of respective chronologically-ordered pairs of conflicting operations (same precedence relations of respective conflicting operations).

Operations upon data are read or write (a write: either insert or modify or delete). Two operations are conflicting if they are of different transactions, upon the same datum (data item), and at least one of them is a write. Each such pair of conflicting operations has a conflict type: read-write, write-read, or write-write. The transaction of the second operation in the pair is said to be in conflict with the transaction of the first operation. Schedules consisting of the same transactions can be transformed from one to another by changing the orders of different transactions' operations (different transactions' interleaving), and a schedule's outcome is preserved through order changes between non-conflicting operations, but typically not when conflicting operations change order. Thus, only precedence (time order) in pairs of conflicting (non-commutative) operations matters when checking equivalence to a serial schedule: if a schedule can be transformed into some serial schedule without changing the order of any conflicting operations (changing only the order of non-conflicting ones, while preserving operation order inside each transaction), then the outcome of both schedules is the same, and the schedule is conflict-serializable by definition.

A more general definition of conflicting operations (also for complex operations, which may each consist of several "simple" read/write operations) requires that they be non-commutative (changing their order also changes their combined result). Each such operation needs to be atomic by itself (by proper system support) in order to be considered an operation for a commutativity check. For example, increment and decrement of a counter are both write operations (both modify the counter), but they do not need to be considered conflicting (write-write conflict type) since they are commutative (this is, e.g., already supported in the old IBM IMS "fast path").

The practical reason for emphasizing conflict serializability is that it can be tested efficiently: build a precedence graph over the conflicting operation pairs and check it for cycles, which takes polynomial time, whereas deciding view-serializability of a schedule is NP-complete. Since every conflict-serializable schedule is also view-serializable, enforcing the stricter but cheaply checkable condition is safe, and practical schedulers therefore enforce conflict serializability.
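The precedence-graph test for conflict serializability is easy to sketch: one node per transaction, and an edge Ti → Tj for every pair of conflicting operations where Ti's operation comes first; the schedule is conflict-serializable iff the graph is acyclic. The encoding of a schedule below is this example's own convention, not a standard API.

```python
def conflict_serializable(schedule):
    """schedule: list of (txn, op, item) in time order, op in {"r", "w"}.
    Returns True iff the precedence graph has no cycle."""
    edges = {}  # txn -> set of txns it must precede
    for i, (t1, op1, x1) in enumerate(schedule):
        for t2, op2, x2 in schedule[i + 1:]:
            # conflicting: different txns, same item, at least one write
            if t1 != t2 and x1 == x2 and "w" in (op1, op2):
                edges.setdefault(t1, set()).add(t2)

    # depth-first search for a cycle
    def has_cycle(node, visiting, done):
        visiting.add(node)
        for nxt in edges.get(node, ()):
            if nxt in visiting:
                return True
            if nxt not in done and has_cycle(nxt, visiting, done):
                return True
        visiting.discard(node)
        done.add(node)
        return False

    done = set()
    return not any(has_cycle(t, set(), done) for t in list(edges) if t not in done)

# Every conflict goes T1 -> T2, so this schedule is conflict-serializable:
good = [("T1", "r", "A"), ("T1", "w", "A"), ("T2", "r", "A"), ("T2", "w", "A")]
# Conflicts force both T1 -> T2 (on A) and T2 -> T1 (on B): a cycle.
bad = [("T1", "r", "A"), ("T2", "w", "A"), ("T2", "w", "B"), ("T1", "w", "B")]
print(conflict_serializable(good), conflict_serializable(bad))  # True False
```

The same check phrased for view-serializability has no comparably cheap structure, which is exactly why conflict serializability is the condition enforced in practice.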

Q3: Justify the use of the ACID properties of transactions for any database.

Atomicity

Atomicity requires that database modifications must follow an "all or nothing" rule. Each transaction is said to be atomic: if one part of the transaction fails, the entire transaction fails and the database state is left unchanged. An atomic transaction cannot be subdivided and must be processed in its entirety or not at all. Atomicity means that users do not have to worry about the effect of incomplete transactions.

Transactions can fail for several kinds of reasons:
1. Hardware failure: a disk drive fails, preventing some of the transaction's database changes from taking effect.
2. System failure: the user loses their connection to the application before providing all necessary information.
3. Database failure: e.g., the database runs out of room to hold additional data.
4. Application failure: the application attempts to post data that violates a rule that the database itself enforces, such as attempting to insert a duplicate value in a column.

It is critical that the database management system maintain the atomic nature of transactions in spite of any application, DBMS (Database Management System), operating system or hardware failure.

Consistency

Consistency ensures the truthfulness of the database: only valid data will be written to the database. If, for some reason, a transaction is executed that violates the database's consistency rules, the entire transaction could be rolled back to the pre-transactional state, or it would be equally valid for the DBMS to take some patch-up action to get the database into a consistent state. Thus, the consistency property ensures that any transaction the database performs will take it from one consistent state to another.

The consistency property does not say how the DBMS should handle an inconsistency other than to ensure the database is clean at the end of the transaction. For example, if a DBMS allows fields of a record to act as references to another record, then consistency implies the DBMS must enforce referential integrity: by the time any transaction ends, each and every reference in the database must be valid. If a transaction consisted of an attempt to delete a record referenced by another, each of the following mechanisms would maintain consistency:
 abort the transaction, rolling back to the consistent, prior state;
 delete all records that reference the deleted record (this is known as cascade delete); or
 nullify the relevant fields in all records that point to the deleted record.
These are examples of propagation constraints; some database systems allow the database designer to specify which option to choose when setting up the schema for a database.

The consistency rule applies only to integrity rules that are within the DBMS's scope. Thus, if the database schema says that a particular field is for holding integer numbers, the DBMS could decide to reject attempts to put fractional values there, or it could round the supplied values to the nearest whole number: both options maintain consistency. Application developers are responsible for ensuring application-level consistency, over and above that offered by the DBMS. Thus, if a rule unknown to the DBMS is violated, the database is still in a consistent state as far as the DBMS is concerned.

Isolation

Isolation refers to the requirement that other operations cannot access data that has been modified during a transaction that has not yet completed. The question of isolation arises in the case of concurrent transactions (multiple transactions occurring at the same time). Each transaction must remain unaware of other concurrently executing transactions, except that one transaction may be forced to wait for the completion of another transaction that has modified data that the waiting transaction requires.
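The "all or nothing" rule can be observed directly in any transactional engine. The sketch below uses Python's built-in sqlite3 module (chosen here purely for illustration) to show a failed transfer being rolled back so the database state is left unchanged.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES ('A', 50), ('B', 50)")
conn.commit()

def transfer(amount):
    """Move `amount` from A to B atomically; any error undoes both updates."""
    try:
        conn.execute(
            "UPDATE account SET balance = balance - ? WHERE name = 'A'", (amount,)
        )
        # Simulate the second half of the transaction failing:
        raise RuntimeError("crash before B is credited")
    except RuntimeError:
        conn.rollback()  # atomicity: the partial change to A is undone

transfer(10)
balances = dict(conn.execute("SELECT name, balance FROM account"))
print(balances)  # {'A': 50, 'B': 50} -- state unchanged, A + B still 100
```

Without the rollback, A would be debited while B is never credited, which is exactly the incomplete-transaction effect atomicity shields users from.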

If the isolation system did not exist, the data could be put into an inconsistent state. This could happen, for example, if one transaction is in the process of modifying data but has not yet completed, and a second transaction reads and modifies that uncommitted data. If the first transaction fails and the second one succeeds, that violation of transactional isolation will cause data inconsistency.

Due to performance and deadlocking concerns with multiple competing transactions, some modern databases allow dirty reads, which are a way to bypass some of the restrictions of the isolation system. A dirty read means that a transaction is allowed to read, but not modify, the uncommitted data from another transaction. Another way to provide isolation for read transactions is via MVCC, which gets around the blocking-lock issue of reads blocking writes: the read is done on a prior version of the data, not on the data being locked for modification, thus providing the necessary isolation between transactions.

Durability

Durability is the ability of the DBMS to recover the committed transaction updates against any kind of system failure (hardware or software). It is the DBMS's guarantee that once the user has been notified of a transaction's success, the transaction will not be lost: the transaction's data changes will survive system failure, and all integrity constraints have been satisfied, so the DBMS won't need to reverse the transaction. Many DBMSs implement durability by writing transactions into a transaction log that can be reprocessed to recreate the system state right before any later failure; a transaction is deemed committed only after it is entered in the log. Durability does not imply a permanent state of the database: a subsequent transaction may modify data changed by a prior transaction without violating the durability principle.

Examples

The following examples are used to further explain the ACID properties. In these examples, the database has two fields, A and B, in two records, and an integrity constraint requires that the value in A and the value in B must sum to 100.

Atomicity failure: The transaction subtracts 10 from A and adds 10 to B. If it succeeds, it is valid, because the data continues to satisfy the constraint. However, assume that after removing 10 from A, the transaction is unable to modify B. If the database retained A's new value, atomicity and the constraint would both be violated. Atomicity requires that both parts of this transaction complete, or neither.

Consistency failure: Consistency is a very general term that demands the data meet all validation rules that the overall application expects, but to satisfy the consistency property a database system only needs to enforce those rules that are within its scope. In the previous example, one rule was the requirement that A + B = 100. Examples of rules that can be enforced by the database system are that the primary key values of a record uniquely identify that record, that the values stored in fields are of the right type (the schema might require that both A and B are integers, say) and in the right range, and that foreign keys are all valid. Most database systems would not allow a rule such as A + B = 100 to be specified, and so would have no responsibility to enforce it, but they would be able to ensure the values were whole numbers. Validation rules that cannot be enforced by the database system are the responsibility of the application programs using the database.

Isolation failure: To demonstrate isolation, we assume two transactions execute at the same time, each attempting to modify the same data. One of the two must wait until the other completes in order to maintain isolation. Consider two transactions: T1 transfers 10 from A to B, and T2 transfers 10 from B to A. Combined, there are four actions:
 subtract 10 from A
 add 10 to B
 subtract 10 from B
 add 10 to A
If these operations are performed in order, isolation is maintained, although T2 must wait. Consider what happens if T1 fails half-way through: the database eliminates T1's effects, and T2 sees only valid data. By interleaving the transactions, however, the actual order of actions might be: A − 10, B − 10, A + 10, B + 10. Again consider what happens if T1 fails. T1 still subtracts 10 from A; T2 then subtracts 10 from B and adds 10 to A, restoring A to its initial value. Now T1 fails, before it has added 10 to B. What should A's value be? T2 has already changed it. Also, T1 never changed B, so if T2 is allowed to complete, B's value will be 10 too low, leaving an invalid database. This is known as a write-write failure, because two transactions attempted to write to the same data field.

Durability failure: Assume that a transaction transfers 10 from A to B. It removes 10 from A and then adds 10 to B. At this point, a "success" message is sent to the user; however, the changes are still queued in the disk buffer, waiting to be committed to the disk. Power fails and the changes are lost, while the user assumes that the changes have been made.

Part B
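The interleaved isolation failure is easy to reproduce in a toy model. The sketch below (an illustration written for this answer, not code from any DBMS) replays the interleaved order with T1 failing before its second action, and checks the A + B = 100 invariant:

```python
db = {"A": 50, "B": 50}  # invariant: A + B == 100

def run_action(action):
    field, delta = action
    db[field] += delta

# T1 transfers 10 from A to B; T2 transfers 10 from B to A.
t1 = [("A", -10), ("B", +10)]
t2 = [("B", -10), ("A", +10)]

# Interleaved order with no isolation: A-10 (T1), B-10 (T2), A+10 (T2),
# then T1 fails before executing its B+10.
run_action(t1[0])
run_action(t2[0])
run_action(t2[1])
# T1 fails here; its B+10 never runs, and rolling back its A-10 cannot
# be done cleanly because T2 has already rewritten A.

print(db, db["A"] + db["B"] == 100)  # {'A': 50, 'B': 40} False
```

B ends up 10 too low, exactly the write-write failure described above; serializing the two transactions (T2 waiting for T1) would have preserved the invariant.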

Q1: Design an ER schema for any web-based application and draw an ER diagram for that schema. Specify key attributes of each entity type and the constraints on each of its relationship types.

An ER schema for a web-based application is given below. This section explains the components present in the WebML architecture before the present work. A WebML site definition consists of a data schema structured in Entities and Relationships, and a hypertext describing the content of the site in terms of the underlying data model. In the hypertext model, pages and their correlations are specified using WebML's library of graphical primitives: web pages are defined as sets of interconnected units, where a unit is an elementary piece of content corresponding to a parameterised query over the underlying E-R data model.

Figure 1 shows the interaction patterns for using the target applications and services:
1. Interacting with web services: an application like A1 in Figure 1 is an HTML-based shell around the XML syntax specific to the external web service. A1 enables a user to conduct an interaction (continuous lines) with such services.
2. Integrating databases and web services: the application A2 in Figure 1 allows a user to exploit, together, database content, access to external services, and input from human users.
3. Composing web services: the same methodology allows for specifying new web services by composing existing services, as illustrated by service S1. S1 combines database information, local resources, human user interaction, and the remotely developed service (dashed lines) into a new, added-value component, which an external peer p may in turn use (dotted lines).

Figure 1. Interaction patterns for using the target applications and services.

The components present in the WebML architecture before the present work are shown in Figure 2 in regular lines and fonts.

At compile time (Figure 2, top), two artefacts are generated from the graphical site definition: a set of JSP pages, and, for each unit, link and page included in the site design, an XML descriptor including the information needed to instantiate it in the functional web site. The generated pages include customized JSP tags that define the placement of the WebML units into the JSP pages. At runtime (lower part of Figure 2) the site is instantiated and "run": clients issue requests via a browser to the Web server, which in turn calls the JSP engine. The content required by the user is assembled by the WebML runtime component, using the descriptors and the data sources in the Data Layer. This architecture is currently being implemented in the commercial tool WebRatio [6], which supports the WebML model.

Figure 2. Design and execution environments of WebML-specified Web applications.

Q2: Illustrate any two situations where we should prefer shared locks rather than exclusive locks. Discuss.

Shared locks are used for operations that do not change or update data, such as a SELECT statement. Exclusive locks are used for data modification operations, such as UPDATE, INSERT, or DELETE. Shared locks are compatible with other Shared locks or Update locks; Exclusive locks are not compatible with any other lock type.

Let me describe this with a real example. There are four processes, which attempt to lock the same page of the same table. The processes start one after another, so Process1 is the first process, Process2 is the second, and so on:

Process1 : SELECT
Process2 : SELECT
Process3 : UPDATE
Process4 : SELECT

Process1 sets a Shared lock on the page, because there are no other locks on the page. Process2 also sets a Shared lock on the page, because Shared locks are compatible with other Shared locks. Process3 wants to modify data and wants to set an Exclusive lock, but it cannot do so before Process1 and Process2 are finished, because an Exclusive lock is not compatible with the other lock types; so Process3 sets an Update lock. After Process1 and Process2 are finished, Process3 converts its Update lock into an Exclusive lock to modify the data. Process4 wants to set a Shared lock on the page to select data, but it cannot do so before Process3 is finished, so Process4 waits. After Process3 is finished, Process4 sets its Shared lock. Lock starvation occurs when read transactions monopolize a table or page, forcing a write transaction to wait indefinitely; in this example, thanks to the Update lock, there is no lock starvation.

Q3: Cursors that are executed by the Oracle engine for its internal processing are referred to as implicit cursors.

Implicit cursors are automatically created and used by Oracle every time you issue a SELECT statement in PL/SQL. If you use an implicit cursor, Oracle will perform the open, fetch, and close operations for you automatically. Implicit cursors are used in statements that return only one row; if the SQL statement returns more than one row, an error will occur. The Oracle server implicitly opens a cursor to process each SQL statement not associated with an explicitly declared cursor, and PL/SQL allows you to refer to the most recent implicit cursor as the SQL cursor.
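The four-process walkthrough can be traced mechanically with a small compatibility matrix. The sketch below is a simplified model written for this answer (Shared/Update/Exclusive only; a real SQL Server lock manager has more modes), showing why the two readers coexist while the writer and the late reader must wait:

```python
# COMPATIBLE[held][requested]: can `requested` be granted while `held` is held?
COMPATIBLE = {
    "S": {"S": True,  "U": True,  "X": False},
    "U": {"S": True,  "U": False, "X": False},
    "X": {"S": False, "U": False, "X": False},
}

def granted(held, request):
    """A request is granted only if compatible with every currently held lock."""
    return all(COMPATIBLE[h][request] for h in held)

held = []                        # locks currently held on the page
assert granted(held, "S")
held.append("S")                 # Process1 (SELECT): page is free
assert granted(held, "S")
held.append("S")                 # Process2 (SELECT): S is compatible with S

assert not granted(held, "X")    # Process3 (UPDATE) cannot take X yet...
assert granted(held, "U")
held.append("U")                 # ...so it takes a U lock and waits its turn

held = ["U"]                     # Process1 and Process2 finish
held = ["X"]                     # Process3 converts U to X and modifies data

assert not granted(held, "S")    # Process4 (SELECT) must wait for Process3

held = []                        # Process3 finishes
assert granted(held, "S")        # Process4 finally takes its Shared lock
print("trace matches the walkthrough")
```

The Update lock is what prevents starvation here: it lets the writer queue ahead of later readers while still allowing the current readers to finish.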

During the processing of an implicit cursor, Oracle automatically performs the OPEN, FETCH, and CLOSE operations. The process of an implicit cursor is as follows: whenever an SQL statement is executed, any given PL/SQL block issues an implicit cursor, as long as an explicit cursor does not exist for that SQL statement. A cursor is automatically associated with every DML statement (UPDATE, DELETE, and INSERT). All UPDATE and DELETE statements have cursors that identify the set of rows that will be affected by the operation. An INSERT statement requires a place to accept the data that is to be inserted in the database; the implicit cursor fulfills this need. The implicit cursor is thus used to process INSERT, UPDATE, DELETE, and SELECT INTO statements. The most recently opened cursor is called the "SQL%" cursor.

For a long time there have been debates over the relative merits of implicit cursors and explicit cursors. The short answer is that implicit cursors are faster and result in much neater code, so there are very few cases where you need to resort to explicit cursors.
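The single-row rule for implicit cursors (in PL/SQL, SELECT INTO raises NO_DATA_FOUND for zero rows and TOO_MANY_ROWS for more than one) can be mimicked outside Oracle. The helper below is a Python/sqlite3 analogy written for this answer, not an Oracle API; the exception names only echo their PL/SQL counterparts.

```python
import sqlite3

class NoDataFound(Exception):   # analogue of PL/SQL NO_DATA_FOUND
    pass

class TooManyRows(Exception):   # analogue of PL/SQL TOO_MANY_ROWS
    pass

def select_into(conn, sql, params=()):
    """Run a query that must return exactly one row, like SELECT INTO."""
    rows = conn.execute(sql, params).fetchmany(2)  # 2 is enough to detect excess
    if not rows:
        raise NoDataFound(sql)
    if len(rows) > 1:
        raise TooManyRows(sql)
    return rows[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?)", [(1, "Lee"), (2, "Ada")])

print(select_into(conn, "SELECT name FROM emp WHERE id = ?", (1,)))  # ('Lee',)
try:
    select_into(conn, "SELECT name FROM emp")  # two rows -> error
except TooManyRows:
    print("TOO_MANY_ROWS")
```

This mirrors why implicit cursors suit single-row lookups: the engine opens, fetches, and closes for you, and anything other than exactly one row is reported as an error.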