


Q1. Differentiate between a traditional file system and a modern database system. Describe the properties of a database and the advantages of a DBMS.

Traditional File Systems vs. Modern Database Management Systems

The traditional file system is the approach that was followed before the advent of the DBMS, i.e. it is the older way; the database management system is the modern approach that has replaced it. The main differences are:

1. Data definition. In traditional file processing, the data definition is part of the application program and works only with that specific application. In a DBMS, the data definition is part of the DBMS itself and is independent of any particular application.

2. Design-driven structure. File systems are design driven: they require a design or coding change whenever a new kind of data occurs. E.g.: if a traditional employee master file has Emp_name, Emp_id, Emp_addr, Emp_design, Emp_dept and Emp_sal, and we want to insert one more column, Emp_Mob, this requires a complete restructuring of the file or a redesign of the application code, even though all the data except that one column is the same. In a DBMS, one extra column (attribute) can be added without any difficulty; at most, minor coding changes in the application program may be required.

3. Redundancy. A traditional file system keeps redundant (duplicate) information in many locations, which might result in loss of data consistency. E.g.: employee names might exist in separate files such as a Payroll master file and an Employee Benefit master file. If an employee changes his or her last name, the name might be changed in the payroll master file but not in the Employee Benefit master file. In a DBMS, redundancy is eliminated to the maximum extent if the database is properly defined.

4. Data scattering. In a file system, data is scattered across various files, and each of these files may be in a different format, making it difficult to write new application programs to retrieve the appropriate data. In a DBMS this problem is completely solved.

5. Security. In a file system, security features must be coded into the application program itself. In a DBMS, little security coding is required, because most security requirements are taken care of by the DBMS.

Hence, a database management system is the software that manages a database, and is responsible for its storage, security, integrity, concurrency, recovery and access.


The DBMS has a data dictionary, referred to as the system catalog, which stores data about everything it holds, such as names, structures, locations and types. This data is also referred to as meta data.

Properties of a Database

The following are the important properties of a database:

1. A database is a logical collection of data having some implicit meaning. If the data are not related, then it is not a proper database. E.g. a student studying in class II who obtained 5th rank:

Stud_name   Class      Rank obtained
Vijetha     Class II   5th

2. A database consists of both the data and a description of the database structure and constraints. E.g.:

Field Name   Type            Description
Stud_name    Character       It is the student's name
Class        Alphanumeric    It is the class of the student

3. A database can have any size and varying complexity. If we consider an employee database, the records holding the name, address, designation and salary of each employee may be few in number, each with a simple structure. E.g.:

Emp_name   Emp_id   Emp_addr                                          Emp_desig           Emp_sal
Prasad     100      Shubhodaya, Katariguppe Big, BSK II stage,        Project Leader      40000
                    Bangalore
...        ...      #165, 4th main, Chamrajpet, Bangalore             Software engineer   ...
...        ...      ... Towers, ... Bangalore                         Lecturer            ...
Peter      103      Syndicate house, Manipal                          IT executive        15000

Like this there may be any number of records.

4. The DBMS is a general-purpose software system that facilitates the process of defining, constructing and manipulating databases for various applications.

5. A database provides insulation between programs and data, through data abstraction. Data abstraction allows programs to work with the data of interest regardless of how the data is physically structured.

6. The data in a database is used by a variety of users for a variety of purposes. E.g.: in a hospital database management system, the view of the patient database used at the reception differs from the view of the same data used by the doctor. The data appear separate to the different users, but are in fact stored in a single database. This property gives multiple views of the database.

7. A multi-user DBMS must allow the data to be shared by multiple users simultaneously. For this purpose the DBMS includes concurrency control software to ensure that updates made to the database by several users at the same time are applied correctly. This property supports multi-user transaction processing.

Advantages of using a DBMS
1. Redundancy is reduced.
2. Data located on a server can be shared by clients.
3. Integrity (accuracy) can be maintained.
4. Security features protect the data from unauthorized access.


5. A modern DBMS supports internet-based applications.
6. In a DBMS, the application program and the structure of the data are independent.
7. Consistency of data is maintained.
8. A DBMS supports multiple views. Since a DBMS has many users, each of whom may use it for a different purpose, a user may view and manipulate only the portion of the database relevant to that purpose.
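Several of these advantages can be seen in miniature with an embedded SQL engine. The sketch below, using Python's built-in sqlite3 and a hypothetical two-row EMP table (the table and its rows are invented for illustration), shows how a view presents a restricted portion of the shared data to one class of users, hiding the salary column:

```python
import sqlite3

# Hypothetical employee table, used only for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (emp_id INTEGER PRIMARY KEY, emp_name TEXT, emp_sal INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [(100, "Prasad", 40000), (103, "Peter", 15000)])

# A view exposes only a portion of the database: users of emp_public
# can see ids and names but never salaries.
conn.execute("CREATE VIEW emp_public AS SELECT emp_id, emp_name FROM emp")
rows = conn.execute("SELECT * FROM emp_public ORDER BY emp_id").fetchall()
print(rows)  # [(100, 'Prasad'), (103, 'Peter')]
```

The same base table can back many such views, one per class of users, which is exactly the "multiple views" property described above.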

Q2. What are the disadvantages of sequential file organization? How do you overcome them? What are the advantages and disadvantages of dynamic hashing?

In sequential file organization, the records of the file are stored one after another, both physically and logically. That is, the record with sequence number 16 is located just after the 15th record. A record of a sequential file can only be accessed by reading all the previous records.

The records are distinguished from one another using the record length declared in the associated FD statement of the FILE-SECTION. For example, if the record structure the programmer has declared is 52 bytes, blocks of 52 bytes of data (records) are assumed to be placed one after another in the file, and if the programmer is reading the data sequentially, every READ statement brings 52 bytes into memory. If the file contains, say, 52-byte records, but the programmer tries to read it with a program that has declared 40-byte records (i.e. the total length of the FD structure is 40 bytes), the program will certainly read some information into memory, but after the first READ statement it will bring in meaningless pieces of records and start processing physical records that contain logically meaningless data. It is the programmer's responsibility to take care of the record sizes in files. You must be careful when declaring record structures for files: any mistake you make in record sizes will cause your program to read or write erroneous information. This is especially dangerous if the file contents are being altered (changed, updated).

Since the records are simply appended to each other when building SEQUENTIAL files, you end up with a stream of bytes. If this stream does not contain any "Carriage Return/Line Feed" (CR/LF) control characters, the whole file will appear as a single line of characters and will be impossible to process with regular text editors. As you should know by now, text editors are good at reading, writing and modifying text files; they assume that a file consists of lines separated from each other by CR/LF pairs.

COBOL has a special type of sequential file organization, called LINE SEQUENTIAL ORGANIZATION, which places a CR/LF pair at the end of each record while adding records to a file and expects such a pair while reading. LINE SEQUENTIAL files are much easier to use while developing programs, because you can always use a simple text editor to see the contents of your sequential file and trace or debug your program. Please note that LINE SEQUENTIAL files carry two extra characters for each record; for files with millions of records, this might use up a significant amount of disk space.

SEQUENTIAL files have only one ACCESS MODE, "sequential access", so you need not specify an ACCESS MODE in the SELECT statement. Typical SELECT statements for SEQUENTIAL files are:

    SELECT MYFILE ASSIGN TO DISK "MYFILE.DAT"
        ORGANIZATION IS SEQUENTIAL.

    SELECT MYFILE-2 ASSIGN TO DISK "C:\DATADIR\MYFILE2.TXT"
        ORGANIZATION IS LINE SEQUENTIAL.

In the FILE-SECTION, you must provide an FD block for each file; hence for a sequential file you could have something like:

FD MYFILE.


01 MYFILE-REC.
   02 M-NAMES    PIC X(16).
   02 M-SURNAME  PIC X(16).
   02 M-BIRTHDATE.
      03 M-BD-YEAR   PIC 9999.
      03 M-BD-MONTH  PIC 99.
      03 M-BD-DAY    PIC 99.

Note: you must NOT provide record fields for the two extra CR/LF bytes in the record descriptions of LINE SEQUENTIAL files. Once you declare the file to be LINE SEQUENTIAL, these two extra bytes are automatically taken into consideration and added for all new records appended to the file.

The disadvantages of sequential file organization are:

1. It is NOT possible to delete records of a sequential file. If you no longer want a specific record kept in the file, all you can do is modify the contents of the record so that it contains some special values that your program will recognize as deleted (remember to open the file in I-O mode and REWRITE the record).

2. The file can only be processed sequentially. If you need to read record number N, you must first read the previous N-1 records. This is especially bad for programs that make frequent searches in the file.

3. To locate a desired record we must use linear search (or binary search, if the file is sorted), which results in many I/O operations and a number of unnecessary comparisons.

To overcome these disadvantages, hashing techniques are used. In a hashing technique, or direct file organization, the key value is converted into an address by performing some arithmetic manipulation on the key value, which provides very fast access to records.
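The fixed-length record behaviour described above can be sketched in a few lines of Python (an in-memory buffer stands in for the disk file; the 52-byte record size matches the FD example in the text, and the record values are invented for illustration):

```python
import io

RECORD_SIZE = 52  # matches the 52-byte FD record declared in the text

def write_records(f, records):
    # Records are simply appended to each other: a stream of bytes
    # with no CR/LF separators between them (plain SEQUENTIAL, not LINE SEQUENTIAL).
    for r in records:
        f.write(r.ljust(RECORD_SIZE).encode("ascii"))

def read_record(f, n):
    # To reach record number n we must read past the n-1 records
    # before it; there is no way to jump directly to the nth record.
    f.seek(0)
    for _ in range(n - 1):
        f.read(RECORD_SIZE)
    return f.read(RECORD_SIZE).decode("ascii").rstrip()

buf = io.BytesIO()
write_records(buf, ["Prasad", "Usha", "Peter"])
print(read_record(buf, 3))  # Peter
```

Reading with a wrong RECORD_SIZE would shift every subsequent read off its record boundary, which is exactly the misaligned-FD danger described above.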

Let us consider a hash function h that maps the key value k to the value h(k). The value h(k) is used as an address.


The basic terms associated with hashing techniques are:

1. Hash table: an array holding the addresses of records.
2. Hash function: the transformation of a key into the corresponding location or address in the hash table (it can be defined as a function that takes a key as input and transforms it into a hash table index).
3. Hash key: if R is a record, its key hashes into a value called the hash key.

The different hashing techniques are:
Internal hashing
Dynamic hashing
Extendible hashing
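The three terms can be made concrete with a minimal sketch (the table size, division-remainder hash function and sample keys are all assumptions made for illustration; collisions are resolved here by chaining):

```python
TABLE_SIZE = 7  # hash table: an array of bucket slots

def h(key):
    # Hash function: arithmetic manipulation of the key value
    # (here, division-remainder) giving a hash table index.
    return key % TABLE_SIZE

table = [[] for _ in range(TABLE_SIZE)]  # each slot chains colliding records

def insert(key, record):
    table[h(key)].append((key, record))

def lookup(key):
    # Direct access: only the one bucket h(key) is examined,
    # instead of a linear scan over the whole file.
    for k, rec in table[h(key)]:
        if k == key:
            return rec
    return None

insert(101, "Prasad")
insert(108, "Peter")   # 108 % 7 == 101 % 7 == 3: a collision, chained in slot 3
print(lookup(108))     # Peter
```

Note how keys 101 and 108 hash to the same slot: the hash key (3) is shared, but the chain still distinguishes the records.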

Dynamic Hashing Technique

A major drawback of static hashing is that the address space is fixed, so it is difficult to expand or shrink the file dynamically. In dynamic hashing, the access structure is built on the binary representation of the hash value, and the number of buckets is not fixed (as in regular hashing) but grows or diminishes as needed.

The file can start with a single bucket. Once that bucket is full and a new record is inserted, the bucket overflows and is split into two buckets. The records are distributed among the two buckets based on the value of the first (leftmost) bit of their hash values: records whose hash values start with a 0 bit are stored in one bucket, and those whose hash values start with a 1 bit are stored in the other.

At this point, a binary tree structure called a directory is built. The directory has two types of nodes:

1. Internal nodes: these guide the search; each has a left pointer corresponding to a 0 bit and a right pointer corresponding to a 1 bit.
2. Leaf nodes: each leaf node holds a bucket address.

If a bucket overflows, for example when a new record inserted into the bucket for records whose hash values start with 10 causes an overflow, then all records whose hash values start with 100 are placed in the first split bucket, and the second bucket contains those whose hash values start with 101. The levels of the binary tree can be expanded dynamically.
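The split-on-overflow behaviour can be sketched as follows. This is a simplified model under stated assumptions: the directory is a flat map from binary prefixes to buckets rather than a real binary tree, hash values are supplied directly as bit strings, the bucket capacity is 2, and the sketch does not handle the rare case where a split itself immediately overflows again:

```python
BUCKET_CAPACITY = 2

# Directory maps a binary prefix of the hash value to a bucket (a list).
# The file starts with a single bucket, reached by the empty prefix.
directory = {"": []}

def find_prefix(bits):
    # Follow the bits of the hash value until a leaf (bucket) is found.
    for i in range(len(bits) + 1):
        if bits[:i] in directory:
            return bits[:i]
    raise KeyError(bits)

def insert(bits, record):
    p = find_prefix(bits)
    directory[p].append((bits, record))
    if len(directory[p]) > BUCKET_CAPACITY:
        # Overflow: split the bucket in two. Records whose next hash bit
        # is 0 go to one new bucket, those whose next bit is 1 to the other.
        old = directory.pop(p)
        directory[p + "0"], directory[p + "1"] = [], []
        for b, r in old:
            directory[find_prefix(b)].append((b, r))

insert("000", "e1"); insert("010", "e2"); insert("101", "e3")
print(sorted(directory))  # ['0', '1'] after the first split
```

Only the overflowing bucket's records are redistributed during a split, which is the "minor reorganization" advantage listed below.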

Advantages of dynamic hashing:
1. The main advantage is that splitting causes only minor reorganization, since only the records in the one overflowing bucket are redistributed to the two new buckets.
2. The space overhead of the directory table is negligible.
3. The main advantage of extendible hashing is that performance does not degrade as the file grows. The main space saving is that no buckets need to be reserved for future growth; buckets can be allocated dynamically.

Disadvantages:
1. The index table can grow rapidly and become too large to fit in main memory. When part of the index table is stored on secondary storage, extra accesses are required.
2. The directory must be searched before accessing the bucket itself, resulting in two block accesses instead of the one access of static hashing.
3. A disadvantage of extendible hashing is that it involves an additional level of indirection.

Q3. What is a relationship type? Explain the difference among a relationship instance, a relationship type and a relationship set.

Relationships: In the real world, items have relationships to one another. E.g.: A book is published by a particular publisher. The association or relationship that exists between the entities relates data items to each other in a meaningful way. A relationship is an association between entities. A collection of relationships of the same type is called a relationship set.


A relationship type R is a set of associations between entity types E1, E2, ..., En; mathematically, R is a set of relationship instances ri.

E.g.: consider a relationship type WORKS_FOR between two entity types, EMPLOYEE and DEPARTMENT, which associates each employee with the department the employee works for. Each relationship instance ri in WORKS_FOR associates one employee entity and one department entity, namely the entities that participate in ri. In this example, employees e1, e3 and e6 work for department d1, e2 and e4 work for d2, and e5 and e7 work for d3. The relationship type R is the set of all its relationship instances.

Some instances of the WORKS_FOR relationship

Degree of a relationship type: the number of entity types that participate in the relationship. A unary relationship exists when an association is maintained within a single entity type.

A binary relationship exists when two entities are associated.


A ternary relationship exists when three entities are associated.

Degree of relationship type

Constraints on Relationship Types

Relationship types usually have certain constraints that limit the possible combinations of entities that may participate in a relationship instance. E.g.: the company may have a rule that each employee must work for exactly one department. The two main types of constraints are cardinality ratios and participation constraints.

The cardinality ratio specifies the number of entities to which another entity can be associated through a relationship set. The mapping cardinality must be one of the following.

One-to-one: an entity in A is associated with at most one entity in B, and vice versa.

An employee can manage only one department, and a department has only one manager.


One-to-many: an entity in A is associated with any number of entities in B. An entity in B, however, can be associated with at most one entity in A.

Each department can be related to numerous employees, but an employee can be related to only one department.

Many-to-one: an entity in A is associated with at most one entity in B. An entity in B, however, can be associated with any number of entities in A. For example, many depositors deposit into a single account.

Many-to-many: an entity in A is associated with any number of entities in B, and an entity in B is associated with any number of entities in A.

An employee can work on several projects, and several employees can work on a project.

Participation Constraints: there are two ways an entity can participate in a relationship, i.e. two types of participation.

1. Total: the participation of an entity set E in a relationship set R is said to be total if every entity in E participates in at least one relationship in R. For example, every employee must work for a department, so the participation of EMPLOYEE in WORKS_FOR is total.
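In a relational schema, a cardinality ratio like the one-to-many department/employee case is typically enforced with a foreign key. The sketch below (using SQLite; the table and column names are assumptions for illustration) models it: each employee row points to at most one department, while a department may be referenced by any number of employees, and the NOT NULL on the foreign key approximates total participation of EMPLOYEE in WORKS_FOR:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE department (dno INTEGER PRIMARY KEY, dname TEXT)")
# The foreign key dno gives the many-to-one direction: at most one
# department per employee; NOT NULL makes participation total.
conn.execute("""CREATE TABLE employee (
    eno INTEGER PRIMARY KEY, ename TEXT,
    dno INTEGER NOT NULL REFERENCES department(dno))""")
conn.execute("INSERT INTO department VALUES (1, 'd1')")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [(1, 'e1', 1), (3, 'e3', 1), (6, 'e6', 1)])
n = conn.execute("SELECT COUNT(*) FROM employee WHERE dno = 1").fetchone()[0]
print(n)  # 3 employees related to the one department d1
```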


Some instances of the WORKS_FOR relationship

Total participation is sometimes called existence dependency.

2. Partial: If only some entities in E participate in relationship in R, the participation of entity set E in relationship R is said to be partial.

Some instances of the WORKS_FOR relationship

We do not expect every employee to manage a department, so the participation of EMPLOYEE in the MANAGES relationship type is partial.

Q4. What is SQL? Discuss.


SQL stands for Structured Query Language, and it is used for programming the database. The history of SQL began in an IBM laboratory in San Jose, California, where SQL was developed in the late 1970s. It is a non-procedural language, meaning that SQL describes what data to retrieve, delete or insert, rather than how to perform the operation. It is the standard command set used to communicate with an RDBMS.

A SQL query is not necessarily a question to the database; it can be a command to do one of the following:
1. Create or delete a table.
2. Insert, modify or delete rows.
3. Search several rows for specified information and return the result in order.
4. Modify security information.

SQL statements can be grouped into the following categories:
1. DDL (Data Definition Language)
2. DML (Data Manipulation Language)
3. DCL (Data Control Language)
4. TCL (Transaction Control Language)

DML (Data Manipulation Language): the DML statements are used to alter the database tables in some way. The UPDATE, INSERT and DELETE statements alter existing rows in a table, insert new records into a table, or remove one or more records from a table.
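The DDL and DML categories can be exercised end to end with SQLite (the EMP table and its rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# DDL: define the relation schema with CREATE TABLE.
conn.execute("CREATE TABLE emp (emp_id INTEGER PRIMARY KEY, emp_name TEXT, emp_sal INTEGER)")
# DML: INSERT new rows, UPDATE an existing row, DELETE an unwanted row.
conn.execute("INSERT INTO emp VALUES (100, 'Prasad', 40000)")
conn.execute("INSERT INTO emp VALUES (103, 'Peter', 15000)")
conn.execute("UPDATE emp SET emp_sal = 45000 WHERE emp_id = 100")
conn.execute("DELETE FROM emp WHERE emp_id = 103")
rows = conn.execute("SELECT emp_id, emp_sal FROM emp").fetchall()
print(rows)  # [(100, 45000)]
```

DCL statements (GRANT/REVOKE) are not supported by SQLite, which has no user accounts; they are shown below in Oracle's SQLDBA syntax instead.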


DCL (Data Control Language): the DCL statements are used to grant permissions to users, revoke permissions from users, and lock certain permissions. E.g.:

    SQLDBA> GRANT ALL ON EMP TO PUBLIC;
    SQLDBA> GRANT SELECT, UPDATE ON EMP TO L.Suresh;
    SQLDBA> GRANT ALL ON EMP TO Akash WITH GRANT OPTION;

REVOKE takes privileges away from one or more tables or views. E.g.:

    SQLDBA> REVOKE UPDATE, DELETE ON EMP FROM L.Suresh;
    SQLDBA> REVOKE ALL ON EMP FROM Akash;
    SQLDBA> REVOKE IMPORT FROM Akash;

TCL (Transaction Control Language): used to control transactions, e.g. COMMIT.

DDL (Data Definition Language): the DDL statements provide commands for defining relation schemas, i.e. for creating tables, indexes, sequences etc., and commands for dropping, altering and renaming objects.

SQL* commands: this subsection discusses frequently used commands in the SQL environment. For example, if your SQL commands are saved in a file (typically created in a text editor), you can execute this file using the "at" (@) command. There are a number of such commands:


@<file name> Runs the command file stored in <filename>

DATA TYPES IN ORACLE 8i SQL

The table below lists the data types allowed in Oracle:

DATA TYPE         DESCRIPTION
CHAR(size)        Fixed-length character data. Max = 2000
VARCHAR2(size)    Variable-length character data. Max = 4000
DATE              Date; valid range is from Jan 1, 4712 B.C. to Dec 31, 4712 A.D.
BLOB              Binary large object. Max = 4 GB
CLOB              Character large object. Max = 4 GB
BFILE             Pointer to a binary OS file
LONG              Character data of variable size. Max = 2 GB
LONG RAW          Raw binary data; otherwise the same as LONG
NUMBER            Numbers. Max size = 40 digits
NUMBER(size, d)   Numbers; range = 1.0E-130 to 9.9E125
INTEGER           Same as NUMBER; size/d cannot be specified
SMALLINT          Same as NUMBER
DECIMAL           Same as NUMBER; size/d cannot be specified
FLOAT             Same as NUMBER

Q5. What is normalization? Discuss the various normal forms.

Introduction to Normalization

In Unit 8 you learnt how to create a database using SQL. In this unit we will study how to normalize the data in the database. Normalization matters because any application ultimately depends on its data structures: if the data structures are poorly designed, the application starts from a poor foundation, and a lot more work is required to create a useful and efficient application.

Normalization is the formal process for deciding which attributes should be grouped together in a relation. It serves as a tool for validating and improving the logical design, so that the logical design avoids unnecessary duplication of data, i.e. it eliminates redundancy and promotes integrity. In the normalization process we analyze and decompose complex relations into smaller, simpler, well-structured relations.

Normal Forms Based on Primary Keys

First Normal Form (1NF): a relation schema R is in first normal form if every attribute of R takes only single, atomic values. Equivalently, the intersection of each row and column contains one and only one value. To transform an un-normalized table (a table that contains one or more repeating groups) into first normal form, we identify and remove the repeating groups within the table.


E.g. consider the DEPT relation:

D.Name   D.No   D.Location
R&D      5      {England, London, Delhi}
HRD      4      {Bangalore}

Each department can have a number of locations, so this relation is not in first normal form: D.Location is not an atomic attribute, since its domain contains multiple values. The technique to achieve first normal form is to remove the attribute D.Location, which violates 1NF, and place it into a separate relation DEPT_LOCATION.
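The removal of the repeating group can be sketched in a few lines (plain Python lists stand in for the relations; the decomposition keeps D.No as the link between DEPT and DEPT_LOCATION):

```python
# Un-normalized: D.Location holds a set of values per department.
dept = [
    {"dname": "R&D", "dno": 5, "dlocation": ["England", "London", "Delhi"]},
    {"dname": "HRD", "dno": 4, "dlocation": ["Bangalore"]},
]

# 1NF: keep only atomic attributes in DEPT, and move the multivalued
# attribute into DEPT_LOCATION, one row per single location value.
dept_1nf = [{"dname": d["dname"], "dno": d["dno"]} for d in dept]
dept_location = [(d["dno"], loc) for d in dept for loc in d["dlocation"]]
print(dept_location)
# [(5, 'England'), (5, 'London'), (5, 'Delhi'), (4, 'Bangalore')]
```

Every cell in both resulting relations now holds exactly one value, which is the 1NF requirement stated above.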

Functional dependency: the concept of functional dependency was introduced by E. F. Codd in 1970 during the emergence of the definitions of the three normal forms. A functional dependency is a constraint between two sets of attributes in a relation of a database. Given a relation R, a set of attributes X in R is said to functionally determine another attribute Y in R (written X -> Y) if and only if each value of X is associated with exactly one value of Y. X is called the determinant set and Y the dependent attribute.

For e.g.: consider the example of the STUDENT_COURSE database.

STUDENT_COURSE


In the STUDENT_COURSE database, the student id (Sid) does not uniquely identify a tuple and therefore cannot be a primary key. Similarly, the course id (Cid) cannot be a primary key. But the combination (Sid, Cid) uniquely identifies a row in STUDENT_COURSE. Therefore (Sid, Cid) is the primary key, which uniquely determines Sname, address, course and marks, the attributes dependent on the primary key.

Second Normal Form (2NF)

Second normal form is based on the concept of full functional dependency. A relation is in second normal form if every non-prime attribute A in R is fully functionally dependent on the primary key of R.


Figure 9.2: 2NF and 3NF: (a) normalizing EMP_PROJ into 2NF relations

(b) Normalizing EMP_DEPT into 3NF relations

A partial functional dependency is a functional dependency in which one or more non-key attributes are functionally dependent on part of the primary key. It creates redundancy in the relation, which results in anomalies when the table is updated.

Third Normal Form (3NF)

This is based on the concept of transitive dependency. We should design relational schemas so that there are no transitive dependencies, because they lead to update anomalies. A functional dependency X -> Y in a relation schema R is a transitive dependency if there is a set of attributes Z, which is neither a key nor a subset of any key of R, such that both X -> Z and Z -> Y hold. The dependency SSN -> Dmgr is transitive through Dnum in the EMP_DEPT relation, because SSN -> Dnum and Dnum -> Dmgr hold and Dnum is neither a key nor a subset (part) of the key.
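The transitive chain can be demonstrated with a few hypothetical EMP_DEPT tuples (the SSNs, department numbers and manager names below are invented for illustration; only the attributes SSN, Dnum and Dmgr from the text are used):

```python
# Hypothetical EMP_DEPT tuples (SSN, Dnum, Dmgr) illustrating the
# transitive dependency SSN -> Dnum -> Dmgr described above.
emp_dept = [
    ("111", 5, "Wong"),
    ("222", 5, "Wong"),
    ("333", 4, "Zelaya"),
]

# SSN -> Dnum and Dnum -> Dmgr each hold: every determinant value
# maps to exactly one dependent value in the data.
ssn_to_dnum = {s: d for s, d, _ in emp_dept}
dnum_to_dmgr = {d: m for _, d, m in emp_dept}

# Composing the two dictionaries derives SSN -> Dmgr transitively,
# without ever storing that dependency directly.
derived = {s: dnum_to_dmgr[d] for s, d in ssn_to_dnum.items()}
print(derived["111"])  # Wong
```

The repetition of "Wong" for both employees of department 5 is exactly the redundancy that decomposing EMP_DEPT into E1 (SSN, Dnum) and E2 (Dnum, Dmgr) removes.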


According to Codd's definition, a relation schema R is in 3NF if it satisfies 2NF and no non-prime attribute is transitively dependent on the primary key. The EMP_DEPT relation is not in 3NF; we can normalize it by decomposing it into E1 and E2.

Note: transitivity is the mathematical property stating that if a relation holds between the first value and the second value, and between the second value and the third value, then it holds between the first and the third value.

Example 2: consider a relation schema LOTS, which describes parcels of land for sale in the various counties of a state. Suppose there are two candidate keys: Property_ID and {County_name, Lot#}; that is, lot numbers are unique only within each county, but Property_ID numbers are unique across counties for the entire state. Based on the two candidate keys Property_ID and {County_name, Lot#}, we know that the functional dependencies FD1 and FD2 hold. Suppose the following two additional functional dependencies hold in LOTS:

FD3: County_name -> Tax_rate
FD4: Area -> Price

FD3 says that the tax rate is fixed for a given county; FD4 says that the price of a lot is determined by its area. The LOTS relation schema violates 2NF, because Tax_rate is partially dependent on the candidate key {County_name, Lot#}. Due to this, we decompose LOTS into two relations, LOTS1 and LOTS2. LOTS1 still violates 3NF, because Price is transitively dependent on the candidate key of LOTS1 via the attribute Area; hence we decompose LOTS1 into LOTS1A and LOTS1B.

A relation schema R is in 3NF when every non-prime attribute satisfies both conditions below:
1. It is fully functionally dependent on every key of R.
2. It is non-transitively dependent on every key of R.

Fourth Normal Form (4NF)

Multivalued dependencies arise from first normal form, which prohibits an attribute from having a set of values. If we have two or more multivalued, independent attributes in the same relation, we get into a situation where we have to repeat every value of one of the attributes with every value of the other attributes, to keep the relation state consistent and to maintain the independence among the attributes involved. This constraint is specified by a multivalued dependency.

Consider a table EMPLOYEE with the attributes Name, Project and Hobby. An employee can work on more than one project and can have more than one hobby. The employee's projects and hobbies are independent of one another, and a given project or hobby is associated with any number of employees. To keep the relation state consistent, we must have a separate tuple to represent every combination of an employee's projects and hobbies.

The drawback of the EMPLOYEE relation is redundant data, and this redundancy leads to update anomalies. For example, if we wish to add one more project, Sybase, that employee B is handling, then we must add two more tuples, one for each hobby. The hobby values Reading and Movies are repeated with each value of Project. This redundancy is undesirable. One way to remove it is to decompose the EMPLOYEE relation into two relations, PROJECT and HOBBY.


Now, if we wish to insert Sybase into the PROJECT relation, only one new entry is required.

Definition (MVD): a relation R(X, Y, Z) is said to have a multivalued dependency X ->> Y if the set of Y values for a given (X, Z) pair does not depend on Z but only on X. We then say "X multidetermines Y" or "Y is multidependent on X". Such a dependency is called a multivalued dependency (MVD) and is represented by a double arrow. We can also define an MVD as follows: for each value of X there is a set of values for Y and a set of values for Z, and the sets of values for Y and Z are independent of each other. So wherever two independent one-to-many relationships (A:B and A:C) are mixed in the same relation, a multivalued dependency arises. Multivalued dependencies are removed using the fourth normal form.

EMPLOYEE
NAME   PROJECT     HOBBY
A      Microsoft   Cricket
A      Oracle      Music
A      Microsoft   Music
A      Oracle      Cricket
B      INTEL       Movies
B      Sybase      Reading
B      INTEL       Reading
B      Sybase      Movies

Decomposed relations to reduce redundancy:

PROJECT
NAME   PROJECT
A      Microsoft
A      Oracle
B      INTEL
B      Sybase

HOBBY
NAME   HOBBY
A      Cricket
A      Music
B      Movies
B      Reading
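That this decomposition loses no information can be checked directly: the natural join of PROJECT and HOBBY on NAME reconstructs the original EMPLOYEE relation exactly, because of the MVDs Name ->> Project and Name ->> Hobby. A sketch using Python sets of tuples:

```python
# EMPLOYEE relation from the text: projects and hobbies are independent,
# so every combination of an employee's projects and hobbies appears.
employee = {("A", p, h) for p in ("Microsoft", "Oracle") for h in ("Cricket", "Music")}
employee |= {("B", p, h) for p in ("INTEL", "Sybase") for h in ("Movies", "Reading")}

# 4NF decomposition into PROJECT(NAME, PROJECT) and HOBBY(NAME, HOBBY).
project = {(n, p) for n, p, _ in employee}
hobby = {(n, h) for n, _, h in employee}

# Natural join on NAME reconstructs EMPLOYEE exactly: the decomposition
# is lossless.
rejoined = {(n, p, h) for n, p in project for m, h in hobby if n == m}
print(rejoined == employee)  # True
```

Note the size difference: EMPLOYEE needs 8 tuples, while PROJECT and HOBBY together need only 8 values spread over two 4-tuple relations, and adding one project for B now costs a single row.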

Fourth Normal Form (4NF): the definition of 4NF is violated when a relation has undesirable multivalued dependencies; we identify such relations and decompose them into 4NF relations.

Definition: a relation R is in 4NF if, for every MVD A ->> B that holds over R, one of the following is true:
1. B ⊆ A or A ∪ B = R (i.e. the MVD is trivial), or
2. A is a super key of R.

The EMPLOYEE relation is not in 4NF because of the non-trivial MVDs (the Project and Hobby attributes of EMPLOYEE are independent of each other) and because NAME is not a super key of EMPLOYEE. To bring this relation into 4NF, you decompose EMPLOYEE into PROJECT and HOBBY.


Q6. What do you mean by shared locks and exclusive locks? Describe briefly the two-phase locking protocol.

Shared locks: a shared lock is used for read-only operations, i.e. operations that do not change or update data, such as a SELECT statement. Shared locks allow concurrent transactions to read (SELECT) a data item; no other transaction can modify the data while shared locks exist on it. Shared locks are released as soon as the data has been read.

Exclusive locks: exclusive locks are used for data-modification operations such as UPDATE, DELETE and INSERT. An exclusive lock ensures that multiple updates cannot be made to the same resource simultaneously: no other transaction can read or modify data locked by an exclusive lock. Exclusive locks are held until the transaction commits or rolls back, since they are used for write operations.

There are three locking operations: read_lock(X), write_lock(X) and unlock(X). A lock associated with an item X, LOCK(X), now has three possible states: "read-locked", "write-locked" or "unlocked". A read-locked item is also called share-locked, because other transactions are allowed to read the item, whereas a write-locked item is called exclusive-locked, because a single transaction exclusively holds the lock on the item. Each record in the lock table has four fields: <data item name, LOCK, no_of_reads, locking_transaction(s)>. The value (state) of LOCK is either read-locked, write-locked or unlocked.

read_lock(X):
B: if LOCK(X) = "unlocked"
   then begin
      LOCK(X) <- "read-locked";
      no_of_reads(X) <- 1
   end
   else if LOCK(X) = "read-locked"
   then no_of_reads(X) <- no_of_reads(X) + 1
   else begin
      wait (until LOCK(X) = "unlocked" and the lock manager wakes up the transaction);
      goto B
   end;

write_lock(X):
B: if LOCK(X) = "unlocked"
   then LOCK(X) <- "write-locked"
   else begin
      wait (until LOCK(X) = "unlocked" and the lock manager wakes up the transaction);
      goto B
   end;

unlock(X):
if LOCK(X) = "write-locked"
then begin
   LOCK(X) <- "unlocked";
   wakeup one of the waiting transactions, if any
end
else if LOCK(X) = "read-locked"
then begin
   no_of_reads(X) <- no_of_reads(X) - 1;
   if no_of_reads(X) = 0
   then begin
      LOCK(X) <- "unlocked";
      wakeup one of the waiting transactions, if any
   end
end;

The Two-Phase Locking Protocol

The two-phase locking protocol is a discipline that lets transactions access shared resources safely: it guarantees serializability of the resulting schedule (deadlocks can still occur and must be handled separately). The protocol consists of two phases.

1. Growing phase: the transaction may acquire locks but may not release any lock. This phase is therefore also called the resource-acquisition phase.
2. Shrinking phase: the transaction may release locks but may not acquire any new lock. The modification of data and the release of locks are grouped together to form this second phase.

In the beginning, a transaction is in the growing phase and acquires each lock as it is needed. As soon as it releases a lock, the transaction enters the shrinking phase and can issue no further lock requests.
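The read_lock/write_lock/unlock operations above amount to a readers-writer lock. A minimal sketch in Python, assuming a single lock manager condition variable for the "wait until unlocked and wake up" behaviour (this sketch does not address writer starvation or deadlock detection):

```python
import threading

class ItemLock:
    """Shared (read) / exclusive (write) lock on one data item X,
    mirroring the read_lock/write_lock/unlock operations in the text."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0        # no_of_reads(X)
        self._writer = False     # True when X is write-locked

    def read_lock(self):
        with self._cond:
            while self._writer:           # wait while exclusive-locked
                self._cond.wait()
            self._readers += 1            # X is now share-locked

    def write_lock(self):
        with self._cond:
            while self._writer or self._readers:  # wait until unlocked
                self._cond.wait()
            self._writer = True           # X is now exclusive-locked

    def unlock(self):
        with self._cond:
            if self._writer:
                self._writer = False
            else:
                self._readers -= 1
            self._cond.notify_all()       # wake up waiting transactions

lk = ItemLock()
lk.read_lock(); lk.read_lock()   # two transactions may share a read lock
lk.unlock(); lk.unlock()
lk.write_lock()                  # a single writer then holds it exclusively
lk.unlock()
```

Under two-phase locking, a transaction would issue all its read_lock/write_lock calls before its first unlock (growing phase), then only unlock calls afterwards (shrinking phase).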