
Name: …………………………………………………………………… Roll No: …………………..

ABES Institute of Technology, Ghaziabad


NCS-502: Database Management System
THIRD YEAR – CSE & IT
MODEL TEST PAPER WITH SOLUTION (ODD SEMESTER 2017-18)

Time: [3 Hours] [Total Marks: 100]

GENERAL INSTRUCTIONS:
1. Attempt all parts.
2. Be precise in your answers.
3. Answer the questions considering the word limit.

COURSE OUTCOMES
CO1. To describe basic database concepts and architecture, understand the use of database languages in
implementing data-related systems, model data using ER diagrams, and utilize the concepts of keys while
creating a database.
CO2. To create the structure and operation of the relational data model and construct simple and moderately
advanced database queries using Structured Query Language (SQL).
CO3. To implement and successfully apply logical database design principles, make use of functional
dependencies and database normalization.
CO4. To identify the concept of a database transaction, including concurrency control, backup and recovery
and data object locking and protocols. Describe advanced database topic, such as distributed database
systems.
CO5. To explain the concurrency control mechanisms and locking techniques and analyze the case study of
oracle.

SECTION-A
1. Answer all parts. All parts carry equal marks. Write answer of each part in short.
(In one word to approx. 30 words) (2×10=20)

(a) Define cardinality. (CO2)


Ans. Cardinality refers to the uniqueness of data values contained in a column. A
column with all unique values has high cardinality, while a column with many
duplicate values has low cardinality. Cardinality also sometimes refers to the
relationships between tables, which can be one-to-one, one-to-many, many-to-one or
many-to-many.

(b) Compute the closure (AC)+ of the attribute set AC of a relation R(A,B,C,D,E,F) from the set of
functional dependencies F = { A->B, BC->D, C->E, CD->F }. (CO3)
Ans. Start with (AC)+ = {A, C}. A->B adds B; C->E adds E; now BC is contained, so
BC->D adds D; now CD is contained, so CD->F adds F. Hence (AC)+ = {A, B, C, D, E, F},
so AC determines every attribute of R and is a candidate key of R.
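As an illustration (not part of the prescribed answer), the attribute-closure computation above can be sketched in Python:

```python
def closure(attrs, fds):
    """Compute the attribute closure of `attrs` under functional dependencies.

    fds is a list of (lhs, rhs) pairs, each a set of attributes.
    """
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If the closure already contains the whole left side,
            # every right-side attribute is functionally determined.
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# F = { A->B, BC->D, C->E, CD->F }
F = [({'A'}, {'B'}), ({'B', 'C'}, {'D'}),
     ({'C'}, {'E'}), ({'C', 'D'}, {'F'})]
print(sorted(closure({'A', 'C'}, F)))  # ['A', 'B', 'C', 'D', 'E', 'F']
```

The loop re-scans the FDs until no new attribute can be added, exactly as in the hand computation.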
(c) Explain 4NF and BCNF. (CO3)

Ans. BCNF stands for “Boyce-Codd Normal Form”. This normal form is also known as
the 3.5 normal form of database normalization. To achieve BCNF, the relation must
already be in third normal form. The following steps should then be done to
achieve BCNF:

1. Identify all candidate keys in the relation.

2. Identify all functional dependencies in the relation.

3. If there are functional dependencies in the relation whose determinants
are not candidate keys for the relation, remove those functional dependencies by
placing them in a new relation along with a copy of their determinant.
Fourth Normal Form

• The relation must already be in Boyce-Codd normal form (hence in third normal
form) before normalizing it to fourth normal form.

• The next step is to remove the multi-valued dependencies.

• (If one or more rows imply the presence of one or more other rows in the same
table, it is called a multi-valued dependency.)

(d) Describe snapshot isolation. (CO5)


Ans. Snapshot isolation is implemented within multiversion concurrency
control (MVCC), where generational values of each data item (versions) are
maintained.
• Snapshot isolation is a guarantee that all reads made in a transaction will see
a consistent snapshot of the database (in practice it reads the last committed
values that existed at the time it started).
• The transaction itself will successfully commit only if no updates it has made
conflict with any concurrent updates made since that snapshot.

(e) What are lock based protocols? (CO5)


Ans. Database systems equipped with lock-based protocols use a mechanism by which
any transaction cannot read or write data until it acquires an appropriate lock on it.
Locks are of two kinds −

• Binary Locks − A lock on a data item can be in two states; it is either locked or
unlocked.

• Shared/exclusive − This type of locking mechanism differentiates the locks
based on their use. If a lock is acquired on a data item to perform a write
operation, it is an exclusive lock, because allowing more than one transaction
to write the same data item would lead the database into an inconsistent state.
Read locks are shared because no data value is being changed.
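A shared/exclusive lock can be sketched as follows (an illustrative Python sketch; a real DBMS lock manager also handles lock queues, upgrades and deadlock):

```python
import threading

class SharedExclusiveLock:
    """Minimal shared/exclusive (read/write) lock: many readers may hold
    the shared lock at once; the exclusive lock is granted only when no
    other lock is held."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_shared(self):
        with self._cond:
            while self._writer:          # wait out any exclusive holder
                self._cond.wait()
            self._readers += 1

    def release_shared(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()  # a waiting writer may proceed

    def acquire_exclusive(self):
        with self._cond:
            while self._writer or self._readers:
                self._cond.wait()
            self._writer = True

    def release_exclusive(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```

Two transactions may hold `acquire_shared` at the same time, but `acquire_exclusive` blocks until both readers and any writer have released, matching the compatibility rules described above.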
(f) Explain the properties of a transaction. (CO4)
Ans. Atomicity − This property states that a transaction must be treated as an
atomic unit, that is, either all of its operations are executed or none.
Consistency − The database must remain in a consistent state after any
transaction. No transaction should have any adverse effect on the data residing in
the database.
Isolation − Transactions execute as if they were alone in the system; the
intermediate state of one transaction is not visible to any other concurrent
transaction.
Durability − The database should be durable enough to hold all its latest updates
even if the system fails or restarts.
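Atomicity can be demonstrated with SQLite (an illustration, not part of the prescribed answer): either both updates of a transfer commit, or neither does.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account(name TEXT PRIMARY KEY, balance INT)")
con.executemany("INSERT INTO account VALUES(?, ?)",
                [("alice", 100), ("bob", 50)])
con.commit()

try:
    con.execute("UPDATE account SET balance = balance - 70 WHERE name = 'alice'")
    raise RuntimeError("crash before crediting bob")  # simulated failure
    # (the credit to bob and the COMMIT are never reached)
except RuntimeError:
    con.rollback()  # atomicity: the partial debit is undone

balances = dict(con.execute("SELECT name, balance FROM account"))
print(balances)  # {'alice': 100, 'bob': 50} -- unchanged, as if nothing ran
```

Because the transaction never committed, the rollback restores the consistent pre-transaction state.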

(g) Describe view and index. (CO2)
Ans. A view is simply any SELECT query that has been given a name and saved in the
database. For this reason, a view is sometimes called a named query or a stored query.
To create a view, you use the SQL syntax:
CREATE OR REPLACE VIEW <view_name> AS
SELECT <any valid select query>;

An index, as you would expect, is a data structure that the database uses to find
records within a table more quickly.
CREATE INDEX <indexname> ON <tablename> (<column>, <column>...);

(h) Define timestamp. (CO4)

Ans. Whenever a transaction begins, it receives a timestamp. This timestamp
indicates the order in which the transaction must occur, relative to the other
transactions. So, given two transactions that affect the same object, the operation of
the transaction with the earlier timestamp must execute before the operation of the
transaction with the later timestamp.

(i) Describe trigger and cursor. (CO2)

Ans. Triggers are stored programs, which are automatically executed or fired when
some events occur.
Triggers are written to be executed in response to any of the following events

• A database manipulation (DML) statement (DELETE, INSERT, or UPDATE)

• A database definition (DDL) statement (CREATE, ALTER, or DROP)

• A database operation (e.g., SERVERERROR, LOGON, LOGOFF, STARTUP, or SHUTDOWN)

When Oracle processes a SQL statement, it creates a memory region called the
context area; a cursor is a pointer to this context area. PL/SQL controls the
context area through a cursor. A cursor holds the rows (one or more) returned by a
SQL statement. The set of rows the cursor holds is referred to as the active set.

(j) Define the relational data model. (CO2)

Ans. Relational data model is the primary data model, which is used widely around
the world for data storage and processing.

This model is simple and it has all the properties and capabilities required to process
data with storage efficiency.
It is a way of structuring data using relations, which are grid-like
mathematical structures consisting of columns and rows.
E. F. Codd proposed the relational model while working at IBM.
SECTION-B
Note: Attempt any five questions from this section. (In approx. 250 words) (10×5=50)

Q (2) Construct an ER diagram for small marketing company. (CO1)

Ans. REFER 2016-17 QP Q.3(b). SECTION C

Q (3) A relation R(A,B,C,D,E,F) has the following set of functional dependencies:

A->CD, B->C, F->DE, F->A

Is the decomposition of R into R1(A,B,C), R2(A,F,D) and R3(E,F) a lossless decomposition? (CO3)

Ans. Lossless Join:

• Decomposition of a relation is done when a relation in the relational model is not in
an appropriate normal form.

• A relation R is decomposed into two or more relations only if the decomposition is
lossless-join as well as dependency preserving.

Lossless Join Decomposition

If we decompose a relation R into relations R1 and R2:

Decomposition is lossy if R1 ⋈ R2 ⊃ R

Decomposition is lossless if R1 ⋈ R2 = R

Dependency Preserving Decomposition:

If we decompose a relation R into relations R1 and R2, every dependency of R must
either be a part of R1 or R2, or be derivable from the combination of the FDs of R1
and R2. For example, a relation R(A, B, C, D) with FD set {A->BC} decomposed into
R1(ABC) and R2(AD) is dependency preserving, because the FD A->BC is contained in
R1(ABC).

CHECK FOR LOSSLESS DECOMPOSITION (chase test):

STEP 1: For each of R1, R2, R3, put α in the columns of the attributes it contains.

      A   B   C   D   E   F
R1    α   α   α
R2    α           α       α
R3                    α   α

STEP 2: Apply each dependency: whenever two rows both have α in every
left-hand-side column of an FD, copy α into the right-hand-side columns of both
rows. From A->CD (R1 and R2 agree on A): R1 gains α in D, R2 gains α in C. From
F->DE and F->A (R2 and R3 agree on F): R2 gains α in E; R3 gains α in D and A.

      A   B   C   D   E   F
R1    α   α   α   α
R2    α       α   α   α   α
R3    α           α   α   α

STEP 3: Repeat until nothing changes. A->CD (R3 now agrees with R1 and R2 on A)
gives R3 an α in C; after that no further changes are possible.

      A   B   C   D   E   F
R1    α   α   α   α
R2    α       α   α   α   α
R3    α       α   α   α   α

Since no row has α in every column (no FD ever produces B outside R1), the
decomposition is not lossless.
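The tabular chase above can be automated. A minimal Python sketch (not part of the prescribed answer; it propagates only the α symbols, which matches the tabular method used here):

```python
def is_lossless(attrs, fds, decomposition):
    """Chase test for lossless join.

    attrs: all attributes of R; fds: list of (lhs, rhs) attribute-set pairs;
    decomposition: list of attribute sets R1, R2, ...
    """
    # rows[i][A] is True when row i holds the distinguished symbol alpha in column A
    rows = [{A: (A in Ri) for A in attrs} for Ri in decomposition]
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            for i, ri in enumerate(rows):
                for rj in rows[i + 1:]:
                    # Two rows that agree (alpha) on the whole LHS must also
                    # agree on the RHS: copy alpha across.
                    if all(ri[A] and rj[A] for A in lhs):
                        for A in rhs:
                            if ri[A] != rj[A]:
                                ri[A] = rj[A] = True
                                changed = True
    return any(all(row.values()) for row in rows)

# F = { A->CD, B->C, F->DE, F->A } from Q (3)
F = [({'A'}, {'C', 'D'}), ({'B'}, {'C'}), ({'F'}, {'D', 'E'}), ({'F'}, {'A'})]
print(is_lossless('ABCDEF', F, [set('ABC'), set('AFD'), set('EF')]))  # False
```

The function returns True exactly when some row of the chase table becomes all-α, i.e. when the decomposition is lossless.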

Q (4) Describe in detail timestamp ordering protocol and Thomas’ write rule. (CO4)

Ans. Every transaction has a timestamp associated with it, and the ordering is
determined by the age of the transaction.

A transaction created at 0002 clock time would be older than all other transactions
that come after it.

For example, any transaction 'y' entering the system at 0004 is two seconds younger
and the priority would be given to the older one.
The timestamp-ordering protocol ensures serializability among transactions in their
conflicting read and write operations.

It is the responsibility of the protocol that each conflicting pair of operations
is executed according to the timestamp values of the transactions.

• The timestamp of transaction Ti is denoted as TS(Ti).

• Read time-stamp of data-item X is denoted by R-timestamp(X).

• Write time-stamp of data-item X is denoted by W-timestamp(X).

Suppose that transaction Ti issues read(Q).

1. If TS(Ti) < W-timestamp(Q), then Ti needs to read a value
of Q that was already overwritten. Hence, the read operation is
rejected, and Ti is rolled back.
2. If TS(Ti) ≥ W-timestamp(Q), then the read operation is
executed, and R-timestamp(Q) is set to the maximum of R-timestamp(Q) and TS(Ti).
• Suppose that transaction Ti issues write(Q).
1. If TS(Ti) < R-timestamp(Q), then the value of Q that Ti is
producing was needed previously, and the system assumed that
that value would never be produced. Hence, the system rejects
the write operation and rolls Ti back.
2. If TS(Ti) < W-timestamp(Q), then Ti is attempting to write an
obsolete value of Q. Hence, the system rejects this write
operation and rolls Ti back.
3. Otherwise, the system executes the write operation and sets W-
timestamp(Q) to TS(Ti).

Thomas' write rule modifies only case 2 above: instead of rolling Ti back, the
obsolete write is simply ignored, since a younger transaction has already written a
newer value of Q. This relaxation allows some schedules that are view serializable
but not conflict serializable.
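The checks above can be sketched as follows (an illustrative helper, not from the syllabus):

```python
class Item:
    """A data item with the read/write timestamps used by the protocol."""
    def __init__(self):
        self.r_ts = 0   # R-timestamp(Q): largest TS of any transaction that read Q
        self.w_ts = 0   # W-timestamp(Q): largest TS of any transaction that wrote Q

def read(item, ts):
    """Rule for read(Q); returns False when Ti must be rolled back."""
    if ts < item.w_ts:                  # Q was already overwritten
        return False
    item.r_ts = max(item.r_ts, ts)
    return True

def write(item, ts, thomas=False):
    """Rule for write(Q); with thomas=True the obsolete-write case is ignored."""
    if ts < item.r_ts:                  # a younger transaction already read Q
        return False
    if ts < item.w_ts:                  # obsolete write
        return thomas                   # Thomas' rule: skip it instead of rollback
    item.w_ts = ts
    return True

q = Item()
print(write(q, 5))              # True  (W-timestamp(Q) becomes 5)
print(read(q, 3))               # False (T3 would read an overwritten value)
print(write(q, 4))              # False under basic timestamp ordering
print(write(q, 4, thomas=True)) # True: the obsolete write is simply ignored
```

Note that under Thomas' write rule the obsolete write succeeds without touching W-timestamp(Q), which is exactly "ignore the write" rather than "roll back".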

Q (5) Mention the constraints and their types in DBMS. (CO1)

Ans. Constraints enforce limits on the data, or the type of data, that can be
inserted into, updated in, or deleted from a table. The purpose of constraints is to
maintain data integrity during insert, update and delete operations. Several types
of constraints can be created in an RDBMS.

Types of constraints

• NOT NULL

• UNIQUE

• DEFAULT

• CHECK

• Key Constraints – PRIMARY KEY, FOREIGN KEY

• Domain constraints

• Mapping constraints
NOT NULL:

The NOT NULL constraint makes sure that a column does not hold a NULL value. When we
don’t provide a value for a particular column while inserting a record into a table, it
takes the NULL value by default. By specifying the NOT NULL constraint, we can be sure
that a particular column cannot have NULL values.

Example:
CREATE TABLE STUDENT (
  ROLL_NO INT NOT NULL
);
UNIQUE:

UNIQUE Constraint enforces a column or set of columns to have unique values. If a


column has a unique constraint, it means that particular column cannot have duplicate
values in a table.

CREATE TABLE STUDENT (
  ROLL_NO INT NOT NULL,
  STU_NAME VARCHAR(35) NOT NULL UNIQUE
);
DEFAULT:

The DEFAULT constraint provides a default value to a column when there is no value
provided while inserting a record into a table.

CREATE TABLE STUDENT (
  ROLL_NO INT NOT NULL,
  STU_NAME VARCHAR(35) NOT NULL,
  STU_AGE INT NOT NULL,
  EXAM_FEE INT DEFAULT 10000
);
CHECK:

This constraint is used for specifying range of values for a particular column of a table.
When this constraint is being set on a column, it ensures that the specified column
must have the value falling in the specified range.

CREATE TABLE STUDENT (
  ROLL_NO INT NOT NULL CHECK (ROLL_NO > 1000)
);

Key constraints:

PRIMARY KEY:

A primary key uniquely identifies each record in a table. It must have unique values
and cannot contain NULLs.

FOREIGN KEY:

A foreign key is a column (or set of columns) in one table that refers to the primary
key of another table, enforcing referential integrity between the two tables.

Domain constraints:

Each table has certain set of columns and each column allows a same type of data,
based on its data type. The column does not accept values of any other data type.

Mapping constraints can be explained in terms of mapping cardinality:

Mapping Cardinality:
One to One: An entity of entity-set A can be associated with at most one entity of
entity-set B and an entity in entity-set B can be associated with at most one entity of
entity-set A.

One to Many: An entity of entity-set A can be associated with any number of entities
of entity-set B and an entity in entity-set B can be associated with at most one entity of
entity-set A.

Many to One: An entity of entity-set A can be associated with at most one entity of
entity-set B and an entity in entity-set B can be associated with any number of entities
of entity-set A.

Many to Many: An entity of entity-set A can be associated with any number of entities
of entity-set B and an entity in entity-set B can be associated with any number of
entities of entity-set A.

Q (6) What is the use of cursor in PL/SQL explain its types. (CO2)

Ans. A cursor is a temporary work area (CONTEXT AREA) created in the system memory
when a SQL statement is executed.

1. A cursor contains information on a select statement and the rows of data accessed by
it.

2. This temporary work area is used to store the data retrieved from the database, and
manipulate this data.

3. A cursor can hold more than one row, but can process only one row at a time.

4. The set of rows the cursor holds is called the active set.

5. There are two types of cursors in PL/SQL:

Implicit cursors
These are created by default when DML statements like, INSERT, UPDATE, and DELETE
statements are executed. They are also created when a SELECT statement that returns just
one row is executed.

Explicit cursors
They must be created when you are executing a SELECT statement that returns more than
one row. Even though the cursor stores multiple records, only one record can be
processed at a time, which is called the current row. When you fetch a row, the
current row position moves to the next row.

Both implicit and explicit cursors have the same functionality, but they differ in the way
they are accessed.
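The same open/fetch/close pattern appears in Python's DB-API cursors; a rough analogue (an illustration; PL/SQL syntax differs) of an explicit cursor over a multi-row active set:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp(name TEXT, salary INT)")
con.executemany("INSERT INTO emp VALUES(?, ?)",
                [("a", 100), ("b", 200), ("c", 300)])

cur = con.execute("SELECT name, salary FROM emp ORDER BY salary")  # "OPEN"
rows = []
while True:
    row = cur.fetchone()       # "FETCH" advances the current row
    if row is None:            # end of the active set
        break
    rows.append(row)
print(rows)  # [('a', 100), ('b', 200), ('c', 300)]
```

Each `fetchone()` processes exactly one current row even though the cursor's active set holds three rows, mirroring the explicit-cursor behaviour described above.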

Implicit Cursors: Application

• When you execute DML statements like DELETE, INSERT, UPDATE and SELECT
statements, implicit cursors are created to process these statements.

• Oracle provides a few attributes, called implicit cursor attributes, to check the
status of DML operations.

• The cursor attributes available are %FOUND, %NOTFOUND, %ROWCOUNT, and %ISOPEN.

• For example, when you execute INSERT, UPDATE, or DELETE statements, the cursor
attributes tell us whether any rows were affected and how many. When a
SELECT...INTO statement is executed in a PL/SQL block, implicit cursor attributes
can be used to find out whether any row was returned by the SELECT statement.

• PL/SQL returns an error when no data is selected.

• The status of the cursor for each of these attributes is described below.

Attributes, their return values, and example usage:

• %FOUND (SQL%FOUND) − TRUE if a DML statement (INSERT, DELETE, UPDATE) affects at
least one row, or a SELECT...INTO statement returns at least one row; FALSE if the
DML statement affects no rows, or the SELECT...INTO statement returns no row.

• %NOTFOUND (SQL%NOTFOUND) − the logical opposite of %FOUND: TRUE if the DML
statement affects no rows, or the SELECT...INTO statement returns no row; FALSE
otherwise.

• %ROWCOUNT (SQL%ROWCOUNT) − returns the number of rows affected by the DML
operation (INSERT, DELETE, UPDATE, SELECT...INTO).

• %ISOPEN (SQL%ISOPEN) − always FALSE for implicit cursors, because Oracle closes
the SQL cursor automatically after executing its associated statement.

Q (7) Explain in detail DBMS vs File System.

Ans. A Database Management System (DBMS) is a combination of computer
software, hardware, and information designed to electronically manipulate data via
computer processing.
• Two broad types of data management systems are DBMSs and FMSs.
• In simple terms, a File Management System (FMS) is a data management
system that allows access to single files or tables at a time.
• FMSs accommodate flat files that have no relation to other files.
• The FMS was the predecessor of the Database Management System (DBMS),
which allows access to multiple files or tables at a time.

Basic difference between file system and DBMS is given in the below table.
Advantage of DBMS over file system

There are several advantages of Database management system over file system.
Few of them are as follows:

• No redundant data – Redundancy removed by data normalization


• Data Consistency and Integrity – data normalization takes care of it too
• Secure – Each user has a different set of access
• Privacy – Limited access
• Easy access to data
• Easy recovery
• Flexible
Disadvantages of DBMS:

• DBMS implementation cost is high compared to the file system


• Complexity: Database systems are complex to understand
• Performance: Database systems are generic, making them suitable for
various applications. However, this generality affects their performance
for some applications.
Drawbacks of File system:

• Data Isolation: Because data are scattered in various files, and files may
be in different formats, writing new application programs to retrieve the
appropriate data is difficult.
• Duplication of data – Redundant data
• Dependency on application programs – Changing files would lead to
change in application programs.

Q (8) Describe conflict and view serializability. (CO4)


Ans. REFER DBMS NOTES PROVIDED.

Q (9) Discuss in detail the concurrency control problems. (CO5)

Ans. The uncontrolled execution of concurrent transactions in a multi-user
environment can lead to various problems. The three main problems, and
examples of how they can occur, are listed below:
1. LOST UPDATE PROBLEM :

This problem occurs when two transactions, accessing the same data
items, have their operations interleaved in such a way, that one
transaction will access the data before the other has applied any
updates.
2. UNCOMMITTED DATA :
• This problem occurs when one transaction updates a data item,
but has not yet committed the data permanently to the
database.
• Because of failure, the transaction is rolled back and the data
item is returned to its previous value.
• A second transaction accesses the updated data item before it is
returned to its original value.
3. INCONSISTENT DATA:

This problem occurs when one transaction is calculating summary
functions on particular data items while other transactions are
updating those data items. The transaction performing the calculations
may read some data items before they are updated and others after.

Most concurrency problems in a multi-user environment occur because the
order of operations is incorrect. Interleaved operations must be
sequenced correctly to ensure isolation and serializability.
Determining the correct order of interleaved operations is the task of the
scheduler. The scheduler determines the order by employing the
concurrency control techniques.
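The lost update problem above can be reproduced with a deterministic interleaving (an illustrative sketch, not from the syllabus):

```python
# Two "transactions" T1 and T2 both read X before either writes back.
balance = {"X": 100}

t1_local = balance["X"]        # T1 reads 100
t2_local = balance["X"]        # T2 reads 100, before T1 has applied its update

balance["X"] = t1_local + 50   # T1 writes 150
balance["X"] = t2_local - 30   # T2 writes 70 -- T1's +50 is silently lost

print(balance["X"])  # 70, instead of the correct serial result 120
```

A serial execution (T1 then T2, or T2 then T1) would yield 120 either way; the interleaving above loses T1's update because T2 read X before T1 wrote it back.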

SECTION-C
Note: Attempt any two questions from this section. (In approx. 400 words) (15×2=30)
Q (10) Illustrate in detail distributed database. Explain fragmentation. (CO4)

Ans.

• A distributed database appears to a user as a single database but is, in fact, a
set of databases stored on multiple computers.

• The data on several computers can be simultaneously accessed and modified
using a network.

• Each database server in the distributed database is controlled by its local
DBMS, and each cooperates to maintain the consistency of the global database.

• The figure above illustrates a representative distributed database system.

• A centralized distributed database management system (DDBMS) integrates the
data logically so it can be managed as if it were all stored in the same location.

• The DDBMS synchronizes all the data periodically and ensures that updates
and deletes performed on the data at one location will be automatically reflected
in the data stored elsewhere.

• Distributed databases can be homogeneous or heterogeneous.

• In a homogeneous distributed database system, all the physical locations have
the same underlying hardware and run the same operating systems and
database applications.

• In a heterogeneous distributed database, the hardware, operating systems or
database applications may be different at each of the locations.

Characteristics of Distributed Database Management System

A DDBMS has the following characteristics:

1. A collection of logically related shared data;

2. The data is split into a number of fragments;

3. Fragments may be replicated;

4. Fragments/replicas are allocated to sites;

5. The sites are linked by a communications network;

6. The data at each site is under the control of a DBMS;

7. The DBMS at each site can handle local applications autonomously;

8. Each DBMS participates in at least one global application.

FRAGMENTATION:-
[Diagram: fragmentation is of two basic kinds – horizontal fragmentation and
vertical fragmentation]

1. A Distributed Database Management System (DDBMS) consists of a single
logical database that is split into a number of fragments.

2. Each fragment is stored on one or more computers under the control of a
separate DBMS, with the computers connected by a communications network.

3. Each site is capable of independently processing user requests that require
access to local data (that is, each site has some degree of local autonomy) and is
also capable of processing data stored on other computers in the network.

4. Users access the distributed database via applications.

5. Applications are classified as those that do not require data from other sites
(local applications) and those that do require data from other sites (global
applications). We require a DDBMS to have at least one global application.

6. A fragmentation schema of a database is a definition of a set of fragments
that includes all attributes and tuples in the database and satisfies the condition
that the whole database can be reconstructed from the fragments by applying
some sequence of OUTER UNION (or OUTER JOIN) and UNION operations.

Types of Fragmentation:

– Horizontal: partitions a relation along its tuples.

– Vertical: partitions a relation along its attributes.

– Mixed/hybrid: a combination of horizontal and vertical fragmentation.

It is not necessary for every site in the system to have its own local database, as
illustrated by the topology of the DDBMS shown in figure.

Replicating Data
• You can create replicas of data at the various sites of a distributed database to
make access to data faster for local clients.

• Data can be replicated using snapshots or replicated master tables. Replicated
master tables require the replication option.

TRANSPARENCY IN DDBMS:-
1. The functionality of a distributed database system must be provided
in such a manner that the complexities of the distributed database
are transparent to both the database users and the database
administrators.

2. For example, a distributed database system should provide methods
to hide the physical location of objects throughout the system from
applications and users.

3. Location transparency exists if a user can refer to the same table the
same way, regardless of the node to which the user connects.

4. Location transparency is beneficial for the following reasons:

• Access to remote data is simplified, because the database users
do not need to know the location of objects.

• Objects can be moved with no impact on end-users or database
applications.

Q (11) Consider the relational schema R(A,B,C,D,E,F,G,H,I,J) and the following set of functional
dependencies: F = {AB->C, A->DE, B->F, F->GH, D->IJ}.
Determine the keys of R.
Decompose R into 2nd Normal Form. (CO3)

Ans.
First find the keys by computing attribute closures.
DESCRIPTION:

• First of all, find the keys.
• Find all attributes that do not appear on the RHS of any FD.
• These attributes must be prime attributes − they must be present in
every candidate key.
• Here, A and B do not appear on any RHS, so both must be in every
candidate key.
• (AB)+ = {A,B,C,D,E,F,G,H,I,J}, so AB determines every attribute and is
the (only) candidate key.
• Now, A->DE and B->F are partial dependencies: their determinants are
proper subsets of the key AB.
• So, the relation is not in 2NF.
• To make it 2NF, we decompose it into 3 relations:
• A->DE together with D->IJ gives ADEIJ;
• B->F together with F->GH gives BFGH;
• AB->C gives ABC.

These 3 relations are in 2NF, and ABC is in BCNF.


• In ADEIJ, we have the transitive dependency A->D and D->IJ.
• In BFGH, we have the transitive dependency B->F and F->GH.
• So, to bring the tables into 3NF, we make separate relations for DIJ
and FGH.
• We now get 5 relations: ABC, ADE, DIJ, BF and FGH, which are in 3NF.
• No prime attribute appears on the right-hand side of any FD, so as soon
as the relations are in 3NF, they are in BCNF as well.
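The key computation above can be verified with a small closure helper (an illustrative sketch, not part of the prescribed answer):

```python
def closure(attrs, fds):
    """Attribute closure of `attrs` under a list of (lhs, rhs) set pairs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

R = set("ABCDEFGHIJ")
# F = {AB->C, A->DE, B->F, F->GH, D->IJ}
F = [({'A', 'B'}, {'C'}), ({'A'}, {'D', 'E'}), ({'B'}, {'F'}),
     ({'F'}, {'G', 'H'}), ({'D'}, {'I', 'J'})]

print(closure({'A', 'B'}, F) == R)   # True: AB is the candidate key
print(closure({'A'}, F) == R)        # False: A alone is not a key
```

Since neither A nor B alone reaches all of R while AB does, AB is the unique candidate key, confirming the hand derivation.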

Q (12) Explain in brief multiple granularity. (CO5)

Ans.
• It is a mechanism that allows the system to define multiple levels of granularity.

• We can make one by allowing data items to be of various sizes and defining a hierarchy
of data granularities, where the small granularities are nested within larger ones.

• Such a hierarchy can be represented graphically as a tree.

• A nonleaf node of the multiple-granularity tree represents the data associated with its
descendants.
• In the tree protocol, each node is an independent data item.

• As an illustration, consider the tree of Figure, which consists of four levels of nodes.

• The highest level represents the entire database.

• Below it are nodes of type area; the database consists of exactly these areas.

• Each area in turn has nodes of type file as its children.

• Each area contains exactly those files that are its child nodes. No file is in more than
one area.

• Finally, each file has nodes of type record.

• As before, the file consists of exactly those records that are its child nodes, and no
record can be present in more than one file.

• Each node in the tree can be locked individually.

• As we did in the two-phase locking protocol, we shall use shared and exclusive lock
modes.

• When a transaction locks a node, in either shared or exclusive mode, the transaction
also has implicitly locked all the descendants of that node in the same lock mode.
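The implicit locking of descendants can be sketched on a small granularity tree (a hypothetical example hierarchy, not from the syllabus):

```python
# database -> areas -> files -> records, as in the four-level tree described above
tree = {
    "db":    ["area1", "area2"],
    "area1": ["file1"],
    "area2": ["file2"],
    "file1": ["rec1", "rec2"],
    "file2": ["rec3"],
}

def implicitly_locked(node):
    """All nodes covered by an explicit lock on `node`: the node itself
    plus every descendant, locked implicitly in the same mode."""
    covered = [node]
    for child in tree.get(node, []):
        covered.extend(implicitly_locked(child))
    return covered

print(implicitly_locked("area1"))  # ['area1', 'file1', 'rec1', 'rec2']
```

Locking `area1` in shared or exclusive mode thus implicitly locks `file1` and its records, while an explicit lock on `db` would cover the entire tree.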
