
Syllabus

314446: SOFTWARE LABORATORY – I

Teaching Scheme: Practical: 4 Hours/Week
Credits: 02
Examination Scheme: Practical: 50 Marks, Oral: 50 Marks, Term Work: 25 Marks

Prerequisites:

1. Data structures and files.


2. Discrete Structure.
3. Software engineering principles and practices.

Course Objectives:

1. Understand the fundamental concepts of database management. These concepts


include aspects of database design, database languages, and database-system
implementation.
2. To provide a strong formal foundation in database concepts, recent technologies and
best industry practices.
3. To give systematic database design approaches covering conceptual design, logical
design and an overview of physical design.
4. To learn the SQL and NoSQL database system.
5. To learn and understand various database architectures and their use for application
development.
6. To program in PL/SQL, including stored procedures, stored functions, cursors and
packages.

Course Outcomes:

1. To install and configure database systems.


2. To analyze database models & entity relationship models.
3. To design and implement a database schema for a given problem domain.
4. To understand the relational and document type database systems.
5. To populate and query a database using SQL DML/DDL commands.
6. To populate and query a database using MongoDB commands.

Group A: Introduction to Databases (Study assignment – Any 2)


1. Study and design a database with suitable example using following database systems:
 Relational: SQL / PostgreSQL / MySQL
 Key-value: Riak / Redis
 Columnar: HBase
 Document: MongoDB / CouchDB
 Graph: Neo4J
Compare the different database systems based on points like efficiency, scalability,
characteristics and performance.
2. Install and configure client and server for MySQL and MongoDB (Show all
commands and necessary steps for installation and configuration).
3. Study the SQLite database and its uses. Also elaborate on building and installing of
SQLite.

Group B: SQL and PL/SQL

1. Design any database with at least 3 entities and relationships between them. Apply
DCL and DDL commands. Draw suitable ER/EER diagram for the system.

2. Design and implement a database and apply at least 10 different DML queries for
the following task. For a given input string display only those records which match
the given pattern or a phrase in the search string. Make use of wild characters and
LIKE operator for the same. Make use of Boolean and arithmetic operators
wherever necessary.

3. Execute the aggregate functions like count, sum, avg etc. on the suitable database.
Make use of built in functions according to the need of the database chosen.
Retrieve the data from the database based on time and date functions like now (),
date (), day (), time () etc. Use group by and having clauses.

4. Implement nested sub queries. Perform a test for set membership (in, not in), set
comparison (<some, >=some, <all etc.) and set cardinality (unique, not unique).

5. Write and execute suitable database triggers. Consider row level and statement level
triggers.

6. Write and execute PL/SQL stored procedure and function to perform a suitable task
on the database. Demonstrate its use.

7. Write a PL/SQL block to implement all types of cursor.


8. Execute DDL statements which demonstrate the use of views. Try to update the
base table using its corresponding view. Also consider restrictions on updatable
views and perform view creation from multiple tables.

Group C: MongoDB

1. Create a database with suitable example using MongoDB and implement


 Inserting and saving document (batch insert, insert validation)
 Removing document
 Updating document (document replacement, using modifiers, upserts, updating
multiple documents, returning updated documents)

2. Execute at least 10 queries on any suitable MongoDB database that demonstrates


following querying techniques:
 find and findOne (specific values)
 Query criteria (Query conditionals, OR queries, $not, Conditional semantics)
 Type-specific queries (Null, Regular expression, Querying arrays)
3. Execute at least 10 queries on any suitable MongoDB database that demonstrates
following:
 $where queries
 Cursors (Limits, skips, sorts, advanced query options)
 Database commands

4. Implement a MapReduce operation with a suitable example.

5. Implement the aggregation and indexing with suitable example in MongoDB.


Demonstrate the following:
 Aggregation framework
 Create and drop different types of indexes and use explain() to show the advantage of the
indexes.

Group D: Mini Project / Database Application Development

A group of 3 to 4 students should decide the statement and scope of the project,
which will be refined and validated by the faculty considering the number of
students in the group.
Draw the ER diagram and normalize the design up to at least 3NF when the back end
is an RDBMS.

Suggested Directions for development of the mini project.



 Build a suitable GUI by using forms and placing the controls on it for any application
(e.g., student registration for admission, railway reservation, online ticket booking,
etc.). Proper data entry validations are expected.

 Develop two tier architecture and use ODBC/JDBC connections to store and retrieve
data from the database. Make a user friendly interface for system interaction. You
may consider any applications like employee management system, library
management system etc.

 Implement the basic CRUD operations and execute a transaction that ensures ACID
properties. Make use of commands like COMMIT, SAVEPOINT, and ROLLBACK. You may
use examples like transfer of money from one account to another, cancellation of
e-tickets, etc.
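The money-transfer example above can be sketched end-to-end. The following is a minimal illustration using Python's built-in sqlite3 module as a stand-in for any transactional RDBMS; the `account` table and the `transfer` helper are hypothetical names invented for this sketch:

```python
import sqlite3

# Hypothetical two-account schema, used only for this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [(1, 1000), (2, 500)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Move `amount` from src to dst atomically; roll back on any failure."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE account SET balance = balance - ? WHERE id = ?",
                         (amount, src))
            cur = conn.execute("SELECT balance FROM account WHERE id = ?", (src,))
            if cur.fetchone()[0] < 0:
                raise ValueError("insufficient funds")  # triggers the rollback
            conn.execute("UPDATE account SET balance = balance + ? WHERE id = ?",
                         (amount, dst))
        return True
    except ValueError:
        return False

transfer(conn, 1, 2, 300)    # succeeds: balances become 700 / 800
transfer(conn, 1, 2, 5000)   # fails: rolled back, balances unchanged
print([r[0] for r in conn.execute("SELECT balance FROM account ORDER BY id")])
# [700, 800]
```

The key point is that the failed transfer leaves both rows untouched: either both UPDATEs commit or neither does, which is exactly the atomicity property the assignment asks you to demonstrate.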

References
1. Ramon A. Mata-Toledo, Pauline Cushman, Database Management Systems, TMGH,
ISBN: 978-0-07-063456-5, 5th Edition.
2. Kristina Chodorow, MongoDB The definitive guide, O’Reilly Publications, ISBN:978-
93-5110-269-4, 2nd Edition.
3. Dr. P. S. Deshpande, SQL and PL/SQL for Oracle 10g Black Book, DreamTech.
4. Ivan Bayross, SQL, PL/SQL: The Programming Language of Oracle, BPB Publication.
5. Reese G., Yarger R., King T., Williams H., Managing and Using MySQL, Shroff.

Group A: Introduction to Databases
(Study assignment – Any 2)
ASSIGNMENT 1

Aim: Study of open source SQL and NoSQL databases, and comparison of the different
database systems based on points like efficiency, scalability, characteristics and
performance.

Objective: To become aware of different open source databases.

Theory:

1) A brief description of Open Source Database:

Open Source Software:

Open-source software (OSS) is computer software whose source code is made available
under a license in which the copyright holder grants the rights to study, change and
distribute the software to anyone and for any purpose.

Open-source software is very often developed in a public, collaborative manner. It is
the most prominent example of open-source development and is often compared to
(technically defined) user-generated content or (legally defined) open-content
movements.

A database is a base for data. An open source database is one distributed as free and
open source software: the source code is available to anyone, and the user is allowed to
implement, share and further develop the database software to suit various needs.

Popular Open Source Databases

Following are open source databases

1) MySQL

MySQL (officially "My S-Q-L", but also called "My Sequel") is the world's most widely
used open-source relational database management system (RDBMS). It is named after co-
founder Michael Widenius's daughter, My. The SQL phrase stands for Structured Query
Language. MySQL is a popular choice of database for use in web applications, and is a central
component of the widely used LAMP open source web application software stack. LAMP is an
acronym for "Linux, Apache, MySQL, Perl/PHP/Python". Free-software open source projects
that require a full-featured database management system often use MySQL.

MySQL is also used in many high-profile, large-scale websites, including Wikipedia, Google
(though not for searches), Facebook, Twitter, Flickr, and YouTube. MySQL is the most
popular and widely used relational database management system and provides multi-user
access to a number of databases. MySQL is now owned by Oracle and uses Structured Query
Language to manage databases. Its source is available under the GNU GPL as well as
proprietary agreements. MySQL is most popular among PHP developers and is used for
websites, web applications and online services.

Features of MySQL:

 Because of its unique storage engine architecture, MySQL performance is very high.
 Supports a large number of embedded applications, which makes MySQL very flexible.
 Supports triggers, stored procedures and views, which give the developer higher
productivity.
 Allows transactions to be rolled back and committed, and supports crash recovery.
 Embedded database library
 Full-text indexing and searching
 Updatable views
 Cursors
 Triggers
 Cross-platform support

Limitation of MySQL:

 Like other SQL databases, MySQL does not currently comply with the full SQL
standard for some of the implemented functionality, including foreign key
references when using some storage engines other than the default of InnoDB
 No triggers can be defined on views.
 MySQL, like most other transactional relational databases, is strongly limited by
hard disk performance. This is especially true in terms of write latency.

2) PostgreSQL
It is developed by the PostgreSQL Global Development Group and is an ORDBMS (Object-
Relational Database Management System). Available for all major platforms (Mac, Windows,
Solaris and Linux) under the permissive PostgreSQL License, PostgreSQL supports all the
properties of major databases. At the time of writing, PostgreSQL was available as version 9.1.

3) SQLite
SQLite is a small, lightweight embedded database used in application file formats and as
the database for mobile apps and websites. SQLite complies with the ACID properties of
databases. It is fast and has a simple, easy-to-use API. SQLite comes with a standalone
command-line interface (CLI) client that can be used to administer SQLite databases.
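Because SQLite is embedded, "connecting" to it is just a library call; there is no server process. A minimal illustration using Python's standard sqlite3 bindings (the `note` table is a hypothetical name for this sketch):

```python
import sqlite3

# SQLite is embedded: the database engine runs inside this process, and a
# database can even live entirely in memory, which is handy for tests.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE note (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO note (body) VALUES (?)", ("hello sqlite",))
conn.commit()
rows = conn.execute("SELECT body FROM note").fetchall()
print(rows)  # [('hello sqlite',)]
```

The same database could instead be a single file on disk (`sqlite3.connect("app.db")`), which is the usage pattern behind application file formats and mobile apps.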

4) Berkeley DB
Owned by Oracle, Berkeley DB provides foundational storage services for your
application, no matter how demanding and unique your requirements may seem to be.
Berkeley DB APIs are available in almost all programming languages, including ANSI C,
C++, Java, C#, Perl, Python, Ruby and Erlang.

A program accessing the database is free to decide how the data is to be stored in a record.
Berkeley DB puts no constraints on the record’s data. The record and its key can both be up to
four gigabytes long. Note that Berkeley DB is not a full DBMS.

5) Firebird
Firebird has always been more fully featured than MySQL, and has, unlike PostgreSQL, always
worked well on Windows as well as Linux and other ‘Nix variants. Firebird provides a lot of the
features available in commercial databases, including stored procedures, triggers, hot backups
(backups while the database is running) and replication. Firebird database comes in two
variations, classic server and super server.

6) MongoDB

MongoDB is a cross-platform document-oriented database. Classified as a NoSQL database,


MongoDB eschews the traditional table-based relational database structure in favor of JSON-
like documents with dynamic schemas (MongoDB calls the format BSON), making the
integration of data in certain types of applications easier and faster. Released under a
combination of the GNU Affero General Public License and the Apache License, MongoDB
is free and open-source software.

Development of MongoDB began in 2007, when the company (then named 10gen) was
building a platform as a service similar to Windows Azure or Google App Engine. In 2009,
MongoDB was open sourced as a stand-alone product with an AGPL license. MongoDB has
been adopted as backend software by a number of major websites and services, including eBay,
Foursquare, SourceForge, Viacom, and the New York Times, among others. MongoDB is the
most popular NoSQL database system.

Some of the main features include:

Ad hoc queries

MongoDB supports search by field, range queries, and regular-expression searches. Queries
can return specific fields of documents and can also include user-defined
JavaScript functions.

Indexing

Any field in a MongoDB document can be indexed (indices in MongoDB are


conceptually similar to those in RDBMSes). Secondary indices are also available.
Replication

MongoDB provides high availability with replica sets. A replica set consists of two or
more copies of the data. Each replica set member may act in the role of primary or
secondary replica at any time. The primary replica performs all writes and reads by
default. Secondary replicas maintain a copy of the data on the primary using built-in
replication. When a primary replica fails, the replica set automatically conducts an
election process to determine which secondary should become the primary. Secondaries
can also perform read operations, but the data is eventually consistent by default.

Load balancing

MongoDB scales horizontally using sharding. The user chooses a shard key, which
determines how the data in a collection will be distributed. The data is split into ranges
(based on the shard key) and distributed across multiple shards. (A shard is a master
with one or more slaves.) MongoDB can run over multiple servers, balancing the load
and/or duplicating data to keep the system up and running in case of hardware failure.
Automatic configuration is easy to deploy, and new machines can be added to a running
database.

File storage

MongoDB can be used as a file system, taking advantage of load balancing and data
replication features over multiple machines for storing files.
This function, called GridFS, is included with MongoDB drivers and is readily
available for all supported development languages. MongoDB exposes functions for file
manipulation and content to developers. GridFS is used, for example, in plugins for NGINX
and lighttpd. Instead of storing a file in a single document, GridFS divides a file into parts,
or chunks, and stores each of those chunks as a separate document. In a multi-machine
MongoDB system, files can be distributed and copied multiple times between machines
transparently, thus effectively creating a load-balanced and fault-tolerant system.
Aggregation

Map Reduce can be used for batch processing of data and aggregation operations. The
aggregation framework enables users to obtain the kind of results for which
the SQL GROUP BY clause is used.
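The map/reduce pattern itself is easy to see in miniature. The sketch below is a plain-Python word count, not MongoDB's actual mapReduce API: the map step emits (key, value) pairs, the shuffle step groups them by key, and the reduce step folds each group to a single value, producing the same kind of result a SQL GROUP BY ... COUNT would:

```python
from itertools import groupby
from operator import itemgetter

docs = ["mongo db mongo", "map reduce map"]  # two toy "documents"

# Map: emit a (word, 1) pair for every word in every document.
mapped = [(w, 1) for doc in docs for w in doc.split()]

# Shuffle: group the emitted pairs by key (the word).
mapped.sort(key=itemgetter(0))
grouped = {k: [v for _, v in g] for k, g in groupby(mapped, key=itemgetter(0))}

# Reduce: fold each group of values down to a single count.
counts = {word: sum(vals) for word, vals in grouped.items()}
print(counts)  # {'db': 1, 'map': 2, 'mongo': 2, 'reduce': 1}
```

In MongoDB the map and reduce steps are supplied as JavaScript functions, and the shuffle/grouping happens inside the server, but the data flow is the same.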

Server-side JavaScript execution

JavaScript can be used in queries, aggregation functions (such as MapReduce), and sent
directly to the database to be executed.

Capped collections

MongoDB supports fixed-size collections called capped collections. This type of


collection maintains insertion order and, once the specified size has been reached,
behaves like a circular queue.
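The "circular queue" behaviour of a capped collection can be mimicked with a bounded deque; this is only an analogy to illustrate the eviction order, not MongoDB code:

```python
from collections import deque

# Like a capped collection, a bounded deque keeps insertion order and
# silently evicts the oldest entries once the size limit is reached.
capped = deque(maxlen=3)
for doc in ["d1", "d2", "d3", "d4"]:
    capped.append(doc)   # "d1" is dropped when "d4" arrives
print(list(capped))      # ['d2', 'd3', 'd4']
```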

Limitations of MongoDB:

 On 32-bit systems, a database is limited to about 2.5 GB of data
 4 MB/16 MB document size limitation, depending on version
 The read/write lock is currently at the global level
 No joins across collections
 No transaction support
 No referential integrity support
 Need to have enough memory to fit your working set into memory

7) CouchDB

Apache CouchDB, commonly referred to as CouchDB, is an open source database that focuses
on ease of use and on being "a database that completely embraces the web". It is
a NoSQL database that uses JSON to store data, JavaScript as its query language using
MapReduce, and HTTP for an API. One of its distinguishing features is multi-master
replication. CouchDB was first released in 2005 and later became an Apache project in 2008.
Unlike in a relational database, CouchDB does not store data and relationships in tables.
Instead, each database is a collection of independent documents. Each document maintains its
own data and self-contained schema. An application may access multiple databases, such as one
stored on a user's mobile phone and another on a server. Document metadata contains revision
information, making it possible to merge any differences that may have occurred while the
databases were disconnected.
CouchDB implements a form of Multi-Version Concurrency Control (MVCC) in order to avoid
the need to lock the database file during writes. Conflicts are left to the application to resolve.
Resolving a conflict generally involves first merging data into one of the documents, then
deleting the stale one.
CouchDB (Couch is an acronym for cluster of unreliable commodity hardware) is a project
created in April 2005 by Damien Katz, former Lotus Notes developer at IBM. Damien Katz
defined it as a "storage system for a large scale object database". His objectives for the database
were to become the database of the Internet and that it would be designed from the ground up to
serve web applications. He self-funded the project for almost two years and released it as an
open source project under the GNU General Public License.
In February 2008, it became an Apache Incubator project and the license was changed to
the Apache License. A few months later, it graduated to a top-level project. This led to the first
stable version being released in July 2010. In early 2012, Damien Katz left the project to focus
on Couchbase Server. Since his departure, the Apache CouchDB project has continued,
releasing 1.2 in April 2012 and 1.3 in April 2013. In July 2013, the CouchDB community
merged the codebase for BigCouch, Cloudant's clustered version of CouchDB, into the Apache
project. The BigCouch clustering framework is prepared to be included in an upcoming release
of Apache CouchDB.
System Properties Comparison: CouchDB vs. MongoDB vs. MySQL

Property                 CouchDB                       MongoDB                       MySQL
Description              A document store inspired     One of the most popular       Widely used open
                         by Lotus Notes                document stores               source RDBMS
Developer                Apache Software Foundation    MongoDB, Inc.                 Oracle
Initial release          2005                          2009                          1995
License                  Open Source                   Open Source                   Open Source
Implementation language  Erlang                        C++                           C and C++
Database model           Document store                Document store                Relational DBMS
Data scheme              schema-free                   schema-free                   yes
SQL                      no                            no                            yes
Supported programming    C, C#, Java, JavaScript,      C, C#, Java, JavaScript,      C, C#, Java, PHP,
languages                Lisp                          Lisp                          Python
Triggers                 yes                           no                            yes
Foreign keys             no                            no                            yes
Map Reduce               yes                           yes                           no

Conclusion: We studied different open source database systems and compared their features.


ASSIGNMENT 2
Aim: Install and configure client and server for MySQL and MongoDB
Objective: Study all commands and necessary steps for installation and configuration.

Theory:

Setting Up the MySQL Database Server in the Windows Operating System

 Starting the Download


 Starting the Installation

Starting the Download


1. Go to http://dev.mysql.com/downloads/installer/.
2. Click the Download button.
3. Save the installer file to your system.
Starting the Installation
After the download completes, run the installer as follows:
1. Right-click the downloaded installation file (for example, mysql-installer-community-
5.6.14.0.msi) and click Run.
The MySQL Installer starts.
2. On the Welcome panel, select Install MySQL Products.
3. On the License Information panel, review the license agreement, click the
acceptance checkbox, and click Next.
4. On the Find latest products panel, click Execute.
When the operation is complete, click Next.
5. On the Setup Type panel, choose the Custom option and click Next.
6. On the Feature Selection panel, ensure MySQL Server 5.6.x is selected, and click Next.
7. On the Check Requirements panel, click Next.
8. On the Installation panel, click Execute.
When the server installation is completed successfully, the information message appears
on the Installation panel. Click Next.
9. On the Configuration panel, click Next.
10. At the first MySQL Server Configuration page (1/3), set the following options:
 Server Configuration Type. Select the Development Machine option.
 Enable TCP/IP Networking. Ensure the checkbox is selected and specify the options
below:
 Port Number. Specify the connection port. The default setting is 3306; leave it
unchanged if there is no special reason to change it.
 Open Firewall port for network access. Select to add firewall exception for the
specified port.
 Advanced Configuration. Select the Show Advanced Options checkbox to display
an additional configuration page for setting advanced options for the server instance if
required.
Note: Choosing this option is necessary to get to the panel for setting the network
options where you will turn off the firewall for the port used by the MySQL server.
11. Click Next.
12. At the second MySQL Server Configuration page (2/3), set the following options:
 Root Account Password.
 MySQL Root Password. Enter the root user's password.
 Repeat Password. Retype the root user's password.
Note: The root user is a user who has full access to the MySQL database server -
creating, updating, and removing users, and so on. Remember the root password - you
will need it later when creating a sample database.
 MySQL User Accounts. Click Add User to create a user account. In the MySQL
User Details dialog box, enter a user name, a database role, and a password (for
example, !phpuser). Click OK.
Click Next.
13. At the third MySQL Server Configuration page (3/3), set the following options:
 Windows Service Name. Specify a Windows Service Name to be used for the MySQL
server instance.
 Start the MySQL Server at System Startup. Leave the checkbox selected if the
MySQL server is required to automatically start at system startup time.
 Run Windows Service as. Choose either:
 Standard System Account. Recommended for most scenarios.
 Custom User. An existing user account recommended for advanced scenarios.
Click Next.
14. At the Configuration Overview page, click Next.
15. When the configuration is completed successfully, the information message appears
on the Complete panel. Click Finish.
Note: To check that the installation has completed successfully, run the Task Manager.
If mysqld-nt.exe appears in the Processes list, the database server is running.

Installation Steps for MongoDB

At Server side:

1) Extract the zip file.

2) Go to the bin folder:
C:\Users\admin>cd E:\mongodb-win32-x86_64-2008plus-2.6.2\mongodb-win32-x86_64-
2008plus-2.6.2\bin
Create a folder to hold the database files, e.g. E:\Teacher (this folder will contain the
Teacher database. The database stores Teacher_id, the name of a teacher, the department
of a teacher, the salary and the status of a teacher, where status records whether the
teacher is approved by the university or not. Our main idea is to implement all the DDL
and DML queries on the Teacher database).
3) C:\Users\admin>E:
4) E:\mongodb-win32-x86_64-2008plus-2.6.2\mongodb-win32-x86_64-2008plus-
2.6.2\bin>mongod.exe --dbpath E:\Teacher
Note: keep the server in a running state.

At Client Side:

• Open Another CMD prompt

• Go to bin folder of Mongodb

1. C:\Users\admin>cd E:\mongodb-win32-x86_64-2008plus-2.6.2\mongodb-win32-x86_64-
2008plus-2.6.2\bin
2. C:\Users\admin>E:
3. E:\mongodb-win32-x86_64-2008plus-2.6.2\mongodb-win32-x86_64-2008plus-
2.6.2\bin>mongo.exe Teacher
4. MongoDB shell version: 2.6.2
connecting to: Teacher
Now the Teacher database is ready. You can perform all the related operations on the Teacher
database.
At server side:

You will find the following:


2014-06-20T17:44:09.233+0530 [initandlisten] connection accepted from 127.0.0.1:
49360 #1 (1 connection now open)
This indicates that both the server and the client are ready.
Group B:

SQL and PL/SQL


ASSIGNMENT 1

Aim: Design any database with at least 3 entities and relationships between them. Apply
DCL and DDL commands. Draw suitable ER/EER diagram for the system.

Objective: To understand DCL, DDL commands, and ER/EER diagram for the system.

Theory:

1 Introduction to SQL:

The Structured Query Language (SQL) comprises one of the fundamental building blocks of
modern database architecture. SQL defines the methods used to create and manipulate
relational databases on all major platforms.

SQL comes in many flavors. Oracle databases utilize their proprietary


PL/SQL. Microsoft SQL Server makes use of Transact-SQL. However, all of these variations
are based upon the industry standard ANSI SQL.

SQL commands can be divided into two main sublanguages.

1. Data Definition Language


2. Data Manipulation Language

1.2 DATA DEFINITION LANGUAGE (DDL)

It contains the commands used to create and destroy databases and database objects. These
commands will primarily be used by database administrators during the setup and removal
phases of a database project.

DDL Commands:

a) Create table command :

Syntax

CREATE TABLE table_name


(
column_name1 data_type(size),
column_name2 data_type(size),
.......
)
Example 1
This example demonstrates how you can create a table named "Person" with four columns. The
column names will be "LastName", "FirstName", "Address", and "Age" (note that in MySQL
every varchar column must be given a maximum length):
CREATE TABLE Person
( LastName varchar(255),
FirstName varchar(255),
Address varchar(255),
Age int )

This example demonstrates how you can specify suitable maximum lengths for columns:

Example 2

CREATE TABLE Person
(
LastName varchar(30),
FirstName varchar(30),
Address varchar(100),
Age int
)
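The CREATE TABLE statements above can be tried without a MySQL server. The following sketch uses Python's sqlite3 as a stand-in (SQLite accepts the same column definitions and treats the declared varchar lengths as informational); the inserted row is invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The "Person" table from the example, created through the embedded engine.
conn.execute("""CREATE TABLE Person (
    LastName  varchar(30),
    FirstName varchar(30),
    Address   varchar(100),
    Age       int
)""")
conn.execute("INSERT INTO Person VALUES (?, ?, ?, ?)",
             ("Doe", "Jane", "12 Main St", 30))
row = conn.execute("SELECT LastName, Age FROM Person").fetchone()
print(row)  # ('Doe', 30)
```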

Creating a table from another (existing) table:

Syntax

CREATE TABLE tablename
[(columnname, columnname, ...)]
AS SELECT columnname, columnname
FROM tablename;
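CREATE TABLE ... AS SELECT copies both structure and data in one statement. A small sketch, again using sqlite3 as a stand-in (the `src` and `copy_tbl` table names are invented for this example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (id int, name varchar(20))")
conn.executemany("INSERT INTO src VALUES (?, ?)", [(1, "a"), (2, "b")])
# The new table inherits the selected columns and receives the matching rows.
conn.execute("CREATE TABLE copy_tbl AS SELECT id, name FROM src WHERE id > 1")
rows = conn.execute("SELECT id, name FROM copy_tbl").fetchall()
print(rows)  # [(2, 'b')]
```

The optional WHERE clause makes this a convenient way to create filtered snapshots of an existing table.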

b. Alter table command:

Once a table is created within a database, we may wish to modify the definition of that
table. The ALTER command allows us to make changes to the structure of a table without
deleting and recreating it.

Let's begin with creation of a table called testalter_tbl.


mysql> create table testalter_tbl
-> (
-> i INT,
-> c CHAR(1)
-> );
Query OK, 0 rows affected (0.05 sec)
mysql> SHOW COLUMNS FROM testalter_tbl;
Dropping, Adding or Repositioning a Column:
Suppose you want to drop the existing column i from the above MySQL table; then you
use the DROP clause along with the ALTER command as follows:
mysql> ALTER TABLE testalter_tbl DROP i;
A DROP will not work if the column is the only one left in the table.
To add a column, use ADD and specify the column definition. The following statement restores
the i column to testalter_tbl:
mysql> ALTER TABLE testalter_tbl ADD i INT;
After issuing this statement, testalter_tbl will contain the same two columns that it had when
you first created the table, but will not have quite the same structure. That's because new
columns are added to the end of the table by default. So even though i originally was the first
column in testalter_tbl, now it is the last one.

To indicate that you want a column at a specific position within the table, either use FIRST to
make it the first column or AFTER col_name to indicate that the new column should be placed
after col_name. Try the following ALTER TABLE statements, using SHOW COLUMNS after
each one to see what effect each one has:

ALTER TABLE testalter_tbl DROP i;


ALTER TABLE testalter_tbl ADD i INT FIRST;
ALTER TABLE testalter_tbl DROP i;
ALTER TABLE testalter_tbl ADD i INT AFTER c;

The FIRST and AFTER specifiers work only with the ADD clause. This means that if you want
to reposition an existing column within a table, you first must DROP it and then ADD it at the
new position.
Changing a Column Definition or Name:
To change a column's definition, use MODIFY or CHANGE clause along with ALTER
command. For example, to change column c from CHAR(1) to CHAR(10), do this:
mysql> ALTER TABLE testalter_tbl MODIFY c CHAR(10);

With CHANGE, the syntax is a bit different. After the CHANGE keyword, you name the
column you want to change, then specify the new definition, which includes the new name. Try
out the following example:

mysql> ALTER TABLE testalter_tbl CHANGE i j BIGINT;

If you now use CHANGE to convert j from BIGINT back to INT without changing the column
name, the statement will be as follows:

mysql> ALTER TABLE testalter_tbl CHANGE j j INT;

Changing a Column's Default Value:

You can change the default value for any column using the ALTER command. Try out the
following example:
mysql> ALTER TABLE testalter_tbl ALTER i SET DEFAULT 1000;

Renaming a Table:
To rename a table, use the RENAME option of the ALTER TABLE statement. Try out the
following example to rename testalter_tbl to alter_tbl.
mysql> ALTER TABLE testalter_tbl RENAME TO alter_tbl;
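The ADD and RENAME TO forms of ALTER TABLE shown above are also supported by SQLite, so they can be exercised through Python's sqlite3 (the FIRST/AFTER positioning clauses, by contrast, are MySQL-specific). A minimal sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testalter_tbl (i int, c char(1))")
# Add a column (it is appended at the end), then rename the whole table.
conn.execute("ALTER TABLE testalter_tbl ADD COLUMN j int")
conn.execute("ALTER TABLE testalter_tbl RENAME TO alter_tbl")
# PRAGMA table_info lists (cid, name, type, ...) for each column.
cols = [r[1] for r in conn.execute("PRAGMA table_info(alter_tbl)")]
print(cols)  # ['i', 'c', 'j']
```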

c. Drop table command:

The DROP command allows us to remove entire database objects from the DBMS. For example,
to permanently remove a table named personal_info, we would use the following
command:
Syntax

DROP TABLE table_name;

Example
DROP TABLE personal_info;

DATA INTEGRITY:
Enforcing data integrity ensures the quality of the data in the database. For example, if
an employee is entered with an employee_id value of “123”, the database should not allow
another employee to have an ID with the same value.
Two important steps in planning tables are to identify valid values for a column and to decide
how to enforce the integrity of the data in the column. Data integrity falls into four categories:

 Entity integrity
 Domain integrity
 Referential integrity
 User-defined integrity

There are several ways of enforcing each type of integrity.

Integrity type  Recommended options

Entity          PRIMARY KEY constraint, UNIQUE constraint
Domain          FOREIGN KEY constraint, CHECK constraint, NOT NULL
Referential     FOREIGN KEY constraint, CHECK constraint
User-defined    All column- and table-level constraints in CREATE TABLE,
                stored procedures, triggers

ENTITY INTEGRITY:
Entity integrity defines a row as a unique entity for a particular table. Entity integrity enforces
the integrity of the identifier column(s) or the primary key of a table (through indexes, UNIQUE
constraints, PRIMARY KEY constraints, or IDENTITY properties).

DOMAIN INTEGRITY:
Domain integrity is the validity of entries for a given column. You can enforce domain integrity
by restricting the type (through data types), the format (through CHECK constraints and rules),
or the range of possible values (through FOREIGN KEY constraints, CHECK constraints,
DEFAULT definitions, NOT NULL definitions, and rules).
REFERENTIAL INTEGRITY:
Referential integrity preserves the defined relationships between tables when records are entered
or deleted. In Microsoft® SQL Server™, referential integrity is based on relationships between
foreign keys and primary keys or between foreign keys and unique keys. Referential integrity
ensures that key values are consistent across tables. Such consistency requires that there be no
references to nonexistent values and that if a key value changes, all references to it change
consistently throughout the database.

a. PRIMARY KEY CONSTRAINT:

Definition:- The primary key of a relational table uniquely identifies each record in the table.

A primary key constraint ensures no duplicate values are entered in particular columns
and that NULL values are not entered in those columns.

b. NOT NULL CONSTRAINT:

This constraint ensures that NULL values are not entered in those columns.

c. UNIQUE CONSTRAINT:

This constraint ensures that no duplicate values are entered in those columns.

d. CHECK CONSTRAINT:

The CHECK constraint enforces column value restrictions. Such constraints can restrict
a column, for example, to a set of values, only positive numbers, or reasonable dates.

e. FOREIGN KEY CONSTRAINT:

Foreign keys constrain data based on columns in other tables. They are called foreign
keys because the constraints are foreign--that is, outside the table. For example, suppose a table
contains customer addresses, and part of each address is a United States two-character state
code. If a table held all valid state codes, a foreign key constraint could be created to prevent a
user from entering invalid state codes.

To create a table with different types of constraints:

Syntax

CREATE TABLE table_name


(
column_name1 data_type [constraint],
column_name2 data_type [constraint],
.......
)
Example
Create table customer
( customer-name char(20) not null,
customer-street char(30),
customer-city char(30),
primary key ( customer-name));

create table branch


( branch-name char(15) not null,
branch-city char(30),
assets number,
primary key ( branch-name));

create table account


( branch-name char(15),
account-number char(10) not null,
balance number,
primary key ( account-number),
foreign key ( branch-name) references branch,
check (balance>500));

create table depositor


( customer-name char(20) not null,
account-number char(10) not null,
primary key ( customer-name,account-number),
foreign key ( account-number) references account,
foreign key ( customer-name) references customer);
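The constraints above can be exercised hands-on. The sketch below is illustrative only, assuming Python's built-in sqlite3 module rather than MySQL (so the textbook's hyphenated column names become underscores, and foreign-key checking must be switched on explicitly); it shows the CHECK, FOREIGN KEY, and PRIMARY KEY constraints each rejecting a bad row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked

conn.execute("""CREATE TABLE branch (
    branch_name TEXT NOT NULL PRIMARY KEY,
    branch_city TEXT,
    assets REAL)""")
conn.execute("""CREATE TABLE account (
    account_number TEXT NOT NULL PRIMARY KEY,
    branch_name TEXT REFERENCES branch(branch_name),
    balance REAL CHECK (balance > 500))""")

conn.execute("INSERT INTO branch VALUES ('Downtown', 'Pune', 1000000)")
conn.execute("INSERT INTO account VALUES ('A-101', 'Downtown', 700)")

violations = []
for insert in [
    "INSERT INTO account VALUES ('A-102', 'Downtown', 100)",  # CHECK: balance <= 500
    "INSERT INTO account VALUES ('A-103', 'Nowhere', 900)",   # FK: no such branch
    "INSERT INTO account VALUES ('A-101', 'Downtown', 900)",  # PK: duplicate account
]:
    try:
        conn.execute(insert)
    except sqlite3.IntegrityError as exc:
        violations.append(type(exc).__name__)

print(violations)  # ['IntegrityError', 'IntegrityError', 'IntegrityError']
```

All three bad inserts fail, and only the one valid account row survives.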
MySQL CREATE INDEX
In MySQL, an index can be created on a table when the table is created with the CREATE TABLE
command. Otherwise, CREATE INDEX enables you to add indexes to existing tables. A multiple-
column index can be created using multiple columns; such an index is formed by concatenating
the values of the given columns. CREATE INDEX cannot be used to create a PRIMARY KEY.
Syntax:

CREATE INDEX [index name] ON [table name] ([column name]);

Arguments

Name Description

index name Name of the index.

table name Name of the Table

column name Name of the column.

Example

CREATE INDEX autid ON newauthor(aut_id);

The above MySQL statement will create an INDEX on the 'aut_id' column of the 'newauthor' table.

MySQL Create UNIQUE INDEX

Using CREATE UNIQUE INDEX, you can create a unique index in MySQL.

CREATE UNIQUE INDEX newautid ON newauthor(aut_id);

The above MySQL statement will create a UNIQUE INDEX on the 'aut_id' column of the
'newauthor' table.
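The effect of a unique index can be demonstrated with a short script. This is a sketch using Python's sqlite3 module as a stand-in for MySQL; the table and index names (`newauthor`, `newautid`, `aut_id`) are taken from the example above, while the `aut_name` column is an assumed extra column for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE newauthor (aut_id TEXT, aut_name TEXT)")
conn.execute("CREATE UNIQUE INDEX newautid ON newauthor(aut_id)")

conn.execute("INSERT INTO newauthor VALUES ('AUT001', 'Wells')")
try:
    # second row with the same aut_id violates the unique index
    conn.execute("INSERT INTO newauthor VALUES ('AUT001', 'Twain')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False

print(duplicate_allowed)  # False
```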

MySQL Sequence
A sequence is a set of integers (1, 2, 3, ...) generated in order on demand. Sequences are
frequently used in databases because many applications require each row in a table to contain a
unique value, and sequences provide an easy way to generate one.
Using an AUTO_INCREMENT column:
The simplest way to use a sequence in MySQL is to define a column as AUTO_INCREMENT
and let MySQL take care of the rest.

Example:
Try the following example. It creates a table and then inserts a few rows, where no record
ID needs to be supplied because it is auto-incremented by MySQL.

mysql> CREATE TABLE insect


-> (
-> id INT UNSIGNED NOT NULL AUTO_INCREMENT,
-> PRIMARY KEY (id),
-> name VARCHAR(30) NOT NULL, # type of insect
-> date DATE NOT NULL, # date collected
-> origin VARCHAR(30) NOT NULL # where collected
);
Query OK, 0 rows affected (0.02 sec)
mysql> INSERT INTO insect (id,name,date,origin) VALUES
-> (NULL,'housefly','2001-09-10','kitchen'),
-> (NULL,'millipede','2001-09-10','driveway'),
-> (NULL,'grasshopper','2001-09-10','front yard');
Query OK, 3 rows affected (0.02 sec)
Records: 3 Duplicates: 0 Warnings: 0
mysql> SELECT * FROM insect ORDER BY id;
+----+-------------+------------+------------+
| id | name        | date       | origin     |
+----+-------------+------------+------------+
|  1 | housefly    | 2001-09-10 | kitchen    |
|  2 | millipede   | 2001-09-10 | driveway   |
|  3 | grasshopper | 2001-09-10 | front yard |
+----+-------------+------------+------------+
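The same auto-increment behaviour can be reproduced outside the `mysql>` prompt. The sketch below assumes Python's sqlite3 module, where `INTEGER PRIMARY KEY AUTOINCREMENT` plays the role of MySQL's AUTO_INCREMENT; the `insect` table mirrors the one above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE insect (
    id INTEGER PRIMARY KEY AUTOINCREMENT,  -- SQLite analogue of AUTO_INCREMENT
    name TEXT NOT NULL,
    date TEXT NOT NULL,
    origin TEXT NOT NULL)""")

# passing NULL for id lets the engine assign the next sequence value
conn.executemany(
    "INSERT INTO insect (id, name, date, origin) VALUES (NULL, ?, ?, ?)",
    [("housefly", "2001-09-10", "kitchen"),
     ("millipede", "2001-09-10", "driveway"),
     ("grasshopper", "2001-09-10", "front yard")])

ids = [row[0] for row in conn.execute("SELECT id FROM insect ORDER BY id")]
print(ids)  # [1, 2, 3]
```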
SQL Synonym
A synonym is merely another name for a table or a view. Synonyms are usually created so that a
user can avoid having to qualify another user's table or view to access it. (Note that CREATE
SYNONYM is a feature of databases such as Oracle; MySQL itself does not support it, though a
view can serve a similar purpose.)
Synonyms can be created as PUBLIC or PRIVATE. A PUBLIC synonym can be used by any
user of the database; a PRIVATE synonym can be used only by its owner and any users that
have been granted privileges on it.
Creating Synonyms
The general syntax to create a synonym is as follows:

CREATE [PUBLIC|PRIVATE] SYNONYM SYNONYM_NAME FOR TABLE|VIEW

You create a synonym called CUST, short for CUSTOMER_TBL, in the following example.
This frees you from having to spell out the full table name.

CREATE SYNONYM CUST FOR CUSTOMER_TBL;

SELECT CUST_NAME FROM CUST;


CUST_NAME

LESLIE GLEASON
NANCY BUNKER
ANGELA DOBKO
WENDY WOLF
MARYS GIFT SHOP

Dropping Synonyms
Dropping synonyms is like dropping most any other database object. The general syntax to drop
a synonym is as follows:

DROP [PUBLIC|PRIVATE] SYNONYM SYNONYM_NAME


DROP SYNONYM CUST;
Entity Relationship (ER) and Extended Entity Relationship (EER) Diagram

Entity Relationship (ER) Diagram

An entity relationship diagram (ERD) shows the relationships of entity sets stored in a database.
An entity in this context is a component of data. In other words, ER diagrams illustrate the
logical structure of databases. ER-Diagram is a visual representation of data that describes how
data is related to each other.

Basic Building Blocks of ER Diagram


Sample ER Diagram
Complete E-R diagram of banking organization database

Extended Entity Relationship

The enhanced entity–relationship (EER) model (or extended entity–relationship model) in


computer science is a high-level or conceptual data model incorporating extensions to the
original entity–relationship (ER) model, used in the design of databases.

The Extended Entity-Relationship Model is a more complex and high-level model that extends
an E-R diagram to include more types of abstraction, and to more clearly express constraints.
All of the concepts contained within an E-R diagram are included in the EE-R model, along
with additional concepts that cover more semantic information. These additional concepts
include generalization/specialization, union, inheritance, and subclass/superclass.
Sample EER Diagram

Conclusion: Understood DDL commands and ER/EER diagrams for the system
ASSIGNMENT 2

Aim: Design and implement a database and apply at least 10 different DML queries for the
following task. For a given input string, display only those records which match the given
pattern or a phrase in the search string. Make use of wildcard characters and the LIKE operator
for the same. Make use of Boolean and arithmetic operators wherever necessary.

Objective: To understand the concept of DML statement like Insert, Select, Update, and
LIKE operator.

Theory:

DATA MANIPULATION LANGUAGE (DML):

After the database structure is defined with DDL, database administrators and users can utilize
the Data Manipulation Language to insert, retrieve and modify the data contained within it.

INSERT COMMAND:

The INSERT command in MySQL is used to add records to an existing table.

Format 1:-Inserting a single row of data into a table

Syntax

INSERT INTO table_name


[(columnname,columnname)]
VALUES (expression,expression);

To add a new employee to the personal_info table

Example

INSERT INTO personal_info
values ('bart', 'simpson', 12345, 45000);

Format 2: Inserting data into a table from another table


Syntax
INSERT INTO tablename
SELECT columnname,columnname
FROM tablename
SELECT COMMAND:

Syntax
SELECT * FROM tablename;

OR

SELECT columnname,columnname,…..
FROM tablename ;

UPDATE COMMAND:

The UPDATE command can be used to modify information contained within a table.

Syntax UPDATE tablename

SET columnname=expression,columnname=expression,…..
WHERE columnname=expression;

Each year, the company gives all employees a 3% cost-of-living increase in their salary. The
following SQL command could be used to quickly apply this to all of the employees stored in
the database:

Example
UPDATE personal_info
SET salary = salary * 1.03;

DELETE COMMAND:

The DELETE command can be used to delete information contained within a table.

Syntax

DELETE FROM tablename

WHERE search condition

The DELETE command with a WHERE clause can be used to remove a single employee's record
from the personal_info table:

Example
DELETE FROM personal_info
WHERE employee_id=12345
The following command deletes all the rows from the table:
Example
DELETE FROM personal_info;
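The INSERT/UPDATE/DELETE cycle above can be replayed end to end. The sketch below is an illustration using Python's sqlite3 module in place of MySQL; the `personal_info` table and the 3% raise come from the examples above, while the second employee row is an assumed extra record so the DELETE leaves something behind.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE personal_info (
    first_name TEXT, last_name TEXT, employee_id INTEGER, salary REAL)""")
conn.executemany("INSERT INTO personal_info VALUES (?, ?, ?, ?)",
                 [("bart", "simpson", 12345, 45000.0),
                  ("lisa", "simpson", 12346, 50000.0)])  # assumed extra row

# 3% cost-of-living raise for every employee (no WHERE clause)
conn.execute("UPDATE personal_info SET salary = salary * 1.03")

# remove one employee's record by id
conn.execute("DELETE FROM personal_info WHERE employee_id = 12345")

rows = list(conn.execute("SELECT last_name, salary FROM personal_info"))
print([(name, round(sal)) for name, sal in rows])  # [('simpson', 51500)]
```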
LIKE Operator

The LIKE operator is used in a WHERE clause to search for a specified pattern in a column.

There are two wildcards used in conjunction with the LIKE operator:

 % - The percent sign represents zero, one, or multiple characters


 _ - The underscore represents a single character

The percent sign and the underscore can also be used in combinations.

LIKE Syntax
SELECT column1, column2, ...
FROM table_name
WHERE columnN LIKE pattern;

The basic syntax of % and _ is as follows:

SELECT * FROM table_name
WHERE column LIKE 'XXXX%'
or
SELECT * FROM table_name
WHERE column LIKE '%XXXX%'
or
SELECT * FROM table_name
WHERE column LIKE 'XXXX_'
or
SELECT * FROM table_name
WHERE column LIKE '_XXXX'
or
SELECT * FROM table_name
WHERE column LIKE '_XXXX_'
Here are some examples showing different LIKE operators with '%' and '_' wildcards:

LIKE Operator                        Description

WHERE CustomerName LIKE 'a%'         Finds any values that start with "a"

WHERE CustomerName LIKE '%a'         Finds any values that end with "a"

WHERE CustomerName LIKE '%or%'       Finds any values that have "or" in any position

WHERE CustomerName LIKE '_r%'        Finds any values that have "r" in the second position

WHERE CustomerName LIKE 'a_%_%'      Finds any values that start with "a" and are at least 3
                                     characters in length

WHERE ContactName LIKE 'a%o'         Finds any values that start with "a" and end with "o"

WHERE LIKE Examples


Problem: List all products with names that start with 'Ca'

SELECT Id, ProductName, UnitPrice, Package

FROM Product

WHERE ProductName LIKE 'Ca%'


Results:

Id  ProductName        UnitPrice  Package

18  Carnarvon Tigers   62.50      16 kg pkg.

60  Camembert Pierrot  34.00      15-300 g rounds
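The wildcard patterns above can be verified with a small script. This is a sketch using Python's sqlite3 module rather than MySQL; the 'Ca%' query and the `Product` table mirror the example above, while the extra rows ('Chai', 'Chang') are assumed data added so the `_h%` pattern has something to match.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Product (Id INTEGER, ProductName TEXT)")
conn.executemany("INSERT INTO Product VALUES (?, ?)",
                 [(18, "Carnarvon Tigers"), (60, "Camembert Pierrot"),
                  (1, "Chai"), (2, "Chang")])  # last two rows are assumed extras

# 'Ca%': names that start with "Ca"
starts_ca = [r[1] for r in conn.execute(
    "SELECT Id, ProductName FROM Product "
    "WHERE ProductName LIKE 'Ca%' ORDER BY Id")]

# '_h%': any first character, "h" in the second position
second_is_h = [r[0] for r in conn.execute(
    "SELECT ProductName FROM Product "
    "WHERE ProductName LIKE '_h%' ORDER BY ProductName")]

print(starts_ca)    # ['Carnarvon Tigers', 'Camembert Pierrot']
print(second_is_h)  # ['Chai', 'Chang']
```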

Conclusion: Implemented all SQL DML Commands like Insert, Select, Update, Delete with
LIKE
ASSIGNMENT 3

Aim: Execute aggregate functions like COUNT, SUM, AVG, etc. and date functions like NOW(),
DATE(), DAY(), TIME(), etc. on a suitable database.

Objective: Understand aggregate functions like COUNT, SUM, AVG, etc. and date functions like
NOW(), DATE(), DAY(), TIME(), etc.

Theory:

Aggregate Functions
Aggregate functions return a single result row based on groups of rows, rather than on single
rows. Aggregate functions can appear in select lists and in ORDER BY and HAVING clauses.
They are commonly used with the GROUP BY clause in a SELECT statement. In a query
containing a GROUP BY clause, the elements of the select list can be aggregate
functions, GROUP BY expressions, constants, or expressions involving one of these.

Aggregate functions are used to compute against a "returned column of numeric data" from
your SELECT statement. They basically summarize the results of a particular column of
selected data.

SQL has many built-in functions for performing calculations on data.

MIN returns the smallest value in a given column


MAX returns the largest value in a given column
SUM returns the sum of the numeric values in a given column
AVG returns the average value of a given column
COUNT returns the total number of values in a given column
COUNT(*) returns the number of rows in a table
ROUND() Rounds a numeric field to the number of decimals specified

The AVG ( ) Function


The AVG () function returns the average value of a numeric column.

SQL AVG () Syntax


SELECT AVG (column_name) FROM table_name
SQL AVG() Example

The following SQL statement gets the average value of the "Price" column from the
"Products" table:

SELECT AVG(Price) AS PriceAverage FROM Products;

The COUNT ( ) Function


The COUNT() function returns the number of rows that matches a specified criteria.

SQL COUNT(column_name) Syntax

The COUNT(column_name) function returns the number of values (NULL values will not be
counted) of the specified column:

SELECT COUNT(column_name) FROM table_name;


SQL COUNT(*) Syntax

The COUNT(*) function returns the number of records in a table:

SELECT COUNT(*) FROM table_name;

SQL COUNT(*) Example

The following SQL statement counts the total number of orders in the "Orders" table:

SELECT COUNT(*) AS NumberOfOrders FROM Orders;

The MAX ( ) Function


The MAX() function returns the largest value of the selected column.

SQL MAX() Syntax


SELECT MAX(column_name) FROM table_name;

SQL MAX() Example

The following SQL statement gets the largest value of the "Price" column from the "Products"
table:
SELECT MAX(Price) AS HighestPrice FROM Products;

The MIN ( ) Function


The MIN() function returns the smallest value of the selected column.

SQL MIN() Syntax


SELECT MIN(column_name) FROM table_name;

SQL MIN() Example

The following SQL statement gets the smallest value of the "Price" column from the "Products"
table:

SELECT MIN(Price) AS SmallestOrderPrice FROM Products;

The ROUND ( ) Function


The ROUND () function is used to round a numeric field to the number of decimals specified.

SQL ROUND () Syntax


SELECT ROUND(column_name,decimals) FROM table_name;

Parameter Description
column_name Required. The field to round.
decimals Required. Specifies the number of decimals to be returned

SQL ROUND () Example

The following SQL statement selects the product name and rounds the price in the "Products"
table:

SELECT ProductName, ROUND(Price,0) AS RoundedPrice


FROM Products;

The SUM ( ) Function


The SUM() function returns the total sum of a numeric column.

SQL SUM() Syntax


SELECT SUM(column_name) FROM table_name;

SQL SUM() Example

The following SQL statement finds the sum of all the "Quantity" fields for the "OrderDetails"
table:

SELECT SUM(Quantity) AS TotalItemsOrdered FROM OrderDetails;
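The aggregate functions above can all be exercised against one small table. The sketch below is illustrative, assuming Python's sqlite3 module in place of MySQL; the `Products` table and `Price` column follow the examples above, and the three product rows are assumed sample data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Products (ProductName TEXT, Price REAL)")
conn.executemany("INSERT INTO Products VALUES (?, ?)",
                 [("Chai", 18.0), ("Chang", 19.0), ("Aniseed Syrup", 10.0)])

# COUNT, MIN, MAX, SUM, AVG and ROUND in a single query
row = conn.execute(
    "SELECT COUNT(*), MIN(Price), MAX(Price), SUM(Price), "
    "       AVG(Price), ROUND(AVG(Price), 1) "
    "FROM Products").fetchone()

count, min_p, max_p, sum_p, avg_p, avg_rounded = row
print(count, min_p, max_p, sum_p, avg_rounded)  # 3 10.0 19.0 47.0 15.7
```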

Date Functions
The following table lists the most important built-in date functions.

Function Description
NOW() Returns the current date and time
CURDATE() Returns the current date
CURTIME() Returns the current time
DATE() Extracts the date part of a date or date/time expression
EXTRACT() Returns a single part of a date/time
DATE_ADD() Adds a specified time interval to a date
DATE_SUB() Subtracts a specified time interval from a date
DATEDIFF() Returns the number of days between two dates
DATE_FORMAT() Displays date/time data in different formats

Date Data Types

 DATE - format YYYY-MM-DD


 DATETIME - format: YYYY-MM-DD HH:MI:SS
 TIMESTAMP - format: YYYY-MM-DD HH:MI:SS
 YEAR - format YYYY or YY

NOW ( ) Function
NOW() returns the current date and time.

Syntax

NOW()

Example
The following SELECT statement:

SELECT NOW(), CURDATE(), CURTIME()

will result in something like this:

NOW()                CURDATE()    CURTIME()

2014-11-22 12:45:34  2014-11-22   12:45:34

DATE () Function

The DATE () function extracts the date part of a date or date/time expression.

Syntax
DATE (date)
Example

Assume we have the following "Orders" table:

OrderId ProductName OrderDate


1 Jarlsberg Cheese 2014-11-22 13:23:44.657

The following SELECT statement:

SELECT ProductName, DATE(OrderDate) AS OrderDate


FROM Orders
WHERE OrderId=1

Will result in this:

ProductName OrderDate
Jarlsberg Cheese 2014-11-22

EXTRACT () Function
The EXTRACT () function is used to return a single part of a date/time, such as year, month,
day, hour, minute, etc.

Syntax
EXTRACT(unit FROM date)
Example

Assume we have the following "Orders" table:

OrderId ProductName OrderDate


1 Jarlsberg Cheese 2014-11-22 13:23:44.657

The following SELECT statement:

SELECT EXTRACT(YEAR FROM OrderDate) AS OrderYear,


EXTRACT(MONTH FROM OrderDate) AS OrderMonth,
EXTRACT(DAY FROM OrderDate) AS OrderDay
FROM Orders
WHERE OrderId=1

Will result in this:

OrderYear  OrderMonth  OrderDay

2014       11          22

DATE_ADD () Function
The DATE_ADD () function adds a specified time interval to a date.

Syntax
DATE_ADD(date,INTERVAL expr type)

Where date is a valid date expression and expr is the number of intervals (of the given type) you want to add.

Example

Assume we have the following "Orders" table:

OrderId ProductName OrderDate


1 Jarlsberg Cheese 2014-11-22 13:23:44.657

Now we want to add 30 days to the "OrderDate", to find the payment date.

We use the following SELECT statement:


SELECT OrderId,DATE_ADD(OrderDate,INTERVAL 30 DAY) AS OrderPayDate
FROM Orders

Result:

OrderId OrderPayDate
1 2014-12-22 13:23:44.657

DATE_SUB () Function
The DATE_SUB () function subtracts a specified time interval from a date.

Syntax
DATE_SUB(date,INTERVAL expr type)

Where date is a valid date expression and expr is the number of intervals (of the given type) you want to subtract.

Example

Assume we have the following "Orders" table:

OrderId ProductName OrderDate


1 Jarlsberg Cheese 2014-11-22 13:23:44.657

Now we want to subtract 5 days from the "OrderDate" date.

We use the following SELECT statement:

SELECT OrderId, DATE_SUB (OrderDate, INTERVAL 5 DAY) AS SubtractDate


FROM Orders

Result:

OrderId SubtractDate
1 2014-11-17 13:23:44.657
DATEDIFF () Function
The DATEDIFF () function returns the number of days between two dates.

Syntax
DATEDIFF (date1,date2)

Where date1 and date2 are valid date or date/time expressions.

Example

The following SELECT statement:

SELECT DATEDIFF ('2014-11-30','2014-11-29') AS DiffDate

Will result in this:

DiffDate
1
Example

The following SELECT statement:

SELECT DATEDIFF('2014-11-29','2014-11-30') AS DiffDate

Will result in this:

DiffDate
-1

DATE_FORMAT ( ) Function
The DATE_FORMAT () function is used to display date/time data in different formats.

Syntax
DATE_FORMAT (date,format)

Where date is a valid date and format specifies the output format for the date/time.
Example

The following script uses the DATE_FORMAT () function to display different formats. We will
use the NOW () function to get the current date/time:

DATE_FORMAT(NOW(),'%b %d %Y %h:%i %p')


DATE_FORMAT(NOW(),'%m-%d-%Y')
DATE_FORMAT(NOW(),'%d %b %y')
DATE_FORMAT(NOW(),'%d %b %Y %T:%f')

The result would look something like this:

Nov 04 2014 11:45 PM


11-04-2014
04 Nov 14
04 Nov 2014 11:45:34:243
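The date functions above are MySQL-specific, but their effects can be approximated elsewhere. The sketch below assumes SQLite (via Python's sqlite3 module), where `date()` plays the role of DATE(), `strftime()` of EXTRACT()/DATE_FORMAT(), date modifiers of DATE_ADD(), and a `julianday()` difference of DATEDIFF(); the sample dates come from the examples above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# DATE(): extract the date part of a date/time expression
d = conn.execute("SELECT date('2014-11-22 13:23:44')").fetchone()[0]

# strftime() stands in for EXTRACT(YEAR FROM ...)
year = conn.execute("SELECT strftime('%Y', '2014-11-22')").fetchone()[0]

# date modifiers stand in for DATE_ADD(OrderDate, INTERVAL 30 DAY)
pay = conn.execute("SELECT date('2014-11-22', '+30 days')").fetchone()[0]

# julianday() difference stands in for DATEDIFF('2014-11-30','2014-11-29')
diff = conn.execute(
    "SELECT CAST(julianday('2014-11-30') - julianday('2014-11-29') AS INTEGER)"
).fetchone()[0]

print(d, year, pay, diff)  # 2014-11-22 2014 2014-12-22 1
```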

Conclusion: Implemented aggregate and date/time functions in SQL.


ASSIGNMENT 4

Aim: Implement nested sub queries. Perform a test for set membership (in, not in), set
comparison (<some, >=some, <all etc.) and set cardinality (unique, not unique).

Objective:
1. To learn nested sub queries using set comparison and set cardinality operators.

Theory:

Nested Sub Query

Definition of Nested Sub Query:-

A Sub query or Inner query or Nested query is a query within another SQL query and
embedded within the WHERE clause. A sub query is used to return data that will be used in
the main query as a condition to further restrict the data to be retrieved.

Sub queries can be used with the SELECT, INSERT, UPDATE, and DELETE statements
along with the operators like =, <, >, >=, <=, IN, BETWEEN etc.

A sub query can be nested inside other sub queries; SQL has the ability to nest queries within
one another. A sub query is a SELECT statement that is nested within another SELECT
statement and which returns intermediate results. SQL executes the innermost sub query first,
then the next level.

Sub queries with the SELECT Statement:


Sub queries are most frequently used with the SELECT statement.

The basic syntax is as follows:

SELECT column_name [, column_name ]


FROM table1 [, table2]
WHERE column_name OPERATOR
(SELECT column_name [, column_name
] FROM table1 [, table2]
[WHERE])
Example of IN Operator
Consider the CUSTOMERS table having the following records:

+----+----------+-----+-----------+----------+
| ID | NAME     | AGE | ADDRESS   | SALARY   |
+----+----------+-----+-----------+----------+
|  1 | Ramesh   |  35 | Ahmedabad |  2000.00 |
|  2 | Khilan   |  25 | Delhi     |  1500.00 |
|  3 | kaushik  |  23 | Kota      |  2000.00 |
|  4 | Chaitali |  25 | Mumbai    |  6500.00 |
|  5 | Hardik   |  27 | Bhopal    |  8500.00 |
|  6 | Komal    |  22 | MP        |  4500.00 |
|  7 | Muffy    |  24 | Indore    | 10000.00 |
+----+----------+-----+-----------+----------+

Now, let us check following sub query with SELECT statement:

SELECT *
FROM CUSTOMERS
WHERE ID IN (SELECT ID
FROM CUSTOMERS
WHERE SALARY > 4500);

This would produce the following result:

+----+----------+-----+---------+----------+
| ID | NAME     | AGE | ADDRESS | SALARY   |
+----+----------+-----+---------+----------+
|  4 | Chaitali |  25 | Mumbai  |  6500.00 |
|  5 | Hardik   |  27 | Bhopal  |  8500.00 |
|  7 | Muffy    |  24 | Indore  | 10000.00 |
+----+----------+-----+---------+----------+

Example of NOT IN Operator

Now, let us check following sub query with SELECT statement:

SELECT *
FROM CUSTOMERS
WHERE ID NOT IN (SELECT ID
FROM CUSTOMERS
WHERE SALARY > 4500);
This would produce the following result:

+----+---------+-----+-----------+---------+
| ID | NAME    | AGE | ADDRESS   | SALARY  |
+----+---------+-----+-----------+---------+
|  1 | Ramesh  |  35 | Ahmedabad | 2000.00 |
|  2 | Khilan  |  25 | Delhi     | 1500.00 |
|  3 | kaushik |  23 | Kota      | 2000.00 |
|  6 | Komal   |  22 | MP        | 4500.00 |
+----+---------+-----+-----------+---------+

Sub queries with the INSERT Statement:


Sub queries also can be used with INSERT statements. The INSERT statement uses the data
returned from the sub query to insert into another table. The selected data in the sub query can
be modified with any of the character, date or number functions.

Example:
Consider a table CUSTOMERS_BKP with similar structure as CUSTOMERS table. Now to
copy complete CUSTOMERS table into CUSTOMERS_BKP, following is the syntax:

INSERT INTO CUSTOMERS_BKP SELECT * FROM CUSTOMERS WHERE ID IN (SELECT


ID FROM CUSTOMERS);

Sub queries with the UPDATE Statement:


The sub query can be used in conjunction with the UPDATE statement. Either single or
multiple columns in a table can be updated when using a sub query with the UPDATE
statement.

Example:
Assuming, we have CUSTOMERS_BKP table available which is backup of CUSTOMERS
table.

The following example sets SALARY to 0.25 times its old value in the CUSTOMERS table for
all the customers whose AGE is greater than or equal to 27:

UPDATE CUSTOMERS
SET SALARY = SALARY * 0.25
WHERE AGE IN (SELECT AGE FROM CUSTOMERS_BKP
WHERE AGE >= 27);
This would impact two rows and finally CUSTOMERS table would have the following records:

+----+----------+-----+-----------+----------+
| ID | NAME     | AGE | ADDRESS   | SALARY   |
+----+----------+-----+-----------+----------+
|  1 | Ramesh   |  35 | Ahmedabad |   500.00 |
|  2 | Khilan   |  25 | Delhi     |  1500.00 |
|  3 | kaushik  |  23 | Kota      |  2000.00 |
|  4 | Chaitali |  25 | Mumbai    |  6500.00 |
|  5 | Hardik   |  27 | Bhopal    |  2125.00 |
|  6 | Komal    |  22 | MP        |  4500.00 |
|  7 | Muffy    |  24 | Indore    | 10000.00 |
+----+----------+-----+-----------+----------+

Sub queries with the DELETE Statement:

The sub query can be used in conjunction with the DELETE statement like with any other
statements mentioned above.

Example:
Assuming, we have CUSTOMERS_BKP table available which is backup of CUSTOMERS
table.

Following example deletes records from CUSTOMERS table for all the customers whose AGE
is greater than or equal to 27:

DELETE FROM CUSTOMERS


WHERE AGE IN (SELECT AGE FROM CUSTOMERS_BKP
WHERE AGE >= 27);

This would impact two rows and finally CUSTOMERS table would have the following records:

+----+----------+-----+---------+----------+
| ID | NAME     | AGE | ADDRESS | SALARY   |
+----+----------+-----+---------+----------+
|  2 | Khilan   |  25 | Delhi   |  1500.00 |
|  3 | kaushik  |  23 | Kota    |  2000.00 |
|  4 | Chaitali |  25 | Mumbai  |  6500.00 |
|  6 | Komal    |  22 | MP      |  4500.00 |
|  7 | Muffy    |  24 | Indore  | 10000.00 |
+----+----------+-----+---------+----------+
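The IN and NOT IN set-membership queries above can be replayed directly. The sketch below is an illustration using Python's sqlite3 module in place of MySQL; the CUSTOMERS table and the SALARY > 4500 sub query are taken from the examples above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE CUSTOMERS "
             "(ID INTEGER, NAME TEXT, AGE INTEGER, ADDRESS TEXT, SALARY REAL)")
conn.executemany("INSERT INTO CUSTOMERS VALUES (?, ?, ?, ?, ?)",
    [(1, "Ramesh", 35, "Ahmedabad", 2000), (2, "Khilan", 25, "Delhi", 1500),
     (3, "kaushik", 23, "Kota", 2000), (4, "Chaitali", 25, "Mumbai", 6500),
     (5, "Hardik", 27, "Bhopal", 8500), (6, "Komal", 22, "MP", 4500),
     (7, "Muffy", 24, "Indore", 10000)])

# set membership: IDs of customers earning more than 4500
high = [r[0] for r in conn.execute(
    "SELECT ID FROM CUSTOMERS WHERE ID IN "
    "(SELECT ID FROM CUSTOMERS WHERE SALARY > 4500) ORDER BY ID")]

# negated membership: everyone else
low = [r[0] for r in conn.execute(
    "SELECT ID FROM CUSTOMERS WHERE ID NOT IN "
    "(SELECT ID FROM CUSTOMERS WHERE SALARY > 4500) ORDER BY ID")]

print(high)  # [4, 5, 7]
print(low)   # [1, 2, 3, 6]
```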

Conclusion: Implemented nested sub queries.


ASSIGNMENT 5

Aim: Study and implementation of database MySQL Triggers


Objectives: To understand the concept of database MySQL Triggers

Theory:

1) Introduction to MYSQL Trigger

What is a Trigger?

A trigger is a MySQL block structure that is fired when a DML statement like INSERT,
DELETE, or UPDATE is executed on a database table. A trigger is invoked automatically when
its associated DML statement is executed.

2) Types of Triggers

There are two types of triggers, based on the level at which they are triggered:

1) Row level trigger - An event is triggered for each row updated, inserted or deleted.
2) Statement level trigger - An event is triggered for each SQL statement executed.

Trigger Execution Hierarchy

The following hierarchy is followed when a trigger is fired.

1) BEFORE statement trigger fires first.


2) Next the BEFORE row level trigger fires, once for each row affected.
3) Then the AFTER row level trigger fires, once for each affected row. These events
alternate between the BEFORE and AFTER row level triggers.
4) Finally the AFTER statement level trigger fires.

Syntax of Triggers

The Syntax for creating a trigger is:

CREATE TRIGGER trigger_name


{BEFORE | AFTER | INSTEAD OF}
{INSERT [OR] | UPDATE [OR] | DELETE}
[OF col_name]
ON table_name
[REFERENCING OLD AS o NEW AS n]
[FOR EACH ROW]
WHEN
(condition)
BEGIN
--- sql statements
END;

 CREATE TRIGGER trigger_name - This clause creates a trigger with the given name or
overwrites an existing trigger with the same name.
 {BEFORE | AFTER | INSTEAD OF } - This clause indicates at what time the trigger should
get fired, for example before or after updating a table. INSTEAD OF is used
to create a trigger on a view; BEFORE and AFTER cannot be used to create a trigger on a
view.
 {INSERT [OR] | UPDATE [OR] | DELETE} - This clause determines the triggering
event. More than one triggering events can be used together separated by OR keyword.
The trigger gets fired at all the specified triggering event.
 [OF col_name] - This clause is used with update triggers. This clause is used when you
want to trigger an event only when a specific column is updated.
 [ON table_name] - This clause identifies the name of the table or view to which the
trigger is associated.
 [REFERENCING OLD AS o NEW AS n] - This clause is used to reference the old and
new values of the data being changed. By default, you reference the values as
:old.column_name or :new.column_name. The reference names can also be changed
from old (or new) to any other user-defined name. You cannot reference old values
when inserting a record, or new values when deleting a record, because they do not
exist.
 [FOR EACH ROW] - This clause is used to determine whether a trigger must fire when
each row gets affected ( i.e. a Row Level Trigger) or just once when the entire sql
statement is executed(i.e.statement level Trigger).
 WHEN (condition) - This clause is valid only for row level triggers. The trigger is fired
only for rows that satisfy the condition specified
Trigger Examples
Example 1)

This example is based on the following two tables:

CREATE TABLE T4 ( a INTEGER , b CHAR(10));

CREATE TABLE T5 ( c CHAR(10) , d INTEGER);

-- Create a trigger that inserts a tuple into T5 whenever a tuple is inserted into T4; it inserts
the reversed tuple into T5:

1) Create trigger as follows:

CREATE TRIGGER trig1 AFTER INSERT ON T4

FOR EACH ROW BEGIN

INSERT INTO t5 SET c = NEW.b,d = NEW.a;

END;

2) Insert values in T4.

3) Check the values in T5.

Example2)

1)The price of a product changes constantly. It is important to maintain the history of the
prices of the products. Create a trigger to update the 'product_price_history' table when the
price of the product is updated in the 'product' table.

Create the 'product' and 'product_price_history' tables:

CREATE TABLE product_price_history
(product_id INT,
product_name VARCHAR(32),
supplier_name VARCHAR(32),
unit_price DECIMAL(7,2));

CREATE TABLE product
(product_id INT,
product_name VARCHAR(32),
supplier_name VARCHAR(32),
unit_price DECIMAL(7,2));
drop trigger if exists price_history_trigger;

CREATE TRIGGER

price_history_trigger BEFORE UPDATE

on product

FOR EACH ROW BEGIN

INSERT INTO product_price_history

set product_id=old.product_id,

product_name=old.product_name,

supplier_name=old.supplier_name,

unit_price=old.unit_price;

END;

Now let's update the price of a product:

UPDATE product SET unit_price = 800 WHERE product_id = 100;

Once the above update query is executed, the trigger fires and inserts the old row into the
'product_price_history' table.

-------------------------------------------------------------------------------------------------------
Example 3
create table account(accno int,amount int)

Create a trigger on the account table, fired before update: if the newly supplied amount is less
than 0 then set the amount to 0; else if the amount is greater than 100 then set it to 100.

CREATE TRIGGER upd_check BEFORE UPDATE ON account

FOR EACH ROW

BEGIN

IF NEW.amount < 0 THEN


SET NEW.amount = 0;
ELSEIF NEW.amount > 100 THEN

SET NEW.amount = 100;

END IF;

END

update account set amount = -12 where accno = 101;
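The clamping trigger of Example 3 can be reproduced for experimentation. The sketch below assumes Python's sqlite3 module; note that SQLite triggers cannot assign to NEW.amount the way MySQL's `SET NEW.amount = 0` does, so as an approximation the clamp is done by an AFTER UPDATE trigger that rewrites the row (recursive triggers are off by default in SQLite, so the inner UPDATE does not re-fire the trigger).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (accno INTEGER, amount INTEGER)")
conn.execute("INSERT INTO account VALUES (101, 50)")

# clamp out-of-range amounts into [0, 100] after each update
conn.executescript("""
CREATE TRIGGER upd_check AFTER UPDATE ON account
FOR EACH ROW WHEN NEW.amount < 0 OR NEW.amount > 100
BEGIN
    UPDATE account
    SET amount = CASE WHEN NEW.amount < 0 THEN 0 ELSE 100 END
    WHERE accno = NEW.accno;
END;
""")

conn.execute("UPDATE account SET amount = -12 WHERE accno = 101")
clamped_low = conn.execute(
    "SELECT amount FROM account WHERE accno = 101").fetchone()[0]

conn.execute("UPDATE account SET amount = 500 WHERE accno = 101")
clamped_high = conn.execute(
    "SELECT amount FROM account WHERE accno = 101").fetchone()[0]

print(clamped_low, clamped_high)  # 0 100
```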

Deleting a trigger
DROP TRIGGER
Name
DROP TRIGGER -- Removes a trigger definition from a database.
Synopsis
DROP TRIGGER name ON table

Parameters
name

The name of the trigger you wish to remove.

table
The name of the table the trigger is on.


Conclusion: Studied and implemented MySQL Triggers

ASSIGNMENT 6
Aim: Write and execute PL/SQL stored procedure and function to perform a suitable task
on the database. Demonstrate its use.

Objective: 1) To understand the differences between procedure and function

2) To understand commands related to procedure and function

Theory:
A subprogram is a program unit/module that performs a particular task. These subprograms are
combined to form larger programs. This is basically called the 'Modular design'. A subprogram
can be invoked by another subprogram or program which is called the calling program.
A subprogram can be created:

 At schema level
 Inside a package
 Inside a MySQL block

Parts of a MySQL Subprogram

Each MySQL subprogram has a name, and may have a parameter list. Like anonymous
PL/SQL blocks, named subprograms also have the following three parts:
1. Declarative Part
2. Executable part
3. Exception-handling

What is procedure? How to create it?

Procedures: these subprograms do not return a value directly; they are mainly used to perform
an action.

Creating a Procedure
A procedure is created with the CREATE OR REPLACE PROCEDURE statement. The
simplified syntax for the CREATE OR REPLACE PROCEDURE statement is as follows:

CREATE [OR REPLACE] PROCEDURE procedure_name


[(parameter_name [IN | OUT | IN OUT] type [, ...])]

BEGIN
< procedure_body >
END ;

Where,
procedure-name specifies the name of the procedure.
[OR REPLACE] option allows modifying an existing procedure.

The optional parameter list contains name, mode and types of the parameters. IN represents,
that value will be passed from outside and OUT represents that this parameter will be used to
return a value outside of the procedure.

Procedure-body contains the executable part.


(The IS keyword is used for creating a standalone procedure in Oracle PL/SQL; MySQL does not use it.)
The following example creates a simple procedure that displays the string 'Hello World!' on the
screen when executed.

DELIMITER //
CREATE PROCEDURE greeting()
SELECT CONCAT('Hello World!') //
DELIMITER ;

When the above code is executed at the SQL prompt, it produces the following result:

Query OK

(2) How to execute a procedure?

Executing a Standalone Procedure


 Call the procedure by name from the SQL prompt:

CALL greeting();

Output:
Hello World!

Deleting a Standalone Procedure


A standalone procedure is deleted with the DROP PROCEDURE statement. Syntax for deleting
a procedure is:

DROP PROCEDURE procedure-name;

So you can drop the greeting procedure by using the following statement:

DROP PROCEDURE greeting;

Parameter modes in PL/SQL subprograms:


1. IN:
An IN parameter lets you pass a value to the subprogram.
It is a read-only parameter.
It is the default mode of parameter passing.
Parameters are passed by reference.

2. OUT:
An OUT parameter returns a value to the calling program.
The actual parameter must be variable and it is passed by value.

3. IN-OUT:
An IN OUT parameter passes an initial value to a subprogram and returns an updated value to
the caller.
Actual parameter is passed by value.
IN & OUT Mode Example 1
The first procedure below uses only local variables; the second takes two numbers using IN
mode and returns a comparison result through an OUT parameter.
delimiter $
create procedure addp()
begin
declare a,b,c int;
set a=2;
set b=3;
set c=a+b;
select concat('value',c);
end;
$

delimiter ;
call addp();
Result: value 5

mysql> delimiter //
mysql> create procedure difference (in a int, in b int, out c int)
    -> begin
    ->   if a > b then
    ->     set c = 1;
    ->   elseif a = b then
    ->     set c = 2;
    ->   else
    ->     set c = 3;
    ->   end if;
    -> end //
Query OK, 0 rows affected (0.00 sec)

mysql> call difference(5,9,@x) //
mysql> select @x //

+------+
| @x   |
+------+
|    3 |
+------+

PROCEDURES ON TABLES
To run procedures on a table, let us first create a sample table and insert some values into it.
mysql> create table student
-> ( sid int(5) not null,
-> student_name varchar(9),
-> DOB date,
-> primary key(sid));
Query OK, 0 rows affected (0.06 sec)

mysql> insert into student values(5,'Harry',20130412);


Query OK, 1 row affected (0.03 sec)

mysql> insert into student values(6,'Jhon',20100215);


Query OK, 1 row affected (0.03 sec)

mysql> insert into student values(7,'Mary',20140516);


Query OK, 1 row affected (0.03 sec)

mysql> insert into student values(8,'Kay',20131116);


Query OK, 1 row affected (0.01 sec)

mysql> select * from student;


+-----+--------------+------------+
| sid | student_name | DOB        |
+-----+--------------+------------+
|   5 | Harry        | 2013-04-12 |
|   6 | Jhon         | 2010-02-15 |
|   7 | Mary         | 2014-05-16 |
|   8 | Kay          | 2013-11-16 |
+-----+--------------+------------+
4 rows in set (0.00 sec)

Q] Write a procedure to display sid & student_name.

mysql> delimiter //
mysql> create procedure myprocedure()
-> select sid,student_name from student
-> //
Query OK, 0 rows affected (0.55 sec)

mysql> call myprocedure()//


+-----+--------------+
| sid | student_name |
+-----+--------------+
|   5 | Harry        |
|   6 | Jhon         |
|   7 | Mary         |
|   8 | Kay          |
+-----+--------------+
4 rows in set (0.00 sec)
Q] Write a procedure which gets the name of the student when the student id is passed.

mysql> create procedure stud(IN id INT(5),OUT name varchar(9))


-> begin
-> select student_name into name
-> from student
-> where sid=id;
-> end//
Query OK, 0 rows affected (0.01 sec)

mysql> call stud(5,@x)//


Query OK, 0 rows affected (0.00 sec)

mysql> select @x//


+-------+
| @x    |
+-------+
| Harry |
+-------+
1 row in set (0.00 sec)

mysql> call stud(7,@x)//


Query OK, 0 rows affected (0.00 sec)

mysql> select @x//


+------+
| @x   |
+------+
| Mary |
+------+
1 row in set (0.00 sec)

mysql> call stud(5,@x);
    -> select @x;
    -> //
Query OK, 0 rows affected (0.00 sec)

+-------+
| @x    |
+-------+
| Harry |
+-------+
Q] Write a procedure cleanup() to delete all the students records from student table.

mysql> create procedure cleanup()


-> delete from student;
-> //
Query OK, 0 rows affected (0.00 sec)

mysql> call cleanup()//


Query OK, 4 rows affected (0.03 sec)

mysql> select * from student;//


Empty set (0.00 sec)

2. FUNCTIONS

Functions: these subprograms compute and return a single value.
Creating a Function:

A standalone function is created using the CREATE FUNCTION statement. The simplified
syntax for the CREATE FUNCTION statement is as follows:

CREATE FUNCTION function_name
    [(parameter_name type [, ...])]
RETURNS return_datatype
BEGIN
    < function_body >
    RETURN value;
END

Where,

function_name specifies the name of the function.

The optional parameter list contains the name and type of each parameter. In MySQL, all
function parameters are treated as IN parameters: values are passed in from outside, and the
result is handed back through the RETURN statement.
The function must contain a RETURN statement.
The RETURNS clause specifies the data type you are going to return from the function.

function_body contains the executable part.
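One practical note (our addition, not part of the syntax above): when binary logging is enabled, MySQL refuses to create a stored function unless it is declared DETERMINISTIC, NO SQL, or READS SQL DATA (error 1418), or unless log_bin_trust_function_creators is set. A minimal sketch with an illustrative function name:

```sql
DELIMITER //
CREATE FUNCTION square(n INT) RETURNS INT DETERMINISTIC
    RETURN n * n;
//
DELIMITER ;

SELECT square(6);    -- returns 36
```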


Example:

The following example illustrates creating and calling a standalone function. The function takes
a name and returns a greeting string built with CONCAT.

mysql> delimiter &
mysql> create function hello(s char(20))
    -> returns char(50) deterministic
    -> return concat('Hello, ', s, '!');
    -> &
When the above code is executed at the MySQL prompt, it produces the following result:

Query OK, 0 rows affected (0.01 sec)

Calling a Function
While creating a function, you give a definition of what the function has to do. T o use a
function, you will have to call that function to perform the defined task. When a program calls a
function, program control is transferred to the called function.
A called function performs defined task and when its return statement is executed or when it last
end statement is reached, it returns program control back to the main program.
T o call a function you simply need to pass the required parameters along with function name
and if function returns a value then you can store returned value. Following program calls the
function from an anonymous block:

mysql> select hello('world');

When the above code is executed at the MySQL prompt, it produces the following result:

Hello, world!


mysql> delimiter *
mysql> create function add1(a int, b int) returns int
    -> return (a+b);
    -> *
Query OK, 0 rows affected (0.00 sec)

mysql> select add1(10,20); *

+-------------+
| add1(10,20) |
+-------------+
|          30 |
+-------------+
1 row in set (0.02 sec)

Example:

The following is one more example, which demonstrates declaring, defining, and invoking a
simple MySQL function that computes and returns the maximum of three values.

mysql> delimiter //
mysql> CREATE FUNCTION grt(a INT,b INT,c INT) RETURNS INT
    -> BEGIN
    -> if a>=b AND a>=c then
    -> RETURN a;
    -> end if;
    -> if b>=c then
    -> RETURN b;
    -> end if;
    -> RETURN c;
    -> end;
    -> //
Query OK, 0 rows affected (0.12 sec)

mysql> select grt(23,78,98);


-> //
+---------------+
| grt(23,78,98) |
+---------------+
|            98 |
+---------------+
1 row in set (0.05 sec)

mysql> select grt(23,98,72);


-> //
+---------------+
| grt(23,98,72) |
+---------------+
|            98 |
+---------------+
1 row in set (0.01 sec)

mysql> select grt(45,2,3); //


+-------------+
| grt(45,2,3) |
+-------------+
|          45 |
+-------------+
1 row in set (0.00 sec)

mysql> delimiter //
mysql> CREATE FUNCTION odd_even(a INT) RETURNS varchar(20)
-> BEGIN
-> if a%2=0 then
-> RETURN 'even';
-> end if;
-> RETURN 'odd';
-> end;
-> //
Query OK, 0 rows affected (0.06 sec)

mysql> select odd_even(54);


-> //
+--------------+
| odd_even(54) |
+--------------+
| even         |
+--------------+
1 row in set (0.03 sec)

mysql> select odd_even(51); //


+--------------+
| odd_even(51) |
+--------------+
| odd          |
+--------------+
1 row in set (0.00 sec)
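A stored function can also read from a table. This sketch (the function name stud_count is our own) returns the number of rows in the student table created earlier:

```sql
DELIMITER //
CREATE FUNCTION stud_count() RETURNS INT READS SQL DATA
BEGIN
    DECLARE n INT;
    SELECT COUNT(*) INTO n FROM student;
    RETURN n;
END //
DELIMITER ;

SELECT stud_count();
```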

Conclusion: Performed implementation of procedures and functions in MySQL successfully.


ASSIGNMENT 7

Aim: Write a PL/SQL block to implement cursors.

Objective: 1] to understand the basic concept of cursors used in PL/SQL

Theory:
1] Cursor and its use:
 When Oracle processes an SQL statement, it creates a memory area known as the context
area; a cursor is a pointer to this context area. PL/SQL controls the context area through a cursor.
 A cursor holds the rows (one or more) returned by a SQL statement.
 The set of rows the cursor holds is referred to as the active set.

2] Types of cursors:

 Implicit cursors:
Implicit cursors are automatically created by Oracle whenever an SQL statement is executed
and there is no explicit cursor for the statement. Programmers cannot control implicit cursors
or the information in them.

Whenever a DML statement (INSERT, UPDATE or DELETE) is issued, an implicit cursor is
associated with the statement. For INSERT operations, the cursor holds the data that needs to be
inserted. For UPDATE and DELETE operations, the cursor identifies the rows that would be
affected.

Attribute    Description

%FOUND      Returns TRUE if an INSERT, UPDATE, or DELETE statement affected one or
            more rows, or a SELECT INTO statement returned one or more rows.
            Otherwise, it returns FALSE.

%NOTFOUND   The logical opposite of %FOUND. It returns TRUE if an INSERT, UPDATE, or
            DELETE statement affected no rows, or a SELECT INTO statement returned
            no rows. Otherwise, it returns FALSE.

%ISOPEN     Always returns FALSE for implicit cursors, because Oracle closes the SQL
            cursor automatically after executing its associated SQL statement.

%ROWCOUNT   Returns the number of rows affected by an INSERT, UPDATE, or DELETE
            statement, or returned by a SELECT INTO statement.
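A minimal Oracle PL/SQL sketch showing these attributes in use; the customers table and its columns are illustrative, not taken from the assignments above:

```sql
BEGIN
    UPDATE customers SET salary = salary + 500 WHERE id = 6;
    -- SQL refers to the implicit cursor of the most recent DML statement
    IF SQL%FOUND THEN
        dbms_output.put_line(SQL%ROWCOUNT || ' row(s) updated');
    ELSE
        dbms_output.put_line('no matching customer');
    END IF;
END;
/
```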
 Explicit cursors
Explicit cursors are programmer-defined cursors for gaining more control over the context area.
An explicit cursor should be defined in the declaration section of the PL/SQL block.
The syntax for creating an explicit cursor is:
CURSOR cursor_name IS select_statement;
Working with an explicit cursor involves four steps:
Declaring the cursor for initializing it in memory
Opening the cursor for allocating memory
Fetching the cursor for retrieving data
Closing the cursor to release allocated memory
Declaring the Cursor
Declaring the cursor defines the cursor with a name and the associated SELECT statement.
For example:
CURSOR c_customers IS
SELECT id, name, address FROM customers;

Opening the Cursor


Opening the cursor allocates memory for the cursor and makes it ready for fetching the rows
returned by the SQL statement into it. For example, we will open above-defined cursor as
follows:

OPEN c_customers;

Fetching the Cursor


Fetching the cursor involves accessing one row at a time. For example, we will fetch rows from
the above-opened cursor as follows:

FETCH c_customers INTO c_id, c_name, c_addr;

Closing the Cursor


Closing the cursor means releasing the allocated memory. For example, we will close the
above-opened cursor as follows:

CLOSE c_customers;
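The four steps above can be combined into one Oracle PL/SQL block; the customers table from the declaration example is assumed:

```sql
DECLARE
    c_id   customers.id%TYPE;
    c_name customers.name%TYPE;
    c_addr customers.address%TYPE;
    CURSOR c_customers IS
        SELECT id, name, address FROM customers;      -- declare
BEGIN
    OPEN c_customers;                                 -- allocate memory
    LOOP
        FETCH c_customers INTO c_id, c_name, c_addr;  -- retrieve one row
        EXIT WHEN c_customers%NOTFOUND;
        dbms_output.put_line(c_id || ' ' || c_name || ' ' || c_addr);
    END LOOP;
    CLOSE c_customers;                                -- release memory
END;
/
```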
Cursor Example
Example 1

Create a table emp_tbl as follows:

emp_tbl(first_name, last_name, salary)

Write a procedure with a cursor to display employees' first name and last name whose salary is
greater than 1000.

drop procedure if exists pcursor;

delimiter //

create procedure pcursor()
begin
  DECLARE done INT DEFAULT FALSE;
  DECLARE fn varchar(30);
  DECLARE ln varchar(30);
  DECLARE cur1 CURSOR FOR SELECT first_name, last_name FROM emp_tbl
      WHERE salary > 1000;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

  OPEN cur1;
  read_loop: LOOP
    FETCH cur1 INTO fn, ln;
    IF done THEN
      LEAVE read_loop;
    END IF;
    select concat(fn, ' ', ln) as name;
  END LOOP;
  CLOSE cur1;
END //

delimiter ;

Example 2
create table t1(id char(16), data int);

create table t2(i int);

create table t3(i1 char(16), i2 int);   -- t3 starts out empty

CREATE PROCEDURE curdemo()

BEGIN
DECLARE done INT DEFAULT FALSE;
DECLARE a CHAR(16);
DECLARE b, c INT;
DECLARE cur1 CURSOR FOR SELECT id,data FROM test.t1;
DECLARE cur2 CURSOR FOR SELECT i FROM test.t2;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
OPEN cur1;
OPEN cur2;
read_loop: LOOP
FETCH cur1 INTO a, b;
FETCH cur2 INTO c;
IF done THEN
LEAVE read_loop;
END IF;
IF b < c THEN
INSERT INTO test.t3 VALUES (a,b);
ELSE
INSERT INTO test.t3 VALUES (a,c);
END IF;
END LOOP;
CLOSE cur1;
CLOSE cur2;
END;

Conclusion: Thoroughly understood the basic concept of cursors used in PL/SQL.


ASSIGNMENT 8

Aim: Execute DDL statements which demonstrate the use of views. Try to update the base
table using its corresponding view. Also consider restrictions on updatable views and
perform view creation from multiple tables.

Objective: Understand the concept of view and perform various operations on view

Theory:

What is View?

In SQL, a view is a virtual table based on the result-set of an SQL statement.

A view contains rows and columns, just like a real table. The fields in a view are fields from
one or more real tables in the database.

You can add SQL functions, WHERE, and JOIN statements to a view and present the data as if
the data were coming from one single table.

CREATE VIEW Syntax

CREATE VIEW view_name AS


SELECT column1, column2, ...
FROM table_name
WHERE condition;

SQL CREATE VIEW Examples

If you have the Northwind database you can see that it has several views installed by default.

The view "Current Product List" lists all active products (products that are not discontinued)
from the "Products" table. The view is created with the following SQL:

CREATE VIEW [Current Product List] AS


SELECT ProductID, ProductName
FROM Products
WHERE Discontinued = No;

Then, we can query the view as follows:


SELECT * FROM [Current Product List];
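The aim also asks you to update the base table through its view. In MySQL, a view is updatable only if each view row maps one-to-one onto a row of a single base table (no DISTINCT, GROUP BY, aggregate functions, or UNION). A minimal sketch, with an assumed Employee table and view names of our own choosing:

```sql
CREATE VIEW emp_view AS
    SELECT First_Name, Last_Name, Salary FROM Employee;

-- Updating through the view changes the underlying Employee row
UPDATE emp_view SET Salary = 3000 WHERE First_Name = 'Sam';

-- A view like this is NOT updatable, because each output row no
-- longer corresponds to exactly one base-table row:
CREATE VIEW city_pay AS
    SELECT City, SUM(Salary) AS total FROM Employee GROUP BY City;
```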

MySQL Create View with JOIN

CREATE VIEW command can be used along with a JOIN statement.


Example :
Sample table : category
Sample table : purchase

 CREATE VIEW view_purchase


 AS SELECT a.cate_id,a.cate_descrip, b.invoice_no,
 b.invoice_dt,b.book_name
 FROM category a,purchase b
 WHERE a.cate_id=b.cate_id;

The above MySQL statement will create a view 'view_purchase' along with a JOIN statement.
The JOIN statement here retrieves cate_id, cate_descrip from category table and invoice_no,
invoice_dt and book_name from purchase table if cate_id of category table and that of purchase
are same.

MySQL Create View with LIKE

CREATE VIEW command can be used with LIKE operator.


Example :
Sample table : author
Code :

1. CREATE VIEW view_author


2. AS SELECT *
3. FROM author
4. WHERE aut_name
5. NOT LIKE 'T%' AND aut_name NOT LIKE 'W%';

The above MySQL statement will create a view 'view_author' taking all the records of author
table, if (A)name of the author (aut_name) does not start with 'T' and (B) name of the author
(aut_name) does not start with 'W'.

MySQL Create View using Subquery

CREATE VIEW command can be used with subqueries.


Example :
Sample table : purchase
Sample table : book_mast
Code :

1. CREATE VIEW view_purchase


2. AS SELECT invoice_no, book_name, cate_id
3. FROM purchase
4. WHERE cate_id = (SELECT cate_id FROM book_mast WHERE no_page=201);

Create table Employee(ID,First_Name,Last_Name,Start_Date,End_Date,Salary,City).

1. Create a simple view to display First_Name,Last_Name from employee.


2. Create a view to display First_Name, Last_Name of those employees whose salary
is greater than 2000 from the employee table.
3. Create a view to display first_name starting with "S" and last_name ending with "t".
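Possible solution sketches for the three tasks, assuming the Employee table defined above (the view names are our own):

```sql
-- 1. Simple view with two columns
CREATE VIEW emp_names AS
    SELECT First_Name, Last_Name FROM Employee;

-- 2. View restricted by a salary condition
CREATE VIEW emp_high_paid AS
    SELECT First_Name, Last_Name FROM Employee WHERE Salary > 2000;

-- 3. View using LIKE patterns on first and last name
CREATE VIEW emp_s_t AS
    SELECT * FROM Employee
    WHERE First_Name LIKE 'S%' AND Last_Name LIKE '%t';
```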

Conclusion: Implemented views and performed operations on views.


Group C:
MongoDB
ASSIGNMENT 1

Aim: -: Create a database with suitable example using MongoDB and implement Inserting,
updating, removing and saving document

Objective: Perform CRUD operations on a MongoDB database

What is MongoDB

MongoDB is an open-source document database that provides high performance, high


availability, and automatic scaling.

Document Database

A record in MongoDB is a document, which is a data structure composed of field and value
pairs. MongoDB documents are similar to JSON objects. The values of fields may include other
documents, arrays, and arrays of documents.
Figure shows a MongoDB document.

The advantages of using documents are:


• Documents (i.e. objects) correspond to native data types in many programming languages.
• Embedded documents and arrays reduce need for expensive joins.
• Dynamic schema supports fluent polymorphism.

Key Features
High Performance
MongoDB provides high performance data persistence. In particular:
• Support for embedded data models reduces I/O activity on the database system.
• Indexes support faster queries and can include keys from embedded documents and arrays.

High Availability
To provide high availability, MongoDB’s replication facility, called replica sets, provide:
• Automatic failover.
• Data redundancy.
A replica set is a group of MongoDB servers that maintain the same data set, providing
redundancy and increasing data availability.

Automatic Scaling

MongoDB provides horizontal scalability as part of its core functionality.


• Automatic sharding distributes data across a cluster of machines.
• Replica sets can provide eventually-consistent reads for low-latency high throughput
deployments.

Objective:

 In this assignment, we are creating a Teacher database, which contains Teacher_id, name
of the teacher, department of the teacher, salary and status of the teacher. Here status
indicates whether the teacher is approved by the university or not.
 Our main aim is to implement the DDL & DML queries on the Teacher database and to
show the difference between SQL commands and MongoDB commands.

SQL Vs MongoDB

SQL Concepts MongoDB Concepts

database database

table Collection
Row Document or BSON Document

Column Field
Index Index
Table Join Embedded documents & Linking
Primary key Primary Key

Specify any unique column or column In MongoDB, the primary key is automatically
combination as primary key. set to the _id field.

aggregation (e.g. group by) aggregation pipeline


Executables

Oracle MySQL MongoDB

Database Server oracle mysqld mongod

Database Client sqlplus mysql mongo

MongoDB: Creation of Document

{
  Teacher_id: "Pic001",
  Teacher_Name: "Ravi",
  Dept_Name: "IT",
  Sal: 30000,
  status: "A"
}
OR
db.createCollection("Teacher_info")
Insert Command:
db.Teacher_info.insert( { Teacher_id: "Pic001", Teacher_Name: "Ravi", Dept_Name: "IT",
Sal: 30000, status: "A" } )
db.Teacher_info.insert( { Teacher_id: "Pic002", Teacher_Name: "Ravi", Dept_Name: "IT",
Sal: 20000, status: "A" } )
db.Teacher_info.insert( { Teacher_id: "Pic003", Teacher_Name: "Akshay", Dept_Name:
"Comp", Sal: 25000, status: "N" } )

Retrieving data from Mongodb:

> db.Teacher_info.find()
{ "_id" : ObjectId("53a2d8ac8404f005f1acc666"), "Teacher_id" : "pic001", "Teacher_name" : "Ravi", "Dept_name" : "IT", "sal" : 20000, "status" : "A" }
{ "_id" : ObjectId("53a2d8fc8404f005f1acc667"), "Teacher_id" : "pic001", "Teacher_name" : "Ravi", "Dept_name" : "IT", "sal" : 20000, "status" : "A" }
{ "_id" : ObjectId("53a2d91b8404f005f1acc668"), "Teacher_id" : "pic003", "Teacher_name" : "Akshay", "Dept_name" : "IT", "sal" : 25000, "status" : "N" }
{ "_id" : ObjectId("53a2da038404f005f1acc669"), "Teacher_id" : "pic003", "Teacher_name" : "Akshay", "Dept_name" : "IT", "sal" : 25000, "status" : "N" }
SQL & Mongodb Commands

Each SQL SELECT statement below is followed by the equivalent MongoDB find() statement:

SELECT * FROM Teacher_info;
db.Teacher_info.find()

SELECT * FROM Teacher_info WHERE sal = 25000;
db.Teacher_info.find( { sal: 25000 } )

SELECT Teacher_id FROM Teacher_info WHERE Teacher_id = "pic001";
db.Teacher_info.find( { Teacher_id: "pic001" }, { Teacher_id: 1 } )

SELECT * FROM Teacher_info WHERE status != "A";
db.Teacher_info.find( { status: { $ne: "A" } } )

SELECT * FROM Teacher_info WHERE status = "A" AND sal = 20000;
db.Teacher_info.find( { status: "A", sal: 20000 } )

SELECT * FROM Teacher_info WHERE status = "A" OR sal = 50000;
db.Teacher_info.find( { $or: [ { status: "A" }, { sal: 50000 } ] } )

SELECT * FROM Teacher_info WHERE sal > 40000;
db.Teacher_info.find( { sal: { $gt: 40000 } } )

SELECT * FROM Teacher_info WHERE sal < 30000;
db.Teacher_info.find( { sal: { $lt: 30000 } } )

SELECT * FROM Teacher_info WHERE status = "A" ORDER BY sal ASC;
db.Teacher_info.find( { status: "A" } ).sort( { sal: 1 } )

SELECT * FROM Teacher_info WHERE status = "A" ORDER BY sal DESC;
db.Teacher_info.find( { status: "A" } ).sort( { sal: -1 } )

SELECT COUNT(*) FROM Teacher_info;
db.Teacher_info.count()   or   db.Teacher_info.find().count()

SELECT DISTINCT(Dept_name) FROM Teacher_info;
db.Teacher_info.distinct( "Dept_name" )

Update Records

UPDATE Teacher_info SET Dept_name = "ETC" WHERE sal > 25000;
db.Teacher_info.update( { sal: { $gt: 25000 } }, { $set: { Dept_name: "ETC" } },
{ multi: true } )

UPDATE Teacher_info SET sal = sal + 10000 WHERE status = "A";
db.Teacher_info.update( { status: "A" }, { $inc: { sal: 10000 } }, { multi: true } )

Delete Records

DELETE FROM Teacher_info WHERE Teacher_id = "pic001";
db.Teacher_info.remove( { Teacher_id: "pic001" } )

DELETE FROM Teacher_info;
db.Teacher_info.remove({})

Alter Table in Oracle & MongoDB

Oracle:
ALTER TABLE Teacher_info ADD join_date DATETIME;
MongoDB:
At the document level, update() operations can add fields to existing documents using the $set
operator.
Ex:
db.Teacher_info.update( { }, { $set: { join_date: new Date() } }, { multi: true } )

Drop Command

Oracle:
DROP TABLE Teacher_info
Mongo:
db.Teacher_info.drop()
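The aim also mentions saving documents. In the classic mongo shell, save() behaves like insert() when the document has no _id, and replaces the whole matching document when it does; a sketch against the Teacher_info collection (the _id value shown is illustrative):

```javascript
> // No _id: behaves like insert()
> db.Teacher_info.save( { Teacher_id: "Pic004", Teacher_Name: "Meera", Sal: 28000 } )

> // With an existing _id: replaces the entire matching document
> db.Teacher_info.save( { _id: ObjectId("53a2d8ac8404f005f1acc666"),
...     Teacher_id: "pic001", Teacher_Name: "Ravi", Sal: 35000 } )
```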

1) Finding all the records in the collection


> db.college.find()

2) Finding a particular record


> db.college.find({"name" : "pict"})
... { "_id" : ObjectId("531abcc1fd853871fff162e8"), "name" : "pict" }
... { "_id" : ObjectId("531ac014fd853871fff162e9"), "name" : "pict", "rno" : 4 }
... { "_id" : ObjectId("531ac014fd853871fff162e9"), "name" : "pict", "rno" : 5 }
... { "_id" : ObjectId("531ac014fd853871fff162e9"), "name" : "pict", "rno" : 6 }

3) Updating a record
>db.college.update({"name":"hsaifjdas"},{$addToSet:{"dept":"mech"}},{'multi':true})

4) Removing a record
> db.college.remove({"name":"hsaifjdas"})

5) Ensuring an index


> db.events.ensureIndex({ "path" : 1 })

Conclusion: Understood and executed MongoDB queries.


ASSIGNMENT 2

Aim: -: Execute at least 10 queries on any suitable MongoDB database that demonstrates
following querying techniques:
find and findOne (specific values)
Query criteria (Query conditionals, OR queries, $not, Conditional semantics)
Type-specific queries (Null, Regular expression, Querying arrays)

Introduction to find
The find method is used to perform queries in MongoDB. Querying returns a subset of documents in
a collection, from no documents at all to the entire collection. Which documents get returned is
determined by the first argument to find, which is a document specifying the query to be performed.
An empty query document (i.e., {}) matches everything in the collection. If find isn’t given a query
document, it defaults to {}. For example, the following:
> db.c.find()
returns everything in the collection c. When we start adding key/value pairs to the query document,
we begin restricting our search. This works in a straightforward way for most types. Integers match
integers, Booleans match Booleans, and strings match strings. Querying for a simple type is as easy
as specifying the value that you are looking for.
For example, to find all documents where the value for "age" is 27, we can add that key/value
pair to the query document:
> db.users.find({"age" : 27})
If we have a string we want to match, such as a "username" key with the value "joe", we use
that key/value pair instead:
> db.users.find({"username" : "joe"})
Multiple conditions can be strung together by adding more key/value pairs to the query
document, which gets interpreted as "condition1 AND condition2 AND … AND conditionN."
For instance, to get all users who are 27-year-olds with the username "joe," we can query for
the following:
> db.users.find({"username" : "joe", "age" : 27})
Specifying Which Keys to Return
Sometimes, you do not need all of the key/value pairs in a document returned.

If this is the case, you can pass a second argument to find (or findOne) specifying the keys you
want. This reduces both the amount of data sent over the wire and the time and memory used to
decode documents on the client side.
For example, if you have a user collection and you are interested only in the "username" and
"email" keys, you could return just those keys with the following query:
> db.users.find({}, {"username" : 1, "email" : 1})
{ "_id" : ObjectId("4ba0f0dfd22aa494fd523620"), "username" : "joe", "email" :
"joe@example.com" }
As you can see from the previous output, the "_id" key is always returned, even if it isn't
specifically listed. You can also use this second parameter to exclude specific key/value pairs
from the results of a query. For instance, you may have documents with a variety of keys, and
the only thing you know is that you never want to return the "fatal_weakness" key:
> db.users.find({}, {"fatal_weakness" : 0})
This can even prevent "_id" from being returned:
> db.users.find({}, {"username" : 1, "_id" : 0})
{ "username" : "joe" }

Query Criteria
Queries can go beyond the exact matching described in the previous section; they can match
more complex criteria, such as ranges, OR-clauses, and negation.
Query Conditionals
"$lt", "$lte", "$gt", and "$gte" are all comparison operators, corresponding to <, <=, >, and >=,
respectively. They can be combined to look for a range of values.

For example, to look for users who are between the ages of 18 and 30 inclusive, we can do this:
> db.users.find({"age" : {"$gte" : 18, "$lte" : 30}})

These types of range queries are often useful for dates.

For example, to find people who registered before January 1, 2007, we can do this:
> start = new Date("01/01/2007")
> db.users.find({"registered" : {"$lt" : start}})
An exact match on a date is less useful, because dates are only stored with millisecond
precision. Often you want a whole day, week, or month, making a range query necessary. To
query for documents where a key's value is not equal to a certain value, you must use another
conditional operator, "$ne", which stands for "not equal." If you want to find all users who do
not have the username "joe", you can query for them using this:

> db.users.find({"username" : {"$ne" : "joe"}})
"$ne" can be used with any type.

OR Queries
There are two ways to do an OR query in MongoDB. "$in" can be used to query for a variety of
values for a single key. "$or" is more general; it can be used to query for any of the given values
across multiple keys. If you have more than one possible value to match for a single key, use an
array of criteria with "$in". For instance, suppose we were running a raffle and the winning
ticket numbers were 725, 542, and 390. To find all three of these documents, we can construct
the following query:
> db.raffle.find({"ticket_no" : {"$in" : [725, 542, 390]}})
"$in" is very flexible and allows you to specify criteria of different types as well as values. For
example, if we are gradually migrating our schema to use usernames instead of user ID
numbers, we can query for either by using this:
> db.users.find({"user_id" : {"$in" : [12345, "joe"]}})
This matches documents with a "user_id" equal to 12345, and documents with a "user_id"
equal to "joe". If "$in" is given an array with a single value, it behaves the same as directly
matching the value. For instance, {ticket_no : {$in : [725]}} matches the same documents as
{ticket_no : 725}. The opposite of "$in" is "$nin", which returns documents that don't match
any of the criteria in the array. If we want to return all of the people who didn't win anything in
the raffle, we can query for them with this:
> db.raffle.find({"ticket_no" : {"$nin" : [725, 542, 390]}})
This query returns everyone who did not have tickets with those numbers. "$in" gives you an
OR query for a single key, but what if we need to find documents where "ticket_no" is 725 or
"winner" is true? For this type of query, we'll need to use the "$or" conditional. "$or" takes an
array of possible criteria. In the raffle case, using "$or" would look like this:
> db.raffle.find({"$or" : [{"ticket_no" : 725}, {"winner" : true}]})
"$or" can contain other conditionals. If, for example, we want to match any of the three
"ticket_no" values or the "winner" key, we can use this:
> db.raffle.find({"$or" : [{"ticket_no" : {"$in" : [725, 542, 390]}}, {"winner" : true}]})
With a normal AND-type query, you want to narrow your results down as far as possible in as
few arguments as possible. OR-type queries are the opposite: they are most efficient if the first
arguments match as many documents as possible.

$not
"$not" is a metaconditional: it can be applied on top of any other criteria. As an example, let's
consider the modulus operator, "$mod". "$mod" queries for keys whose values, when divided
by the first value given, have a remainder of the second value:
> db.users.find({"id_num" : {"$mod" : [5, 1]}})
The previous query returns users with "id_num"s of 1, 6, 11, 16, and so on. If we want, instead,
to return users with "id_num"s of 2, 3, 4, 5, 7, 8, 9, 10, 12, and so on, we can use "$not":
> db.users.find({"id_num" : {"$not" : {"$mod" : [5, 1]}}})
"$not" can be particularly useful in conjunction with regular expressions to find all documents
that don't match a given pattern.
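For instance (the pattern is illustrative), combining "$not" with a regular expression returns users whose name does not contain "joe" in any capitalization:

```javascript
> db.users.find({"name" : {"$not" : /joe/i}})
```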

Conditional semantics
In the query, "$lt" is in the inner document; in the update, "$inc" is the key for the outer
document. This generally holds true: conditionals are an inner document key, and modifiers are
always a key in the outer document. Multiple conditions can be put on a single key. For
example, to find all users between the ages of 20 and 30, we can query for both "$gt" and "$lt"
on the "age" key:

> db.users.find({"age" : {"$lt" : 30, "$gt" : 20}})

Any number of conditionals can be used with a single key. Multiple update modifiers cannot be
used on a single key, however. For example, you cannot have a modifier document such as
{"$inc" : {"age" : 1}, "$set" : {age : 40}} because it modifies "age" twice. With query
conditionals, no such rule applies.

Type-Specific Queries

MongoDB has a wide variety of types that can be used in a document. Some of these behave
specially in queries.

null
null behaves a bit strangely. It does match itself, so if we have a collection with the following
documents:

> db.c.find()
{ "_id" : ObjectId("4ba0f0dfd22aa494fd523621"), "y" : null }

{ "_id" : ObjectId("4ba0f0dfd22aa494fd523622"), "y" : 1 }


{ "_id" : ObjectId("4ba0f148d22aa494fd523623"), "y" : 2 }

we can query for documents whose "y" key is null in the expected way:
> db.c.find({"y" : null})
{ "_id" : ObjectId("4ba0f0dfd22aa494fd523621"), "y" : null }
However, null not only matches itself but also matches "does not exist." Thus, querying for a
key with the value null will return all documents lacking that key:

> db.c.find({"z" : null})

{ "_id" : ObjectId("4ba0f0dfd22aa494fd523621"), "y" : null }


{ "_id" : ObjectId("4ba0f0dfd22aa494fd523622"), "y" : 1 }
{ "_id" : ObjectId("4ba0f148d22aa494fd523623"), "y" : 2 }

If we only want to find keys whose value is null, we can check that the key is null and exists
using the "$exists" conditional:
> db.c.find({"z" : {"$in" : [null], "$exists" : true}})
Unfortunately, there is no "$eq" operator, which makes this a little awkward, but "$in" with one
element is equivalent.

Regular Expressions

Regular expressions are useful for flexible string matching. For example, if we want to find all
users with the name Joe or joe, we can use a regular expression to do case-insensitive matching:

> db.users.find({"name" : /joe/i})
Regular expression flags (i) are allowed but not required.

If we want to match not only various capitalizations of joe, but also joey, we can continue to
improve our regular expression:

> db.users.find({"name" : /joey?/i})
MongoDB uses the Perl Compatible Regular Expression (PCRE) library to match regular
expressions; any regular expression syntax allowed by PCRE is allowed in MongoDB. It is a
good idea to check your syntax with the JavaScript shell before using it in a query to make sure
it matches what you think it matches.
Regular expressions can also match themselves.

Very few people insert regular expressions into the database, but if you insert one, you can
match it with itself:
> db.foo.insert({"bar" : /baz/})
> db.foo.find({"bar" : /baz/})

{ "_id" : ObjectId("4b23c3ca7525f35f94b60a2d"), "bar" : /baz/ }

Querying Arrays
Querying for elements of an array is simple. An array can mostly be treated as though each
element is the value of the overall key.
For example, if the array is a list of fruits, like this:
> db.food.insert({"fruit" : ["apple", "banana", "peach"]})
the following query:
> db.food.find({"fruit" : "banana"})
will successfully match the document. We can query for it in much the same way as though we
had a document that looked like the (illegal) document: {"fruit" : "apple", "fruit" : "banana",
"fruit" : "peach"}.

$all
If you need to match arrays by more than one element, you can use "$all". This allows you to
match a list of elements. For example, suppose we created a collection with three elements:

> db.food.insert({"_id" : 1, "fruit" : ["apple", "banana", "peach"]})


> db.food.insert({"_id" : 2, "fruit" : ["apple", "kumquat", "orange"]})
> db.food.insert({"_id" : 3, "fruit" : ["cherry", "banana", "apple"]})

Then we can find all documents with both "apple" and "banana" elements by querying with
"$all": > db.food.find({fruit : {$all : ["apple", "banana"]}})

{"_id" : 1, "fruit" : ["apple", "banana", "peach"]}


{"_id" : 3, "fruit" : ["cherry", "banana", "apple"]}

Order does not matter. Notice "banana" comes before "apple" in the second result. Using a one-
element array with "$all" is equivalent to not using "$all". For instance, {fruit : {$all : ['apple']}}
will match the same documents as {fruit : 'apple'}. You can also query by exact match using the
entire array. However, an exact match will not match a document if any elements are missing or
superfluous.

For example, this will match the first document shown previously:
> db.food.find({"fruit" : ["apple", "banana", "peach"]})

But this will not: > db.food.find({"fruit" : ["apple", "banana"]}) and neither will this:
> db.food.find({"fruit" : ["banana", "apple", "peach"]})

If you want to query for a specific element of an array, you can specify an index using the
syntax key.index:

> db.food.find({"fruit.2" : "peach"})
Arrays are always 0-indexed, so this would match the third array element against the string
"peach".

$size

A useful conditional for querying arrays is "$size", which allows you to query for arrays of a given size. Here's an example:

> db.food.find({"fruit" : {"$size" : 3}})

One common query is to get a range of sizes. "$size" cannot be combined with another $ conditional (such as "$gt"), but this query can be accomplished by adding a "size" key to the document. Then, every time you add an element to the array, increment the value of "size". If the original update looked like this:

> db.food.update(criteria, {"$push" : {"fruit" : "strawberry"}})

it can simply be changed to this:

> db.food.update(criteria, {"$push" : {"fruit" : "strawberry"}, "$inc" : {"size" : 1}})
Incrementing is extremely fast, so any performance penalty is negligible. Storing documents like this allows you to do queries such as this:

> db.food.find({"size" : {"$gt" : 3}})

Unfortunately, this technique doesn't work as well with the "$addToSet" operator.
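The "maintain a size counter" pattern amounts to the following, shown here as a plain-JavaScript sketch of what the combined "$push"/"$inc" update does to one document (pushFruit is a hypothetical helper, not a MongoDB API):

```javascript
function pushFruit(doc, item) {
  doc.fruit.push(item); // the "$push" part: append to the array
  doc.size += 1;        // the "$inc" part: keep the counter accurate
  return doc;
}

var foodDoc = { fruit: ["apple", "banana", "peach"], size: 3 };
pushFruit(foodDoc, "strawberry");
// foodDoc.size is now 4, so a range query like {"size": {"$gt": 3}}
// would match this document.
```

The pattern only works if every code path that modifies the array also updates "size", which is exactly why it breaks down with "$addToSet" (the array may or may not grow).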

The $slice operator

The optional second argument to find specifies the keys to be returned. The special "$slice" operator can be used to return a subset of elements for an array key. For example, suppose we had a blog post document and we wanted to return the first 10 comments:

> db.blog.posts.findOne(criteria, {"comments" : {"$slice" : 10}})

Alternatively, if we wanted the last 10 comments, we could use -10:

> db.blog.posts.findOne(criteria, {"comments" : {"$slice" : -10}})

"$slice" can also return pages in the middle of the results by taking an offset and the number of elements to return:

> db.blog.posts.findOne(criteria, {"comments" : {"$slice" : [23, 10]}})

This would skip the first 23 elements and return the 24th through 33rd. If there are fewer than 33 elements in the array, it will return as many as possible.
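The three forms of "$slice" can be sketched with plain JavaScript array slicing (sliceArray is a hypothetical helper illustrating the semantics, not part of MongoDB): a positive n keeps the first n elements, a negative n keeps the last n, and [skip, limit] keeps limit elements after skipping skip.

```javascript
function sliceArray(arr, spec) {
  if (Array.isArray(spec)) {
    var skip = spec[0], limit = spec[1];
    return arr.slice(skip, skip + limit); // returns fewer if the array is short
  }
  return spec >= 0 ? arr.slice(0, spec) : arr.slice(spec);
}

var comments = [1, 2, 3, 4, 5];
sliceArray(comments, 2);      // first two:        [1, 2]
sliceArray(comments, -2);     // last two:         [4, 5]
sliceArray(comments, [1, 2]); // skip 1, take 2:   [2, 3]
```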

Unless otherwise specified, all keys in a document are returned when "$slice" is used. This is unlike the other key specifiers, which suppress unmentioned keys from being returned. For instance, if we had a blog post document that looked like this:

{
    "_id" : ObjectId("4b2d75476cc613d5ee930164"),
    "title" : "A blog post",
    "content" : "...",
    "comments" : [
        {"name" : "joe", "email" : "joe@example.com", "content" : "nice post."},
        {"name" : "bob", "email" : "bob@example.com", "content" : "good post."}
    ]
}

and we did a "$slice" to get the last comment, we'd get this:

> db.blog.posts.findOne(criteria, {"comments" : {"$slice" : -1}})
{
    "_id" : ObjectId("4b2d75476cc613d5ee930164"),
    "title" : "A blog post",
    "content" : "...",
    "comments" : [
        {"name" : "bob", "email" : "bob@example.com", "content" : "good post."}
    ]
}

Both "title" and "content" are still returned, even though they weren't explicitly included in the key specifier.

Conclusion: Executed queries on a MongoDB database demonstrating the following querying techniques: find and findOne, query conditionals, OR queries, $not, conditional semantics, null, regular expressions, and querying arrays.

ASSIGNMENT 3

Aim: Execute at least 10 queries on any suitable MongoDB database that demonstrate the following:
$where queries
Cursors (limits, skips, sorts, advanced query options)
Database commands

Theory: -

$where Queries
Key/value pairs are a fairly expressive way to query, but there are some queries that they
cannot represent. For queries that cannot be done any other way, there are "$where" clauses,
which allow you to execute arbitrary JavaScript as part of your query. This allows you to do
(almost) anything within a query. The most common case for this is wanting to compare the
values for two keys in a document, for instance, if we had a list of items and wanted to return
documents where any two of the values are equal.

Here’s an example:

> db.foo.insert({"apple" : 1, "banana" : 6, "peach" : 3})


> db.foo.insert({"apple" : 8, "spinach" : 4, "watermelon" : 4})

In the second document, "spinach" and "watermelon" have the same value, so we’d like that
document returned. It’s unlikely MongoDB will ever have a $ conditional for this, so we can
use a "$where" clause to do it with JavaScript:

> db.foo.find({"$where" : function () {
... for (var current in this) {
...     for (var other in this) {
...         if (current != other && this[current] == this[other]) {
...             return true;
...         }
...     }
... }
... return false;
... }});

If the function returns true, the document will be part of the result set; if it returns false, it won’t
be.
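Because a "$where" function is ordinary JavaScript, its logic can be checked outside MongoDB by binding `this` to a plain document. Here is the same duplicate-value function from the query above, run against the two sample documents:

```javascript
function hasDuplicateValues() {
  // Compare every pair of keys; `this` is the document being tested,
  // just as it is inside a "$where" clause.
  for (var current in this) {
    for (var other in this) {
      if (current != other && this[current] == this[other]) {
        return true;
      }
    }
  }
  return false;
}

hasDuplicateValues.call({ apple: 1, banana: 6, peach: 3 });       // false: all values distinct
hasDuplicateValues.call({ apple: 8, spinach: 4, watermelon: 4 }); // true: spinach == watermelon
```

Testing "$where" functions this way is convenient, since a buggy function fails on every document of a real query.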

We used a function earlier, but you can also use strings to specify a "$where" query; the
following two "$where" queries are equivalent:

> db.foo.find({"$where" : "this.x + this.y == 10"})


> db.foo.find({"$where" : "function() { return this.x + this.y == 10; }"})

"$where" queries should not be used unless strictly necessary: they are much slower than
regular queries. Each document has to be converted from BSON to a JavaScript object and then
run through the "$where" expression. Indexes cannot be used to satisfy a "$where", either.
Hence, you should use "$where" only when there is no other way of doing the query. You can cut down on the penalty by using other query filters in combination with "$where". If possible, an index will be used to filter based on the non-$where clauses; the "$where" expression will be used only to fine-tune the results.

Cursors
The database returns results from find using a cursor. The client-side implementations of
cursors generally allow you to control a great deal about the eventual output of a query. You can
limit the number of results, skip over some number of results, sort results by any combination of
keys in any direction, and perform a number of other powerful operations.

To create a cursor with the shell, put some documents into a collection, do a query on them, and assign the results to a local variable (variables defined with "var" are local). Here, we create a very simple collection and query it, storing the results in the cursor variable:

> for(i=0; i<100; i++) {
...     db.collection.insert({x : i});
... }
> var cursor = db.collection.find();

The advantage of doing this is that you can look at one result at a time. If you store the results in
a global variable or no variable at all, the MongoDB shell will automatically iterate through and
display the first couple of documents. This is what we’ve been seeing up until this point, and it
is often the behavior you want for seeing what’s in a collection but not for doing actual
programming with the shell. To iterate through the results, you can use the next method on the
cursor. You can use hasNext to check whether there is another result.

A typical loop through results looks like the following:

> while (cursor.hasNext()) { ... obj = cursor.next(); ... // do stuff ... }


cursor.hasNext() checks that the next result exists, and cursor.next() fetches it.

The cursor class also implements the iterator interface, so you can use it in a forEach loop:

> var cursor = db.people.find();
> cursor.forEach(function(x) {
...     print(x.name);
... });
adam
matt
zak

When you call find, the shell does not query the database immediately. It waits until you actually start requesting results to send the query, which allows you to chain additional options onto a query before it is performed. Almost every method on a cursor object returns the cursor itself so that you can chain them in any order. For instance, all of the following are equivalent:

> var cursor = db.foo.find().sort({"x" : 1}).limit(1).skip(10);
> var cursor = db.foo.find().limit(1).sort({"x" : 1}).skip(10);
> var cursor = db.foo.find().skip(10).limit(1).sort({"x" : 1});

At this point, the query has not been executed yet. All of these functions merely build the query. Now, suppose we call the following:

> cursor.hasNext()

At this point, the query will be sent to the server. The shell fetches the first 100 results or first 4MB of results (whichever is smaller) at once so that subsequent calls to next or hasNext will not have to make trips to the server. After the client has run through the first set of results, the shell will again contact the database and ask for more. This process continues until the cursor is exhausted and all results have been returned.
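The chainable, lazy behavior described above can be sketched in plain JavaScript. FakeCursor is a hypothetical toy (real drivers are far more involved): sort, limit, and skip only record options and return the cursor, and the "query" runs only on the first hasNext or next call.

```javascript
function FakeCursor(docs) {
  this.docs = docs; this.opts = {}; this.results = null; this.pos = 0;
}
FakeCursor.prototype.sort  = function (key, dir) { this.opts.sort  = [key, dir]; return this; };
FakeCursor.prototype.limit = function (n) { this.opts.limit = n; return this; };
FakeCursor.prototype.skip  = function (n) { this.opts.skip  = n; return this; };
FakeCursor.prototype._run = function () {
  if (this.results) return; // lazily "execute" once, on first use
  var r = this.docs.slice();
  if (this.opts.sort) {
    var k = this.opts.sort[0], d = this.opts.sort[1];
    r.sort(function (a, b) { return (a[k] - b[k]) * d; });
  }
  // skip is applied before limit regardless of the chaining order
  if (this.opts.skip)  r = r.slice(this.opts.skip);
  if (this.opts.limit) r = r.slice(0, this.opts.limit);
  this.results = r;
};
FakeCursor.prototype.hasNext = function () { this._run(); return this.pos < this.results.length; };
FakeCursor.prototype.next    = function () { this._run(); return this.results[this.pos++]; };

var lazy = new FakeCursor([{x: 3}, {x: 1}, {x: 2}]).sort("x", 1).skip(1).limit(1);
lazy.next(); // {x: 2}: sorted to [1, 2, 3], skip 1, limit 1
```

Because the options are applied in a fixed order at execution time, all three chaining orders shown above produce the same result, which mirrors the shell's behavior.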
Example MongoDB cursor
When the db.collection.find() function is used to search for documents in a collection, the result is a pointer to the set of documents returned, which is called a cursor.

By default, the cursor is iterated automatically when the result of the query is returned, but one can also explicitly step through the items in the cursor one by one. In the example below, if we have 3 documents in our collection, the cursor points to the first document and then iterates through all of the documents of the collection.

The following example shows how this can be done.

var myEmployee = db.Employee.find( { Employeeid : { $gt : 2 } } );

while (myEmployee.hasNext()) {
    print(tojson(myEmployee.next()));
}

Code Explanation:

1. First we take the result set of the query, which finds the Employees whose id is greater than 2, and assign it to the JavaScript variable 'myEmployee'.
2. Next we use the while loop to iterate through all of the documents returned by the query.
3. Finally, for each document, we print its details in a readable JSON format.

If the command executes successfully, each matching document is printed in JSON format.

Limits, Skips, and Sorts


The most common query options are limiting the number of results returned, skipping a number
of results, and sorting. All of these options must be added before a query is sent to the database.
To set a limit, chain the limit function onto your call to find.
For example, to only return three results, use this:

> db.c.find().limit(3)

If there are fewer than three documents matching your query in the collection, only the matching documents will be returned; limit sets an upper limit, not a lower limit.

skip works similarly to limit:

> db.c.find().skip(3)

This will skip the first three matching documents and return the rest of the matches. If there are fewer than three documents in your collection, it will not return any documents.

sort takes an object: a set of key/value pairs where the keys are key names and the values are the sort directions.
Sort direction can be 1 (ascending) or -1 (descending). If multiple keys are given, the results
will be sorted in that order. For instance, to sort the results by "username" ascending and "age"
descending, we do the following:
> db.c.find().sort({username : 1, age : -1})

These three methods can be combined. This is often handy for pagination. For example, suppose that you are running an online store and someone searches for mp3. If you want 50 results per page sorted by price from high to low, you can do the following:

> db.stock.find({"desc" : "mp3"}).limit(50).sort({"price" : -1})

If they click Next Page to see more results, you can simply add a skip to the query, which will skip over the first 50 matches (which the user already saw on page 1):

> db.stock.find({"desc" : "mp3"}).limit(50).skip(50).sort({"price" : -1})

However, large skips are not very performant and should be avoided where possible.
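The pagination above follows a simple rule: the number of documents to skip is the number of documents already shown on earlier pages. A small sketch of that arithmetic (pageSkip is a hypothetical helper):

```javascript
// For 1-based page numbers, skip everything shown on pages 1..page-1.
function pageSkip(page, perPage) {
  return (page - 1) * perPage;
}

pageSkip(1, 50); // 0:  the first page skips nothing
pageSkip(2, 50); // 50: skip the 50 results the user saw on page 1
pageSkip(3, 50); // 100
```

The cost of a large skip grows with the number of skipped documents, which is why range-based pagination (e.g. querying for prices below the last value on the previous page) is usually preferred for deep pages.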

Advanced Query Options

There are two types of queries: wrapped and plain. A plain query is something like this:

> var cursor = db.foo.find({"foo" : "bar"})

There are a couple of options that "wrap" the query. For example, suppose we perform a sort:

> var cursor = db.foo.find({"foo" : "bar"}).sort({"x" : 1})

Instead of sending {"foo" : "bar"} to the database as the query, the query gets wrapped in a larger document. The shell converts the query from {"foo" : "bar"} to {"$query" : {"foo" : "bar"}, "$orderby" : {"x" : 1}}. Most drivers provide helpers for adding arbitrary options to queries.
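The wrapping step can be sketched as a small function (wrapQuery is hypothetical; real shells and drivers do this internally when options like a sort are present):

```javascript
function wrapQuery(filter, orderby) {
  if (!orderby) return filter;                  // plain query: send the filter as-is
  return { $query: filter, $orderby: orderby }; // wrapped query with options
}

wrapQuery({ foo: "bar" });           // { foo: "bar" }
wrapQuery({ foo: "bar" }, { x: 1 }); // { $query: { foo: "bar" }, $orderby: { x: 1 } }
```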

Other helpful options include the following:

$maxscan (integer): Specify the maximum number of documents that should be scanned for the query.
$min (document): Start criteria for querying.
$max (document): End criteria for querying.
$hint (document): Tell the server which index to use for the query.
$explain (boolean): Get an explanation of how the query will be executed (indexes used, number of results, how long it takes, etc.), instead of actually running the query.
$snapshot (boolean): Ensure that the query's results will be a consistent snapshot from the point in time when the query was executed.

Database Commands

MongoDB supports a wide range of advanced operations that are implemented as commands.
Commands implement all of the functionality that doesn’t fit neatly into “create, read, update,
delete.” We’ve already seen a couple of commands in the previous chapters; for instance, we
used the getLastError command in Chapter 3 to check the number of documents affected by an
update:
> db.count.update({x : 1}, {$inc : {x : 1}}, false, true)
> db.runCommand({getLastError : 1})
{ "err" : null, "updatedExisting" : true, "n" : 5, "ok" : true }

We'll also describe some of the most useful commands that are supported by MongoDB.

How Commands Work

One example of a database command that you are probably familiar with is drop: to drop a collection from the shell, we run db.test.drop(). Under the hood, this function is actually running the drop command; we can perform the exact same operation using runCommand:

> db.runCommand({"drop" : "test"});
{ "nIndexesWas" : 1, "msg" : "indexes dropped for collection", "ns" : "test.test", "ok" : true }

The document we get as a result is the command response, which contains information about
whether the command was successful, as well as any other information that the command might
provide. The command response will always contain the key "ok". If "ok" is true, the command
was successful, and if it is false, the command failed for some reason.
If "ok" is false, then an additional key will be present, "errmsg". The value of "errmsg" is a
string explaining why the command failed.
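A driver or application typically inspects "ok" and "errmsg" exactly as described. A minimal sketch (checkResponse is a hypothetical helper, not a MongoDB API):

```javascript
// Turn a command-response document into a value or an exception.
function checkResponse(res) {
  if (!res.ok) {
    throw new Error("command failed: " + res.errmsg);
  }
  return res;
}

checkResponse({ ok: true, ns: "test.test" });            // passes the response through
// checkResponse({ ok: false, errmsg: "ns not found" }); // would throw
```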

As an example, let's try running the drop command again, on the collection that we just dropped:

> db.runCommand({"drop" : "test"});
{ "errmsg" : "ns not found", "ok" : false }

Commands in MongoDB are actually implemented as a special type of query that gets performed on the $cmd collection. runCommand just takes a command document and performs the equivalent query, so our drop call becomes the following:

db.$cmd.findOne({"drop" : "test"});

When the MongoDB server gets a query on the $cmd collection, it handles it using special logic, rather than the normal code for handling queries. Almost all MongoDB drivers provide a helper method like runCommand for running commands, but commands can always be run using a simple query if necessary.
Some commands require administrator access and must be run on the admin database. If such a
command is run on any other database, it will return an “access denied” error.

Conclusion: Executed queries on a suitable MongoDB database demonstrating $where queries, cursors (limits, skips, sorts, advanced query options), and database commands.

ASSIGNMENT 4

Aim: Implement MapReduce operations with suitable example using MongoDB.

Objective: To learn MapReduce operations

Theory:

 MapReduce is a programming model and an associated implementation for processing


and generating large data sets with a parallel, distributed algorithm on a cluster.
 A MapReduce program is composed of a Map() procedure that performs filtering and
sorting (such as sorting students by first name into queues, one queue for each name)
and a Reduce() procedure that performs a summary operation (such as counting the
number of students in each queue, yielding name frequencies).
 Map-reduce is a data processing paradigm for condensing large volumes of data into
useful aggregated results.
 For map-reduce operations, MongoDB provides the mapReduce database command.
In map-reduce we have to write three functions:
1. A map function (e.g., mapping each person to their city, to count population).
2. A reduce function (e.g., reducing the per-city population counts to a single value).
3. The mapReduce call (it creates a new collection that contains the results, e.g., the total population).

Step 1: Map

var mapFunction1 = function() {
    emit(this.cust_id, this.price);
};

Define the map function to process each input document: in the function, this refers to the document that the map-reduce operation is processing. The function maps the price to the cust_id for each document and emits the cust_id and price pair.

Step 2: Reduce

var reduceFunction1 = function(keyCustId, valuesPrices) {
    return Array.sum(valuesPrices);
};

Define the corresponding reduce function with two arguments, keyCustId and valuesPrices. The valuesPrices argument is an array whose elements are the price values emitted by the map function and grouped by keyCustId. The function reduces the valuesPrices array to the sum of its elements.
Step 3: Map Reduce

db.orders.mapReduce(
mapFunction1,
reduceFunction1,
{ out: "map_example" }
)
Perform the map-reduce on all documents in the orders collection using the mapFunction1 map function and the reduceFunction1 reduce function. This operation outputs the results to a collection named map_example. If the map_example collection already exists, the operation will replace its contents with the results of this map-reduce operation.
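The three steps above can be simulated on an in-memory array to see what the command computes. This is a plain-JavaScript sketch (simulateMapReduce is hypothetical, not the mapReduce command itself); one deviation from the shell is that emit is passed into the map function as an argument instead of being a global:

```javascript
function simulateMapReduce(docs, mapFn, reduceFn) {
  // Map phase: call mapFn with `this` bound to each document;
  // emit(key, value) groups values by key.
  var groups = {};
  docs.forEach(function (doc) {
    mapFn.call(doc, function emit(key, value) {
      (groups[key] = groups[key] || []).push(value);
    });
  });
  // Reduce phase: collapse each group of values to a single result.
  var out = {};
  for (var key in groups) out[key] = reduceFn(key, groups[key]);
  return out;
}

var orders = [
  { cust_id: "A1", price: 25 },
  { cust_id: "B2", price: 10 },
  { cust_id: "A1", price: 15 }
];
var totals = simulateMapReduce(
  orders,
  function (emit) { emit(this.cust_id, this.price); },          // like mapFunction1
  function (key, values) {                                      // like reduceFunction1
    return values.reduce(function (a, b) { return a + b; }, 0);
  }
);
// totals: { A1: 40, B2: 10 }
```

This mirrors what the orders example produces: one output value per cust_id holding the summed prices.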

Map-Reduce
MongoDB also provides map-reduce operations to perform aggregation. In general, map-reduce operations have two phases: a map stage that processes each document and emits one or more objects for each input document, and a reduce phase that combines the output of the map operation. Optionally, map-reduce can have a finalize stage to make final modifications to the result. Like other aggregation operations, map-reduce can specify a query condition to select the input documents as well as sort and limit the results.

Map-reduce uses custom JavaScript functions to perform the map and reduce operations, as well as the optional finalize operation. While the custom JavaScript provides great flexibility compared to the aggregation pipeline, map-reduce is in general less efficient and more complex than the aggregation pipeline. Additionally, map-reduce operations can have output sets that exceed the 16 megabyte output limitation of the aggregation pipeline.

Conclusion: Understand and implement Map Reduced Operation


ASSIGNMENT 5

AIM: Implement Aggregation and Indexing with suitable example using MongoDB.

Objective: To understand 1) aggregation and 2) indexing in MongoDB.

Theory:

Indexes provide high performance read operations for frequently used queries. This section
introduces indexes in MongoDB, describes the types and configuration options for indexes, and
describes special types of indexing MongoDB supports. The section also provides tutorials
detailing procedures and operational concerns, and providing information on how applications
may use indexes. Indexes support the efficient execution of queries in MongoDB. Without indexes, MongoDB must scan every document in a collection to select those documents that match the query statement. These collection scans are inefficient because they require mongod to process a larger volume of data than an index would for each operation. Indexes are special data structures that store a small portion of the collection's data set in an easy-to-traverse form.
The index stores the value of a specific field or set of fields, ordered by the value of the field.
Fundamentally, indexes in MongoDB are similar to indexes in other database systems.
MongoDB defines indexes at the collection level and supports indexes on any field or sub-field
of the documents in a MongoDB collection. If an appropriate index exists for a query,
MongoDB can use the index to limit the number of documents it must inspect. In some cases,
MongoDB can use the data from the index to determine which documents match a query. The
following diagram illustrates a query that selects documents using an index.
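The reason an index limits the number of inspected documents can be seen with a toy sketch (this is an illustration, not mongod internals): a list of [value, docId] pairs kept sorted by value supports binary-search lookups instead of a full scan.

```javascript
// index: an array of [value, docId] pairs, sorted ascending by value.
function indexLookup(index, value) {
  var lo = 0, hi = index.length - 1;
  while (lo <= hi) {
    var mid = (lo + hi) >> 1;
    if (index[mid][0] === value) return index[mid][1]; // found
    if (index[mid][0] < value) lo = mid + 1; else hi = mid - 1;
  }
  return null; // no document has this value
}

var scoreIndex = [[30, "d3"], [45, "d1"], [75, "d2"]]; // sorted by score
indexLookup(scoreIndex, 45); // "d1", found in O(log n) comparisons
```

A collection scan would instead touch every document; with an index the server inspects only the entries the search visits, which is the efficiency gain described above.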

Index Types
MongoDB provides a number of different index types to support specific types of data and
queries.

Default _id
All MongoDB collections have an index on the _id field that exists by default. If applications do
not specify a value for _id the driver or the mongod will create an _id field with an ObjectId
value.
The _id index is unique, and prevents clients from inserting two documents with the same value
for the _id field.

Single Field
In addition to the MongoDB-defined _id index, MongoDB supports user-defined indexes on a
single field of a document

Consider the following illustration of a single-field index:


Diagram of an index on the score field (ascending).

Compound Index

MongoDB also supports user-defined indexes on multiple fields. These compound indexes
behave like single-field indexes; however, the query can select documents based on additional
fields. The order of the fields listed in a compound index is significant. For instance, if a compound index consists of { userid: 1, score: -1 }, the index sorts first by userid and then, within each userid value, sorts by score in descending order. Consider the following illustration of this compound index:
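The { userid: 1, score: -1 } ordering corresponds to the following JavaScript comparator (compoundCompare is a hypothetical sketch of the ordering, not index code): ascending by userid, then descending by score within equal userids.

```javascript
function compoundCompare(a, b) {
  if (a.userid !== b.userid) return a.userid < b.userid ? -1 : 1; // 1: ascending
  return b.score - a.score;                                       // -1: descending
}

var indexed = [
  { userid: "ca2", score: 55 },
  { userid: "aa1", score: 43 },
  { userid: "aa1", score: 91 }
];
indexed.sort(compoundCompare);
// -> aa1/91, aa1/43, ca2/55
```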

Multikey Index

MongoDB uses multikey indexes to index the content stored in arrays. If you index a field that
holds an array value, MongoDB creates separate index entries for every element of the array.
These multikey indexes allow queries to select documents that contain arrays by matching on
element or elements of the arrays. MongoDB automatically determines whether to create a
multikey index if the indexed field contains an array value; you do not need to explicitly specify
the multikey type.

Geospatial Index

To support efficient queries of geospatial coordinate data, MongoDB provides two special
indexes: 2d indexes that use planar geometry when returning results, and 2dsphere indexes that use spherical geometry to return results.

Text Indexes

MongoDB provides a text index type that supports searching for string content in a collection.
These text indexes do not store language-specific stop words (e.g. “the”, “a”, “or”) and stem the
words in a collection to only store root words.
Hashed Indexes

To support hash based sharding, MongoDB provides a hashed index type, which indexes the
hash of the value of a field. These indexes have a more random distribution of values along their
range, but only support equality matches and cannot support range-based queries.

Example Given the following document in the friends collection:

{ "_id" : ObjectId(...),
  "name" : "Alice",
  "age" : 27
}

The following command creates an index on the name field:

db.friends.ensureIndex( { "name" : 1 } )

Indexes on Embedded Fields


You can create indexes on fields embedded in sub-documents, just as you can index top-level
fields in documents. Indexes on embedded fields differ from indexes on sub-documents, which
include the full content up to the maximum index size of the sub-document in the index.
Instead, indexes on embedded fields allow you to use "dot notation" to introspect into sub-documents.

Consider a collection named people that holds documents that resemble the following example
document:
{
  "_id": ObjectId(...),
  "name": "John Doe",
  "address": {
    "street": "Main",
    "zipcode": "53511",
    "state": "WI"
  }
}

You can create an index on the address.zipcode field, using the following specification:

db.people.ensureIndex( { "address.zipcode": 1 } )
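Dot notation like "address.zipcode" simply walks into sub-documents one key at a time, which can be sketched in plain JavaScript (resolvePath is a hypothetical helper, not a MongoDB API):

```javascript
function resolvePath(doc, path) {
  // Split "address.zipcode" into ["address", "zipcode"] and descend.
  return path.split(".").reduce(function (cur, key) {
    return cur == null ? undefined : cur[key];
  }, doc);
}

var person = {
  name: "John Doe",
  address: { street: "Main", zipcode: "53511", state: "WI" }
};
resolvePath(person, "address.zipcode"); // "53511"
```

An index on "address.zipcode" stores exactly these resolved values, ordered, so queries on the embedded field can use it.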

Aggregation

Aggregation operations process data records and return computed results. Aggregation
operations group values from multiple documents together, and can perform a variety of
operations on the grouped data to return a single result.
MongoDB provides three ways to perform aggregation: the aggregation pipeline, the map-reduce function, and single-purpose aggregation methods and commands.

Aggregations are operations that process data records and return computed results. MongoDB
provides a rich set of aggregation operations that examine and perform calculations on the data
sets. Running data aggregation on the mongod instance simplifies application code and limits
resource requirements. Like queries, aggregation operations in MongoDB use collections of
documents as an input and return results in the form of one or more documents.

Aggregation Pipelines

MongoDB 2.2 introduced a new aggregation framework, modeled on the concept of data
processing pipelines. Documents enter a multi-stage pipeline that transforms the documents into
an aggregated result.
The most basic pipeline stages provide filters that operate like queries and document transformations that modify the form of the output document. Other pipeline operations provide tools for grouping and sorting documents by specific field or fields, as well as tools for aggregating the contents of arrays, including arrays of documents. The pipeline provides efficient data aggregation using native operations within MongoDB, and is the preferred method for data aggregation in MongoDB.
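The multi-stage idea can be sketched in plain JavaScript: each stage takes the documents the previous stage produced and returns transformed output. This mirrors a filter-then-group pipeline (runPipeline and the two stage functions are hypothetical illustrations, not MongoDB operators):

```javascript
// Feed the documents through each stage in order.
function runPipeline(docs, stages) {
  return stages.reduce(function (data, stage) { return stage(data); }, docs);
}

// Stage 1: a filter stage, like a query (keep only status "A" orders).
var matchStatusA = function (docs) {
  return docs.filter(function (d) { return d.status === "A"; });
};

// Stage 2: a grouping stage (total the amounts per customer).
var groupTotalsByCust = function (docs) {
  var sums = {};
  docs.forEach(function (d) {
    sums[d.cust_id] = (sums[d.cust_id] || 0) + d.amount;
  });
  return sums;
};

var pipelineOrders = [
  { cust_id: "A1", status: "A", amount: 50 },
  { cust_id: "A1", status: "D", amount: 20 },
  { cust_id: "B2", status: "A", amount: 30 }
];
runPipeline(pipelineOrders, [matchStatusA, groupTotalsByCust]);
// -> { A1: 50, B2: 30 }
```

In the real aggregation framework the same shape is expressed declaratively as stages inside db.collection.aggregate(), executed natively by the server.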

Conclusion: Understand and implement various Aggregation function and Indexing


Group D: Mini Project / Database Application Development
