BSC-IT SEM 5 Exam Planning for 100% Passing Result | Strategies, Tips, and Resources
(theshikshak.com)
We keep updating our blog, so bookmark it in your favorite browser.
Stay connected
https://www.youtube.com/watch?v=qDtbosxcOzM&list=PLG0jPn7yVt51R_lEqslcOSbMUqfEd6rqM&ab_channel=TheShikshak
UNIT 1
1 What is Big Data? What are the different sources of Big Data? (NOV 2018)
ANS
Big Data
• Big data is a term used to describe data that has
    o massive volume, comes in a
    o variety of structures, and is generated at
    o high velocity.
• This kind of data poses challenges to the traditional RDBMS systems used for storing
and processing data. Big data is paving the way for newer approaches to processing and
storing data.
• Types of data
    o binary data
    o IoT data
    o text data
    o Although this data is extremely useful to us, it does create more work and
      requires more analytical skills to decipher the incoming data, make it
      manageable, and allow it to work.
3 What is Big Data? List the different uses of Big Data. (NOV 2019)
OR
Explain the importance of Big Data in context to its usage. (APR 2023)
ANS
Big Data
• Big data is a term used to describe data that has
    o massive volume, comes in a
    o variety of structures, and is generated at
    o high velocity.
• This kind of data poses challenges to the traditional RDBMS systems used for storing
and processing data. Big data is paving the way for newer approaches to processing and
storing data.
• Types of data
    o binary data
    o IoT data
    o text data
• The system may become slow, impacting scalability and user experience.
• Contrary to the ACID approach of traditional RDBMS systems, NoSQL solves the
problem using an approach popularly called BASE.
• Before explaining BASE, let’s explore the concept of the CAP theorem.
5 With the help of a neat diagram explain the CAP theorem. (NOV 2018)
OR
Briefly explain the CAP theorem. (APR 2019)
OR
Write a short note on the CAP theorem. (NOV 2019)
OR
Explain Brewer’s theorem along with a neat diagram. (NOV 2022)
• The CAP theorem (also known as Brewer’s theorem) states that a distributed system
can provide at most two of the following three guarantees at the same time:
Consistency, Availability, and Partition Tolerance.
• Consistency means that the data remains consistent after any operation is
performed that changes the data, and that all users or clients accessing the
application see the same updated data.
• Availability means that the system is always available.
• Partition Tolerance means that the system will continue to function even if it is
partitioned into groups of servers that are not able to communicate with one
another.
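The trade-off between these guarantees can be seen in a toy two-replica model. This is an illustrative Python sketch only (the class and function names are invented, not any real database API): during a network partition, a replica that stays available can no longer see the latest write, so consistency is sacrificed.

```python
# Toy model of the CAP trade-off: two replicas of one value.
class Replica:
    def __init__(self):
        self.value = None

def write(primary, secondary, value, partitioned):
    primary.value = value
    if not partitioned:          # replication link is up
        secondary.value = value  # secondary stays consistent

r1, r2 = Replica(), Replica()
write(r1, r2, "v1", partitioned=False)
assert r1.value == r2.value == "v1"   # consistent: every client sees "v1"

# Network partition: choosing Availability means r2 still answers reads,
# but it cannot see the latest write -> Consistency is sacrificed.
write(r1, r2, "v2", partitioned=True)
print(r1.value, r2.value)  # v2 v1 -> a read from r2 returns stale data
```

A system that instead refuses to answer from r2 during the partition preserves consistency but gives up availability, which is the other side of the same trade-off.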
6 List the Big data sources and also explain the challenges of big data. (NOV 2022)
OR
What are the different challenges Big Data poses? (NOV 2018)
ANS
Big Data Sources
• Enterprises, which are collecting data at finer granularities now, attaching more
details to every transaction in order to understand consumer behavior.
• Increase in multimedia usage across industries such as health care, product companies,
etc.
• Social media sites such as Facebook, Twitter, etc.
• Rapid adoption of smartphones.
• Increased usage of sensors and devices.
8 Discuss the difficulties faced in managing Big Data using Legacy systems. (APR 2023)
ANS
Legacy Systems and Big Data
• Structure of Big Data
    o Legacy systems are designed to work with structured data, where tables with
      columns are defined and the format of the data held in the columns is also
      known.
    o However, big data is data with many structures. It is basically unstructured data
      such as images, videos, logs, etc.
• Data Storage
    o Legacy systems use big servers and NAS (network-attached storage) and SAN
      (storage area network) systems to store the data. As the data increases, the
      server size and the backend storage size have to be increased.
    o Traditional legacy systems typically work in a scale-up model, where more and
      more compute, memory, and storage need to be added to a server to meet
      the increased data needs. Hence the processing time increases exponentially,
      which defeats the other important requirement of big data, which is velocity.
• Data Processing
    o The algorithms in legacy systems are designed to work with structured data
      such as strings and integers.
    o They are also limited by the size of the data. Thus, legacy systems are not capable
      of handling the processing of unstructured data, huge volumes of such data,
      and the speed at which the processing needs to be performed.
ANS
Advantages and Disadvantages of NoSQL
• Advantages of NoSQL
    High scalability: The new generation of NoSQL databases is designed to scale out
    (i.e. to expand horizontally using low-end commodity servers).
    Manageability: NoSQL databases are designed to work mostly with automated
    repairs, distributed data, and simpler data models, leading to low administration
    and manageability effort.
    Low cost: NoSQL databases are typically designed to work with a cluster of cheap
    commodity servers, enabling the users to store and process more data at a low
    cost.
    Flexible data models: NoSQL databases have a very flexible data model, enabling
    them to work with any type of data; they don’t comply with the rigid RDBMS data
    models. As a result, any application changes that involve updating the database
    schema can be easily implemented.
• Disadvantages of NoSQL
    Maturity: Most NoSQL databases are pre-production versions with key features
    that are still to be implemented. Thus, when deciding on a NoSQL database, we
    should analyse the product properly to ensure the features are fully implemented
    and not still on the to-do list.
    Support: Support is one limitation that we need to consider. Most NoSQL
    databases are from start-ups which were open sourced. As a result, support is
    very minimal as compared to the enterprise software companies and may not
    have global reach or support resources.
    Limited query capabilities: Since NoSQL databases are generally developed to
    meet the scaling requirements of web-scale applications, they provide limited
    querying capabilities.
    Expertise: Since NoSQL is an evolving area, expertise on the technology is limited
    in the developer and administrator community.
10 What are the different categories of NoSQL database? Explain each with an example.
(NOV 2018)
OR
Discuss the various categories of NoSQL Databases. (NOV 2019)
OR
List the categories of NoSQL databases. Also explain the ways in which MongoDB is different from
SQL. (NOV 2022)
• Feature Comparison
Feature                          Column-Oriented   Document Store   Key-Value Store   Graph
Table-like schema support        Yes               No               No                Yes
(columns)
Complete update/fetch            Yes               Yes              Yes               Yes
Partial update/fetch             Yes               Yes              Yes               No
Query/filter on value            Yes               Yes              No                Yes
Aggregate across rows            Yes               No               No                No
Relationship between entities    No                No               No                Yes
Cross-entity view support        No                Yes              No                No
Batch fetch                      Yes               Yes              Yes               Yes
Batch update                     Yes               Yes              Yes               No
                      SQL databases                       NoSQL databases
Types                 All types support the SQL           Multiple types exist, such as
                      standard.                           document stores, key-value stores,
                                                          column databases, etc.
Development           Developed in the 1970s.             Developed in the 2000s.
History
Examples              SQL Server, Oracle, MySQL.          MongoDB, HBase, Cassandra.
Data Storage          Data is stored in rows and          The data model depends on the
Model                 columns in a table, where each      database type. Say data is stored as a
                      column is of a specific type.       key-value pair for key-value stores.
Schemas               Fixed structure and schema, so      Dynamic schema; new data types or
                      any change to the schema            structures can be accommodated by
                      involves altering the database.     expanding or altering the current
                                                          schema. New fields can be added
                                                          dynamically.
Scalability           Scale-up approach is used; this     Scale-out approach is used; this
                      means as the load increases,        means distributing the data load
                      bigger, expensive servers are       across inexpensive commodity
                      bought to accommodate the           servers.
                      data.
Supports              Supports ACID and                   Supports partitioning and availability,
Transactions          transactions.                       and compromises on transactions.
Support               High level of enterprise support    Open source model. Support through
                      is provided.                        third parties or companies building
                                                          the open source products.
• Definition
    o NoSQL is an umbrella term for data stores that don’t follow the RDBMS
      principles.
    o The term was used initially to mean “do not use SQL if you want to scale.” Later it
      was redefined to “not only SQL.”
• A Brief History of NoSQL
    o In 1998, Carlo Strozzi coined the term NoSQL.
    o He used this term to identify his database because the database didn’t have a
      SQL interface.
    o The term resurfaced in early 2009 when Eric Evans (a Rackspace employee)
      used this term at an event on open source distributed databases to refer to
      distributed databases that were non-relational and did not follow the ACID
      features of relational databases.
History
• The development of MongoDB was started in early 2007 by Dwight Merriman and Eliot
Horowitz when the company was developing a Microsoft Azure-like platform as a
service.
• This was a New York based company named 10gen, which has since changed its name to
MongoDB Inc.
• The initial development was focused on building a PaaS (Platform as a Service), but later,
in 2009, MongoDB came to the market as an open-source database server and was
maintained by this organization itself.
SQL Comparison
• MongoDB uses documents for storing its data, which offer a flexible schema.
• MongoDB doesn’t provide support for JOIN operations, like in SQL.
• MongoDB doesn’t provide support for transactions in the same way as SQL.
• MongoDB stores data as Binary JSON documents (also known as BSON). The
documents can have different schemas, which means that the schema can change as
the application evolves.
• MongoDB is built for scalability, performance, and high availability.
15 How can consistency be implemented at both the read and write operation levels? Explain. (NOV 2022)
OR
Describe the NRW notation for implementing consistency at read and write operations. (APR 2023)
Consistency at the write operation level:
At the write operation level, consistency ensures that all replicas or copies of a data item are
updated in a coordinated and synchronized manner. Here are some approaches to
implementing consistency:
a. Two-Phase Commit (2PC): The two-phase commit protocol is a classic approach for achieving
write consistency in distributed systems. It involves a coordinator and multiple participants
(replicas). The coordinator ensures that all participants agree on committing or aborting a
transaction before proceeding. This protocol guarantees that either all replicas commit the
write operation or none of them do, ensuring consistency across the system.
b. Distributed Consensus: Distributed consensus algorithms like Paxos and Raft can be used to
achieve write consistency. These algorithms allow a distributed system to agree on a single value
or sequence of operations. By ensuring that all replicas reach consensus before committing a
write operation, consistency is maintained across all participants.
Consistency at the read operation level:
a. Strong Consistency: Strong consistency guarantees that a read operation will always return
the most recent value. This can be achieved by directing read operations to a single primary
replica that handles all updates and ensuring that subsequent reads are served by that replica.
Any concurrent write operations are coordinated to maintain consistency.
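The NRW notation referred to in the question can be expressed in one rule: with N replicas of a data item, a write waits for acknowledgements from W replicas, and a read consults R replicas. Whenever R + W > N, the read set and write set must overlap, so a read is guaranteed to see the latest write. A minimal sketch of that rule (illustrative only):

```python
# NRW rule: N = replicas, W = write quorum, R = read quorum.
# If R + W > N the read and write quorums overlap in at least one
# replica, so every read observes the most recent committed write.

def is_strongly_consistent(n, r, w):
    return r + w > n

# Classic quorum configuration: N=3, W=2, R=2 -> overlap guaranteed.
assert is_strongly_consistent(3, 2, 2)
# Fast writes and reads (W=1, R=1) leave a window for stale reads.
assert not is_strongly_consistent(3, 1, 1)
# Write-all (W=N) keeps every replica current, so reading one suffices.
assert is_strongly_consistent(3, 1, 3)
print("quorum checks passed")
```

Tuning R and W therefore trades latency against consistency: lowering W speeds up writes but forces a larger R (or accepts eventual consistency) on the read side.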
UNIT 2
1 Consider a collection users containing the following fields
{
id: ObjectID(),
FName: "First Name",
LName: "Last Name",
Age: 30,
Gender: "M",
Country: "Country"
}
Where Gender value can be either "M" or "F" or “Other”.
Country can be either "UK" or "India" or "USA".
Based on the above information, write the MongoDB query for the following.
i. Update the country to UK for all female users.
ii. Add the new field company to all the documents.
iii. Delete all the documents where Gender = ‘M’.
iv. Find out a count of female users who stay in either India or USA.
v. Display the first name and age of all female employees.
(NOV 2018)
ANS
i. Update the country to UK for all female users.
db.users.update({"Gender":"F"}, {$set:{"Country":"UK"}}, {multi:true})
ii. Add the new field company to all the documents.
db.users.update({}, {$set:{"Company":"TestComp"}}, {multi:true})
iii. Delete all the documents where Gender = ‘M’.
db.users.remove({"Gender":"M"})
iv. Find out a count of female users who stay in either India or USA.
db.users.find({"Gender":"F", $or:[{"Country":"India"}, {"Country":"USA"}]}).count()
v. Display the first name and age of all female employees.
db.users.find({"Gender":"F"}, {"FName":1,"Age":1})
2 Write the MongoDB command to create the following with an example:
(i) Database (ii) Collection (iii) Document
(iv) Drop Collection (v) Drop Database (vi) Index
(NOV 2018)
ANS (i) Database
• The MongoDB command use DATABASE_NAME is used to create a database. The command
will create a new database if it doesn't exist; otherwise it will return the existing database.
Syntax
• use DATABASE_NAME
(ii) Collection
• The MongoDB command db.createCollection(name, options) is used to create a collection.
Syntax
• Basic syntax of the createCollection() command is as follows −
• db.createCollection(name, options)
Example
db.createCollection("mycollection")
(iii) Document
• To create a document in a MongoDB collection, you need to use MongoDB's insert()
or save() method.
Syntax
• The basic syntax of the insert() command is as follows −
• db.COLLECTION_NAME.insert(document)
Example
db.mycol.insert({
    title: 'MongoDB Overview',
    description: 'MongoDB is a NoSQL database',
    url: 'http://www.MongoDB.com',
    tags: ['mongodb', 'database', 'NoSQL'],
    likes: 100
})
(iv) Drop Collection
• MongoDB's db.collection.drop() is used to drop a collection from the database.
Syntax
• Basic syntax of the drop() command is as follows −
• db.COLLECTION_NAME.drop()
Example
db.mycollection.drop()
(v) Drop Database
• The MongoDB db.dropDatabase() command is used to drop an existing database.
Syntax
• Basic syntax of the dropDatabase() command is as follows −
• db.dropDatabase()
Example
db.dropDatabase()
(vi) Index
• To create an index you need to use the ensureIndex() method of MongoDB.
Syntax
• The basic syntax of the ensureIndex() method is as follows −
• db.COLLECTION_NAME.ensureIndex({KEY:1})
Example
• db.mycol.ensureIndex({"title":1})
• In addition to collection size, we can also limit the number of documents in the
collection using the max parameter:
db.createCollection("cappedLogCollection",{capped:true,size:10000,max:1000})
• If you want to check whether a collection is capped or not, use the following
isCapped command:
db.cappedLogCollection.isCapped()
db.runCommand({"convertToCapped":"posts",size:10000})
• This code would convert our existing collection posts to a capped collection.
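The eviction behavior of a capped collection with a max document count can be mimicked with a fixed-length queue. This is a toy analogue in Python, not MongoDB itself:

```python
from collections import deque

# Toy analogue of a capped collection with max=3 documents: once the cap
# is reached, inserting a new document evicts the oldest one (FIFO order),
# just as a capped collection ages out its oldest entries.
capped = deque(maxlen=3)
for doc in ({"_id": i} for i in range(1, 5)):   # insert 4 documents
    capped.append(doc)

print(list(capped))  # [{'_id': 2}, {'_id': 3}, {'_id': 4}] -> {_id: 1} aged out
```

The same principle applies to the size cap: the oldest data is overwritten first, which is why capped collections suit logs and the replication oplog.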
4 List and explain the different conditional operators in MongoDB. (NOV 2018)
ANS
Using Conditional Operators
• The different conditional operators are $lt, $lte, $gt, $gte, $in, $nin, and $not.
1. $lt and $lte
    They stand for “less than” and “less than or equal to,” respectively.
    If you want to find all students who are younger than 25 (Age < 25), you can
    execute the following find with a selector:
    db.students.find({"Age":{"$lt":25}})
2. $gt and $gte
    The $gt and $gte operators stand for “greater than” and “greater than or
    equal to,” respectively. Let’s find all of the students with Age > 25. This can
    be achieved by executing the following command:
    db.students.find({"Age":{"$gt":25}})
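What a selector such as {"Age": {"$lt": 25}} actually tests can be made concrete with a miniature re-implementation of a few of these operators in plain Python. This is an illustrative sketch of the matching semantics, not MongoDB's real query engine:

```python
# Minimal re-implementation of some MongoDB conditional operators.
import operator

OPS = {"$lt": operator.lt, "$lte": operator.le,
       "$gt": operator.gt, "$gte": operator.ge,
       "$in": lambda value, arr: value in arr,
       "$nin": lambda value, arr: value not in arr}

def matches(doc, selector):
    # Every {field: {op: operand}} pair in the selector must hold.
    for field, cond in selector.items():
        for op, operand in cond.items():
            if not OPS[op](doc.get(field), operand):
                return False
    return True

students = [{"Name": "A", "Age": 20}, {"Name": "B", "Age": 30}]
young = [s["Name"] for s in students if matches(s, {"Age": {"$lt": 25}})]
print(young)  # ['A']
```

The same matches() shape extends naturally to $in/$nin, e.g. matches({"Age": 30}, {"Age": {"$in": [30, 40]}}) holds.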
• BSON document
• MongoDB stores JSON documents in a binary-encoded format. This is termed
BSON. The BSON data model is an extended form of the JSON data model.
6 What is a Polymorphic Schema? Explain the various reasons for using a polymorphic
schema. (NOV 2022)
OR
Define polymorphic schema. Give reasons for its use. (APR 2023)
ANS
Polymorphic Schemas
• A polymorphic schema is a schema in which a collection contains documents of
different structures, i.e. the documents need not all have the same set of fields.
• Schema Evolution
    MongoDB offers an update option that can be used to change the structure of
    all the documents within a collection if there is a new addition of a field. However,
    imagine the impact of doing this if you have thousands of documents in the
    collection.
    It would be very slow and would have a negative impact on the underlying
    application's performance.
    A better way is to include the new structure only in the new documents being
    added to the collection and then gradually migrate the existing documents in
    the background while the application is still running. This is one of the many use
    cases where having a polymorphic schema is advantageous.
    Say the application team decides to introduce a "short description" field in the
    ticket document structure; the best alternative is to introduce this new field
    only in new ticket documents.
    Within the application, you embed a piece of code that handles retrieving
    both "old style" documents (without a short description field) and "new style"
    documents (with a short description field).
    Gradually the old style documents can be migrated to the new style documents.
    Once the migration is completed, if required, the code can be updated to
    remove the piece of code that was embedded to handle the missing field.
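The "piece of code that handles both styles" can be as small as a fallback read. A hedged Python sketch (the field names short_description and description are assumptions for illustration):

```python
# Application-side handling of a collection mid-migration: new-style ticket
# documents carry short_description; old-style ones do not, so we derive a
# value from the long description until the background migration reaches them.

def short_description(ticket):
    if "short_description" in ticket:
        return ticket["short_description"]     # new-style document
    return ticket["description"][:20]          # old-style fallback

old_doc = {"description": "Printer on floor 3 is jammed again"}
new_doc = {"description": "...", "short_description": "Printer jammed"}
print(short_description(old_doc))  # derived from the long description
print(short_description(new_doc))  # stored field used directly
```

Once every document has been migrated, the fallback branch can simply be deleted, exactly as the text describes.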
E.g.:
A compound index can only help with sorting if the sort keys are a prefix of the index keys.
db.testindx.ensureIndex({"Age": 1, "Name": 1, "Class": 1})
db.testindx.find().sort({"Age":1})
db.testindx.find().sort({"Age":1,"Name":1})
db.testindx.find().sort({"Age":1,"Name":1,"Class":1})
db.testindx.find().sort({"Class":1,"Age":1,"Name":1}) : WILL NOT BE HELPFUL
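The prefix rule behind these examples can be stated directly in code. A simplified sketch (it ignores the reverse-direction sorts a real query planner also accepts):

```python
# A sort specification can use a compound index only if its keys are a
# leading prefix of the index keys, in the same order and direction.

def sort_covered_by_index(index_keys, sort_keys):
    return sort_keys == index_keys[:len(sort_keys)]

index = [("Age", 1), ("Name", 1), ("Class", 1)]
assert sort_covered_by_index(index, [("Age", 1)])
assert sort_covered_by_index(index, [("Age", 1), ("Name", 1)])
assert sort_covered_by_index(index, [("Age", 1), ("Name", 1), ("Class", 1)])
# Reordered keys are not a prefix, so the index cannot serve this sort:
assert not sort_covered_by_index(index, [("Class", 1), ("Age", 1), ("Name", 1)])
print("prefix rule verified")
```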
Unique Index
    If an index is created on the Name field, two or more documents can still have
    the same name. However, if uniqueness is one of the constraints that needs to
    be enforced, the unique property needs to be set to true when creating the
    index.
    db.testindx.ensureIndex({"Name":1},{"unique":true})
    Uniqueness can be enabled for compound indexes also, which means that
    although individual fields can have duplicate values, the combination will
    always be unique.
    db.testindx.ensureIndex({"Name":1, "Age":1},{"unique":true})
system.indexes
    All of the information about a database’s indexes is stored in the system.indexes
    collection. This is a reserved collection, so you cannot modify its documents or
    remove documents from it.
    You can manipulate it only through the ensureIndex and dropIndexes database
    commands.
dropIndex
    db.testindx.dropIndex({"Name":1})
reIndex
    When you have performed a number of insertions and deletions on the
    collection, you may have to rebuild the indexes so that they can be used
    optimally.
    db.testindx.reIndex()
8 How can you create a collection explicitly? Explain selectors and projectors with an
example. (NOV 2022)
OR
What is a Query Document? Describe selectors and projectors with a suitable example.
(APR 2023)
ANS • A query document can contain selectors and projectors.
• A selector is like a WHERE condition in SQL, or a filter that is used to filter out the
results.
• db.users.find({"Gender":"F"})
• db.users.find({"Gender":"F", $or: [{"Country":"India"}]})
• db.users.find({"Gender":"F",$or:[{"Country":"India"},{"Country":"US"}]})
• For aggregation requirements, the aggregate functions need to be used, e.g. the
count() function for aggregation:
• db.users.find().count()
• A projector is like the SELECT condition or the selection list that is used to display the
data fields.
• db.users.find({"Gender":"F"}, {"Name":1,"Age":1})
sort(): 1 for an ascending and -1 for a descending sort.
• db.users.find({"Gender":"F"}, {"Name":1,"Age":1}).sort({"Age":1})
• db.users.find({"Gender":"F"},{"Name":1,"Age":1}).sort({"Name":-1,"Age":1})
9 What is the use of the findOne() method? Briefly explain the explain() function. (NOV 2022)
ANS • findOne()
    Similar to find() is the findOne() command. The findOne() method can take the
    same parameters as find(), but rather than returning a cursor, it returns a single
    document.
    db.users.findOne({"Gender":"F"}, {"Name":1,"Age":1})
    db.users.findOne() : returns the first document.
• explain()
    The explain() function can be used to see what steps the MongoDB database is
    running while executing a query.
    Verbosity modes: allPlansExecution, executionStats, and queryPlanner. The
    default verbosity mode is queryPlanner.
    db.users.find({"Name":"Test User"}).explain("allPlansExecution")
11 List and explain the 3 core components in the MongoDB package. (NOV 2018)
OR
What are the various tools available in MongoDB? Explain. (APR 2019)
OR
Discuss the various tools in MongoDB. (NOV 2019)
OR
Describe the Core Processes and tools for the MongoDB package. (APR 2023)
ANS
Core Processes
• The core components in the MongoDB package are:
1) mongod: the core database process
2) mongos: the controller and query router for sharded clusters
3) mongo: the interactive MongoDB shell
• 1. mongod
    The primary daemon in a MongoDB system is known as mongod. This daemon
    handles all the data requests, manages the data format, and performs
    operations for background management. When mongod is run without any
    arguments, it connects to the default data directory, which is C:\data\db or
    /data/db, and the default port 27017, where it listens for socket connections.
    It’s important to ensure that the data directory exists and that you have write
    permissions to the directory before the mongod process is started.
• 2. mongo
    mongo provides an interactive JavaScript interface for the developer to test
    queries and operations directly on the database and for the system
    administrators to manage the database. This is all done via the command line.
    When the mongo shell is started, it connects to the default database called
    test. This database connection value is assigned to the global variable db.
• 3. mongos
    mongos is used in MongoDB sharding. It acts as a routing service that processes
    queries from the application layer and determines where in the sharded cluster
    the requested data is located.
MongoDB Tools
• mongodump: This utility is used as part of an effective backup strategy. It creates a
binary export of the database contents.
• mongorestore: The binary database dump created by the mongodump utility is
imported to a new or an existing database using the mongorestore utility.
• bsondump: This utility converts BSON files into human-readable formats such
as JSON and CSV. For example, this utility can be used to read the output file
generated by mongodump.
• mongoimport, mongoexport: mongoimport provides a method for taking data in
JSON, CSV, or TSV formats and importing it into a mongod instance. mongoexport
provides a method to export data from a mongod instance into JSON, CSV, or TSV
formats.
• mongostat, mongotop, mongosniff: These utilities provide diagnostic information
related to the current operation of a mongod instance.
The master node maintains a capped collection (the oplog) that stores an ordered
history of logical writes to the database.
The slaves replicate the data using this oplog collection.
Since the oplog is a capped collection, if the slave’s state is far behind the
master’s state, the slave may become out of sync.
13 Explain the two ways MongoDB enables distribution of the data in sharding. (NOV 2022)
OR
Write a short note on the Data Distribution Process. (APR 2019)
OR
Explain the concept of Sharding in detail. (NOV 2019)
ANS
Data Distribution Process
• In MongoDB, the data is sharded, or distributed, at the collection level; the collection
is partitioned using the shard key.
1. Range-Based Partitioning
    In range-based partitioning, the shard key values are divided into ranges. Say
    you consider a timestamp field as the shard key. In this way of partitioning, the
    values are considered as a straight line starting from a Min value to a Max value,
    where Min is the starting period (say, 01/01/1970) and Max is the end period
    (say, 12/31/9999). Every document in the collection will have a timestamp value
    within this range only, and it will represent some point on the line.
    Based on the number of shards available, the line will be divided into ranges,
    and documents will be distributed based on them.
    Documents whose shard key values are near one another are likely to fall on
    the same shard. This can significantly improve the performance of range
    queries.
2. Hash-Based Partitioning
    In hash-based partitioning, the data is distributed on the basis of the hash value
    of the shard field. This leads to a more random distribution compared to
    range-based partitioning.
    It’s unlikely that documents with close shard keys will be part of the same
    chunk.
    For example, for ranges based on the hash of the _id field, there will be a
    straight line of hash values, which will again be partitioned on the basis of the
    number of shards. On the basis of the hash values, the documents will lie in one
    of the shards.
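The contrast between the two schemes can be sketched with a toy 4-shard cluster. The range boundaries and hash function below are illustrative inventions, not MongoDB's actual chunk math:

```python
# Range-based vs. hash-based shard assignment for a timestamp-like key.
import hashlib

SHARDS = 4

def range_shard(ts, min_ts=0, max_ts=1000):
    # Split [min_ts, max_ts) into equal ranges, one per shard.
    return min(ts * SHARDS // (max_ts - min_ts), SHARDS - 1)

def hash_shard(ts):
    # Hash the key first, then assign by hash value: nearby keys scatter.
    digest = hashlib.md5(str(ts).encode()).hexdigest()
    return int(digest, 16) % SHARDS

close_keys = [100, 101, 102]
print([range_shard(t) for t in close_keys])  # nearby keys -> same shard
print([hash_shard(t) for t in close_keys])   # nearby keys -> likely scattered
```

This is exactly the trade-off in the text: range-based placement makes range queries cheap but can hotspot one shard; hash-based placement spreads load but breaks key locality.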
14 What is Sharding? Explain the data balancing process used in Sharding. (APR 2023)
ANS
Data Balancing Process
• MongoDB ensures balance with the following background processes:
• Chunk splitting
    Chunk splitting is one of the processes that ensures the chunks are of the
    specified size.
    If the size of a chunk changes due to an insert or update operation and exceeds
    the default chunk size, the chunk is split into two smaller chunks by the
    mongos.
• Balancer
    The balancer is the background process that is used to ensure that all of the
    shards are equally loaded, i.e. are in a balanced state. This process manages
    chunk migrations.
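A toy version of chunk splitting makes the trigger condition concrete. The sizes and the midpoint split below are illustrative simplifications; real mongos splits on shard-key boundaries, not arithmetic midpoints:

```python
# Chunk splitting sketch: when inserts push a chunk past the maximum size,
# it is split into two smaller chunks. A chunk is (low_key, high_key, size).

MAX_CHUNK = 64  # illustrative cap, in arbitrary units

def split_if_needed(chunks):
    out = []
    for lo, hi, size in chunks:
        if size > MAX_CHUNK:
            mid = (lo + hi) // 2
            out += [(lo, mid, size // 2), (mid, hi, size - size // 2)]
        else:
            out.append((lo, hi, size))
    return out

chunks = [(0, 100, 40), (100, 200, 90)]   # second chunk has outgrown the cap
print(split_if_needed(chunks))
```

After splitting, the balancer's separate job is to migrate chunks so each shard holds roughly the same number of them.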
15 Discuss the points to be considered while importing data in a sharded environment. (NOV
2018)
Shard:
A shard is a subset of the data in the sharded cluster. It represents a distinct database or
partition that holds a portion of the overall dataset. Each shard is a separate server or replica
set that can store and manage its own data independently. Shards are responsible for
executing read and write operations on the data they contain.
Config Servers:
Config servers store the metadata and configuration information about the sharded cluster.
They maintain details such as shard key ranges, shard mappings, and cluster settings. Config
servers ensure the cluster's consistency and provide the necessary information for routing
operations to the appropriate shards.
Query Routers:
Query routers (or mongos instances) are responsible for receiving and routing client requests
to the appropriate shards. They act as an intermediary between the application and the
sharded cluster. Query routers determine the target shard based on the shard key value in
the query and route the operation to the respective shard. They handle query optimization,
aggregation, and data chunk migration.
Sharding Metadata:
Sharding metadata includes information about the distribution of data across shards, shard
key ranges, chunk distribution, and metadata about individual shards. This metadata is stored
and managed by the config servers. It helps query routers and other cluster components
make informed decisions about routing and managing data distribution.
Balancer:
The balancer is responsible for moving data between shards to maintain a balanced
distribution and avoid hotspots. It monitors the data distribution across shards and triggers
chunk migrations when necessary. The balancer ensures that the data is evenly distributed,
optimizing query performance and resource utilization across the sharded cluster.
Chunk:
A chunk is a contiguous range of data within a shard. The sharding mechanism divides the
data into smaller chunks based on the shard key range. Each chunk represents a subset of
the data within a shard, and it is assigned to a specific shard based on the shard key value.
Chunks can be migrated between shards by the balancer to maintain an even distribution.
Shard Key:
The shard key is the attribute or field used to determine the placement of data within a
sharded cluster. It is chosen based on the application's requirements and data characteristics.
The shard key value is used to determine the target shard for storing and querying data. It
should be carefully chosen to ensure even distribution and avoid hotspots.
UNIT 3
1 What is a Data Storage Engine? Which is the default storage engine in MongoDB? Also
compare the MMAP and WiredTiger storage engines. (NOV 2018)
OR
What is a Data Storage Engine? Differentiate between the MMAP and WiredTiger storage
engines. (NOV 2019)
ANS
Data Storage Engine
• MongoDB uses MMAP as its default storage engine.
• This engine works with memory-mapped files.
• Memory-mapped files are data files that are placed by the operating system in
memory using the mmap() system call.
• mmap is a feature of the OS that maps a file on the disk into virtual memory.
• MongoDB uses memory-mapped files for any data interaction or data
management activity. As and when the documents are accessed, the data files are
memory-mapped into memory.
• MongoDB allows the OS to control the memory mapping and allocate the
maximum amount of RAM. Doing this results in minimal effort and coding at the
MongoDB level.
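Python's standard library exposes the same OS facility, so the idea of a memory-mapped data file can be demonstrated directly: bytes changed through the mapping are flushed back to the file, with no explicit write() call.

```python
# Memory-mapped file demo using the stdlib mmap module, which wraps the
# same mmap() system call described above: the file's bytes appear as a
# mutable in-memory buffer, and changes are flushed back to disk.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "datafile")
with open(path, "wb") as f:
    f.write(b"hello from disk")

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:   # map the whole file into memory
        mm[0:5] = b"HELLO"                 # modify memory, not the file object
        mm.flush()                         # push the change back to the file

with open(path, "rb") as f:
    print(f.read())  # b'HELLO from disk'
```

The OS deciding when (and how much) to page in and flush is precisely what lets the MMAP engine lean on the operating system for memory management.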
ANS
Data File (Relevant for WiredTiger)
• The WiredTiger cache is used for any read/write operations on the data. The trees in
the cache are optimized for in-memory access.
• Reads and Writes
• In WiredTiger, the data in the cache is stored in a B+ tree structure which is
optimized for in-memory access. The cache maintains an on-disk page image in
association with an index, which is used to identify where the data being asked for
actually resides.
• The write operations do not change the page; instead, the updates are layered on
top of the page. A skipList data structure is used to maintain all the updates, where
the most recent update is on the top.
• WiredTiger does not support in-place updates of a document; it usually deletes the
old document and writes the document again.
3 What is Journaling? Explain the importance of Journaling with the help of a neat diagram.
(NOV 2018)
OR
Delineate the write operations performed using Journaling. (APR 2023)
ANS
Using Journaling
• MongoDB disk writes are lazy, which means if there are 1,000 increments in one
second, the value will only be written once. The physical write occurs a few seconds
after the operation. We will now see how an update actually happens in mongod.
• In the MongoDB system, mongod is the primary daemon process. The disk has
the data files and the journal files.
• When mongod is started, the data files are mapped to a shared view. In other
words, the data file is mapped to a virtual address space.
• Basically, the OS recognizes that your data file is 2000 bytes on disk, so it maps this
to memory addresses 1,000,000 – 1,002,000. You still have files backing up the
memory. Thus, any change in memory will be flushed to the underlying files by
the OS.
• This is how mongod works when journaling is not enabled: every 60 seconds the
in-memory changes are flushed by the OS. When journaling is enabled, mongod
makes a second mapping of the data files (a private view), which is why the virtual
memory amount used by mongod doubles when journaling is enabled.
4 Explain “GridFS – The MongoDB File System” with the help of a neat diagram. (NOV 2018)
OR
Write a short note on GridFS. (APR 2019)
OR
Explain the concept of GridFS - The MongoDB File System. (NOV 2022)
OR
Illustrate the following methods of GridFS: (APR 2023)
a. new_file()
b. get_version()
c. get_last_version()
d. delete()
e. exists() and put()
AN
S GridFS – The MongoDB File System
• GridFS is MongoDB’s specifica�on for handling large files that exceed BSON’s
document size limit.
• The Ra�onale of GridFS
By design, a MongoDB document (i.e., a BSON object) cannot be
larger than 16MB. This is to keep performance at an op�mum
level, and the size is well suited for our needs.
For example, 4MB of space might be sufficient for storing a sound
clip or a profile picture. However, if the requirement is to store
high quality audio or movie clips, or even files that are more than
several hundred megabytes in size, MongoDB has covered by
using GridFS.
GridFS specifies a mechanism for dividing a large file among
multiple documents. The language driver that implements it, for
example, the PHP driver, takes care of the splitting of the stored
files (or merging the split chunks when files are to be retrieved)
under the hood.
The developer using the driver does not need to know of such
internal details. This way GridFS allows the developer to store and
manipulate files in a transparent and efficient way.
GridFS uses two collections, fs.files and fs.chunks, for storing the file.
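The split-and-merge mechanism can be sketched in plain JavaScript (runnable in Node.js). The collection shapes loosely mirror GridFS's fs.files/fs.chunks documents, with a much smaller chunk size than the real 255KB default for readability; this is an illustration of the idea, not the driver's actual implementation.

```javascript
// Sketch of how GridFS splits a file across two collections:
// one "files" document for metadata, many "chunks" documents for data.
function gridfsPut(data, chunkSize) {
  var fileDoc = { _id: "file1", length: data.length, chunkSize: chunkSize };
  var chunks = [];
  for (var n = 0; n * chunkSize < data.length; n++) {
    chunks.push({
      files_id: fileDoc._id,           // back-reference to the file document
      n: n,                            // chunk sequence number
      data: data.slice(n * chunkSize, (n + 1) * chunkSize)
    });
  }
  return { files: [fileDoc], chunks: chunks };
}

// Retrieval merges the chunks back together, ordered by n.
function gridfsGet(store, fileId) {
  return store.chunks
    .filter(function(c) { return c.files_id === fileId; })
    .sort(function(a, b) { return a.n - b.n; })
    .map(function(c) { return c.data; })
    .join("");
}

var store = gridfsPut("hello world, this is a large file", 8);
console.log(store.chunks.length);       // 5 chunks of up to 8 characters
console.log(gridfsGet(store, "file1")); // original content restored
```

The developer never sees this splitting; the driver does it under the hood, which is exactly the transparency the text above describes.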
• Using GridFS
• The first thing that is needed is a reference to the GridFS filesystem:
from pymongo import MongoClient
import gridfs
db = MongoClient().gridfs_example
fs = gridfs.GridFS(db)
• Every GridFS instance is created with and will operate on a specific
Database instance.
• Saving and Retrieving Data
• The simplest way to work with gridfs is to use its key/value interface (the
put() and get() methods). To write data to GridFS, use put():
a = fs.put(b"hello world")
fs.get(a).read()
Indexing
• Sparse Indexes
• The index is said to be sparse because it only contains documents that have
the indexed field and skips documents where the field is missing. Due
to this nature, sparse indexes provide a significant space saving.
db.User.ensureIndex({ "LastName": 1 }, { sparse: true })
This index will contain documents such as
{FirstName: "Test", LastName: "User"}
{FirstName: "Test2", LastName: null}
The following document will not be part of the sparse index:
{FirstName: "Test1"}
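The membership rule can be sketched as a simple filter (a plain JavaScript illustration of the concept, not MongoDB's actual B-tree index structure): only documents carrying the indexed field, even with a null value, get an index entry.

```javascript
// A sparse index keeps entries only for documents that have the indexed field.
var users = [
  { FirstName: "Test", LastName: "User" },
  { FirstName: "Test2", LastName: null }, // field present (null counts)
  { FirstName: "Test1" }                  // no LastName: left out of the index
];

function buildSparseIndex(docs, field) {
  return docs
    .filter(function(d) { return field in d; }) // skip docs missing the field
    .map(function(d) { return { key: d[field], doc: d }; });
}

var idx = buildSparseIndex(users, "LastName");
console.log(idx.length); // 2: the document without LastName is not indexed
```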
• An index is a data structure that speeds up read operations.
• _id index
This is the default index that is created on the _id field. This index cannot
be deleted.
• Secondary Indexes
All indexes that are user-created using ensureIndex() in MongoDB
are termed secondary indexes.
db.products.ensureIndex({"item": 1, "location": 1})
• Indexes with Keys Ordering
The references are maintained in either an ascending order or a
descending order.
db.events.ensureIndex({"username": 1, "timestamp": -1})
• Unique Indexes
A unique index ensures that you have unique values in the userid field.
db.payroll.ensureIndex({ "userid": 1 }, { unique: true })
• TTL Indexes (Time To Live)
If you want to set a TTL of one hour on the collection logs, the
following command can be used:
db.logs.ensureIndex({"Sample_Time": 1}, {expireAfterSeconds:
3600})
• Geospatial Indexes
If speed is not a primary concern or if the data set is larger than what any
in-memory strategy can support, it’s very important to select a proper disk type.
• CPU:
If you anticipate using map-reduce, then the clock speed and the available
processors become important considerations.
• Replication:
Replication is used if high availability is one of the requirements.
9 What are the tips that need to be considered when coding with the MongoDB database? (NOV
2022)
AN
S Coding
• Avoid $where as much as possible because it’s an extremely time- and
resource-intensive operation.
4. If the secondary is used for taking backups, consider taking backups without
blocking.
5. Check for replication errors. Run rs.status() and check the errmsg field.
Sharding Limitations
Sharding is the mechanism of splitting data across shards. The following are the limitations
when dealing with sharding.
1. Shard Early to Avoid Any Issues
Using the shard key, the data is split into chunks, which are then automatically distributed
amongst the shards. However, if sharding is implemented late, it can cause slowdowns of the
servers because the splitting and migration of chunks takes time and resources.
A simple solution is to monitor your MongoDB instance capacity using tools such as
MongoDB Cloud Manager (flush time, lock percentages, queue lengths, and faults are good
measures) and shard before reaching 80% of the estimated capacity.
2. Shard Key Can’t Be Updated
The shard key can’t be updated once the document is inserted in the collection because
MongoDB uses the shard key to determine to which shard the document should be routed. If
you want to change the shard key of a document, the suggested solution is to remove the
document and reinsert it once the change has been made.
3. Shard Collec�on Limit
The collec�on should be sharded before it reaches 256GB.
4. Select the Correct Shard Key
It’s very important to choose a correct shard key because once the key is chosen it’s not easy
to correct it.
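Why the shard key decides so much can be sketched with a toy router (runnable JavaScript; the range boundaries and shard names here are invented for illustration, and this is not MongoDB's real chunk logic): the shard key value alone determines which shard holds a document, so changing the key would mean moving the document.

```javascript
// Toy range-based router: the shard key value decides the target shard,
// the way MongoDB routes documents to chunks by shard key range.
var shards = [
  { name: "shard0", min: "a", max: "m" }, // keys from "a" up to (not including) "m"
  { name: "shard1", min: "m", max: "~" }  // keys from "m" onwards
];

function route(shardKeyValue) {
  for (var i = 0; i < shards.length; i++) {
    if (shardKeyValue >= shards[i].min && shardKeyValue < shards[i].max) {
      return shards[i].name;
    }
  }
}

console.log(route("delhi"));  // shard0
console.log(route("mumbai")); // shard1
```

Because every read and write is routed this way, an uneven shard key concentrates load on one shard, which is why choosing it carefully up front matters.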
13 Define Monitoring. Explain the factors to be considered while using Monitoring Services. (NOV
2019)
OR
Write a short note on performance monitoring of Mongo DB query.(APR 2023)
AN
S Monitoring
• A MongoDB system should be proactively monitored to detect unusual behavior so
that necessary actions can be taken to resolve issues.
• MongoDB also provides several tools, such as mongostat and mongotop, to gain
insights into the performance.
• When using monitoring services, the following should be watched closely.
• Op counters: Includes inserts, deletes, reads, updates, and cursor usage.
• Resident memory: An eye should always be kept on the allocated memory.
• Working set size: The active working set should fit into memory for good
performance, so a close eye needs to be kept on the working set.
14 “With the rise of the Smartphone, it’s becoming very common to query for things near a
current location”. Explain the different indexes used by MongoDB to support such
location-based queries. (NOV 2018)
AN The different indexes used by MongoDB to support such location-based queries,
S MongoDB provides geospatial indexes
Geospatial indexes
• To create a geospatial index, a coordinate pair in the following forms must exist in
the documents:
• Either an array with two elements
• Or an embedded document with two keys (the key names can be anything).
• The following are valid examples:
• { "userloc" : [ 0, 90 ] }
• { "loc" : { "x" : 30, "y" : -30 } }
• { "loc" : { "latitude" : -30, "longitude" : 180 } }
• {"loc" : {"a1" : 0, "b1" : 1}}
• The following can be used to create a geospatial index on the userloc field:
• db.userplaces.ensureIndex( { userloc : "2d" } )
• A geospatial index assumes that the values will range from -180 to 180 by default.
If this needs to be changed, it can be specified along with ensureIndex as follows:
• db.userplaces.ensureIndex({"userloc" : "2d"}, {"min" : -1000, "max" : 1000})
• Let’s understand with an example how this index works. Say you have documents
that are of the following type:
• {"loc":[0,100], "desc":"coffeeshop"}
• {"loc":[0,1], "desc":"pizzashop"}
• If the query of a user is to find all coffee shops near her location, the following
compound index can help:
• db.userplaces.ensureIndex({"loc" : "2d", "desc" : 1})
Geohaystack Indexes
• Geohaystack indexes are bucket-based geospatial indexes (also called geospatial
haystack indexes). They are useful for queries that need to find locations in a
small area and also need to be filtered along another dimension, such as finding
documents with coordinates within 10 miles and a type field value of restaurant.
• While defining the index, it’s mandatory to specify the bucketSize parameter, as it
determines the haystack index granularity. For example,
• db.userplaces.ensureIndex({ userpos : "geoHaystack", type : 1 }, { bucketSize : 1 })
• This example creates an index wherein keys within 1 unit of latitude or longitude
are stored together in the same bucket. You can also include an additional
category in the index, which means that information will be looked up at the same
time as finding the location details.
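The bucketing idea behind bucketSize can be sketched as follows (a plain JavaScript illustration of the concept only; MongoDB's real haystack index works differently internally): coordinates within the same bucketSize-unit cell of latitude/longitude, with the same extra field value, share one bucket key.

```javascript
// Sketch of haystack bucketing: positions within `bucketSize` units of
// latitude/longitude land in the same bucket, keyed together with the
// extra category field (e.g. type).
function bucketKey(pos, type, bucketSize) {
  var latCell = Math.floor(pos[0] / bucketSize);
  var lonCell = Math.floor(pos[1] / bucketSize);
  return latCell + ":" + lonCell + ":" + type;
}

var a = bucketKey([10.2, 20.7], "restaurant", 1);
var b = bucketKey([10.9, 20.1], "restaurant", 1); // same 1-unit cell, same type
var c = bucketKey([10.2, 20.7], "coffeeshop", 1); // same cell, different type
console.log(a === b, a === c); // true false
```

A query like "restaurants near (10.5, 20.5)" then only has to scan the handful of buckets around that cell instead of the whole collection.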
UNIT 4
1 Explain about Solid State Disk.(NOV 2022)
ANS
Solid State Disk
• In contrast to a magnetic disk, solid state disks contain no moving parts and provide
tremendously lower IO latencies.
• Performance of flash SSD is orders of magnitude superior to magnetic disk
devices, especially for read operations.
• A random read from a high-end solid-state disk may complete in as little as 25
microseconds, while a read from a magnetic disk may take up to 4,000
microseconds (4 milliseconds or 4/1000 of a second)—over 150 times slower.
• SSDs store bits of information in cells.
• A single-level cell (SLC) SSD contains one bit of information per cell, while a
multi-level cell (MLC) SSD contains more than one bit.
• Read operations, and initial write operations, require only a single-page IO.
• However, changing the contents of a page requires an erase and overwrite of a
complete block. Even the initial write can be significantly slower than a read, but
the block erase operation is particularly slow.
• When the database is started, all data is loaded from checkpoint files into main
memory.
• The application interacts with TimesTen via SQL requests that are guaranteed to
find all relevant data inside that main memory.
• Periodically, or when required, database data is written to checkpoint files.
• An application commit triggers a write to the transaction log,
• though by default this write will be asynchronous so that the application will not
need to wait on disk. The transaction log can be used to recover the database in
the event of failure.
4 Define In-Memory Database. What are the techniques used in In-Memory Database to ensure
that data is not lost?(NOV 2019)
AN In-Memory Databases
S
• The cost of memory has been falling, and the amount of memory that can be installed
on a server has been growing, exponentially since the earliest days of computing.
• The size of the average database—particularly in light of the Big Data
phenomenon—has been growing exponentially as well.
• Traditional relational databases use memory to cache data stored on disk, and they
generally show significant performance improvements as the amount of memory
increases.
• In the traditional database architecture, COMMIT operations require a write to a
transaction log on a persistent medium, and periodically the database writes
“checkpoint” blocks in memory to disk.
• Taking full advantage of a large memory system requires an architecture that is
aware that the database is completely memory resident and that allows for the
advantages of high-speed access without losing data in the event of a power failure.
• There are two changes to traditional database architecture that an in-memory
system should address.
Cache-less architecture:
Traditional disk-based databases almost invariably cache data in main memory
to minimize disk IO.
5 Explain how does Redis uses disk files for persistence.(NOV 2019)
OR
Define IMDB. Give an overview of Redis architecture. (APR 2023)
Redis
• All Redis data resides in-memory, in contrast to databases that store data on disk or
SSDs. By eliminating the need to access disks, in-memory data stores such as Redis
avoid seek time delays and can access data in microseconds.
• Flexible data structures
• Redis has a vast variety of data structures to meet your applica�on needs.
Strings – text or binary data up to 512MB in size
Lists – a collec�on of Strings in the order they were added.
Sets – an unordered collec�on of strings with the ability to intersect, union, and
diff other Set types.
Sorted Sets – Sets ordered by a value.
Hashes – a data structure for storing a list of fields and values.
• Although Redis was designed to hold all data in memory, it is possible for Redis to
operate on datasets larger than available memory by using its virtual memory
feature.
• When this is enabled, Redis will “swap out” older key values to a disk file. Should
the keys be needed they will be brought back into memory.
• Redis uses disk files for persistence.
The Snapshot files store copies of the entire Redis system at a point in time.
Snapshots can be created on demand or can be configured to occur at
scheduled intervals or after a threshold of writes has been reached.
The Append Only File (AOF) keeps a journal of changes that can be used to “roll
forward” the database from a snapshot in the event of a failure.
Redis supports asynchronous master/slave replication. If performance is very
critical and some data loss is acceptable, then a replica can be used as a backup
database and the master configured with minimal disk-based persistence.
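The snapshot-plus-AOF recovery described above can be sketched in plain JavaScript (an illustration of the idea only, not Redis's actual RDB/AOF file formats; the command shapes are simplified): restore the last snapshot, then replay the journaled writes to "roll forward".

```javascript
// Recovery sketch: start from the last snapshot, then roll forward by
// replaying the append-only journal of writes made after the snapshot.
function recover(snapshot, aof) {
  var db = Object.assign({}, snapshot); // copy the point-in-time state
  aof.forEach(function(entry) {
    if (entry.cmd === "SET") db[entry.key] = entry.value;
    if (entry.cmd === "DEL") delete db[entry.key];
  });
  return db;
}

var snapshot = { counter: "10", user: "ben" };
var aof = [
  { cmd: "SET", key: "counter", value: "11" },
  { cmd: "DEL", key: "user" },
  { cmd: "SET", key: "city", value: "mumbai" }
];
console.log(recover(snapshot, aof)); // { counter: '11', city: 'mumbai' }
```

The trade-off is the same one the text describes: the snapshot alone loses everything written after it, while the journal makes recovery complete at the cost of extra writes.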
9 Write jQuery code to add a CSS class to the HTML elements. (NOV 2018)
OR
What is jQuery? Explain the jQuery element selector, id selector and class selector with example.
(NOV 2022)
AN <!DOCTYPE html>
S <html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script>
$(document).ready(function(){
$("button").click(function(){
$("p:first").addClass("intro");
});
}); </script>
<style>
.intro {
font-size: 150%;
color: red;
}
</style>
</head>
<body>
<h1>This is a heading</h1>
<p>This is a paragraph.</p>
<p>This is another paragraph.</p>
<button>Add a class name to the first p element</button>
</body>
</html>
10 What are chaining methods? Write a code snippet using chaining methods. (APR 2023)
OR
Explain the jQuery DOM Filter Methods.(APR 2019)
• Chaining Methods
Chaining is a good way to avoid selecting elements more than once, as
follows:
• $("div").fadeOut();
• $("div").css("color", "red");
• $("div").text("hello world");
Instead of doing that and running $("div") three times, you could do this:
• $("div").fadeOut().css("color", "red").text("hello world");
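Chaining works because each jQuery method returns the jQuery object itself. A minimal stand-alone sketch of the same pattern (plain JavaScript, runnable in Node.js without jQuery; the Box type is invented for illustration):

```javascript
// Each method returns `this`, so calls can be strung together
// exactly like $("div").fadeOut().css(...).text(...).
function Box() {
  this.styles = {};
  this.content = "";
}
Box.prototype.css = function(prop, value) {
  this.styles[prop] = value;
  return this; // returning the object is what enables chaining
};
Box.prototype.text = function(value) {
  this.content = value;
  return this;
};

var box = new Box().css("color", "red").text("hello world");
console.log(box.styles.color, box.content); // red hello world
```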
parents() quite literally gets all of an element’s parents, right up to the very top
element.
• $("strong").parents()
This gives all parents, and the parents’ parents, right up to <html>.
• $("strong").parent()
This gives only the direct parent of the element.
Filters that are very useful are the :even and :odd filters.
• var rows = $("tr");
• rows.filter(":even").css("background", "red");
• rows.filter(":odd").css("background", "blue");
This stripes the table rows in two different colors.
AN
S Events
• There are a lot of events in the browser that you can bind to
• click: Clicking an element, such as a button.
• hover: Interacting with an element via the mouse; in pure JavaScript, known as
mouseenter or mouseleave.
• submit: Submitting a form.
• trigger: Making an event happen.
• off: Removing an event.
• Popular Events
• Now that you know how to bind events, it’s time to examine some of the
ones I tend to use most often in day-to-day development. The most obvious
is the click event, which you have already seen. This is the event you are
likely going to use more than any other.
• Another popular event is hover.
$("div").hover(function() {
alert("hovered in");
}, function() {
alert("hovered out");
});
• By taking advantage of chaining, you can simply bind the mouseleave
immediately after binding the mouseenter function.
$("div").on("mouseenter", function() {
alert("hovered over");
}).on("mouseleave", function() {
alert("hovered out");
});
• You can bind to as many events as you want in one go:
$("div").on("click", function() {
alert($(this).attr("class"));
});
• Triggering Events
• Sometimes you might want to manually trigger an event. Perhaps you’ve
got a link that enables the user to fill out a form, and when it’s clicked,
you’d like to fire the submit event on a form. jQuery has the trigger()
method to do this for us:
$("a").on("click", function() {
$("form").trigger("submit");
});
• Unbinding from Events
• Just as you have on() for binding to events, you have off() for unbinding
from events.
$("p").off("click");
• That will unbind all click events from every paragraph. You can also pass in an
event type as the first parameter to unbind all events of that type. The following
code binds a click handler and then unbinds it, so clicking a paragraph does
nothing:
$("p").on("click", function() {
alert("click " + this.id);
});
$("p").off("click");
• The Event Object
• Whenever you bind an event to a function and that function is then
triggered, jQuery passes what’s known as the event object. This object
contains a lot of information about the event.
$("p").on("click", function(event) {
console.log(event);
});
• Building an Accordion
• The code that we’ve asked you to write has been small and typically used to
show a small feature. This time, you’re going to pull together what you’ve
learned in the past few chapters and build a basic accordion. Once you
study events in further detail in the next chapter, you will visit this code
again and improve it.
14 Write jQuery code to change the text contents of elements on button click. (NOV 2018)
AN <!DOCTYPE html>
S <html>
<head>
<title></title>
<script src="jquery.js"></script>
<script>
$(document).ready(
function()
{
$("button").click(
function()
{
$("p").text("hello world");
} );
}
);
</script>
</head>
<body>
<p>Hello! Welcome to the jQuery Language!!</p>
<button>Click me</button>
</body>
</html>
15 Explain how we can create our own custom event in jQuery with an example.(NOV 2018)
ANS A seldom-used but very useful feature of jQuery’s events is the ability to trigger and bind to
your own custom events. We can use jQuery’s on() method to attach event handlers to
elements. For example, in the below code we have created a customized event named
“myOwnEvent” which will get triggered on click of the button.
Code:
<html>
<head>
<script src="jquery-3.3.1.min.js"></script>
<script>
$(document).ready(function(){
$("p").on("myOwnEvent", function(event, showName){
$(this).text(showName + "! It is a JavaScript Library!");
});
$("button").click(function(){
$("p").trigger("myOwnEvent", ["jQuery"]);
});
});
</script>
</head>
<body>
<button>Trigger custom event</button>
<p>Click the button to attach a customized event on this p element.</p>
</body>
</html>
16 What is Ajax? What is the use of Ajax? Explain how Ajax can be used with jQuery. (NOV 2018)
ANS 1. Ajax stands for Asynchronous JavaScript And XML. Ajax is just a means of loading
data from the server to the web browser without reloading the whole page.
2. Basically, what Ajax does is make use of the JavaScript-based XMLHttpRequest
object to send and receive information to and from a web server asynchronously,
in the background, without interfering with the user's experience.
3. Ajax has become so popular that you hardly find an application that doesn't use
Ajax to some extent. Examples of some large-scale Ajax-driven online
applications are: Gmail, Google Maps, Google Docs, YouTube, Facebook, Flickr, etc.
Ajax with jQuery
4. Different browsers implement Ajax differently, which means that if we adopt
the typical JavaScript way to implement Ajax, we have to write different
code for different browsers to ensure that Ajax works cross-browser.
5. But fortunately, jQuery simplifies the process of implementing Ajax by taking care
of those browser differences. It offers simple methods such as load(), $.get(),
$.post(), etc. to implement Ajax that work seamlessly across all browsers.
For example, the jQuery load() method:
6. The jQuery load() method loads data from the server and places the returned
HTML into the selected element. This method provides a simple way to load data
asynchronously from a web server.
1. DOM Insertion, Around: These methods let you insert elements around
existing ones. (wrap(), wrapAll(), wrapInner())
The wrap() method wraps specified HTML element(s) around each selected element.
Example
Wrap a <div> element around each <p> element:
$("button").click(function(){
$("p").wrap("<div></div>");
});
wrapAll():Wraps HTML element(s) around all selected elements
wrapInner():Wraps HTML element(s) around the content of each selected element
2. DOM Insertion, Inside: These methods let you insert elements within existing
ones. (append(), appendTo(), html(), prepend(), prependTo(), text())
The append() method inserts specified content at the end of the selected elements.
Example
Insert content at the end of all <p> elements:
$("button").click(function(){
$("p").append("<b>Appended text</b>");
});
The prepend() method inserts specified content at the beginning of the selected
elements.
Example
Insert content at the beginning of all <p> elements:
$("button").click(function(){
$("p").prepend("<b>Prepended text</b>");
});
The html() method sets or returns the content (innerHTML) of the selected elements.
Example
Change the content of all <p> elements:
$("button").click(function(){
$("p").html("Hello <b>world</b>!");
});
3. DOM Insertion, Outside: These methods let you insert elements outside existing
ones that are completely separate. (after(), before(), insertAfter(), insertBefore())
The after() method inserts specified content after the selected elements.
Example
Insert content after each <p> element:
$("button").click(function(){
$("p").after("<p>Hello world!</p>");
});
The before() method inserts specified content in front of (before) the selected elements.
Example
Insert content before each <p> element:
$("button").click(function(){
$("p").before("<p>Hello world!</p>");
});
Removing Elements from the DOM
2) jQuery provides a handful of methods, such as empty(), remove(), unwrap() etc. to
remove existing HTML elements or contents from the document.
The empty() method removes all child nodes and content from the selected elements.
Example
Remove the content of all <div> elements:
$("button").click(function(){
$("div").empty();
});
The remove() method removes the selected elements, including all text and child nodes.
This method also removes data and events of the selected elements.
Example
Remove all <p> elements:
$("button").click(function(){
$("p").remove();
});
18 What is an Event? Explain with syntax fadeIn() and fadeOut() jQuery methods. (NOV 2022)
• Events
When you write JavaScript in the browser, you’re writing event-driven code. Most
of your code will be executed when something happens, such as having content
slide in when a user clicks a link.
$("div").click(function() {
alert("hello");
});
$("div").on("click", function() {
alert("hello");
});
jQuery fadeIn()
jQuery fadeIn() method is used to fade in the element.
$(document).ready(function(){
$("button").click(function(){
$("#div1").fadeIn();
});
});
Syntax:
$(selector).fadeIn(speed, easing, callback);
jQuery fadeOut()
jQuery fadeOut() method is used to fade out the element.
$(document).ready(function(){
$("button").click(function(){
$("#div1").fadeOut();
});
});
Syntax:
$(selector).fadeOut(speed, easing, callback);
speed: It is an optional parameter. It specifies the speed of the delay. Its possible
values are slow, fast and milliseconds.
19 With a suitable code snippet, discuss the various methods for inserting content outside other
elements in jQuery. (APR 2023)
• DOM Insertion, Outside:
These methods let you insert elements outside existing ones that are completely
separate.
after()
<div><p>Hello</p></div>
$("p").after("<span>Hey</span>");
<div><p>Hello</p><span>Hey</span></div>
before()
<div><p>Hello</p></div>
$("p").before("<span>Hey</span>");
<div><span>Hey</span><p>Hello</p></div>
insertAfter()
<div><p>Hello</p></div>
$("<span>Hey</span>").insertAfter("p");
<div><p>Hello</p><span>Hey</span></div>
insertBefore()
<div><p>Hello</p></div>
$("<span>Hey</span>").insertBefore("p");
<div><span>Hey</span><p>Hello</p></div>
16 What is Event Propagation? Demonstrate the use of stopPropagation(). (APR 2023)
Event Propagation
<div style="background-color:red">DIV 1
<div style="background-color:green">DIV 2
<div style="background-color:blue">DIV 3</div>
</div>
</div>
$(function() {
$("div").on("click",function(event) {
alert("Hello World"+$(this).html());
});
});
If you click on the DIV 3 div tag you will get 3 alerts (div 3, div 2, div 1) because
div 3 is nested in div 2 and div 2 is nested in div 1, so a click on div 3 also
reaches all of its parents. This is called event propagation.
Calling stopPropagation() on the event object stops the bubbling, so only one alert
is shown:
$(function() {
$("div").on("click",function(event) {
alert("Hello World"+$(this).html());
event.stopPropagation();
});
});
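Outside the browser, the bubbling can be simulated with a simple parent chain (a plain JavaScript sketch of the concept, not the real DOM): the dispatch walks up the ancestors unless a handler calls stopPropagation().

```javascript
// Simulated bubbling: a click walks from the clicked element up through
// its parents, calling the handler at each level, until stopped.
function dispatchClick(element, handler) {
  var fired = [];
  var node = element;
  while (node) {
    var stopped = false;
    var event = { stopPropagation: function() { stopped = true; } };
    handler.call(node, event);
    fired.push(node.name);
    if (stopped) break; // stopPropagation() halts the walk
    node = node.parent;
  }
  return fired;
}

var div1 = { name: "DIV 1", parent: null };
var div2 = { name: "DIV 2", parent: div1 };
var div3 = { name: "DIV 3", parent: div2 };

// Without stopPropagation(): the click reaches every ancestor.
console.log(dispatchClick(div3, function(event) {}));
// With stopPropagation(): only the clicked element's handler runs.
console.log(dispatchClick(div3, function(event) { event.stopPropagation(); }));
```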
17 How are Ajax requests handled in jQuery? Illustrate the use of done(), fail() and always(). (APR
2023)
• jQuery comes with jQuery.ajax(), a complex and powerful method to handle
Ajax requests
• Take a look at how you might make a request to a fictional URL to get some
JSON. Later on, you will use an actual API, but for now, familiarize yourself with
the method. With the $.ajax() method, you can pass in one argument, which is
an object of options, or you can pass in two arguments. The first is the URL to
pass in and the second is an object of options. We prefer the first method—
passing in one object that contains a property for the URL, in which you would
either do, for example:
$.ajax({
"url": "/myurl",
//more settings here
});
$.ajax({
"url" : 'https://jsonplaceholder.typicode.com/posts'
}).done(function(data){
//if the call is successful
console.log(data)
}).fail(function(jqXHR, textStatus, errorThrown){
//if the call is not successful
}).always(function(){
//runs all the time
});
• As of jQuery 1.8, error() and success() are deprecated, meaning they
shouldn’t be used; instead, use the following:
o done(), which replaces success()
o fail(), which replaces error()
o always(), which runs regardless of whether the request was successful or
not
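Under the hood, done()/fail()/always() follow a callback-registration pattern. A tiny synchronous stand-in shows the idea (an illustration only, not jQuery's actual Deferred implementation; the Deferred function here is invented for the sketch):

```javascript
// Minimal deferred: done/fail register callbacks, resolve/reject fire them,
// and always runs in both cases -- mirroring the jQuery.ajax() chain above.
function Deferred() {
  var doneCbs = [], failCbs = [], alwaysCbs = [];
  return {
    done:   function(cb) { doneCbs.push(cb);   return this; },
    fail:   function(cb) { failCbs.push(cb);   return this; },
    always: function(cb) { alwaysCbs.push(cb); return this; },
    resolve: function(data) {   // success path
      doneCbs.forEach(function(cb) { cb(data); });
      alwaysCbs.forEach(function(cb) { cb(); });
    },
    reject: function(err) {     // failure path
      failCbs.forEach(function(cb) { cb(err); });
      alwaysCbs.forEach(function(cb) { cb(); });
    }
  };
}

var log = [];
var request = Deferred();
request.done(function(data) { log.push("done: " + data); })
       .fail(function(err)  { log.push("fail: " + err); })
       .always(function()   { log.push("always"); });
request.resolve("posts loaded"); // success: done + always run, fail does not
console.log(log); // [ 'done: posts loaded', 'always' ]
```

Returning `this` from each registration method is also what lets the real `$.ajax(...).done(...).fail(...).always(...)` chain read the way it does.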
18 What is a Plug-in? Create a jQuery plug-in that logs out the value of the ID attribute for every
element on the page. (APR 2023)
jQuery plug-ins are something that beginners tend to shy away from or are afraid to
use. Plug-ins seem to be built up in people’s minds as incredibly complex things to
use, but once you learn how they work, you’ll find them actually very
straightforward, and you’ll find yourself making multiple plug-ins while working.
Plug-ins are not as complicated as you might think
Why a Plug-in?
If you find yourself writing very similar code multiple times on different projects, it
is a great sign that you should spend time producing a plug-in that can then be
easily reused with little effort.
// A minimal definition of the plug-in: logID() is not built in, so it is
// added to jQuery's prototype ($.fn), the standard plug-in pattern.
$.fn.logID = function() {
return this.each(function() {
console.log(this.id); // log the ID attribute of each matched element
});
};
// Apply the plug-in to every element on the page:
$(function() {
$("*").logID();
});
UNIT 5
1 Write a short note on JSON Arrays. (NOV 2018)
ANS
1. Arrays in JSON are almost the same as arrays in JavaScript. In JSON, array values
must be of type string, number, object, array, boolean or null. In JavaScript, array
values can be all of the above, plus any other valid JavaScript expression, including
functions, dates, and undefined.
2. Arrays in JSON Objects
Arrays can be values of an object property:
Example
{
"name":"John",
"age":30,
"cars":[ "Ford", "BMW", "Fiat" ]
}
3. Accessing Array Values
We can access the array values by using the index number:
Example
x = myObj.cars[0];
4. Looping Through an Array
We can access array values by using a for-in loop:
var myObj, i, x = "";
myObj = {
"name":"John",
"age":30,
"cars":[ "Ford", "BMW", "Fiat" ]
};
for (i in myObj.cars) {
x += myObj.cars[i] + "<br>";
}
5. We can use the index number to modify an array:
Example
myObj.cars[1] = "Mercedes";
6. We can use the delete keyword to delete items from an array:
Example
delete myObj.cars[1];
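The fragments above can be combined into one runnable script (plain JavaScript; note that delete leaves an undefined hole in the array rather than shrinking it, which the for-in loop then skips):

```javascript
// Consolidated version of the array operations above.
var myObj = { "name": "John", "age": 30, "cars": ["Ford", "BMW", "Fiat"] };

var x = myObj.cars[0];        // access by index
myObj.cars[1] = "Mercedes";   // modify by index
delete myObj.cars[2];         // delete leaves a hole: length stays 3

var out = "";
for (var i in myObj.cars) {   // for-in visits only indices that still exist
  out += myObj.cars[i] + "<br>";
}
console.log(x, out, myObj.cars.length); // Ford Ford<br>Mercedes<br> 3
```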
2 Explain JSON data types. (APR 2019)
AN • JSON Values
S
The values that can be utilized within our JSON structures are represented by
types, as outlined within the 3rd edition of the ECMA-262 standard. JSON makes
use of four primitive types and two structured types.
A JSON value can only be a representative of string, number, object, array,
true, false, and null.
1 $schema
The $schema keyword states that this schema is written according to the draft v4
specification.
2 title
You will use this to give a title to your schema.
3 description
A little description of the schema.
4 type
The type keyword defines the first constraint on our JSON data: it has to be a JSON
Object.
5 properties
Defines various keys and their value types, minimum and maximum values to be
used in JSON file.
6 required
This keeps a list of required properties.
7 minimum
This is the constraint to be put on the value and represents minimum acceptable
value.
You can check http://json-schema.org for the complete list of keywords that can be used in
defining a JSON schema. The above schema can be used to test the validity of the following
JSON code –
[
{
"id": 2,
"name": "An ice sculpture",
"price": 12.50
},
{
"id": 3,
"name": "A blue mouse",
"price": 25.50
}
]
4 What is JSON Grammar? Explain. (APR 2019)
OR
Explain the JSON Grammar. (NOV 2019)
OR
Explain the JSON Grammar. (NOV 2022)
AN • JSON Grammar
S
JSON, in a nutshell, is a textual representation defined by a small set of
governing rules in which data is structured. The JSON specification states that
data can be structured in either of the two following compositions:
1. A collection of name/value pairs
2. An ordered list of values
• ["0",1,2,3,4,100];
• Designing an Object and Array via Literal Notation with the Provision of
Properties
var objectInstantiation = {name:"ben",age:36};
var arrayInstantiation = ["ben",36];
6 Give an overview about JavaScript Object Notation (JSON). Also explain about JSON tokens.
(NOV 2022)
AN The JavaScript Object Notation data format, or JSON for short, is derived from the literals of
S the JavaScript programming language. This makes JSON a subset of the JavaScript language.
As a subset, JSON does not possess any additional features that the JavaScript language itself
does not already possess. Although JSON is a subset of a programming language, it itself is
not a programming language but, in fact, a data interchange format.
• JSON Tokens
Regarding the interchange of JSON and the many languages that do not natively
possess Objects and Arrays, the tokens that make up the JSON text are all that is
required to interpret if any collections or ordered lists exist and apply all values in a
manner required of that language. This is accomplished with six structural
characters: { } [ ] : and ,
5) JSON doesn't provide display capabilities. XML provides the capability to display data
because it is a markup language.
6) JSON supports arrays. XML doesn't support arrays.
7) JSON is less secured than XML. XML is more secured.
8) JSON files are more human readable than XML. XML files are less human readable.
9) JSON supports only text and number data types. XML supports many data types such as
text, number, images, charts, graphs etc. Moreover, XML offers options for transferring the
format or structure of the data with the actual data.
8 What is the use of the Stringify function? What are the different parameters that can be
passed to the Stringify function? Explain with an example. (NOV 2018)
OR
Explain the stringify object for JSON Object. (NOV 2019)
OR
What is the use of stringify method? Explain with syntax. (NOV 2022)
AN • Stringify
S o stringify is used for serializing JavaScript values into valid JSON.
The method itself accepts three parameters, value, replacer, and space,
as defined by the signature.
o The JSON Object is a global object that does not offer the ability to create
any instances of the JSON Object.
• Value: The value parameter of the stringify method is the only required
parameter of the three outlined by the signature. The argument supplied to
the method represents the JavaScript value intended to be serialized. This
can be that of any object, primitive, or even a composite of the two.
• toJSON
o Dates do not possess a literal form, so the stringify method captures all
dates it encounters as string literals. It captures not only the date but
the time as well. Because stringify converts a date instance into a string,
you might rationalize that it’s produced by calling the toString
method possessed by the Date object. However, Date.toString()
does not produce a standardized value but, rather, a string
representation whose format depends on the locale of the browser
that the program is running in. With this output lacking a standard, it
would be less than ideal to serialize this value for data interchange.
o What would be ideal is to transform the contents into that of the ISO
8601 grammar, which is the standard for handling date and time
interchange.
o A JavaScript Date Object can be instantiated with the provision of an
ISO formatted string.
o To enable this feature, Crockford’s library also includes the toJSON
method, which is appended to the prototype of the Date Object so
that it will exist on any date.
o The toJSON method provides a convenient way to define the
necessary logic wherein the default behavior may fall short. While
this is not always ideal, it is often necessary. However, the toJSON
method is not the only means of augmenting the default behavior of
the stringify method.
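The points above can be sketched as follows (the account object and its fields are illustrative):

```javascript
// Dates serialize to ISO 8601 strings via the built-in Date.prototype.toJSON.
var d = new Date(Date.UTC(2023, 0, 15)); // months are zero-based
console.log(JSON.stringify(d)); // "2023-01-15T00:00:00.000Z"

// toJSON can be defined on any object to control its serialized form.
var account = {
  user: "ben",
  secret: "p@ssw0rd",
  toJSON: function () {
    return { user: this.user }; // omit the secret from serialization
  }
};
console.log(JSON.stringify(account)); // {"user":"ben"}
```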
• Replacer
o replacer, is optional, and when supplied, it can augment the default
behavior of the serialization that would otherwise occur. There are
two possible forms of argument that can be supplied. As explained
within the ECMA-262 standardization, the optional replacer
parameter is either a function that alters the way objects and arrays
are stringified or an array of strings and numbers that acts as a white
list for selecting the object properties that will be stringified.
• replacer Array
o var author = new Object();
o author.name="ben";
o author.age=35;
o author.email="iben@spilled-milk.com";
o JSON.stringify(author, ["name","age"] ); // '{"name":"ben","age":35}'
• replacer Function
• The alternate form that can be supplied as the replacer is that of a
function. Supplying a function to the replacer property allows the
application to insert the necessary logic that determines how objects
within the stringify method are serialized, much like that of the
toJSON method.
JSON.stringify(author, function(k, v){
console.log(this); // the object whose key/value pair is being visited
console.log(k); // the current key or index
console.log(v); // the value held by that key
return v; // returning v unchanged keeps the default serialization
});
// A separately defined replacer function may likewise be passed by reference:
// JSON.stringify(author, replacer);
• Space
o space, is also optional and allows you to specify the amount of
padding that separates each value from one another within the
produced JSON text. This padding provides an added layer of
readability to the produced string.
o The argument supplied to the parameter can be either a whole
number from 1 to 10 (larger values are capped at 10) or a string of up
to 10 characters to be used as the padding itself.
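A short sketch of the space parameter (the author object mirrors the earlier example):

```javascript
var author = { name: "ben", age: 35 };

// A number pads each nesting level with that many spaces.
console.log(JSON.stringify(author, null, 2));
// {
//   "name": "ben",
//   "age": 35
// }

// A string (up to 10 characters) is used verbatim as the padding.
console.log(JSON.stringify(author, null, "\t"));
```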
9 What is meant by serialization? Explain the method used for serializing JavaScript objects.
(APR 2023)
ANS Object serialization is the process of converting an object's state to a string from which it
can later be restored. ECMAScript 5 provides native functions JSON.stringify() and
JSON.parse() to serialize and restore JavaScript objects. These functions use the JSON
data interchange format. JSON stands for "JavaScript Object Notation," and its syntax is
very similar to that of JavaScript object and array literals:
JSON.parse(text [, reviver]);
JSON.parse can accept two parameters, text and reviver. The name of the parameter
text is indicative of the value it expects to receive.
Invalid JSON Grammar Throws a Syntax Error
var str = JSON.parse( "abc123" ); //SyntaxError: JSON.parse: unexpected character
Valid JSON Grammar Is Successfully Parsed
var str = JSON.parse( "\"abc123\"" ); // valid JSON string value
console.log(str); // abc123
console.log(typeof str); // string
• eval
The eval function is a property of the global object and accepts an argument in the
form of a string. The string supplied can represent an expression, statement, or both
and will be evaluated as JavaScript code
eval("alert(\"hello world\")");
If you were to run this program, you would see the dialog prompt appear with the
text hello world.
eval Returns the Result of an Evaluation
var answer = eval("1+5");
console.log(answer); // 6
• reviver
The reviver parameter, unlike the replacer parameter of the stringify method, can
only be supplied a function. The reviver function will be provided with two
arguments, which will assist our supplied logic in determining how to handle the
appropriate JavaScript values for return. The first parameter, k, represents the key or
index of the value being analyzed. Complementarily, the v parameter represents the
value of said key/index.
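A minimal sketch of a reviver that restores ISO 8601 date strings into Date instances (the JSON text and key names are illustrative):

```javascript
var json = '{"name":"ben","joined":"2023-01-15T00:00:00.000Z"}';

// The reviver inspects each key (k) and value (v); the value it
// returns replaces what parse would otherwise produce.
var obj = JSON.parse(json, function (k, v) {
  // Revive ISO 8601 strings back into Date instances.
  if (typeof v === "string" && /^\d{4}-\d{2}-\d{2}T/.test(v)) {
    return new Date(v);
  }
  return v; // all other values pass through unchanged
});

console.log(obj.joined instanceof Date); // true
```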
11 Explain the use of the json_encode and json_decode functions with an example. (NOV 2018)
ANS • JSON PHP
• JSON Functions
<?php
$json = '{"foo-bar": 12345}';
$obj = json_decode($json);
print $obj->{'foo-bar'}; // 12345
?>
syntax
json_decode( string $json, ?bool $associative = null, int $depth = 512, int
$flags = 0)
When true, JSON objects will be returned as associative arrays; when false, JSON
objects will be returned as objects (instances of stdClass).
o The equivalent of PHP's json_encode in Python is json.dumps:
import json
data = {
    "a": 0,
    "b": 9.6,
    "c": "Hello World",
    "d": {
        "a": 4
    }
}
json_data = json.dumps(data)
print(json_data)
Output:-
{"a": 0, "b": 9.6, "c": "Hello World", "d": {"a": 4}}
13 Write a short note on Persisting JSON. (APR 2019)
ANS • Persisting JSON
"Persisting JSON" typically refers to the act of storing JSON data in a more permanent
or persistent manner, such as in a file or a database. JSON (JavaScript Object
Notation) is a lightweight data interchange format commonly used for representing
structured data.
In order to utilize the produced JSON beyond the process that created it, it
must be stored for later retrieval.
• HTTP Cookie
o The HTTP cookie, or cookie for short, was created as a means to string
together the actions taken by the user per "isolated" request and provide a
convenient way to persist the state of one page into that of another. The
cookie is simply a chunk of data that the browser has been notified to
retain.
o Furthermore, the browser will have to supply, per subsequent request, the
retained cookie to the server for the domain that set it, thereby providing
state to a stateless protocol.
o expires: Specifies the expiration date and time of the cookie. It can be set to
a Date object representing the desired expiration time.
o max-age: Sets the maximum age of the cookie in seconds. The cookie will be
automatically deleted after the specified time has passed.
o path: Specifies the path within the website for which the cookie is valid. By
default, the cookie is valid for the path of the current document.
o domain: Limits the cookie to a specific domain or subdomain. For example,
domain=example.com makes the cookie accessible on "example.com".
o secure: Ensures that the cookie is only sent over secure (HTTPS) connections.
It is used to protect sensitive information.
• document.cookie
o In JavaScript, cookies are read and written through the document.cookie
property: assigning a "name=value" string sets a cookie, and reading the
property returns all cookies visible to the current document.
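A minimal sketch of persisting JSON through a cookie (the cookie name prefs and its contents are illustrative; the final assignment to document.cookie only works in a browser):

```javascript
// Build a cookie string that persists a JSON value.
var prefs = { theme: "dark", fontSize: 14 };

// Cookie values may not contain ";" or ",", so the JSON text is URI-encoded.
var cookie = "prefs=" + encodeURIComponent(JSON.stringify(prefs)) +
             "; max-age=" + (60 * 60 * 24 * 365); // persist for one year

console.log(cookie);
// prefs=%7B%22theme%22%3A%22dark%22%2C%22fontSize%22%3A14%7D; max-age=31536000

// In a browser, assigning the string stores the cookie:
// document.cookie = cookie;
```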
• Web Storage
o HTML5 introduced the concept of Web Storage to pick up where the cookie
had left off.
o Web Storage allows for the storing of data, the retrieval of data, and the
removal of data. The means by which we will be working with data and the
storage object is via the Web Storage API.
14 Explain the six members of the web storage Interface. (NOV 2022)
OR
Describe the members of Web Storage API. (APR 2023)
ANS • Web Storage Interface
o Web Storage allows for the storing of data, the retrieval of data, and the
removal of data. The means by which we will be working with data and the
storage object is via the Web Storage API.
o The six members of the Storage interface are:
Member Returns Description
setItem(key, value) void stores the supplied value under the given key
getItem(key) String returns the value stored under the given key, or null
removeItem(key) void deletes the given key and its value from storage
clear() void removes every key/value pair from storage
key(index) String returns the name of the key at the given index
length Number the number of key/value pairs currently stored
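The six members can be sketched in use as follows (a plain-object stand-in is substituted when localStorage is unavailable, e.g. outside a browser; the key name is illustrative):

```javascript
// localStorage is used in a browser; a minimal in-memory stand-in elsewhere.
var storage = (typeof localStorage !== "undefined") ? localStorage : (function () {
  var data = {};
  return {
    setItem: function (k, v) { data[k] = String(v); },
    getItem: function (k) { return (k in data) ? data[k] : null; },
    removeItem: function (k) { delete data[k]; },
    clear: function () { data = {}; },
    key: function (i) { return Object.keys(data)[i] || null; },
    get length() { return Object.keys(data).length; }
  };
})();

// Storage holds strings, so JSON.stringify/JSON.parse bridge the gap.
storage.setItem("user", JSON.stringify({ name: "ben" }));
console.log(storage.key(0));                           // "user" (if the store was empty)
console.log(JSON.parse(storage.getItem("user")).name); // "ben"
storage.removeItem("user");
storage.clear();
console.log(storage.length); // 0
```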
15 Write and explain the various attributes of HTTP cookie used in Set-Cookie. (APR 2023)
ANS • HTTP Cookie
o The HTTP cookie, or cookie for short, was created as a means to string
together the actions taken by the user per "isolated" request and provide a
convenient way to persist the state of one page into that of another. The
cookie is simply a chunk of data that the browser has been notified to
retain.
o Furthermore, the browser will have to supply, per subsequent request, the
retained cookie to the server for the domain that set it, thereby providing
state to a stateless protocol.
o expires: Specifies the expiration date and time of the cookie. It can be set to
a Date object representing the desired expiration time.
o max-age: Sets the maximum age of the cookie in seconds. The cookie will be
automatically deleted after the specified time has passed.
o path: Specifies the path within the website for which the cookie is valid. By
default, the cookie is valid for the path of the current document.
o domain: Limits the cookie to a specific domain or subdomain. For example,
domain=example.com makes the cookie accessible on "example.com".
o secure: Ensures that the cookie is only sent over secure (HTTPS) connections.
It is used to protect sensitive information.
Entity Body : The syntax of the entity body can reflect that of HTML, XML,
or even JSON. However, if the Content-Type entity header is not
supplied, the server, being the receiving party of the request, will
have to guess the appropriate MIME type of the data provided.
17 List and explain any 5 XMLHttpRequest Event Handlers used for Monitoring the
Progress of the HTTP Request. (NOV 2018)
OR
Describe the Request methods of the xhr object. (APR 2023)
ANS • XMLHttpRequest Interface
• Global Aspects
o XMLHttpRequest Constructor :
o var xhr = new XMLHttpRequest();
o xhr Event Handlers for Monitoring the Progress of the HTTP Request:
o onprogress: fires repeatedly while the response body is being received
o onload: fires once the request has completed and the full response arrived
o onerror: fires when the request fails, such as on a network error
o ontimeout: fires when the request exceeds the time set on xhr.timeout
o onreadystatechange: fires each time the readyState property changes (0-4)
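A sketch of wiring these handlers onto an xhr object (the URL and timeout value are illustrative):

```javascript
// Attach the standard progress-monitoring handlers to an xhr-like object.
function makeRequest(xhr) {
  xhr.onprogress = function (e) {        // fires periodically as data arrives
    if (e.lengthComputable) console.log(e.loaded + "/" + e.total);
  };
  xhr.onload = function () {             // response fully received
    console.log("status: " + xhr.status);
  };
  xhr.onerror = function () {            // network-level failure
    console.log("network error");
  };
  xhr.ontimeout = function () {          // xhr.timeout elapsed
    console.log("timed out");
  };
  xhr.onreadystatechange = function () { // readyState moved through 0-4
    console.log("readyState: " + xhr.readyState);
  };
  return xhr;
}

// In a browser:
if (typeof XMLHttpRequest !== "undefined") {
  var xhr = makeRequest(new XMLHttpRequest());
  xhr.open("GET", "/data.json"); // illustrative endpoint
  xhr.timeout = 5000;            // milliseconds
  xhr.send();
}
```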
18 List and explain the different methods of a Cradle Wrapper. (NOV 2018)
ANS
o The example below illustrates the wrapper pattern (JSONP): the server-side
script wraps the JSON text in a call to a named function, and the page that
loads the script defines that function to receive the data.
o demo_jsonp.php:
<?php
$myJSON = '{ "name":"John", "age":30, "city":"New York" }';
echo "myFunc(" . $myJSON . ");";
?>
o The HTML page:
<p id="demo"></p>
<script>
function myFunc(myObj) {
document.getElementById("demo").innerHTML = myObj.name;
}
</script>
<script src="demo_jsonp.php"></script>