1000 MEQs - PDF 3 DBMS
1000 MEQs
50 Qs on ADVANCED DATABASES
BIGDATA | NoSQL | Data Mining & Data Warehousing
BIGDATA
201. Data is distributed over several machines and replicated to ensure durability against failure and high availability for parallel applications in HDFS. We can use HDFS for:
1. Fragmented Files
2. Streaming Data Access
3. Commodity Hardware
a) 1 and 3
b) 2 and 3
c) 1 and 2
d) All of the above
e) Only 3

HDFS = Hadoop Distributed File System
We can use HDFS for: 1. Very Large Files 2. Streaming Data Access 3. Commodity Hardware
We cannot use HDFS for: 1. Low-latency data access 2. Lots of small files 3. Multiple writes
Facebook has the world's largest Hadoop cluster.
BIGDATA
1) A and B
2) B and C
3) A and C
4) All of them
5) None of them
BIGDATA
A. Apache Hadoop
B. Apache Spark
C. Apache Kafka
D. Apache Pytarch
1) A and B
2) A, B and C
3) B, C and D
4) A, C and D
5) None of them
BIGDATA
204. Apache Kafka is an open-source platform that was created by?
A. LinkedIn
B. Facebook
C. Google
D. IBM
Ans: (A)
Apache Kafka aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds.
BIGDATA
205. MATCH THE FOLLOWING:
Structured data:
- Matured transactions and various concurrency techniques
- Versioning over tuples, rows, and tables
- Schema dependent and less flexible
- Very difficult to scale the DB schema
Semi-structured data:
- Transactions adapted from DBMS, not matured concurrency
- Versioning over tuples or graphs is possible
- More flexible than structured data but less flexible than unstructured data
- Scaling is simpler than for structured data
Unstructured data:
- No transaction management and no concurrency
- Versioned as a whole
- More flexible, with absence of schema
- More scalable
c) Discarded hardware
What is MapReduce?
1. A processing technique and a program model for distributed computing based on Java.
2. The MapReduce algorithm contains two important tasks, namely Map and Reduce.
3. Map takes a set of data and converts it into another set of data, where individual elements are broken down into tuples.
4. The Reduce task takes the output from a Map as input and combines those data tuples into a smaller set of tuples.
5. As the name MapReduce implies, the Reduce task is always performed after the Map job.
6. It is easy to scale data processing over multiple computing nodes.
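The steps above can be sketched in a few lines of single-process Python (the function names `map_phase`, `shuffle`, and `reduce_phase` are illustrative; a real framework such as Hadoop distributes them across nodes):

```python
from collections import defaultdict

def map_phase(document):
    # Map: break the input into (key, value) tuples -- here (word, 1).
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Group values by key, as the framework does between Map and Reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each key's values into a smaller set of tuples.
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big cluster", "data node"]
pairs = [p for d in docs for p in map_phase(d)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 2, 'data': 2, 'cluster': 1, 'node': 1}
```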
BIGDATA
211. Which of the following are NOT TRUE for Hadoop?
a) Relational
b) non relational
c) Either a) or b)
d) None of these
NoSQL
212. A NoSQL (originally referring to "non SQL" or _________) database provides a mechanism for storage and retrieval of data.
a) Relational
b) non relational
c) Either a) or b)
d) None of these
Ans: (b)
NoSQL
213. NoSQL databases are used in :
a) 1 and 2
b) 2, 3 and 4
c) 1 and 3
d) 3 and 4
NoSQL
214. Which is/are TRUE About NoSQL?
a) 1 and 2
b) 2 and 3
c) 1 and 3
d) All are TRUE

Barriers to the greater adoption of NoSQL stores include:
1. the use of low-level query languages
2. lack of standardized interfaces
3. huge previous investments in existing relational databases.
NoSQL
215. Some database systems like MongoDB and CouchDB store data in JSON
format. Document size is ______ in NoSQL and ______ is not available.
a) decreased, MongoDB
b) Increased, GUI
c) Increased, network bandwidth
d) None of these
NoSQL
216. NoSQL should not be used when :
a) Only I
b) Only II
c) Both
d) None
NoSQL
219. Cassandra is written in _____ and MongoDB is written in _______.
a) C and C++
b) Java and Java
c) C++ and Java
d) Java and C++
e) C++ and C++
Ans: (d)
NoSQL
220. Cassandra was initially created at ________ for inbox search.
a) Orkut
b) Facebook
c) Google
d) Yahoo
e) Outlook
Ans: (b)
NoSQL
221. Which of the following supports ACID properties i.e. Atomicity, Consistency, Isolation, and Durability?
a) MongoDB
b) Cassandra
c) Foursquare
d) Intuit
Ans: (a) MongoDB provides multi-document ACID transactions.
NoSQL
Cassandra vs MongoDB:
- Developed by: Cassandra - Apache Software Foundation | MongoDB - MongoDB Inc.
- Read performance: Cassandra - highly efficient, as it takes O(1) time | MongoDB - not that fast.
- Secondary indexes: Cassandra - only cursory support, i.e. secondary indexing is restricted | MongoDB - supports the concept of secondary indexes.
- Data formats: Cassandra - only supports JSON | MongoDB - supports both JSON and BSON.
- Replication method: Cassandra - Selectable Replication Factor | MongoDB - Master-Slave Replication.
- Transactions: Cassandra - does not provide ACID transactions but can be tuned to support ACID properties | MongoDB - provides multi-document ACID transactions with snapshot isolation.
- Server OS: Cassandra - BSD, Linux, OS X, Windows | MongoDB - Solaris, Linux, OS X, Windows.
- Famous users: Cassandra - Hulu, Instagram, Intuit, Netflix, Reddit, etc. | MongoDB - Adobe, Amadeus, Lyft, ViaVarejo, Craftbase, etc.
NoSQL
222. Why is MongoDB known as the best NoSQL database?
A. Easily Scalable
B. High Performance
C. Rich Query Language
D. All of the above
E. None of these
Ans: (D)
NoSQL
223. The O2-Tree is basically an evolution of Red-Black trees, a form of a
Binary-Search tree, in which a leaf node contains the {key value, pointer}
tuples. It satisfies the following properties:
a) Key-value
b) Document
c) Wide-column
d) All of the above
NoSQL
224. Which of the following are the simplest NoSQL databases?
a) Key-value
b) Document
c) Wide-column
d) All of the above
Ans: (a)

Four Main Types Of NoSQL Databases:
1. Document databases
2. Key-value stores
3. Column-oriented databases
4. Graph databases

2. Key-Value Stores
The simplest type of NoSQL DB. Every data element in the database is stored as a key-value pair consisting of an attribute name (or "key") and a value.
It is like a relational database with only two columns:
a) the key or attribute name, such as state
b) the value, such as Alaska
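The two-column key/value idea above can be sketched as a toy Python class (`KeyValueStore` is a hypothetical name, not a real product):

```python
# A toy key-value store: every record is just an attribute name ("key")
# mapped to a value, like a two-column relational table.
class KeyValueStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        # Insert or overwrite the value for a key.
        self._data[key] = value

    def get(self, key, default=None):
        # Retrieve by key in O(1) average time.
        return self._data.get(key, default)

store = KeyValueStore()
store.put("state", "Alaska")
store.put("capital", "Juneau")
print(store.get("state"))  # Alaska
```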
1. Document Databases
Stores data in JSON, BSON, or XML documents (not Word documents or Google Docs).
Here, documents can be nested.
Particular elements can be indexed for faster querying.
Documents can be stored and retrieved in a form that is much closer to the data objects used in applications, which means less translation is required to use the data in an application.
SQL data must often be assembled and disassembled when moving back and forth between applications and storage.
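A sketch of such a nested document, using Python's standard json module (the field names are illustrative, not from any real schema):

```python
import json

# A nested document, as a document database might store it.
order = {
    "order_id": 1001,
    "customer": {"name": "Asha", "city": "Pune"},  # nested document
    "items": [
        {"sku": "A1", "qty": 2},
        {"sku": "B7", "qty": 1},
    ],
}

doc = json.dumps(order)            # stored as a JSON string
loaded = json.loads(doc)           # retrieved back into an application object
print(loaded["customer"]["city"])  # Pune
```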
NoSQL
Graph Databases
Focuses on the relationship between data elements.
Each element is stored as a node.
The connections between elements are called links or relationships.
Connections are first-class elements of the database, stored directly.
It is optimized to capture and search the connections between data elements, overcoming the overhead associated
with JOINing multiple tables in SQL.
Very few real-world business systems can survive solely on graph queries.
As a result graph databases are usually run alongside other more traditional databases.
Use cases include fraud detection, social networks, and knowledge graphs.
Column-Oriented Databases
A column store is organized as a set of columns.
When you want to run analytics on a small number of columns, you can read those columns directly without consuming
memory with the unwanted data.
Columns are often of the same type and benefit from more efficient compression, making reads even faster.
Columnar DB can quickly aggregate the value of a given column.
Use cases include analytics.
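The aggregation advantage can be illustrated with plain Python lists standing in for row and column storage (toy data, not a real columnar engine):

```python
# Row-oriented: one dict per row; aggregating "sales" touches every field.
rows = [
    {"region": "N", "sales": 120, "returns": 3},
    {"region": "S", "sales": 80,  "returns": 1},
    {"region": "N", "sales": 50,  "returns": 0},
]

# Column-oriented: one list per column; aggregation scans a single list
# and never loads the unwanted columns.
columns = {
    "region":  ["N", "S", "N"],
    "sales":   [120, 80, 50],
    "returns": [3, 1, 0],
}

total = sum(columns["sales"])  # touches only the "sales" column
print(total)  # 250
```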
NoSQL
225. Which of the following is not an example of a NoSQL database management system?
a) HBase
b) MongoDB
c) CouchDB
d) PostgreSQL
Ans: (d)
COUCHDB
1. Developed by Apache Software Foundation.
2. CouchDB is written in Erlang.
3. It is a native JSON document store inspired by Lotus Notes, scalable from globally distributed server-clusters down to mobile phones.
4. The primary database model for CouchDB is Document Store.
5. Server operating systems for CouchDB are Android, BSD, Linux, OS X, Solaris and Windows.
6. It does not support predefined data types.
7. It does not support the SQL query language.
8. It supports two replication methods: master-master replication and master-slave replication.
9. It does not support in-memory capabilities.
10. It does not ensure data integrity after non-atomic manipulations of data.

POSTGRESQL
1. Most advanced database.
2. Object-based relational DBMS.
3. Implementation language is C.
4. The CASCADE option is supported.
5. It supports partial, bitmap and expression indexes.
6. It supports advanced data types such as arrays, hstore and user-defined types.
NoSQL
226. Which of the following is a characteristic of a NoSQL database?
a) Uses JSON
b) Needs a schema
c) Requires JOINs
d) Uses tables for storage
Ans: (a)

JSON database
A JSON document database is a type of non-relational database designed to store and query data as JSON documents, rather than normalizing data across multiple tables, each with a unique and fixed structure, as in a relational database.
MySQL, Oracle, PostgreSQL, and SQL Server now offer JSON support.
NoSQL
227. Which of the following statements is true?
A. Non-relational databases require that schemas be defined before you can add data
B. NoSQL databases are built to allow the insertion of data without a predefined schema
C. NewSQL databases are built to allow the insertion of data without a predefined schema
D. All of the above
Ans: (B)
NoSQL
228. _________ can be used for batch processing of data and aggregation operations.
A. Hive
B. Oozie
C. MapReduce
D. None of the above
Ans: (C)
NoSQL
229. Which statement(s) is/are TRUE?
S1 - NoSQL was created to manage the scale and agility challenges that face modern applications, but the suitability of a database depends on the problem it must solve.
S2 - Redis, a powerful in-memory key-value store used for session caching, message queues, and other specific applications, is a NoSQL database.
a) S1 is True.
b) S2 is True.
c) Both True.
d) Both False.
Ans: (c)
Data Mining and Data Warehousing
230. Heterogeneous databases referred to
a) SMTOP
b) OLTP
c) FTP
d) OLAP
Data Mining and Data Warehousing
231. Data can be stored, retrieved and updated in
a) SMTOP
b) OLTP
c) FTP
d) OLAP
Ans: (b)
Data Mining and Data Warehousing
232. Missing data may be due to
a) equipment malfunction
b) inconsistent with other recorded data and thus deleted, or data not entered due to misunderstanding
Data Mining and Data Warehousing
236. Which of the following does NOT involve data mining?
a) Knowledge extraction
b) Data archeology
c) Data exploration
d) Data transformation
Ans: (d)
Data Mining and Data Warehousing
237. Pick out the right approach towards data mining.
(A) Infrastructure, exploration, analysis, exploitation, interpretation
(B) Infrastructure, exploration, analysis, interpretation, exploitation
(C) Infrastructure, analysis, exploration, interpretation, exploitation
(D) None of these
Ans: (B)
Regression (predictive)
Association Rule Discovery (descriptive)
Classification (predictive)
Clustering (descriptive)
Data Mining and Data Warehousing
238. Which of the following terms is used as a synonym for data mining?
(A) knowledge discovery in databases
(B) data warehousing
(C) regression analysis
(D) parallel processing in databases
Ans: (A)
7 STEPS IN KDD
The knowledge discovery process is repetitive, interactive, and consists of steps. Note that the process is repetitive at each step, meaning one might have to move back to the previous steps.
Data Cleaning: removal of noisy and irrelevant data from the collection.
Data Integration: heterogeneous data from multiple sources is combined in a common source (Data Warehouse).
Data Selection: the process where data relevant to the analysis is decided and retrieved from the data collection. Data selection can use neural networks, decision trees, Naive Bayes, clustering, regression, etc.
Data Transformation: the process of transforming data into the appropriate form required by the mining procedure. It is a two-step process:
- Data Mapping: assigning elements from the source base to the destination to capture transformations.
- Code Generation: creation of the actual transformation program.
Data Mining: clever techniques applied to extract potentially useful patterns. It transforms task-relevant data into patterns and decides the purpose of the model, using classification or characterization.
Pattern Evaluation: identifying interesting patterns representing knowledge based on given measures. It finds the interestingness score of each pattern and uses summarization and visualization to make the data understandable to the user.
Knowledge Representation: a technique which utilizes visualization tools to represent data mining results. It generates reports, tables, and discriminant rules, classification rules, characterization rules, etc.
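The pipeline of steps can be sketched as a toy chain of functions (names and data are illustrative only; real systems use dedicated tools at each stage):

```python
def clean(records):
    # Data Cleaning: drop noisy/irrelevant records (here: missing values).
    return [r for r in records if r.get("sales") is not None]

def integrate(*sources):
    # Data Integration: combine heterogeneous sources into one collection.
    return [r for src in sources for r in src]

def select(records, field):
    # Data Selection: keep only the data relevant to the analysis.
    return [r[field] for r in records]

def transform(values):
    # Data Transformation: map into the form the mining step requires.
    return sorted(values)

def mine(values):
    # Data Mining: extract a (trivial) pattern -- the median value.
    return values[len(values) // 2]

store_a = [{"sales": 10}, {"sales": None}]
store_b = [{"sales": 30}, {"sales": 20}]
pattern = mine(transform(select(clean(integrate(store_a, store_b)), "sales")))
print(pattern)  # 20
```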
Data Mining and Data Warehousing
K-means clustering is a type of unsupervised learning, which is used when you have
unlabeled data (i.e., data without defined categories or groups). The goal of this
algorithm is to find groups in the data, with the number of groups represented by the
variable K.
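A minimal 1-D K-means sketch in pure Python, assuming absolute-difference distance and a fixed random seed (illustrative only, not a production implementation):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    # Alternate: assign points to the nearest centroid, then move each
    # centroid to the mean of its assigned points.
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each unlabeled point to its nearest centroid.
            i = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[i].append(p)
        # Recompute each centroid as the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 10.0, 10.3, 9.7]
print(kmeans(data, k=2))  # two centroids, near 1.0 and 10.0
```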
Data Mining and Data Warehousing
240. You are given data about seismic activity in the United States, and you want to predict the magnitude of the upcoming earthquake. This can be considered as an example of which of the following methods?
A. Supervised learning
B. Unsupervised learning
C. Serration
D. Dimensionality reduction
Ans: (A)
Supervised learning
Supervised learning, as the name indicates, has the presence of a supervisor as a teacher. Basically, supervised learning is when we teach or train the machine using data that is well labeled, which means some data is already tagged with the correct answer. After that, the machine is provided with a new set of examples (data) so that the supervised learning algorithm analyses the training data (set of training examples) and produces a correct outcome from the labeled data.
For instance, suppose you are given a basket filled with different kinds of fruits. The first step is to train the machine on all the different fruits, one by one, like this:
If the shape of the object is rounded with a depression at the top and it is red in color, then it is labeled as Apple.
If the shape of the object is a long curving cylinder with a green-yellow color, then it is labeled as Banana.
Now suppose that, after training, you are given a new, separate fruit (say a banana) from the basket and asked to identify it. Since the machine has already learned from the previous data, it will classify the fruit by its shape and color, confirm the fruit name as BANANA, and put it in the Banana category. Thus the machine learns from training data (the basket of fruits) and then applies that knowledge to test data (the new fruit).
Supervised learning is classified into two categories of algorithms:
Classification: a classification problem is when the output variable is a category, such as "red" or "blue", or "disease" and "no disease".
Regression: a regression problem is when the output variable is a real value, such as "dollars" or "weight".
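The fruit example above can be sketched as a tiny nearest-neighbour classifier (the numeric features and labels are made up for illustration):

```python
# Training data: (roundness 0-1, redness 0-1) -> label, i.e. data that
# is already tagged with the correct answer.
training = [
    ((0.9, 0.9), "Apple"),
    ((0.95, 0.8), "Apple"),
    ((0.2, 0.1), "Banana"),
    ((0.1, 0.2), "Banana"),
]

def classify(features):
    # Predict the label of the closest labeled example (1-nearest-neighbour).
    def dist(example):
        (x, y), _ = example
        return (x - features[0]) ** 2 + (y - features[1]) ** 2
    return min(training, key=dist)[1]

print(classify((0.15, 0.1)))  # Banana
print(classify((0.9, 0.85)))  # Apple
```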
Data Mining and Data Warehousing
241. The Apriori algorithm operates in ___ method
a. Bottom-up search method
b. Breadth-first search method
c. None of the above
d. Both a & b
Ans: (d)
• The Apriori algorithm is used for finding frequent itemsets in a dataset for boolean association rules.
• The algorithm is named Apriori because it uses prior knowledge of frequent itemset properties.
• We apply an iterative approach or level-wise search where k-frequent itemsets are used to find (k+1)-itemsets.
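The level-wise (bottom-up, breadth-first) search can be sketched in a few lines of Python (toy transactions; real implementations add candidate pruning):

```python
def apriori(transactions, min_support):
    # Frequent k-itemsets are used to build the (k+1)-candidates.
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    k, current = 1, [frozenset([i]) for i in items]
    while current:
        # Count the support of each candidate k-itemset.
        counts = {c: sum(c <= t for t in transactions) for c in current}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        # Level-wise step: join frequent k-itemsets into (k+1)-candidates.
        k += 1
        current = list({a | b for a in level for b in level if len(a | b) == k})
    return frequent

txns = [frozenset("ABC"), frozenset("AB"), frozenset("AC"), frozenset("BC")]
freq = apriori(txns, min_support=2)
print(sorted("".join(sorted(s)) for s in freq))  # ['A', 'AB', 'AC', 'B', 'BC', 'C']
```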
Data Mining and Data Warehousing
242. Which of the following are the intermediate servers that stand in between a relational back-end server and client front-end tools?
a. ROLAP
b. MOLAP
c. HOLAP
d. All the above
Ans: (a)

ROLAP vs MOLAP vs HOLAP:
Storage location for summary aggregation - ROLAP: a relational database; MOLAP: a multidimensional database; HOLAP: a multidimensional database.
Query response time - ROLAP: slow as compared to MOLAP and HOLAP; MOLAP: fast as compared to ROLAP and HOLAP; HOLAP: medium as compared to MOLAP and ROLAP.
Data Mining and Data Warehousing
OLTP vs OLAP:
- OLTP reveals a snapshot of present business tasks; OLAP provides a multi-dimensional view of different business tasks.
- OLTP's backup and recovery process is maintained religiously; OLAP only needs backup from time to time as compared to OLTP.
- OLTP data is managed by clerks and managers; OLAP data is generally managed by the CEO, MD, GM.
- OLTP has both read and write operations; OLAP is only read and rarely write.
Data Mining and Data Warehousing
A. Many-to-many
B. Many-to-one
C. One-to-one
D. One-to-many
Data Mining and Data Warehousing
248. In the OLAP model, the _ provides the
multidimensional view.
A. Data layer
B. Data link layer
C. Presentation layer
D. Application layer
Data Mining and Data Warehousing
249. The output of an OLAP query is displayed as a
A. Pivot
B. Matrix
C. Excel
D. both B and C
Data Mining and Data Warehousing
250. ___________ is a good alternative to the star schema.
A. Star schema.
B. Snowflake schema.
C. Fact constellation.
D. Star-snowflake schema.
Ans: (C)
GATE NoteBook
Target JRF - UGC NET Computer Science Paper 2
1000 MEQs
50 Qs on DATABASES
Most Expected Questions Course
DATABASES
251. An entity is
(a) a collection of items in an application
(b) a distinct real world item in an application
(c) an inanimate object in an application
(d) a data structure
DATABASES
An entity is
(a) a collection of items in an application
(b) a distinct real world item in an application
(c) an inanimate object in an application
(d) a data structure
DATABASES
252. Pick entities from the following:
(i) vendor
(ii) student
(iii) attends
(iv) km/hour
(a) i, ii, iii (b) i, ii, iv
(c) i and ii (d) iii and iv
Ans: (c)
DATABASES
253. Pick the relationship from the following:
(a) a classroom
(b) teacher
(c) attends
(d) cost per dozen
Ans: (c)
DATABASES
254. Pick the meaningful relationship between entities
(a) vendor supplies goods
(b) vendor talks with customers
(c) vendor complains to vendor
(d) vendor asks prices
Ans: (a)
DATABASES
255. Attributes are
(i) properties of relationship
(ii) attributed to entities
(iii) properties of members of an entity set
(a) i
(b) i and ii
(c) i and iii
(d) iii
Ans: (c)
256. The attributes of relationship teaches in teacher teaches course should be
2. DML (Data Manipulation Language): The SQL commands that deal with the manipulation of data present in the database belong to DML, and this includes most of the SQL statements. Examples of DML:
1. INSERT – used to insert data into a table.
2. UPDATE – used to update existing data within a table.
3. DELETE – used to delete records from a database table.
3. DCL (Data Control Language): DCL includes commands such as GRANT and REVOKE, which mainly deal with the rights, permissions and other controls of the database system. Examples of DCL commands:
1. GRANT – gives users access privileges to the database.
2. REVOKE – withdraws users' access privileges given by using the GRANT command.
4. TCL (Transaction Control Language): TCL commands deal with transactions within the database. Examples of TCL commands:
1. COMMIT – commits a transaction.
2. ROLLBACK – rolls back a transaction in case any error occurs.
3. SAVEPOINT – sets a savepoint within a transaction.
4. SET TRANSACTION – specifies characteristics for the transaction.
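The DML and TCL commands above can be tried against an in-memory SQLite database via Python's standard sqlite3 module (the `student` table is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT)")

# DML: INSERT, UPDATE, DELETE manipulate the data in the table.
cur.execute("INSERT INTO student VALUES (1, 'Asha')")
cur.execute("INSERT INTO student VALUES (2, 'Ravi')")
cur.execute("UPDATE student SET name = 'Ravi K' WHERE id = 2")
cur.execute("DELETE FROM student WHERE id = 1")

# TCL: COMMIT makes the changes permanent; ROLLBACK would undo them.
conn.commit()

print(cur.execute("SELECT name FROM student").fetchall())  # [('Ravi K',)]
```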
DATABASES
269. In an E-R diagram, a weak entity is represented by .......
(A) Rectangle
(B) Square
(C) Double Rectangle
(D) Circle
Ans: (C)
DATABASES
270. In SQL, the statement select * from R, S is equivalent to
A. Select * from R natural join S
B. Select * from R cross join S
C. Select * from R union join S
D. Select * from R inner join S
Ans: (B)
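The equivalence can be checked directly in SQLite through Python's sqlite3 module (tables `R` and `S` are toy examples):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE R (a INTEGER);
    CREATE TABLE S (b INTEGER);
    INSERT INTO R VALUES (1), (2);
    INSERT INTO S VALUES (10), (20);
""")

# `FROM R, S` and an explicit CROSS JOIN both yield the Cartesian product.
comma = cur.execute("SELECT * FROM R, S ORDER BY a, b").fetchall()
cross = cur.execute("SELECT * FROM R CROSS JOIN S ORDER BY a, b").fetchall()
print(comma == cross, len(comma))  # True 4  (2 x 2 Cartesian product)
```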
DATABASES
271. Which of the following relational algebra operations do not require the participating tables to be union-compatible?
(A) Union
(B) Intersection
(C) Difference
(D) Join
Ans: (D)
DATABASES
272. The operation which is not considered a basic operation of relational algebra is
(A) Join. (B) Selection.
(C) Union. (D) Cross product.
Ans: (A)
DATABASES
273. The default level of consistency in SQL is
(A) repeatable read
(B) read committed
(C) read uncommitted
(D) serializable
Ans: (D)
274. Which of the following aggregate functions does not ignore nulls in its
results?.
Ans: (B)
DATABASES
275. Use of UNIQUE while defining an attribute of a table in SQL means that the attribute values are
(A) distinct values
(B) cannot have NULL
(C) both (A) & (B)
(D) same as primary key
Ans: (C)
DATABASES
276. Cascading rollback is avoided in all protocols except
(A) strict two-phase locking protocol
(B) tree locking protocol
(C) two-phase locking protocol
(D) validation based protocol
Ans: (C)
DATABASES
277. If α → β holds then so does
(A) γα → γβ
(B) α →→ γβ
Ans: (A)
DATABASES
278. In tuple relational calculus, P1 AND P2 is equivalent to
(A) (¬P1 OR ¬P2). (B) ¬(P1 OR ¬P2).
(C) ¬(¬P1 OR P2). (D) ¬(¬P1 OR ¬P2).
Ans: (D)
DATABASES
279. For correct behaviour during recovery, undo and redo operations must be
(A) Commutative
(B) Associative
(C) Idempotent
(D) Distributive
Ans: (C)
DATABASES
280. The drawbacks of the shadow paging technique are
(A) Commit overhead (B) Data fragmentation
(C) Garbage collection (D) All of these
Ans: (D)
The idea is to maintain two page tables during the life of a transaction: the current page table and the shadow page table.
When the transaction starts, both tables are identical. The shadow page is never changed during the life of the transaction.
The current page is updated with each write operation. Each table entry points to a page on the disk. When the transaction is
committed, the shadow page entry becomes a copy of the current page table entry and the disk block with the old data is
released. If the shadow is stored in nonvolatile memory and a system crash occurs, then the shadow page table is copied to
the current page table. This guarantees that the shadow page table will point to the database pages corresponding to the state of the database prior to any transaction that was active at the time of the crash, making aborts automatic.
There are drawbacks to the shadow-page technique:
•Commit overhead. The commit of a single transaction using shadow paging requires multiple blocks to be output -- the
current page table, the actual data and the disk address of the current page table. Log-based schemes need to output only the
log records.
•Data fragmentation. Shadow paging causes database pages to change locations (therefore, they are no longer contiguous).
•Garbage collection. Each time that a transaction commits, the database pages containing the old version of data changed by
the transactions must become inaccessible. Such pages are considered to be garbage since they are not part of the free space
and do not contain any usable information. Periodically it is necessary to find all of the garbage pages and add them to the list
of free pages. This process is called garbage collection and imposes additional overhead and complexity on the system.
DATABASES
281. In SQL, testing whether a subquery is empty is done using
(A) DISTINCT (B) UNIQUE
(C) NULL (D) EXISTS
Ans: (D)
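A small SQLite sketch of EXISTS in action (toy `customers`/`orders` tables, illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE customers (id INTEGER, name TEXT);
    CREATE TABLE orders (cust_id INTEGER);
    INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi');
    INSERT INTO orders VALUES (1);
""")

# EXISTS keeps a row exactly when the subquery is NOT empty.
rows = cur.execute("""
    SELECT name FROM customers c
    WHERE EXISTS (SELECT 1 FROM orders o WHERE o.cust_id = c.id)
""").fetchall()
print(rows)  # [('Asha',)]
```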
DATABASES
282. The FDs A → B, DB → C imply
(A) DA → C (B) A → C
(C) B → A (D) DB → A
Ans: (A)
DATABASES
283. Manager salary details are hidden from the employee. This is
(A) Conceptual level data hiding.
(B) External level data hiding.
(C) Physical level data hiding.
(D) None of these.
Ans: (A)
DATABASES
284. If minimum cardinality = 0, then it signifies:
A. Partial participation
B. Total participation
C. Weak entity
D. Strong entity
ANSWER: A
ALL Correct
DATABASES
SELECT first_name, last_name, COUNT(*) FROM student GROUP BY first_name;
Consider a relation R(A, B, C, D, E) with functional dependencies:
A → BC
CD → E
B → D
E → A
A. 2, 2NF
B. 3, 3NF
C. 4, 3NF
D. 4, 2NF
From here,
• Candidate keys = { A, E, BC, CD }, i.e. 4 candidate keys.
• Prime attributes = { A, B, C, D, E }
• There are no non-prime attributes.
Now,
• It is clear that there are no non-prime attributes in the relation.
• In other words, all the attributes of the relation are prime attributes.
• Thus, all the attributes on the RHS of each functional dependency are prime attributes, so the relation is in 3NF.
Ans: (C) 4, 3NF
DATABASES
Let R = (A, B, C, D, E) be a relation scheme with the following dependencies:
AB → C
C → D
B → E
Determine the total number of candidate keys and super keys.
A. 1, 3
B. 1, 2
C. 1, 8
D. 1, 9
Ans: (C) AB is the only candidate key, since A and B appear on no right-hand side and (AB)+ = { A, B, C, D, E }. The super keys are AB combined with any subset of { C, D, E }, giving 2^3 = 8 super keys.
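The counts above can be verified by brute force in Python, computing attribute closures for every subset:

```python
from itertools import combinations

# FDs: AB -> C, C -> D, B -> E; a subset of {A,B,C,D,E} is a super key
# iff its closure under the FDs is the full attribute set.
fds = [({"A", "B"}, {"C"}), ({"C"}, {"D"}), ({"B"}, {"E"})]
attrs = {"A", "B", "C", "D", "E"}

def closure(xs):
    xs = set(xs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= xs and not rhs <= xs:
                xs |= rhs
                changed = True
    return xs

supers = [set(c) for r in range(1, 6) for c in combinations(sorted(attrs), r)
          if closure(c) == attrs]
# Candidate keys are the minimal super keys.
candidates = [s for s in supers if not any(t < s for t in supers)]
print(len(candidates), len(supers))  # 1 8
```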
DATABASES
Set F:
A → C
AC → D
E → AD
E → H
Set G:
A → CD
E → AH
Determining whether F covers G:
Step 1:
• (A)+ = { A, C, D } // closure of the left side of A → CD, computed using set F
• (E)+ = { A, C, D, E, H } // closure of the left side of E → AH, computed using set F
Since these closures contain the right-hand sides CD and AH respectively, F covers G.
(C) F = G
(D) All of the above
296. Which is TRUE?
1. Canonical cover is free from all the extraneous functional dependencies.
2. The closure of the canonical cover is the same as that of the given set of functional dependencies.
3. Canonical cover is unique.
4. All
DATABASES
Composition-
If A → B and C → D, then AC → BD always holds.
Additive-
If A → B and A → C, then A → BC always holds.
DATABASES
298. A B/B+ tree with order 5. Find the minimum number of children.
a) 1
b) 2
c) 3
d) none
Ans: (c)
• A B/B+ tree with order p has a maximum of p pointers and hence a maximum of p children.
• An internal node must have at least ⌈p/2⌉ children and therefore at least ⌈p/2⌉ – 1 keys.
• For p = 5, the minimum number of children = ⌈5/2⌉ = 3.
DATABASES
299. The maximum number of super keys for the relation schema R(A, B, C, D) with AB as the key is
(A) 5
(B) 6
(C) 7
(D) 4
Ans: (D)
Explanation:
The maximum number of possible super keys for a table with n attributes and AB as the key is 2^(n-2).
Here, n = 4, so the possible super keys = 2^(4-2) = 4.
The possible super keys are: AB, ABC, ABD, ABCD.
DATABASES
300. Given a database with multiple tables, which of the following constraints can be used in a way to ensure, or will by definition not allow, NULL values to be inserted?
I. UNIQUE
II. NOT NULL
III. FOREIGN KEY
IV. PRIMARY KEY
V. CHECK
A. I, II, and IV
B. I, II, IV and V
C. II, IV and V
Ans: (C)
Solution:
UNIQUE allows NULL values.
NOT NULL does not allow NULL values.
PRIMARY KEY does not allow NULL values.
CHECK can be used to disallow NULL values.
FOREIGN KEY allows NULL values.
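The solution can be spot-checked in SQLite via Python's sqlite3 module (NOT NULL and, with the explicit NOT NULL shown, PRIMARY KEY reject NULLs; a UNIQUE column still accepts them):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE t (
        id   INTEGER PRIMARY KEY NOT NULL,
        code TEXT UNIQUE,       -- UNIQUE alone still allows NULL
        name TEXT NOT NULL
    )
""")

cur.execute("INSERT INTO t VALUES (1, NULL, 'ok')")  # NULL in UNIQUE column: accepted

rejected = False
try:
    cur.execute("INSERT INTO t VALUES (2, 'x', NULL)")  # NULL in NOT NULL column
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```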