
NAME : TAUSIF LIYAKAT MULLA

ROLL NO :3446

RECENT TRENDS

Q1. Explain: What is software measurement?

The primary measurement of software is size, specifically functional size. The generic principles of functional size are described in ISO/IEC 14143.[1] Software size is principally measured in function points. It can also be measured in lines of code, or more precisely source lines of code (SLOC), which counts functional code and excludes comments. While SLOC is easy to measure, it indicates effort more than functionality: two developers could approach the same functional challenge using different techniques, and one might need only a few lines of code where the other needs many times more to achieve the same functionality. The most reliable method for measuring software size is therefore code-agnostic and taken from the user's point of view: function points.

Software measurement is needed for the following activities:

1) Understanding: Metrics make aspects of a process more visible, giving a better understanding of the relationships among the activities and the entities they affect.
2) Control: Using baselines, goals, and an understanding of these relationships, we can predict what is likely to happen and make appropriate changes in the process to help meet the goals.
3) Improvement: By taking corrective actions and making appropriate changes, the product can be improved. Similarly, based on the analysis of a project, the process can be improved.
Q2. Explain in detail: What is software prototyping?

Software prototyping is the activity of creating prototypes of software applications, i.e., incomplete versions of the software program being developed. It is an activity that can occur in software development and is comparable to prototyping as known from other fields, such as mechanical engineering or manufacturing.

This process is in contrast with the monolithic development cycle of the 1960s and 1970s, in which the entire program was built first and inconsistencies between design and implementation were worked out afterwards, which led to higher software costs and poor estimates of time and cost. The monolithic approach has been dubbed the "Slaying the (Software) Dragon" technique, since it assumes that the software designer and developer is a single hero who has to slay the entire dragon alone. Prototyping can also avoid the great expense and difficulty of having to change a finished software product.

The practice of prototyping is one of the points Frederick P. Brooks makes in his 1975 book The Mythical Man-Month and his 10-year anniversary article "No Silver Bullet".
An early example of large-scale software prototyping was the
implementation of NYU's Ada/ED translator for the Ada programming
language.[4] It was implemented in SETL with the intent of producing an
executable semantic model for the Ada language, emphasizing clarity of
design and user interface over speed and efficiency. The NYU Ada/ED
system was the first validated Ada implementation, certified on April 11,
1983.[5]
Q3. What is a distributed DBMS? Explain replication in a DBMS.

A distributed database management system (DDBMS) is a set of multiple, logically interrelated databases distributed over a network. It provides mechanisms that make the distribution of data transparent to users.

Data replication is the process of storing data at more than one site or node. It improves the availability of data: data is copied from a database on one server to another server so that all users can share the same data without inconsistency. The result is a distributed database in which users can access the data relevant to their tasks without interfering with the work of others. Data replication involves duplicating transactions on an ongoing basis, so that each replica stays consistently updated and synchronized with its source. (This contrasts with fragmentation, in which a particular relation resides at only one location.)

There can be full replication, in which the whole database is stored at every site, or partial replication, in which some frequently used fragments of the database are replicated and others are not.

Types of Data Replication:

1. Transactional Replication – Users receive a full initial copy of the database and then receive updates as the data changes. Data is copied in real time from the publisher to the receiving database (the subscriber) in the same order as the changes occur at the publisher, so transactional consistency is guaranteed. Transactional replication is typically used in server-to-server environments. It does not simply copy the data changes; it consistently and accurately replicates each change.

2. Snapshot Replication – Distributes data exactly as it appears at a specific moment in time and does not monitor for subsequent updates. The entire snapshot is generated and sent to subscribers. Snapshot replication is generally used when data changes are infrequent. It is a bit slower than transactional replication because each attempt moves multiple records from one end to the other. Snapshot replication is a good way to perform the initial synchronization between publisher and subscriber.

3. Merge Replication – Data from two or more databases is combined into a single database. Merge replication is the most complex type of replication because it allows both publisher and subscriber to make changes to the database independently. It is typically used in server-to-client environments and allows changes to be sent from one publisher to multiple subscribers.
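As a rough sketch of the snapshot idea in Oracle SQL, assuming an illustrative orders table on the publisher and a database link named publisher_db (both names are assumptions, not part of the notes above), a snapshot-style replica can be held as a materialized view that is rebuilt in full on each refresh, and partial replication can be approximated by replicating only a fragment:

```sql
-- Snapshot-style replica: rebuilt wholesale on each refresh.
-- Table and database-link names are illustrative.
CREATE MATERIALIZED VIEW orders_replica
  REFRESH COMPLETE ON DEMAND
  AS SELECT * FROM orders@publisher_db;

-- Partial replication: only a frequently used fragment is copied
-- (here, roughly the last six months of orders).
CREATE MATERIALIZED VIEW recent_orders_replica
  REFRESH COMPLETE ON DEMAND
  AS SELECT *
       FROM orders@publisher_db
      WHERE order_date >= ADD_MONTHS(SYSDATE, -6);
```

A refresh of the replica can then be triggered on demand with DBMS_MVIEW.REFRESH('orders_replica', 'C'), where 'C' requests a complete (snapshot-style) rebuild.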
Q4. Explain with an example: varying arrays in an object-relational database.

The following examples illustrate the most important aspects of defining, using, and evolving object types. One important aspect of working with object types is creating methods that perform operations on objects. In the examples, object type methods are defined in PL/SQL. Other aspects of using object types, such as defining a type, use SQL.

The examples develop two versions of a database schema for an application that manages customer purchase orders. First a purely relational version is shown, and then an equivalent object-relational version. Both versions provide for the same basic kinds of entities (customers, purchase orders, line items, and so on), but the object-relational version creates object types for these entities and manages the data for particular customers and purchase orders as instances of the respective object types.

PL/SQL and Java provide additional capabilities beyond those illustrated here, especially in the area of accessing and manipulating the elements of collections.

Client applications that use the Oracle Call Interface (OCI), Pro*C/C++, or Oracle Objects for OLE (OO4O) can take advantage of extensive facilities for accessing objects and collections and manipulating them on clients.

The relational approach normalizes everything into tables. The table names are Customer_reltab, PurchaseOrder_reltab, and Stock_reltab.

Each part of an address becomes a column in the Customer_reltab table. Structuring telephone numbers as columns sets an arbitrary limit on the number of telephone numbers a customer can have.
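That limitation is exactly what a varying array (VARRAY) addresses in the object-relational version. The sketch below is illustrative, with assumed type, table, and column names rather than ones taken from the schema above: a VARRAY column stores a bounded list of phone numbers inside a single row.

```sql
-- A VARRAY type holding up to 10 phone numbers per customer;
-- the maximum size is part of the type definition.
CREATE TYPE PhoneList_vartyp AS VARRAY(10) OF VARCHAR2(20);
/

-- The customer table stores the whole list in one column.
CREATE TABLE Customer_objtab (
  CustNo    NUMBER PRIMARY KEY,
  CustName  VARCHAR2(200),
  PhoneList PhoneList_vartyp
);

-- Inserting a customer with two phone numbers; the type's
-- constructor takes a variable number of elements.
INSERT INTO Customer_objtab
VALUES (1, 'Sample Customer',
        PhoneList_vartyp('415 555 1212', '415 555 1213'));
```

Because the bound is declared once in the type rather than spread over an arbitrary set of columns, adding or removing a phone number is just a change to the list value, not a schema change.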

The relational approach separates line items from their purchase orders
and puts each into its own table,
named PurchaseOrder_reltab and LineItems_reltab.

As depicted in Figure A-1, a line item has a relationship to both a purchase order and a stock item. These relationships are implemented as columns in the LineItems_reltab table, with foreign keys to PurchaseOrder_reltab and Stock_reltab.
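A sketch of how those foreign keys might look in SQL follows; the column names and types here are assumptions for illustration, and the actual schema may differ:

```sql
-- Relational version: each line item references its purchase order
-- and stock item through foreign keys (to those tables' primary keys).
CREATE TABLE LineItems_reltab (
  LineItemNo NUMBER,
  PONo       NUMBER REFERENCES PurchaseOrder_reltab,
  StockNo    NUMBER REFERENCES Stock_reltab,
  Quantity   NUMBER,
  Discount   NUMBER(4,2),
  PRIMARY KEY (PONo, LineItemNo)
);
```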

Q5. What is a data warehouse? What is data mining?

A data warehouse is subject oriented: it provides information catered to a specific subject instead of the whole organization's ongoing operations. Examples of subjects include product information, sales data, and customer and supplier details.

Data mining is the process of extracting knowledge from large data sets, whereas data warehousing is the process of pooling all the relevant data together. Data mining analyzes data to discover previously unknown patterns, whereas a data warehouse is a technique for collecting and managing data.
