
ASSIGNMENT 3

Q1. Explain requirement engineering tasks in detail.

ANS:

Requirement engineering consists of seven different tasks, as follows:

1. Inception

 Inception is the task in which the requirements engineer asks a set of questions to establish the groundwork for the software process.
 In this task, the engineer understands the problem and evaluates the proper solution.
 It establishes a collaborative relationship between the customer and the developer.
 The developer and the customer decide the overall scope and the nature of the problem.
2. Elicitation

Elicitation means finding out the requirements from the customers and other stakeholders.
Eliciting requirements is difficult because the following problems occur during elicitation.

Problem of scope: The customer gives unnecessary technical detail rather than clarifying the overall system objectives.

Problem of understanding: There is poor understanding between the customer and the developer regarding various aspects of the project, such as the capabilities and limitations of the computing environment.

Problem of volatility: The requirements change from time to time, and this makes developing the project difficult.

3. Elaboration

 In this task, the information taken from the user during inception and elicitation is expanded and refined.
 Its main task is to develop a refined model of the software using the functions, features, and constraints of the software.
4. Negotiation

 In the negotiation task, the software engineer decides how the project will be achieved with limited business resources.
 Rough estimates of development effort are created and used to assess the impact of each requirement on the project cost and delivery time.
5. Specification

 In this task, the requirements engineer constructs a final work product.
 The work product is in the form of a software requirements specification (SRS).
 In this task, the requirements of the proposed software, such as the informational, functional, and behavioral requirements, are formalized.
 The requirements are formalized in both graphical and textual formats.
6. Validation
 The work product built as an output of requirements engineering is assessed for quality through a validation step.
 Formal technical reviews by the software engineers, customers, and other stakeholders serve as the primary requirements validation mechanism.
7. Requirement management

 It is a set of activities that help the project team to identify, control, and track requirements and changes to them at any time during the ongoing project.
 These tasks start with identification, where a unique identifier is assigned to each requirement.
 After the requirements are finalized, traceability tables are developed.
 Examples of traceability tables are the features, sources, dependencies, subsystems, and interfaces traceability tables.

Q2. Explain Requirement Engineering Processes.

ANS:

Requirement Engineering is the process of defining, documenting, and maintaining the requirements. It is the process of gathering and defining the services provided by the system.
Requirements Engineering Process consists of the following main activities:

Requirements Elicitation:

 It is related to the various ways used to gain knowledge about the project domain and requirements. The various sources of domain knowledge include customers, business manuals, existing software of the same type, standards, and other stakeholders of the project.

 The techniques used for requirements elicitation include interviews, brainstorming, task analysis, the Delphi technique, prototyping, etc. Elicitation does not produce formal models of the requirements understood. Instead, it widens the domain knowledge of the analyst and thus helps in providing input to the next stage.

Requirements specification:

 This activity is used to produce formal software requirement models. All the
requirements including the functional as well as the non-functional requirements
and the constraints are specified by these models in totality. During specification,
more knowledge about the problem may be required which can again trigger the
elicitation process.

 The models used at this stage include ER diagrams, data flow diagrams (DFDs), function decomposition diagrams (FDDs), data dictionaries, etc.

Requirements verification and validation:

 Verification: It refers to the set of tasks that ensures that the software correctly
implements a specific function.
 Validation: It refers to a different set of tasks that ensures that the software that
has been built is traceable to customer requirements.

If requirements are not validated, errors in the requirement definitions would propagate to
the successive stages resulting in a lot of modification and rework.
The main steps for this process include:

The requirements should be consistent with all the other requirements, i.e., no two requirements should conflict with each other.

The requirements should be complete in every sense.

The requirements should be practically achievable.

Reviews, buddy checks, making test cases, etc. are some of the methods used for this.

Requirements management:
Requirement management is the process of analyzing, documenting, tracking, prioritizing, and agreeing on the requirements, and controlling communication with the relevant stakeholders. This stage takes care of the changing nature of requirements. It should be
ensured that the SRS is as modifiable as possible so as to incorporate changes in
requirements specified by the end users at later stages too. Being able to modify the
software as per requirements in a systematic and controlled manner is an extremely
important part of the requirements engineering process.

Q3. How to collect requirements? Explain different methods to collect requirements. What is its importance in Software Engineering?

ANS:

Requirement gathering:

The process of gathering the software requirements from the client, analyzing them, and documenting them is known as requirement engineering.

Methods to collect requirements:

Brainstorming:

 Brainstorming is used in requirement gathering to get as many ideas as possible from a group of people. It is generally used to identify possible solutions to problems and to clarify details of opportunities.

Document Analysis:

 Reviewing the documentation of an existing system can help when creating an AS-IS process document, as well as driving gap analysis for scoping of migration projects. In an ideal world, we would even be reviewing the requirements that drove creation of the existing system – a starting point for documenting current requirements. Nuggets of information are often buried in existing documents that help us ask questions as part of validating requirement completeness.

Focus Group:

 A focus group is a gathering of people who are representative of the users or customers of a product to get feedback. The feedback can be gathered about needs/opportunities/problems to identify requirements, or can be gathered to validate and refine already elicited requirements. This form of market research is distinct from brainstorming in that it is a managed process with specific participants.

Interface analysis:

 Interfaces for a software product can be human or machine. Integration with external systems and devices is just another interface. User-centric design approaches are very effective at making sure that we create usable software. Interface analysis – reviewing the touch points with other external systems – is important to make sure we don’t overlook requirements that aren’t immediately visible to users.

Interview:

 Interviews of stakeholders and users are critical to creating great software. Without understanding the goals and expectations of the users and stakeholders, we are very unlikely to satisfy them. We also have to recognize the perspective of each interviewee so that we can properly weigh and address their inputs. Listening is the skill that helps a great analyst get more value from an interview than an average analyst.

Prototyping:

 Prototyping is a relatively modern technique for gathering requirements. In this approach, you gather preliminary requirements that you use to build an initial version of the solution – a prototype. You show this to the client, who then gives you additional requirements. You change the application and cycle around with the client again. This repetitive process continues until the product meets the critical mass of business needs or for an agreed number of iterations.

Reverse Engineering:

 When a migration project does not have access to sufficient documentation of the
existing system, reverse engineering will identify what the system does. It will not
identify what the system should do, and will not identify when the system does the
wrong thing.

Its importance in software engineering:

 Requirements Gathering is a fundamental part of any software development project. Requirements are things like “User wants to do X. How is this achieved?” In effect, Requirements Gathering is the process of generating a list of requirements (functional, system, technical, etc.) from all the stakeholders (customers, users, vendors, IT staff) that will be used as the basis for the formal definition of what the project is.

Q4. Explain requirement analysis.

ANS:

 Requirement analysis is a significant and essential activity after elicitation. We analyze, refine, and scrutinize the gathered requirements to make them consistent and unambiguous. This activity reviews all requirements and may provide a graphical view of the entire system. After the completion of the analysis, it is expected that the understandability of the project will improve significantly. Here, we may also use the interaction with the customer to clarify points of confusion and to understand which requirements are more important than others.

Draw the context diagram:

 The context diagram is a simple model that defines the boundaries and interfaces of the proposed system with the external world. It identifies the entities outside the proposed system that interact with the system. The context diagram of a student result management system is given below:

Development of a Prototype :

 One effective way to find out what the customer wants is to construct a prototype,
something that looks and preferably acts as part of the system they say they want.

 The prototype should be built quickly and at a relatively low cost. Hence it will
always have limitations and would not be acceptable in the final system. This is an
optional activity.

Model the requirements:

 This process usually consists of various graphical representations of the functions, data entities, external entities, and the relationships between them. The graphical view may help to find incorrect, inconsistent, missing, and superfluous requirements. Such models include the Data Flow diagram, Entity-Relationship diagram, Data Dictionaries, State-transition diagrams, etc.

Finalise the requirements:

 After modeling the requirements, we will have a better understanding of the system
behavior. The inconsistencies and ambiguities have been identified and corrected.
The flow of data amongst various modules has been analyzed. Elicitation and
analysis activities have provided better insight into the system. Now we finalize the
analyzed requirements, and the next step is to document these requirements in a
prescribed format.
ASSIGNMENT 4
Q1. Explain design concepts and explain various architectures in brief.

ANS:

Design concepts:

Abstraction:

 Abstraction simply means hiding the details to reduce complexity and increase efficiency or quality. Different levels of abstraction are necessary and must be applied at each stage of the design process so that any error that is present can be removed, in order to increase the efficiency of the software solution and to refine it.
 The solution should be described in broad terms that cover a wide range of different things at a higher level of abstraction, and a more detailed description of the software solution should be given at the lower levels of abstraction.

Modularity:

 Modularity simply means dividing the system or project into smaller parts to reduce its complexity. In the same way, modularity in design means subdividing a system into smaller parts so that these parts can be created independently and then used in different systems to perform different functions.
 It is necessary to divide the software into components known as modules because nowadays there is software, such as monolithic software, that is hard for software engineers to grasp. So, modularity in design has now become a trend and is also important.

Architecture:

 Architecture simply means a technique to design the structure of something. Architecture in designing software is a concept that focuses on various elements and the data of the structure. These components interact with each other and use the data of the structure in architecture.

Refinement:

 Refinement simply means refining something to remove any impurities if present and to increase its quality. The refinement concept of software design is actually a process of developing or presenting the software or system in a detailed manner, which means elaborating a system or software. Refinement is very necessary to find out any error, if present, and then to remove it.

Pattern:

 The pattern simply means a repeated form or design in which the same shape is
repeated several times to form a pattern. The pattern in the design process means
the repetition of a solution to a common recurring problem within a certain context.

Refactoring:

 Refactoring simply means reconstructing something in such a way that it does not affect the behavior or any other features. Refactoring in software design means reconstructing the design to reduce complexity and simplify it without affecting the behavior or its functions. Fowler has defined refactoring as “the process of changing a software system in a way that it won’t affect the behavior of the design and improves the internal structure”.

Various architectures:

Data centred architectures:

 A data store resides at the center of this architecture and is accessed frequently by the other components, which update, add, delete, or modify the data present within the store.
 The figure illustrates a typical data-centered style. The client software accesses a central repository. A variation of this approach transforms the repository into a “blackboard” that notifies client software when data related to the client, or data of interest to the client, changes.
 Data can be passed among clients using the blackboard mechanism.

Data flow architectures:

 This kind of architecture is used when input data is to be transformed into output data through a series of computational or manipulative components.
 The figure represents a pipe-and-filter architecture, since it uses both pipes and filters: it has a set of components called filters connected by pipes (a minimal code sketch is given after this list).
 Pipes are used to transmit data from one component to the next.
 Each filter works independently and is designed to take data input of a certain form and produce data output of a specified form for the next filter.
 The filters don’t require any knowledge of the working of neighboring filters.
 If the data flow degenerates into a single line of transforms, then it is termed batch sequential. This structure accepts a batch of data and then applies a series of sequential components to transform it.
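Below is a minimal, illustrative sketch of the pipe-and-filter idea in Java. The Filter interface and the trim/toUpper/length filters are hypothetical names chosen for illustration; each filter works independently and only knows the form of its input and output, and the "pipe" is simply the hand-off of one filter's output to the next.

```java
import java.util.List;
import java.util.function.Function;

// A filter takes input of one form and produces output of another form.
interface Filter<I, O> extends Function<I, O> { }

public class PipeAndFilterSketch {
    public static void main(String[] args) {
        // Hypothetical filters: each works independently of its neighbours.
        Filter<String, String> trim = String::trim;
        Filter<String, String> toUpper = String::toUpperCase;
        Filter<String, Integer> length = String::length;

        // The "pipe" simply carries the output of one filter to the next.
        Function<String, Integer> pipeline = trim.andThen(toUpper).andThen(length);

        for (String raw : List.of("  hello ", " pipe and filter  ")) {
            System.out.println(pipeline.apply(raw)); // prints 5, then 15
        }
    }
}
```

Because no filter knows anything about its neighbours, any filter can be replaced or reordered without changing the others, which is the main benefit of this style.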

Call and Return architectures: 

 It is used to create a program that is easy to scale and modify. Many sub-styles exist within this category. Two of them are explained below.
 Remote procedure call architecture: The components of a main program or subprogram architecture are distributed among multiple computers on a network.
 Main program or subprogram architecture: The main program structure decomposes into a number of subprograms or functions arranged in a control hierarchy. The main program contains a number of subprograms that can invoke other components.

Object Oriented architecture:

 The components of a system encapsulate data and the operations that must be applied to manipulate the data. Coordination and communication between the components are established via message passing.

Layered architecture:

 A number of different layers are defined, with each layer performing a well-defined set of operations. Moving inwards, each layer performs operations that become progressively closer to the machine instruction set.
 At the outer layer, components receive the user interface operations, and at the inner layers, components perform operating system interfacing (communication and coordination with the OS).
 Intermediate layers provide utility services and application software functions. A minimal code sketch of the layered idea is given below.
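Below is a minimal, illustrative Java sketch of the layered idea. The UserInterface, ApplicationService, and OsInterface classes are hypothetical names; the point is that the outer layer delegates inwards, and each inner layer performs operations closer to the machine.

```java
// Innermost layer: operations closest to the machine / operating system.
class OsInterface {
    void writeToDisk(String data) {
        System.out.println("OS layer: writing '" + data + "' to disk");
    }
}

// Intermediate layer: utility and application functions built on the OS layer.
class ApplicationService {
    private final OsInterface os = new OsInterface();

    void saveDocument(String text) {
        os.writeToDisk(text.trim());
    }
}

// Outer layer: receives user interface operations and delegates inwards.
public class UserInterface {
    private final ApplicationService service = new ApplicationService();

    public static void main(String[] args) {
        new UserInterface().service.saveDocument("  hello layers  ");
    }
}
```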

Q2. Explain structural partitioning.

ANS:

 The program structure should be partitioned both horizontally and vertically.

Horizontal partitioning:

 Horizontal partitioning defines separate branches of the modular hierarchy for each major program function.
 The simplest way is to partition a system into three partitions: input, data transformation (processing), and output (a minimal sketch in code follows below).

Advantages of horizontal partitioning:

 Easier to test, maintain, and extend.
 Fewer side effects in change propagation or error propagation.

Disadvantages:
 More data has to be passed across module interfaces.
 The overall control of program flow is complicated.
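Below is a minimal, illustrative Java sketch of horizontal partitioning. The InputModule, TransformModule, and OutputModule classes are hypothetical names; each branch handles exactly one major program function (input, processing, or output), and data is passed across the module interfaces.

```java
import java.util.List;

// Input branch: responsible only for acquiring data.
class InputModule {
    List<Integer> read() { return List.of(3, 1, 4, 1, 5); }
}

// Processing branch: responsible only for transforming data.
class TransformModule {
    int sum(List<Integer> values) {
        return values.stream().mapToInt(Integer::intValue).sum();
    }
}

// Output branch: responsible only for presenting results.
class OutputModule {
    void show(int result) { System.out.println("Sum = " + result); }
}

public class HorizontalPartitionDemo {
    public static void main(String[] args) {
        // The controlling code passes data across the three partitions.
        int result = new TransformModule().sum(new InputModule().read());
        new OutputModule().show(result);
    }
}
```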

Vertical partitioning:
 Vertical partitioning suggests that control and work should be distributed top-down in the program structure.

Advantages:
 Good at dealing with changes.
 Easier to maintain the changes.
 Reduces change impact and propagation.

Q3. Explain component-level design.

ANS:

Component level design:

 As soon as the first iteration of architectural design is complete, component-level design takes place. The objective of this design is to transform the design model into functional software. To achieve this objective, the component-level design represents the internal data structures and processing details of all the software components (defined during architectural design) at an abstraction level closer to the actual code.
 The component-level design can be represented by using different approaches. One approach is to use a programming language, while the other is to use some intermediate design notation such as graphical (DFD, flowchart, or structure chart), tabular (decision table), or text-based (program design language) notation, whichever is easier to translate into source code.
 The component-level design provides a way to determine whether the defined
algorithms, data structures, and interfaces will work properly. Note that a component
(also known as module) can be defined as a modular building block for the software.
However, the meaning of component differs according to how software engineers use
it. The modular design of the software should exhibit the following sets of properties.

Provide simple interface:


 Simple interfaces decrease the number of interactions. Note that the number of
interactions is taken into account while determining whether the software performs
the desired function. Simple interfaces also provide support for reusability of
components which reduces the cost to a greater extent.

 It not only decreases the time involved in design, coding, and testing, but the overall software development cost is also amortized gradually over several projects. A number of studies have proven that reusability of software design is the most valuable way of reducing the cost involved in software development.

Ensure information hiding:

 The benefits of modularity cannot be achieved merely by decomposing a program into several modules; rather, each module should be designed and developed in such a way that information hiding is ensured. It implies that the implementation details of one module should not be visible to other modules of the program. The concept of information hiding helps in reducing the cost of subsequent design changes. A small code sketch of this idea is given below.
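Below is a small, illustrative Java sketch of information hiding. The IntStack interface and ListIntStack class are hypothetical names; clients see only the simple interface, while the list-based implementation detail stays private and could later be swapped (for example, for an array) without affecting other modules.

```java
import java.util.ArrayList;
import java.util.List;

// Other modules depend only on this simple interface.
interface IntStack {
    void push(int value);
    int pop();
    boolean isEmpty();
}

// The implementation details (here, an ArrayList) are hidden from clients.
class ListIntStack implements IntStack {
    private final List<Integer> items = new ArrayList<>();

    public void push(int value) { items.add(value); }
    public int pop() { return items.remove(items.size() - 1); }
    public boolean isEmpty() { return items.isEmpty(); }
}

public class InformationHidingDemo {
    public static void main(String[] args) {
        IntStack stack = new ListIntStack();
        stack.push(10);
        stack.push(20);
        System.out.println(stack.pop()); // prints 20
    }
}
```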

Functional Independence:

 Functional independence is the refined form of the design concepts of modularity, abstraction, and information hiding. Functional independence is achieved by developing a module in such a way that it uniquely performs a given set of functions without interacting with other parts of the system.
 Software that has the property of functional independence is easier to develop because its functions can be categorized in a systematic manner. Moreover, independent modules require less maintenance and testing activity, as secondary effects caused by design modification are limited, with less propagation of errors. In short, it can be said that functional independence is the key to good software design, and a good design results in high-quality software. There exist two qualitative criteria for measuring functional independence, namely coupling and cohesion.

Q4. What is object-oriented design of a system?

ANS:

 The story of object-oriented design starts from the moment computers were invented. Programming was there, and programming approaches came into the picture. Programming basically means giving certain instructions to the computer.
 At the beginning of the computing era, programming was usually limited to
machine language programming. Machine language means those sets of
instructions that are specific to a particular machine or processor, which are in
the form of 0’s and 1’s. These are sequences of bits (0100110…). But it’s quite
difficult to write a program or develop software in machine language.
 It’s actually impossible to develop the software used in today’s scenarios with sequences of bits. This was the main reason programmers moved on to the next generation of programming languages, developing assembly languages, which were close enough to the English language to be easily understood.
 These assembly languages were used with microprocessors. With the invention of the microprocessor, assembly languages flourished and ruled the industry, but they were not enough. Again, programmers came up with something new, i.e., structured and procedural programming.

The Object-Oriented Programming (OOP) Approach:

 The OOP concept was designed to overcome the drawback of the above programming methodologies, which were not close to real-world applications. The demand increased, but conventional methods were still used. This new approach brought a revolution to the programming methodology field.

 Object-oriented programming (OOP) is an approach that allows writing programs with the help of classes and real-world objects. We can say that this approach is very close to the real world and its applications, because the state and behaviour of these classes and objects are almost the same as those of real-world objects.

Data Abstraction:

 Abstraction refers to the act of representing important and special features without
including the background details or explanation about that feature. Data abstraction
simplifies database design.

Encapsulation:

 Encapsulation is one of the fundamental concepts in object-oriented programming (OOP). It describes the idea of wrapping data and the methods that work on that data within one unit, e.g., a class in Java. This concept is often used to hide the internal state representation of an object from the outside. A minimal code sketch is given below.
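Below is a minimal, illustrative Java sketch of encapsulation. The BankAccount class is a hypothetical example; its balance field is hidden, and the only way to read or change it is through the methods that wrap it.

```java
public class BankAccount {
    // Internal state is hidden from the outside world.
    private double balance;

    // The only way to change the state is through these methods.
    public void deposit(double amount) {
        if (amount > 0) {
            balance += amount;
        }
    }

    public double getBalance() {
        return balance;
    }

    public static void main(String[] args) {
        BankAccount account = new BankAccount();
        account.deposit(100.0);
        System.out.println(account.getBalance()); // prints 100.0
    }
}
```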

Inheritance:

 Inheritance is the ability of one class to inherit capabilities or properties of another class, called the parent class. When we write a class, we can inherit properties from other classes.
 So when we create a class, we do not need to write all the properties and functions again and again, as these can be inherited from another class which possesses them. Inheritance allows the user to reuse code wherever possible and reduces redundancy.

Polymorphism:
 Polymorphism is the ability of data to be processed in more than one form. It allows the same task to be performed in various ways. It consists of method overloading and method overriding, i.e., writing the method once and performing a number of tasks using the same method name. A minimal code sketch combining inheritance and polymorphism is given below.
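Below is a minimal, illustrative Java sketch combining inheritance and polymorphism (method overriding). The Shape, Circle, and Rectangle classes are hypothetical examples; the same area() call behaves differently depending on the actual class of the object.

```java
// Parent class: common properties are written once and inherited.
abstract class Shape {
    abstract double area();
}

class Circle extends Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    @Override double area() { return Math.PI * radius * radius; }
}

class Rectangle extends Shape {
    private final double width, height;
    Rectangle(double width, double height) { this.width = width; this.height = height; }
    @Override double area() { return width * height; }
}

public class PolymorphismDemo {
    public static void main(String[] args) {
        // The same method name performs the task in different ways (overriding).
        Shape[] shapes = { new Circle(1.0), new Rectangle(2.0, 3.0) };
        for (Shape s : shapes) {
            System.out.println(s.area());
        }
    }
}
```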

Q5. Give 4 points of difference between data and information.

ANS:

Q6. Draw DFD level-0 and DFD level-1 for a Hospital Management System.

ANS:

LEVEL 0 DFD:
LEVEL 1 DFD:

Q7. Prepare an E-R Diagram for a simple Hospital Management System and explain it.

ANS:
Explanation:

 This ER (Entity Relationship) Diagram represents the model of the Hospital Management System entities.
 The entity-relationship diagram of the Hospital Management System shows all the database tables and the relations between Patient, Doctor, Room, Record, Nurse, Medicine, etc.
 It has many associations within it.
 It also shows the multiplicity (cardinality) of the associations.

Q8. Explain the difference between DFD and E-R diagram with symbols and example.

ANS:

1. DFD stands for Data Flow Diagram; E-R stands for Entity Relationship Diagram (or Model).

2. The main objective of a DFD is to represent the processes and the data flow between them; the main objective of an E-R diagram is to represent the data objects (entities) and the relationships between them.

3. A DFD explains the flow and processing of data (data input, data output, and data storage); an E-R diagram explains and represents the relationships between the entities stored in a database.

4. Symbols used in a DFD are: rectangles (represent the external entities), circles (represent the processes), arrows (represent the flow of data), and ovals or parallel lines (represent data stores). Symbols used in an E-R diagram are: rectangles (represent the entities), diamond boxes (represent relationships), and lines with standard notations (represent cardinality).

5. The rule followed by a DFD is that at least one data flow should enter and one should leave every process or store; the rule followed by an E-R diagram is that all entities must represent a set of similar things.

6. A DFD models the flow of data through a system; an E-R diagram models entities such as people, objects, places, and events for which data is stored in the system.

EXAMPLE OF DFD FOR HOSPITAL MANAGEMENT SYSTEM:


EXAMPLE OF E-R DIAGRAM FOR HOSPITAL MANAGEMENT SYSTEM:

Q9. Using an appropriate example, explain data dictionary.

ANS:
Data dictionary

 A Data Dictionary is a collection of names, definitions, and attributes about data elements that are being used or captured in a database, information system, or part of a research project.
 It describes the meanings and purposes of data elements within the context of a project, and provides guidance on interpretation, accepted meanings, and representation. A Data Dictionary also provides metadata about data elements. The metadata included in a Data Dictionary can assist in defining the scope and characteristics of data elements, as well as the rules for their usage and application.

Example of data dictionary:

Data dictionary for an employee example:
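Since a data dictionary is essentially a set of metadata entries, the following minimal Java sketch illustrates the kind of information such entries record for a hypothetical Employee table. The element names, types, lengths, and descriptions are illustrative assumptions, not taken from any real system.

```java
import java.util.List;

// One data dictionary entry: metadata about a single data element.
record DataElement(String name, String type, int length, String description) { }

public class EmployeeDataDictionary {
    public static void main(String[] args) {
        // Hypothetical entries for an Employee table.
        List<DataElement> dictionary = List.of(
            new DataElement("emp_id", "INTEGER", 6, "Unique identifier of the employee"),
            new DataElement("emp_name", "VARCHAR", 50, "Full name of the employee"),
            new DataElement("dept_code", "CHAR", 4, "Code of the department the employee belongs to"),
            new DataElement("hire_date", "DATE", 10, "Date on which the employee was hired")
        );

        for (DataElement element : dictionary) {
            System.out.printf("%-10s %-8s %-4d %s%n",
                element.name(), element.type(), element.length(), element.description());
        }
    }
}
```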

Q10. Explain feasibility study with the example of an ATM machine in a banking system. Draw a use case diagram of the ATM machine.

ANS:

 Inside an ATM banking system, first of all it should be checked whether the product will be feasible or not.

 It will be tested based on various criteria in what is called a feasibility study. There are 4 different dimensions:

Technology

 For the ATM system, we have to check whether this project is technically feasible or not.
 If any kind of errors are generated inside the application, they may be reduced through technology.

Finance

 The next criterion checked is whether the product is financially feasible or not.

 Cost of software and market price both should be considered.

Time

 There will be some time duration within which the product should be completed.

 It should be deployed at the specified time.

Resource

 To create the product, there will be some resource requirements, and these should be fulfilled.

 The use case diagram of the ATM system is shown below.

 There are various actors, such as the Bank Customer, the Cashier, and the Maintenance Person.

 The Cashier is the person who deposits money into the machine; he is not present at the ATM centre afterwards.

 Similarly, the Maintenance Person checks and repairs the machine, but he is also not present at the ATM centre.

EXAMPLE OF USE CASE DIAGRAM FOR ATM BANKING SYSTEM:
