
MCA SEMESTER I

ASSIGNMENT 2019

Long Questions (Answers)


Q 1) Software Engineering
Software engineering is a branch of computer science in which various methods, ideas and
techniques are applied to produce high-quality software and computer programs with:
1. Minimum cost
2. Delivery on time
3. Continuous production
We can judge the usefulness of software engineering by the importance of its attributes. The
basic aim of software engineering is to produce high-quality software that can be delivered:
1. On time,
2. Within budget, and
3. Meeting the users' needs.
Thus, software engineering is the best set of practices for achieving the following:
1. Improving the quality of software systems.
2. Making software systems easier to use and develop.
3. Improving the rate of production.
4. Keeping the development of software systems within budget.
5. Improving the job satisfaction of software engineers.
Producing high-quality software is necessary to achieve the following:
1. Consistency
2. Improved quality
3. Minimum cost
4. Delivery within time
5. Reliability, and
6. Fulfilment of user needs
Steps used while developing a software system
To achieve consistency, the software development process is divided into a set of phases, and
various methods, tools and techniques are applied to accomplish each phase. The steps given
below are used to develop a software system.

1. Statement of problem & system study - The development process starts with the statement of
the problem and a study of the existing system. In this step we gather as much knowledge as
possible about the system, both computerised and manual. With the help of this knowledge we
find the errors in the present software system that need to be changed for improvement. The
important points at this stage are:
1. Full knowledge of the problems and errors.
2. Ability to improve the system.
3. Identifying the targets to be achieved.
4. Identifying the benefits the new software should provide.
5. Identifying the parts of the plan affected by the change.
While studying the problems it is also necessary to consider alternative solutions and their
cost, which should fit within the user's budget. This improvement requires considerable skill
and attention.
2. Feasibility study - On the basis of the results of the first step, we move to the
feasibility study. In this step the present system and the proposed system are compared on
points such as skilled manpower, estimated time period and other important factors. A
feasibility study helps us decide:
o Whether the plan is in our favour or not.
o Whether we can arrange the required resources or not.
o Whether the plan should be reconsidered.
Several kinds of feasibility are checked:
1. Technical feasibility -
 Do we have the required technology?
 Can the new system be developed with the available tools?
 Can the proposed system provide the required results?
Whether the new system will be suitable for the user is checked by an expert.
For example: if the software actually requires Visual Basic with Oracle at the back end, but
the available hardware is an old processor with a small word length, the software will not be
technically feasible. Technical feasibility is concerned with whether the technology and
tools in use satisfy the needs of the system or not.
2. Operational feasibility - This is the study of user behaviour: whether people will like or
dislike the new software.
3. Economic feasibility - This factor determines whether the benefits and savings of the new
software exceed those of the old software.
4. Legal feasibility - Legal feasibility determines whether the new software complies with
government rules or not. According to the results of the feasibility study, the following
analysis is carried out:
 Formulation of the different solution plans.
 Checking the alternative solution plans, comparing them and weighing their benefits.
 Finding the best solution and analysing it.
Software requirements analysis and specification - Analysis is a study of the following
factors, which play a major role in this step:
o The many kinds of activities performed by the system.
o The connections between the various functions and subsystems.
o Finally, the relationships across the boundary of the system.
Requirements analysis - The main objective of requirements analysis is to understand what the
user expects from the software, and to collect data and information about:
o Working capacity
o Performance
o Ease of use
o Ease of maintenance
Several kinds of tools and methods are used during this process; flow charts, collected data,
diagrams, etc. are part of this exercise. After all the problems and requirements are
resolved, the information is organised into a software requirements specification document.
Software requirements specification - This document covers the following points:
o All the user requirements arranged in a systematic way,
o The nature of the system's interfaces,
o The hardware needed,
o The basis of the agreement,
o The moral and legal coordination between client and developer,
o A detailed plan,
o Analysis and confirmation by the customer that it has all the qualities the
customer expects,
o Help for the software engineers in developing a solution.
Software design and specification - During this step the requirements specification is
converted into a base that can be used in a programming language. We have two types of
approaches:
1. Traditional approach - This approach is divided into two parts:
 First part -
1. The specific requirements of the software are worked out.
2. Structured analysis is converted into a software design.
3. Analysis of the various functions and data flow diagrams are part of
structured analysis.
 Second part - Architectural design takes place after structured analysis. It covers:
1. Which components are required?
2. The general structure of the software.
3. The programs provided by each design.
4. The interfaces between modules.
5. The database and the output format of the system.
2. Object-oriented design - In this design the various kinds of objects in the problem
domain are identified and the relationships between these objects are figured out.
Coding and module testing - The coding phase comes after software design. Coding is the
process by which the design structure is converted into a programming language. Every part
of the design is a program module, and every module is checked to make sure that it meets
the requirements.
Integration and system testing - In this phase all the modules are tested together as a whole
system according to the architectural design. The developer takes this step to verify that
the interconnections between modules are correct. The effects of testing help to achieve:
1. Production of high-quality software
2. Greater user satisfaction
3. Cheaper cost of maintenance
4. Accuracy
5. Assurance of correct results
The system is tested to confirm whether it conforms to the SRS or not. Finally, this test is
carried out in the client's presence.
System Implementation - System implementation means deploying the system at the client site.
We have three types of implementation:
1. Direct conversion
2. Phased conversion
3. Parallel conversion
System Maintenance - This step is required when the customer uses our software and encounters
problems. These problems can be related to the website, installation or operation.
Maintenance is divided into three parts:
o Corrective maintenance - Correcting faults that were not found or discovered
during the software development process.
o Perfective maintenance - Under this step the functions performed by the
software are enhanced according to the needs of the customer.
o Adaptive maintenance - Transforming the software to a new operating system,
environment or computer is called adaptive maintenance.

Q 2) Software Design Principles


Software design principles are concerned with providing means to handle the complexity
of the design process effectively. Effectively managing the complexity will not only
reduce the effort needed for design but can also reduce the scope of introducing errors
during design.

Following are the principles of Software Design


1) Problem Partitioning
For a small problem, we can handle the entire problem at once, but for a significant problem we
divide and conquer: the problem is broken into smaller pieces so that each piece can be handled
separately.
For software design, the goal is to divide the problem into manageable pieces.
Benefits of Problem Partitioning
1. Software is easy to understand
2. Software becomes simple
3. Software is easy to test
4. Software is easy to modify
5. Software is easy to maintain
6. Software is easy to expand
These pieces cannot be entirely independent of each other, as together they form the system. They
have to cooperate and communicate to solve the problem, and this communication adds complexity.
As the number of partitions increases, the cost of partitioning and the complexity increase as well.
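The idea can be illustrated with a small sketch (the payroll scenario, function names and the
20% tax rate are all invented for this example): a calculation partitioned into three pieces,
each small enough to be understood and tested separately, which then cooperate to solve the
whole problem.

```python
# Illustrative sketch of problem partitioning: a payroll calculation
# broken into three small pieces, each solvable and testable on its own.
# (All names and figures here are invented for the example.)

def gross_pay(hours: float, rate: float) -> float:
    """Sub-problem 1: compute gross pay from hours worked."""
    return hours * rate

def tax(gross: float, tax_rate: float = 0.20) -> float:
    """Sub-problem 2: compute the tax owed on the gross pay."""
    return gross * tax_rate

def net_pay(hours: float, rate: float) -> float:
    """The pieces cooperate to solve the original problem."""
    g = gross_pay(hours, rate)
    return g - tax(g)

print(net_pay(40, 10.0))  # 40 h at 10.0/h with 20% tax -> 320.0
```

Note how the communication between pieces (the arguments and return values) is itself a design
decision: the more pieces, the more such interfaces must be managed.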

2) Abstraction
An abstraction is a tool that enables a designer to consider a component at an abstract level
without bothering about the internal details of the implementation. Abstraction can be used for
an existing element as well as for the component being designed.
Here, there are two common abstraction mechanisms
1. Functional Abstraction
2. Data Abstraction
Functional Abstraction
i. A module is specified by the function it performs.
ii. The details of the algorithm to accomplish the functions are not visible to the user of the
function.
Functional abstraction forms the basis for Function oriented design approaches.
Data Abstraction
Details of the data elements are not visible to the users of data. Data Abstraction forms the basis
for Object Oriented design approaches.
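Functional abstraction can be sketched in a few lines of Python (the function and the loan
scenario are invented for the illustration): the caller depends only on what the function
computes, while the formula inside is an internal detail that could be replaced without
touching any caller.

```python
def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Functional abstraction: users of this function know WHAT it computes
    (the fixed monthly payment on a loan) but not HOW. The amortization
    formula below is an internal detail that could change -- e.g. be
    replaced by a lookup table -- without affecting any caller."""
    r = annual_rate / 12                         # monthly interest rate
    return principal * r / (1 - (1 + r) ** -months)

# A caller needs only the interface, never the algorithm:
print(round(monthly_payment(100_000, 0.06, 360), 2))  # about 599.55
```

Data abstraction works the same way, except that what is hidden is the representation of the
data rather than an algorithm.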

3) Modularity
Modularity refers to the division of software into separate modules which are differently
named and addressed and are integrated later to obtain the completely functional software.
It is the only property that allows a program to be intellectually manageable. Single large
programs are difficult to understand and read due to a large number of reference variables,
control paths, global variables, etc.
The desirable properties of a modular system are:
o Each module is a well-defined system that can be used with other applications.
o Each module has single specified objectives.
o Modules can be separately compiled and saved in the library.
o Modules should be easier to use than to build.
o Modules are simpler from outside than inside.
Advantages and Disadvantages of Modularity:

Advantages of Modularity
There are several advantages of Modularity
o It allows large programs to be written by several or different people
o It encourages the creation of commonly used routines to be placed in the library and used
by other programs.
o It simplifies the overlay procedure of loading a large program into main storage.
o It provides more checkpoints to measure progress.
o It provides a framework for complete testing, more accessible to test
o It produces well-designed and more readable programs.
Disadvantages of Modularity
There are several disadvantages of Modularity
o Execution time may be, but is not necessarily, longer
o Storage size may be, but is not necessarily, increased
o Compilation and loading time may be longer
o Inter-module communication problems may be increased
o More linkage required, run-time may be longer, more source lines must be written, and
more documentation has to be done
4) Modular Design
Modular design reduces the design complexity and results in easier and faster implementation by
allowing parallel development of various parts of a system. We discuss a different section of
modular design in detail in this section:
1. Functional Independence: Functional independence is achieved by developing functions that
perform only one kind of task and do not excessively interact with other modules. Independence
is important because it makes implementation more accessible and faster. The independent
modules are easier to maintain, test, and reduce error propagation and can be reused in other
programs as well. Thus, functional independence is a good design feature which ensures
software quality.
It is measured using two criteria:
o Cohesion: It measures the relative function strength of a module.
o Coupling: It measures the relative interdependence among modules.
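Coupling, in particular, can be illustrated with a short hypothetical sketch (the order data
and function names are invented): the first function is tightly coupled to a shared global
structure, while the second receives everything through parameters and is therefore
independent, easier to test and reusable in other programs.

```python
# Hypothetical sketch contrasting coupling. Both functions compute an
# order total; the names and the 10% discount are invented for the example.

# Tightly coupled: reaches into a shared global, so it cannot be
# tested or reused without first setting up that global.
current_order = {"prices": [5.0, 7.5], "discount": 0.10}

def total_coupled():
    prices = current_order["prices"]
    return sum(prices) * (1 - current_order["discount"])

# Loosely coupled: everything it needs arrives through parameters,
# so it depends on no other module's internals.
def total_decoupled(prices, discount):
    return sum(prices) * (1 - discount)

print(total_coupled())                    # 11.25
print(total_decoupled([5.0, 7.5], 0.10))  # 11.25
```

Both give the same answer, but only the second can be lifted into another program unchanged,
which is what low coupling buys.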
2. Information hiding: The principle of information hiding suggests that modules should be
characterized by the design decisions they hide from all others. In other words, modules
should be specified so that the data contained within a module is inaccessible to other
modules that have no need for such information.
The use of information hiding as design criteria for modular system provides the most significant
benefits when modifications are required during testing's and later during software maintenance.
This is because as most data and procedures are hidden from other parts of the software,
inadvertent errors introduced during modifications are less likely to propagate to different
locations within the software.
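A minimal sketch of information hiding, assuming Python's underscore convention (the Stack
class is invented for the example): client code works only through push/pop/is_empty, and the
leading underscore marks the list as internal, so a later change of representation cannot
propagate into client code.

```python
# A minimal sketch of information hiding (the Stack class is invented
# for the example). The leading underscore marks _items as an internal
# detail, so the list could later be swapped for another structure
# without breaking any client of push/pop/is_empty.

class Stack:
    def __init__(self):
        self._items = []              # hidden representation

    def push(self, value):
        self._items.append(value)

    def pop(self):
        return self._items.pop()      # last in, first out

    def is_empty(self):
        return len(self._items) == 0

s = Stack()
s.push("a")
s.push("b")
print(s.pop())       # b
print(s.is_empty())  # False -- "a" is still stored
```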

5) Strategy of Design
A good system design strategy is to organize the program modules in such a way that they are
easy to develop and, later, to change. Structured design methods help developers to deal with
the size and complexity of programs. Analysts generate instructions for the developers about
how code should be composed and how pieces of code should fit together to form a program.
To design a system, there are two possible approaches:
1. Top-down Approach
2. Bottom-up Approach
1. Top-down Approach: This approach starts with the identification of the main components
and then decomposing them into their more detailed sub-components.

2. Bottom-up Approach: A bottom-up approach begins with the lowest-level details and moves up
the hierarchy. This approach is suitable in the case of an existing system.
Q 3) The product of the requirements stage of the software development process is the
Software Requirements Specification (SRS) (also called a requirements document). This
report lays a foundation for software engineering activities and is constructed once the
entire set of requirements has been elicited and analyzed. The SRS is a formal report which
acts as a representation of the software, enabling the customers to review whether it (the
SRS) is according to their requirements. Also, it comprises the user requirements for a
system as well as detailed specifications of the system requirements.
The SRS is a specification for a specific software product, program, or set of applications
that perform particular functions in a specific environment. It serves several goals
depending on who is writing it. First, the SRS could be written by the client of a system.
Second, the SRS could be written by a developer of the system. The two methods create
entirely various situations and establish different purposes for the document altogether.
In the first case, the SRS is used to define the needs and expectations of the users. In the
second case, the SRS is written for various purposes and serves as a contract document
between customer and developer.

Characteristics of good SRS

Following are the features of a good SRS document:

1. Correctness: User review is used to ensure the accuracy of the requirements stated in the SRS.
The SRS is said to be correct if it covers all the needs that are truly expected from the system.
2. Completeness: The SRS is complete if, and only if, it includes the following elements:
(1). All essential requirements, whether relating to functionality, performance, design,
constraints, attributes, or external interfaces.
(2). Definition of the responses of the software to all realizable classes of input data in all
available categories of situations.
Note: It is essential to specify the responses to both valid and invalid values.
(3). Full labels and references to all figures, tables, and diagrams in the SRS and definitions of all
terms and units of measure.
3. Consistency: The SRS is consistent if, and only if, no subset of the individual requirements
described in it conflict. There are three types of possible conflict in the SRS:
(1). The specified characteristics of real-world objects may conflict. For example,
(a) The format of an output report may be described in one requirement as tabular but in another
as textual.
(b) One condition may state that all lights shall be green while another states that all lights shall
be blue.
(2). There may be a logical or temporal conflict between two specified actions. For
example,
(a) One requirement may determine that the program will add two inputs, and another may
determine that the program will multiply them.
(b) One requirement may state that "A" must always follow "B," while another requires that "A"
and "B" occur simultaneously.
(3). Two or more requirements may define the same real-world object but use different terms for
that object. For example, a program's request for user input may be called a "prompt" in one
requirement and a "cue" in another. The use of standard terminology and descriptions promotes
consistency.
4. Unambiguousness: The SRS is unambiguous when every stated requirement has only one
interpretation. This suggests that each element is uniquely interpreted. In case a term is
used with multiple meanings, the requirements document should clarify its intended meaning in
the SRS so that it is clear and simple to understand.
5. Ranking for importance and stability: The SRS is ranked for importance and stability if
each requirement in it has an identifier to indicate either the significance or stability of that
particular requirement.
Typically, all requirements are not equally important. Some requirements may be essential,
especially for life-critical applications, while others may be desirable. Each element should be
identified to make these differences clear and explicit. Another way to rank requirements is to
distinguish classes of items as essential, conditional, and optional.
6. Modifiability: The SRS should be made as modifiable as possible and should be capable of
quickly incorporating changes to the system to some extent. Modifications should be properly
indexed and cross-referenced.
7. Verifiability: The SRS is verifiable when the specified requirements can be checked with a
cost-effective process to determine whether the final software meets those requirements. The
requirements are verified with the help of reviews.
8. Traceability: The SRS is traceable if the origin of each of the requirements is clear and if it
facilitates the referencing of each condition in future development or enhancement
documentation.
There are two types of Traceability:
1. Backward Traceability: This depends upon each requirement explicitly referencing its
source in earlier documents.
2. Forward Traceability: This depends upon each element in the SRS having a unique name or
reference number.
The forward traceability of the SRS is especially crucial when the software product enters the
operation and maintenance phase. As code and design documents are modified, it is necessary to be
able to ascertain the complete set of requirements that may be affected by those modifications.
9. Design Independence: There should be an option to select from multiple design alternatives
for the final system. More specifically, the SRS should not contain any implementation details.
10. Testability: An SRS should be written in such a method that it is simple to generate test
cases and test plans from the report.
11. Understandable by the customer: An end user may be an expert in his/her own domain
but might not be trained in computer science. Hence, the use of formal notations and
symbols should be avoided as much as possible. The language should be kept simple
and clear.
12. The right level of abstraction: If the SRS is written for the requirements stage, the details
should be explained explicitly. For a feasibility study, however, less detail can be used.
Hence, the level of abstraction varies according to the objective of the SRS.
Properties of a good SRS document
The essential properties of a good SRS document are the following:
Concise: The SRS report should be concise and at the same time, unambiguous, consistent, and
complete. Verbose and irrelevant descriptions decrease readability and also increase error
possibilities.
Structured: It should be well-structured. A well-structured document is simple to understand
and modify. In practice, the SRS document undergoes several revisions to cope with the user
requirements. Often, user requirements evolve over a period of time. Therefore, to make the
modifications to the SRS document easy, it is vital to make the report well-structured.
Black-box view: It should only define what the system should do and refrain from stating how to
do these. This means that the SRS document should define the external behavior of the system
and not discuss the implementation issues. The SRS report should view the system to be
developed as a black box and should define the externally visible behavior of the system. For this
reason, the SRS report is also known as the black-box specification of a system.
Conceptual integrity: It should show conceptual integrity so that the reader can easily
understand it.
Response to undesired events: It should characterize acceptable responses to unwanted
events. These are called system responses to exceptional conditions.
Verifiable: All requirements of the system, as documented in the SRS document, should be
correct. This means that it should be possible to decide whether or not requirements have been
met in an implementation.
Q 4) Software development models are the various methods chosen for developing an
application, depending on the application's goals. There are many development life cycle
models that have been built to accomplish different required objectives. The models specify
the various phases of the process and the order in which they are performed.
Software development models describe the stages of the software life cycle and the order in
which those stages are executed. There are plenty of models, and many organizations adopt
their own, but all contain very similar phases. Each phase produces the deliverables required
by the next phase in the life cycle: requirements are translated into design, code is
produced during implementation driven by the design, and testing verifies the deliverables of
the implementation phase against the requirements.
The main types of software development models are as follows:

1. Waterfall model
2. V model
3. Incremental model
4. RAD model
5. Agile model
6. Iterative model
7. Spiral model

Waterfall Model
This was the first development model to be introduced. It is also referred to as a linear-
sequential life cycle model. The waterfall model is very easy to understand and implement.
Each phase must be finished entirely before the next phase can start. At the end of every
phase, a review takes place to determine whether the project is on the right path and whether
to continue or discard it. In the waterfall model, phases do not go backward.
Advantages:
 Simple and easy to use.
 Easy to manage due to the rigidity of the model.
 Phases are processed and completed one at a time.
 Works well for smaller projects where requirements are very well understood.
Disadvantages:
 Once a project is in the testing phase, it is very difficult to go back and change
something that was not well thought out in the concept phase.
 No working software is produced until late in the life cycle.
 High amounts of risk and uncertainty.
 Not a good model for complex and object-oriented projects.
 A poor model for long and ongoing projects.
 Not suitable for software where the requirements are at a moderate to high risk of
changing.
Use of the Waterfall Model:

 Requirements are very well known, clear and fixed.
 The product definition is stable.
 The technology is understood.
 There are no ambiguous requirements.
 Ample resources with the required expertise are freely available.
 The project is short.
V model
This model defines a verification and validation model. Like the waterfall model, the V model
is a sequential path of execution of processes: each phase must be finished before the next
phase begins. Testing of the application is planned in parallel with a corresponding phase of
development.

The different stages of the V-model are described below:

Requirements: Documents such as the Business Requirement Specification and System Requirement
Specification begin the life cycle, just as in the waterfall model. In this model, however,
before development starts, a system test plan is created. The test plan focuses on meeting
the functionality specified during requirements gathering.
The high-level design: This phase focuses on system architecture and design. It gives an
overview of the solution, platform, system, product and service/process. An integration test
plan is created in this phase as well, in order to test the ability of the pieces of the
software application to work together.
The low-level design: This phase is where the actual software components are designed. It
defines the actual logic for each component of the application. The class diagram with all
the methods and the relations between modules comes under low-level design. Unit tests are
also created in this phase.
The implementation: This phase is, again, where all the coding takes place. Once coding is
complete, the path of execution continues up the right side of the V, where the test plans
developed earlier are now put to use.
Coding: This sits at the base of the V model. Module designs are converted into code by the
developers.
Advantages:
 Simple and easy to use.
 Testing activities like test planning and test design happen well before coding.
This saves a lot of time, hence a higher chance of success over the waterfall
model.
 Proactive defect tracking - that is, defects are found at an early stage.
 Avoids the downward flow of defects.
 Works well for small projects where requirements are easily understood.
Disadvantages:
 Very rigid and the least flexible.
 Software is developed during the implementation phase, so no early prototypes of the
application are produced.
 If any changes happen midway, then the test documents along with the requirement
documents have to be updated.
Use of the V-model:
 This model should be used for small to medium-sized projects where
requirements are clearly defined and fixed.
 This model should be chosen when ample technical resources are available
with the required technical expertise.
 High confidence of the customer is necessary for choosing the V-model. As no
prototypes are produced, there is a very high risk involved in meeting customer
expectations.

Incremental model
In this model the entire requirement is divided into various builds. Multiple development
cycles take place here, making the life cycle a "multi-waterfall" cycle. Cycles are divided
up into smaller, more easily managed modules. Each module passes through the requirements,
design, implementation and testing phases. A working version of the software is produced
during the first module, so we have working software early on in the software life cycle.
Each subsequent release of the module adds function to the previous release. The process
carries on until the whole requirement is accomplished.

Advantages:

 Generates working software quickly and early during the life cycle of the
application.
 More flexible - less expensive to change scope and requirements.
 Easier to test and debug during a smaller iteration.
 The customer can respond to each build.
 Lower initial delivery cost.
 Easier to manage risk, as risky pieces are identified and handled during their
iteration.
Disadvantages:
 Needs good planning and design from the outset.
 Needs a clear and complete definition of the whole system before it can be broken
down and built incrementally.
 The total cost is higher than the waterfall model.
Use of the Incremental model:
 Requirements of the whole system are clearly defined and understood.
 The major requirements must be defined; however, some details can evolve with time.
 There is a need to get a product to the market early.
 A new technology is being used.
 Resources with the needed skill set are not available.
 There are some high-risk features and goals.
RAD Model
RAD stands for Rapid Application Development. It is a type of incremental model. In this
model the components or functions are developed in parallel, as if they were mini projects.
The developments are time-boxed, delivered and then assembled into a working prototype. The
RAD model can quickly give the customer something to see and use, and a way to provide
feedback regarding the delivery and their requirements.

The phases of the RAD model are described below:

Business modeling: The information flow is identified between the various business functions.
Data modeling: Information gathered from the business model is used to define the data
objects that are needed for the business.
Process modeling: The data objects defined in data modeling are transformed to achieve the
business information flow needed to meet some specific business objective. Descriptions are
identified and created for the handling of the data objects.
Application generation: Automated tools are used to convert the process models into code and
the actual application.
Testing and turnover: Test the new components and all the interfaces.
Advantages:
 Reduced development time.
 Increases the reusability of components.
 Quick initial reviews occur.
 Encourages customer feedback.
 Integration from the very beginning solves a lot of integration issues.
Disadvantages:
 Depends heavily on strong teams and individual performance for identifying business
requirements.
 Only applications that can be modularized can be built with the RAD model.
 Requires highly skilled developers.
 High dependency on modeling skills.
 Inapplicable to low-cost projects, since the cost of modeling and automated code
generation is very high.
Use of the RAD model:
 This model should be used when there is a need to build a system that can be
modularized in two to three months' time.
 The RAD model should be used if there is high availability of designers for
modeling and the budget is high enough to afford their cost along with the cost
of automated code generation tools.
 This type of model should be chosen only if resources with high business
knowledge are available and there is a need to produce the application in a short
span of time (approximately two to three months).

Agile model
This model is also a type of incremental model. The software is built in incremental, rapid
cycles. This results in small incremental releases, with each release building on previous
functionality. Every release is thoroughly tested to make sure software quality is
maintained. It is used for time-critical applications. Currently one of the best-known agile
development life cycle models is Extreme Programming.
Advantages:
 Customer satisfaction through rapid, continuous delivery of useful software.
 People and interactions are emphasized rather than process and tools. Customers,
developers and testers constantly interact with each other.
 Working software is delivered frequently, within weeks rather than months.
 Face-to-face conversation is the best form of communication.
 Close, daily cooperation between business people and developers.
 Continuous attention to technical excellence and good design.
 Regular adaptation to changing circumstances.
 Even late changes in requirements are welcomed.
Drawbacks:
• In the case of some software deliverables, especially large ones, it is difficult to assess the
effort required at the beginning of the software development life cycle.
• There is a lack of emphasis on necessary design and documentation.
• The project can easily get taken off track if the customer representative is not clear about
the final outcome that they want.
• Only senior programmers are capable of taking the kind of decisions required during the
development process. Hence it has no place for novice programmers, unless combined with
experienced resources.
Use of the agile model:
• When new changes need to be implemented. The freedom agile gives to
change is very important. New changes can be implemented at very little
cost because of the frequency of the new increments that are produced.
• To implement a new feature, the developers need to lose only the work of a few days,
or even only hours, to roll back and implement it.
• Unlike the waterfall model, the agile model requires very limited planning to get
started with the project. Agile assumes that the end users' needs are ever
changing in a dynamic business world. Changes can be discussed and features can be
newly added or removed based on feedback. This effectively gives the customer
the finished system they want or need.
• Both developers and stakeholders alike find they also get more
freedom of time and options than if the software was developed in a more rigid,
sequential way. Having options gives them the ability to leave important
decisions until more or better data, or even complete hosting programs, are
available; meaning the project can continue to move forward without fear of
reaching a sudden standstill.
Iterative model
An iterative life cycle model does not attempt to start with a full specification of requirements. Instead,
development begins by specifying and implementing just part of the software, which
can then be reviewed in order to identify further requirements. This process is then repeated,
producing a new version of the software at the end of each iteration of the model.

Advantages:

• In the iterative model, we can only create a high-level design of the application before
we actually begin to build the product and define the design solution for the entire
product. Later on, we can design and build a skeleton version of it, and then
evolve the design based on what has been built.
• In the iterative model, we are building and improving the product step by step. Hence we can
track the defects at early stages. This avoids the downward flow of defects.
• In the iterative model, we can get reliable user feedback. When presenting
sketches and blueprints of the product to users for their feedback, we are effectively
asking them to imagine how the product will work.
• In the iterative model, less time is spent on documenting and more time is given
to designing.
Drawbacks:
• Each phase of an iteration is rigid, with no overlaps.
• Costly system architecture or design issues may arise because not all requirements
are gathered up front for the entire life cycle.
Use of the iterative model:
• Requirements of the complete system are clearly defined and understood.
• The iterative model is used when the application is large.
• Major requirements must be defined; however, some details can evolve with
time.

Spiral model
This model is similar to the incremental model, with more emphasis placed on risk
analysis. The spiral model has four phases: planning, risk analysis, engineering and
evaluation. A software project repeatedly passes through these phases in iterations called
spirals. In the baseline spiral, starting in the planning phase, requirements are gathered and risk is
assessed. Each subsequent spiral builds on the baseline spiral. Requirements are
gathered during the planning phase. In the risk analysis phase, a process is undertaken to
identify risks and alternative solutions. A prototype is produced at the end of the
risk analysis phase.
Software is produced in the engineering phase, along with testing at the end of the phase.
The evaluation phase allows the customer to evaluate the output of the project to date
before the project continues to the next spiral.

Benefits:
• A high amount of risk analysis; hence, avoidance of risk is enhanced.
• Good for large and mission-critical projects.
• Strong approval and documentation control.
• Additional functionality can be added at a later date.
• Software is produced early in the software life cycle.
Drawbacks:
• Can be a costly model to use.
• Risk analysis requires highly specific expertise.
• The project's success is highly dependent on the risk analysis phase.
• Does not work well for smaller projects.
Use of the Spiral model:
• When cost and risk evaluation is important.
• For medium- to high-risk projects.
• When long-term project commitment is unwise because of potential changes in
economic priorities.
• When users are unsure of their needs.
• When requirements are complex.
• For a new product line.
• When significant changes are expected.

Q 5) A data flow diagram (or DFD) is a graphical representation of the flow of data through
an information system. It shows how information is input to and output from the system,
the sources and destinations of that information, and where that information is stored.
It shows how data enters and leaves the system, what changes the information, and where
data is stored.
The objective of a DFD is to show the scope and boundaries of a system as a whole. It
may be used as a communication tool between a systems analyst and any person who
plays a part in the system, and it acts as a starting point for redesigning a system. The DFD is
also known as a data flow graph or bubble chart.

The following observations about DFD’s are essential:


1. All names should be unique. This makes it easier to refer to elements in the DFD.
2. Remember that a DFD is not a flow chart. Arrows in a flow chart represent the order
of events; arrows in a DFD represent flowing data. A DFD does not imply any order of
events.
3. Suppress logical decisions. If we ever have the urge to draw a diamond-shaped box in a
DFD, suppress that urge! A diamond-shaped box is used in flow charts to represent
decision points with multiple exit paths, of which only one is taken. This implies an
ordering of events, which makes no sense in a DFD.
4. Do not become bogged down with details. Defer error conditions and error handling until
the end of the analysis.

Symbols of DFD:

Circle: A circle (bubble) shows a process that transforms data inputs into data outputs.

Data Flow: A curved line shows the flow of data into or out of a process or data store.
Data Store: A set of parallel lines shows a place for the collection of data items. A data
store indicates that the data is stored which can be used at a later stage or by the other
processes in a different order. The data store can have an element or group of elements.

Source or Sink: Source or Sink is an external entity and acts as a source of system inputs
or sink of system outputs.

Levels in Data Flow Diagrams (DFD)

The DFD may be used to represent a system or software at any level of abstraction. In fact,
DFDs may be partitioned into levels that represent increasing information flow and
functional detail. Levels in a DFD are numbered 0, 1, 2 or beyond. Here, we will see
primarily three levels in the data flow diagram, which are: 0-level DFD, 1-level DFD,
and 2-level DFD.

0-level DFD

It is also known as the fundamental system model or context diagram. It represents the entire
software requirement as a single bubble with input and output data denoted by incoming
and outgoing arrows. Then the system is decomposed and described as a DFD with
multiple bubbles. Parts of the system represented by each of these bubbles are then
decomposed and documented as more and more detailed DFDs. This process may be
repeated at as many levels as necessary until the program at hand is well understood. It is
essential to preserve the number of inputs and outputs between levels; this concept is
called leveling by DeMarco. Thus, if bubble "A" has two inputs x1 and x2 and one
output y, then the expanded DFD that represents "A" should have exactly two external
inputs and one external output, as shown in the figure:

[Figure: Data Flow Diagram]


The Level-0 DFD, also called the context diagram, of the result management system is shown
in the figure. As the bubbles are decomposed into less and less abstract bubbles, the
corresponding data flows may also need to be decomposed.

[Figure: Data Flow Diagram]


1-level DFD
In a 1-level DFD, the context diagram is decomposed into multiple bubbles/processes. At this
level, we highlight the main objectives of the system and break down the high-level
process of the 0-level DFD into subprocesses.

2-Level DFD

A 2-level DFD goes one step deeper into parts of the 1-level DFD. It can be used to plan
or record the specific/necessary details about the system's functioning.

Short Answers:
Q 1) Coding Standards
Different modules specified in the design document are coded in the Coding phase
according to the module specification. The main goal of the coding phase is to code from
the design document prepared after the design phase through a high-level language and
then to unit test this code.
Good software development organizations want their programmers to adhere to some
well-defined and standard style of coding, called coding standards. They usually make
their own coding standards and guidelines depending on what suits their organization best
and based on the types of software they develop. It is very important for the programmers
to maintain the coding standards otherwise the code will be rejected during code review.
Purpose of Having Coding Standards:
 A coding standard gives a uniform appearance to the codes written by different engineers.
 It improves readability, and maintainability of the code and it reduces complexity also.
• It helps in code reuse and helps to detect errors easily.
 It promotes sound programming practices and increases efficiency of the programmers.
Some of the coding standards are given below:
1. Limited use of globals:
These rules tell about which types of data that can be declared global and the data that
can’t be.

2. Standard headers for different modules:


For better understanding and maintenance of the code, the headers of different modules
should follow some standard format and information. The header format used in various
companies must contain the following:
 Name of the module
 Date of module creation
 Author of the module
 Modification history
 Synopsis of the module about what the module does
 Different functions supported in the module along with their input output
parameters
 Global variables accessed or modified by the module

3. Naming conventions for local variables, global variables, constants and functions:
Some of the naming conventions are given below:
• Meaningful and understandable variable names help anyone to understand the
reason for using them.
• Local variables should be named using camel case starting with a small
letter (e.g. localData), whereas global variable names should start with a capital
letter (e.g. GlobalData). Constant names should be formed using capital letters
only (e.g. CONS_DATA).
 It is better to avoid the use of digits in variable names.
• The names of functions should be written in camel case starting with small
letters.
• The name of a function must describe the reason for using the function clearly
and briefly.

4. Indentation:
Proper indentation is very important to increase the readability of the code. For making
the code readable, programmers should use White spaces properly. Some of the spacing
conventions are given below:
 There must be a space after giving a comma between two function arguments.
 Each nested block should be properly indented and spaced.
 Proper Indentation should be there at the beginning and at the end of each block
in the program.
 All braces should start from a new line and the code following the end of braces
also start from a new line.
5. Error return values and exception handling conventions:
All functions that encounter an error condition should either return a 0 or 1 to
simplify debugging.
On the other hand, coding guidelines give some general suggestions regarding the coding style
to be followed for the betterment of the understandability and readability of the code.
Some of the coding guidelines are given below:

6. Avoid using a coding style that is too difficult to understand:


Code should be easily understandable. The complex code makes maintenance and
debugging difficult and expensive.

7. Avoid using an identifier for multiple purposes:


Each variable should be given a descriptive and meaningful name indicating the reason
behind using it. This is not possible if an identifier is used for multiple purposes and thus
it can lead to confusion to the reader. Moreover, it leads to more difficulty during future
enhancements.

8. Code should be well documented:


The code should be properly commented for understanding easily. Comments regarding
the statements increase the understandability of the code.

9. Length of functions should not be very large:


Lengthy functions are very difficult to understand. That’s why functions should be small
enough to carry out small work and lengthy functions should be broken into small ones
for completing small tasks.

10. Try not to use GOTO statement:


GOTO statement makes the program unstructured, thus it reduces the understandability
of the program and also debugging becomes difficult.

Q 2) Software maintenance is a part of Software Development Life Cycle. Its main purpose is
to modify and update software application after delivery to correct faults and to improve
performance. Software is a model of the real world. When the real world changes, the
software requires alteration wherever possible.
Software maintenance is a vast activity which includes optimization, error correction, and
deletion of discarded features and enhancement of existing features. Since these changes
are necessary, a mechanism must be created for estimation, controlling and making
modifications. The essential part of software maintenance requires preparation of an
accurate plan during the development cycle.
Typically, maintenance takes up about 40-80% of the project cost, usually closer to the
higher end. Hence, a focus on maintenance definitely helps keep costs down.

Software Maintenance Processes are:

• The Software Maintenance process includes a maintenance plan which covers software
preparation, problem identification and product configuration management.

• The problem analysis process includes checking validity, examining the problem, coming up with a
solution and finally obtaining all the required support to apply the modification.

• The modification is accepted by confirming the changes with the individual who raised the request.

• The platform migration process, which is used if software is needed to be ported to another
platform without any change in functionality.

Some software points that affect maintenance cost include:

• Structure of Software Program

• Programming Language

• Dependence on external environment

• Staff reliability and availability

Q 3) Estimation is the process of finding an estimate, or approximation, which is a value that
can be used for some purpose even if input data may be incomplete, uncertain, or unstable.
Estimation determines how much money, effort, resources, and time it will take to build a
specific system or product. Estimation is based on −
 Past Data/Past Experience
 Available Documents/Knowledge
 Assumptions
 Identified Risks
The four basic steps in Software Project Estimation are −
 Estimate the size of the development product.
 Estimate the effort in person-months or person-hours.
 Estimate the schedule in calendar months.
 Estimate the project cost in agreed currency.
Observations on Estimation
 Estimation need not be a one-time task in a project. It can take place during −
o Acquiring a Project.
o Planning the Project.
o Execution of the Project as the need arises.
 Project scope must be understood before the estimation process begins. It will be helpful
to have historical Project Data.
 Project metrics can provide a historical perspective and valuable input for generation of
quantitative estimates.
 Planning requires technical managers and the software team to make an initial
commitment as it leads to responsibility and accountability.
 Past experience can aid greatly.
 Use at least two estimation techniques to arrive at the estimates and reconcile the
resulting values. Refer Decomposition Techniques in the next section to learn about
reconciling estimates.
 Plans should be iterative and allow adjustments as time passes and more details are
known.
General Project Estimation Approach
The Project Estimation Approach that is widely used is Decomposition Technique.
Decomposition techniques take a divide and conquer approach. Size, Effort and Cost estimation
are performed in a stepwise manner by breaking down a Project into major Functions or related
Software Engineering Activities.
Step 1 − Understand the scope of the software to be built.
Step 2 − Generate an estimate of the software size.
 Start with the statement of scope.
 Decompose the software into functions that can each be estimated individually.
 Calculate the size of each function.
 Derive effort and cost estimates by applying the size values to your baseline productivity
metrics.
 Combine function estimates to produce an overall estimate for the entire project.
Step 3 − Generate an estimate of the effort and cost. You can arrive at the effort and cost
estimates by breaking down a project into related software engineering activities.
 Identify the sequence of activities that need to be performed for the project to be
completed.
 Divide activities into tasks that can be measured.
 Estimate the effort (in person hours/days) required to complete each task.
 Combine effort estimates of tasks of activity to produce an estimate for the activity.
 Obtain cost units (i.e., cost/unit effort) for each activity from the database.
 Compute the total effort and cost for each activity.
 Combine effort and cost estimates for each activity to produce an overall effort and cost
estimate for the entire project.
Step 4 − Reconcile estimates: Compare the resulting values from Step 3 to those obtained from
Step 2. If both sets of estimates agree, then your numbers are highly reliable. Otherwise, if
widely divergent estimates occur conduct further investigation concerning whether −
 The scope of the project is not adequately understood or has been misinterpreted.
 The function and/or activity breakdown is not accurate.
 Historical data used for the estimation techniques is inappropriate for the application, or
obsolete, or has been misapplied.
Step 5 − Determine the cause of divergence and then reconcile the estimates.
Estimation Accuracy
Accuracy is an indication of how close something is to reality. Whenever you generate an
estimate, everyone wants to know how close the numbers are to reality. You will want every
estimate to be as accurate as possible, given the data you have at the time you generate it. And of
course you don’t want to present an estimate in a way that inspires a false sense of confidence in
the numbers.
Important factors that affect the accuracy of estimates are −
 The accuracy of all the estimate’s input data.
 The accuracy of any estimate calculation.
 How closely the historical data or industry data used to calibrate the model matches the
project you are estimating.
 The predictability of your organization’s software development process.
 The stability of both the product requirements and the environment that supports the
software engineering effort.
 Whether or not the actual project was carefully planned, monitored and controlled, and
no major surprises occurred that caused unexpected delays.
Following are some guidelines for achieving reliable estimates −
 Base estimates on similar projects that have already been completed.
 Use relatively simple decomposition techniques to generate project cost and effort
estimates.
 Use one or more empirical estimation models for software cost and effort estimation.
Refer to the section on Estimation Guidelines in this chapter.
To ensure accuracy, you are always advised to estimate using at least two techniques and
compare the results.
Estimation Issues
Often, project managers resort to estimating schedules while skipping the size estimate. This may be
because of the timelines set by the top management or the marketing team. However, whatever
the reason, if this is done, then at a later stage it would be difficult to adjust the schedules to
accommodate scope changes.
While estimating, certain assumptions may be made. It is important to note all these assumptions
in the estimation sheet, as some still do not document assumptions in estimation sheets.
Even good estimates have inherent assumptions, risks, and uncertainty, and yet they are often
treated as though they are accurate.
The best way of expressing estimates is as a range of possible outcomes, by saying, for example,
that the project will take 5 to 7 months instead of stating that it will be complete on a particular date
or in a fixed number of months. Beware of committing to a range that is too
narrow, as that is equivalent to committing to a definite date.
 You could also include uncertainty as an accompanying probability value. For example,
there is a 90% probability that the project will complete on or before a definite date.
• Organizations do not always collect accurate project data. Since the accuracy of the estimates
depends on the historical data, this would be an issue.
 For any project, there is a shortest possible schedule that will allow you to include the
required functionality and produce quality output. If there is a schedule constraint by
management and/or client, you could negotiate on the scope and functionality to be
delivered.
 Agree with the client on handling scope creeps to avoid schedule overruns.
 Failure in accommodating contingency in the final estimate causes issues. For e.g.,
meetings, organizational events.
• Resource utilization should be considered as less than 80%. This is because resources
are productive for only 80% of their time. If you assign resources at more than 80%
utilization, there are bound to be slippages.
Estimation Guidelines
One should keep the following guidelines in mind while estimating a project −
 During estimation, ask other people's experiences. Also, put your own experiences at
task.
 Assume resources will be productive for only 80 percent of their time. Hence, during
estimation take the resource utilization as less than 80%.
 Resources working on multiple projects take longer to complete tasks because of the time
lost switching between them.
 Include management time in any estimate.
 Always build in contingency for problem solving, meetings and other unexpected events.
 Allow enough time to do a proper project estimate. Rushed estimates are inaccurate,
high-risk estimates. For large development projects, the estimation step should really be
regarded as a mini project.
 Where possible, use documented data from your organization’s similar past projects. It
will result in the most accurate estimate. If your organization has not kept historical data,
now is a good time to start collecting it.
 Use developer-based estimates, as the estimates prepared by people other than those who
will do the work will be less accurate.
 Use several different people to estimate and use several different estimation
techniques.
 Reconcile the estimates. Observe the convergence or spread among the estimates.
Convergence means that you have got a good estimate. Wideband-Delphi technique can
be used to gather and discuss estimates using a group of people, the intention being to
produce an accurate, unbiased estimate.
 Re-estimate the project several times throughout its life cycle.

Q 4) Software Metrics
Software metric is a measure of software characteristics which are measurable or
countable. Software metrics are valuable for many reasons, including measuring software
performance, planning work items, measuring productivity, and many other uses.
Within the software development process, there are many metrics that are all related. Software
metrics are related to the four functions of management: planning, organization, control and
improvement.
Classification of Software Metrics
Software metrics can be classified into two types as follows:
1. Product Metrics: These are the measures of various characteristics of the software product.
The two important software characteristics are:
1. Size and complexity of software.
2. Quality and reliability of software.
These metrics can be computed for different stages of SDLC.
2. Process Metrics: These are the measures of various characteristics of the software
development process. For example, the efficiency of fault detection. They are used to measure
the characteristics of methods, techniques, and tools that are used for developing software.

Types of Metrics

Internal metrics: Internal metrics are the metrics used for measuring properties that are viewed
to be of greater importance to a software developer. For example, Lines of Code (LOC) measure.
External metrics: External metrics are the metrics used for measuring properties that are viewed
to be of greater importance to the user, e.g., portability, reliability, functionality, usability, etc.
Hybrid metrics: Hybrid metrics are the metrics that combine product, process, and resource
metrics. For example, cost per FP where FP stands for Function Point Metric.
Project metrics: Project metrics are the metrics used by the project manager to check the
project's progress. Data from the past projects are used to collect various metrics, like time and
cost; these estimates are used as a base of new software. Note that as the project proceeds, the
project manager will check its progress from time-to-time and will compare the effort, cost, and
time with the original effort, cost and time. Also understand that these metrics are used to
decrease the development costs, time efforts and risks. The project quality can also be improved.
As quality improves, the number of errors and time, as well as cost required, is also reduced.
Advantages of Software Metrics
• Comparative study of various design methodologies of software systems.
• For analysis, comparison, and critical study of different programming languages with respect
to their characteristics.
• In comparing and evaluating the capabilities and productivity of people involved in software
development.
• In the preparation of software quality specifications.
• In the verification of compliance of software systems with requirements and specifications.
• In making inferences about the effort to be put into the design and development of software
systems.
• In getting an idea about the complexity of the code.
• In taking decisions regarding whether further division of a complex module is to be done or not.
• In guiding resource managers toward proper utilization of resources.
• In comparing and making design trade-offs between software development and maintenance
cost.
• In providing feedback to software managers about progress and quality during various phases
of the software development life cycle.
• In the allocation of testing resources for testing the code.
Disadvantages of Software Metrics
• The application of software metrics is not always easy, and in some cases it is difficult and
costly.
• The verification and justification of software metrics are based on historical/empirical data
whose validity is difficult to verify.
• They are useful for managing software products but not for evaluating the performance of the
technical staff.
• The definition and derivation of software metrics are usually based on assumptions which are not
standardized and may depend upon the tools available and the working environment.
• Most of the predictive models rely on estimates of certain variables which are often not known
precisely.

Q 5) Coupling and Cohesion

Module Coupling

In software engineering, coupling is the degree of interdependence between software
modules. Two modules that are tightly coupled are strongly dependent on each other. However,
two modules that are loosely coupled are not dependent on each other. Uncoupled modules have
no interdependence at all between them.
There are various types of coupling techniques

A good design is the one that has low coupling. Coupling is measured by the number of relations
between the modules. That is, the coupling increases as the number of calls between modules
increase or the amount of shared data is large. Thus, it can be said that a design with high
coupling will have more errors.
Types of Module Coupling

1. No Direct Coupling: There is no direct coupling between M1 and M2.

In this case, modules are subordinates to different modules. Therefore, no direct coupling.
2. Data Coupling: When data of one module is passed to another module, this is called data
coupling.

3. Stamp Coupling: Two modules are stamp coupled if they communicate using composite data
items such as structure, objects, etc. When the module passes non-global data structure or entire
structure to another module, they are said to be stamp coupled. For example, passing structure
variable in C or object in C++ language to a module.
4. Control Coupling: Control coupling exists between two modules if data from one module is
used to direct the order of instruction execution in another.
5. External Coupling: External Coupling arises when two modules share an externally imposed
data format, communication protocols, or device interface. This is related to communication to
external tools and devices.
6. Common Coupling: Two modules are common coupled if they share information through
some global data items.

7. Content Coupling: Content coupling exists between two modules if they share code, e.g., a
branch from one module into another module.

Module Cohesion
In computer programming, cohesion refers to the degree to which the elements of a module
belong together. Thus, cohesion measures the strength of the relationships between pieces of
functionality within a given module. For example, in highly cohesive systems, functionality is
strongly related.
Cohesion is an ordinal type of measurement and is generally described as "high cohesion" or
"low cohesion."

Types of Modules Cohesion

1. Functional Cohesion: Functional cohesion is said to exist if the different elements of a
module cooperate to achieve a single function.
2. Sequential Cohesion: A module is said to possess sequential cohesion if the elements of
the module form the components of a sequence, where the output from one component of
the sequence is input to the next.
3. Communicational Cohesion: A module is said to have communicational cohesion, if all
tasks of the module refer to or update the same data structure, e.g., the set of functions
defined on an array or a stack.
4. Procedural Cohesion: A module is said to possess procedural cohesion if the set of functions
of the module are all parts of a procedure in which a particular sequence of steps has to be
carried out for achieving a goal, e.g., the algorithm for decoding a message.
5. Temporal Cohesion: When a module contains functions that are related by the fact
that they all must be executed in the same time span, the module is said to exhibit
temporal cohesion.
6. Logical Cohesion: A module is said to be logically cohesive if all the elements of the
module perform similar operations, e.g., error handling, data input and data
output.
7. Coincidental Cohesion: A module is said to have coincidental cohesion if it performs a
set of tasks that are associated with each other very loosely, if at all.
Differentiate between Coupling and Cohesion
1. Coupling is also called Inter-Module Binding, whereas cohesion is also called
Intra-Module Binding.
2. Coupling shows the relationships between modules, whereas cohesion shows the
relationships within a module.
3. Coupling shows the relative independence between the modules, whereas cohesion
shows the module's relative functional strength.
4. While creating a design, you should aim for low coupling, i.e., dependency among
modules should be minimal; by contrast, you should aim for high cohesion, i.e., a
cohesive component/module focuses on a single function (i.e., single-mindedness)
with little interaction with other modules of the system.
5. In coupling, modules are linked to other modules, whereas in cohesion, the module
focuses on a single thing.
