ASSIGNMENT 2019
1. Statement of problem & system study - The system development process starts with the statement of the problem and a system study. In this step we gather knowledge about the existing system from every available source, both computerized and manual. With the help of this knowledge we find the defects of the present software system that need to be changed for improvement. The important points covered at this stage are:
1. Full knowledge of the problems and errors.
2. The scope for improvement.
3. The targets to be achieved.
4. The benefits that the new software should provide.
5. The areas of the plan that will be affected by the change.
While we study the problems it is also necessary to think about alternative solutions and their cost, which should stay within the user's budget. This improvement requires a great deal of skill and attention.
2. Feasibility study - On the basis of the first step's results we move to the next step, the feasibility study. In this step we consider the present system and the future system and compare them. The areas of comparison are skilled manpower, the estimated time period, and other important factors. A feasibility study helps us decide the important questions:
o Is this plan in our favor or not?
o Can we arrange the required resources or not?
o Does the plan need rethinking?
Several types of feasibility are checked:
1. Technical feasibility -
Do we have the required technology?
Can we develop the new system with the available tools?
Can the future system provide the results as required?
Whether the new system will be suitable for the user is checked by an expert.
For example: suppose the software is actually required to be developed in Visual Basic with Oracle at the back end, but the available hardware (say, an underpowered processor with a short word length) cannot support these tools; the software will then not be technically feasible. Technical feasibility is concerned with whether the technology and tools in use satisfy the needs of the system.
2. Social feasibility - This is the study of user behavior: whether people will like or dislike the new software.
3. Economic feasibility - This factor determines whether the benefits and savings of the new software exceed those of the old software.
4. Legal feasibility - Legal feasibility determines whether the new software complies with government rules or not. According to the results of the feasibility study, analysis proceeds to:
Formulate the different solution plans.
Examine the alternative solution plans and their benefits, and compare them.
Find the best option and analyze it.
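The economic feasibility factor above can be sketched as a toy cost-benefit check. All the figures here are invented purely for illustration; a real study would use estimated costs and savings from the organization:

```python
# Economic feasibility sketch: do the new software's benefits and
# savings exceed its cost, compared with keeping the old system?
development_cost = 50_000               # assumed one-time cost
yearly_saving_over_old_system = 20_000  # assumed yearly saving
years_of_use = 5

total_benefit = yearly_saving_over_old_system * years_of_use
net_benefit = total_benefit - development_cost

# The project is economically feasible when benefits exceed cost.
economically_feasible = total_benefit > development_cost
assert net_benefit == 50_000
assert economically_feasible
```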
Software requirement analysis and specification - Analysis is a study of the following factors, which play a major role in this step:
o the many kinds of activity performed by the system;
o the connections between the various functions and subsystems;
o finally, the relationships across the boundary of the system.
Requirement analysis - The main objective of requirement analysis is to understand what the user expects from this software, and to collect data and information about:
o working capacity
o performance
o ease of use
o ease of maintenance
During this process several kinds of tools and methods are used: flow charts, collected data, diagrams, etc. are part of this exercise. After all the problems and needs regarding this have been resolved, the information is organized into a software requirement specification document.
Software requirement specification - This topic covers the following points:
o All the user's requirements arranged in a systematic way.
o The nature of its interfaces.
o The hardware needed.
o The basis of the agreement.
o Moral and legal coordination between client and developer.
o A detailed plan.
o Analysis and confirmation by the customer that it has all the qualities the customer expects.
o A basis for software engineers to develop a solution.
Software design and specification - During this step the requirement specification is converted into a form that can be implemented in a programming language. We have two types of approaches:
1. Traditional approach - This approach is divided into two parts:
First part -
1. The specific requirements of the software are worked out.
2. The structured analysis is converted into a software design.
3. Analysis of the various functions and data flow diagrams are part of structured analysis.
Second part - Architectural design takes place after structured analysis. It determines:
1. Which components are required.
2. The general structure of the software.
3. The programs provided by every design.
4. The interfaces between modules.
5. The database and the output formats of the system.
2. Object-oriented design - In this design the various objects arising in the problem domain are identified, and the relationships between these objects are figured out.
Coding and module testing - The coding phase comes after software design. Coding is the process by which we convert the design into a programming language. Every part of the design becomes a program module. Here every module is checked to make sure that it meets its requirements.
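As a small sketch of module testing (the module and function names here are illustrative, not from the source), a single program module can be checked against its specification like this:

```python
# A hypothetical program module produced from one part of the design.
def simple_interest(principal, rate, years):
    """Return simple interest for the given principal, rate (%) and time."""
    if principal < 0 or rate < 0 or years < 0:
        raise ValueError("inputs must be non-negative")
    return principal * rate * years / 100

# Module test: verify the module behaves according to its need/spec.
assert simple_interest(1000, 5, 2) == 100.0
assert simple_interest(0, 5, 2) == 0.0
```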
Integration and system testing - In this phase all the modules are tested jointly as a whole system, according to the architectural design. The developer takes this step to find out whether the interconnections between modules are correct or not. Effective testing helps to achieve:
1. Production of high-quality software.
2. Greater user satisfaction.
3. Lower cost of maintenance.
4. Accuracy.
5. Assured results.
The system is tested to determine whether it conforms to the SRS or not. Finally, this test is performed in the client's presence.
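A minimal sketch of the idea (the function names are illustrative): after each module passes its own tests, an integration test exercises the modules together through their interfaces to check that the interconnections are correct.

```python
# Two independently tested modules...
def parse_amount(text):
    """Module A: convert user input like '1,200' to an integer."""
    return int(text.replace(",", ""))

def apply_discount(amount, percent):
    """Module B: apply a percentage discount."""
    return amount - amount * percent // 100

# Integration test: the output of module A feeds module B,
# verifying the interconnection between the two modules.
assert apply_discount(parse_amount("1,200"), 10) == 1080
```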
System Implementation - System implementation means delivering and installing the software at the client's site. We have three types of implementation:
1. Direct conversion
2. Phased conversion
3. Parallel conversion
System Maintenance - This step is required after the customer starts using our software and encounters some problems. These problems can be related to the website, installation, or operation. Maintenance is divided into three parts:
o Corrective maintenance - correcting faults that were not found or discovered during the software development process.
o Perfective maintenance - in this step the functions performed by the software are enhanced according to the needs of the customer.
o Adaptive maintenance - transforming the software to a new operating system, a new environment, or a new computer is called adaptive maintenance.
2)
Abstraction
An abstraction is a tool that enables a designer to consider a component at an abstract level without bothering about the internal details of its implementation. Abstraction can be applied to existing elements as well as to the component being designed.
There are two common abstraction mechanisms:
1. Functional Abstraction
2. Data Abstraction
Functional Abstraction
i. A module is specified by the function it performs.
ii. The details of the algorithm used to accomplish the function are not visible to the user of the function.
Functional abstraction forms the basis of function-oriented design approaches.
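As a sketch of functional abstraction (the function and algorithm choice are my own illustration, not from the source): the caller knows only what the function computes, while the algorithm inside stays hidden.

```python
# Functional abstraction: the caller only needs to know that this
# returns a square root; the algorithm (Newton's method) is an
# internal detail that could be swapped without affecting callers.
def sqrt_newton(x, tolerance=1e-10):
    """Return the square root of a non-negative number x."""
    if x < 0:
        raise ValueError("x must be non-negative")
    if x == 0:
        return 0.0
    guess = x if x >= 1 else 1.0
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2  # Newton iteration
    return guess

assert abs(sqrt_newton(9) - 3.0) < 1e-6
```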
Data Abstraction
Details of the data elements are not visible to the users of the data. Data abstraction forms the basis of object-oriented design approaches.
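A small sketch of data abstraction (the Stack class is my own example): users see only the operations, not how the data is actually represented.

```python
# Data abstraction: users of Stack see push/pop/is_empty, not the
# fact that items are kept in a Python list (an internal detail).
class Stack:
    def __init__(self):
        self._items = []   # hidden representation

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def is_empty(self):
        return not self._items

s = Stack()
s.push(1)
s.push(2)
assert s.pop() == 2
assert not s.is_empty()
```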
3)
Modularity
Modularity refers to the division of software into separate modules, which are named and addressed differently and are later integrated to obtain the complete, functional software. It is the only property that allows a program to be intellectually manageable. Single large programs are difficult to understand and read because of the large number of reference variables, control paths, global variables, etc.
The desirable properties of a modular system are:
o Each module is a well-defined system that can be used with other applications.
o Each module has single specified objectives.
o Modules can be separately compiled and saved in the library.
o Modules should be easier to use than to build.
o Modules are simpler from outside than inside.
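A minimal sketch of these properties (the module names and functions are illustrative): each module has a single objective and a simple outside interface, so it can be reused by other programs.

```python
# tax.py -- one module, one specified objective: compute tax.
def tax(amount, rate):
    """Return the tax on an amount at the given percentage rate."""
    return amount * rate / 100

# billing.py -- another module, reusing the first through its
# simple outside interface (a single well-named function).
def total_bill(amount, rate):
    """Return the amount plus tax."""
    return amount + tax(amount, rate)

# Simple from the outside: callers need only these two names.
assert tax(200, 10) == 20.0
assert total_bill(200, 10) == 220.0
```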
Advantages and Disadvantages of Modularity:
Advantages of Modularity
There are several advantages of Modularity
o It allows large programs to be written by several different people.
o It encourages commonly used routines to be placed in a library and used by other programs.
o It simplifies the overlay procedure of loading a large program into main storage.
o It provides more checkpoints to measure progress.
o It provides a framework for complete testing and makes modules more accessible to test.
o It produces well-designed and more readable programs.
Disadvantages of Modularity
There are several disadvantages of Modularity
o Execution time may be, though not necessarily, longer.
o Storage size may be, though not necessarily, increased.
o Compilation and loading time may be longer.
o Inter-module communication problems may be increased.
o More linkage is required, run time may be longer, more source lines must be written, and more documentation has to be done.
Modular Design
Modular design reduces design complexity and results in easier and faster implementation by allowing parallel development of the various parts of a system. We discuss the different aspects of modular design in detail in this section:
1. Functional Independence: Functional independence is achieved by developing functions that perform only one kind of task and do not interact excessively with other modules. Independence is important because it makes implementation easier and faster. Independent modules are easier to maintain and test, reduce error propagation, and can be reused in other programs as well. Thus, functional independence is a good design feature which ensures software quality.
It is measured using two criteria:
o Cohesion: It measures the relative functional strength of a module.
o Coupling: It measures the relative interdependence among modules.
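A small illustration of the two criteria (the function names are my own, not from the source): each function below is cohesive because it performs only one kind of task, and the two are loosely coupled because they communicate only through parameters and return values, not shared state.

```python
# High cohesion: each function performs only one kind of task.
def average(numbers):
    """Compute the arithmetic mean of a list of numbers."""
    return sum(numbers) / len(numbers)

# Low coupling: grade() depends on average() only through its
# return value, not through global variables or shared state.
def grade(marks):
    """Return 'pass' if the average mark is at least 40."""
    return "pass" if average(marks) >= 40 else "fail"

assert average([30, 50, 70]) == 50.0
assert grade([30, 50, 70]) == "pass"
```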
2. Information hiding: The principle of information hiding suggests that modules should be characterized by the design decisions they hide from all others. In other words, modules should be specified so that the data included within a module is inaccessible to other modules that have no need for such information.
The use of information hiding as a design criterion for modular systems provides the greatest benefits when modifications are required during testing and, later, during software maintenance. This is because, as most data and procedures are hidden from other parts of the software, inadvertent errors introduced during modification are less likely to propagate to other locations within the software.
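As a sketch of information hiding (the Account class and its fields are my own illustration): the internal data and design decisions are hidden, so other modules can only use the public operations, and later changes to the internals cannot break them.

```python
# Information hiding: _balance and _log are design decisions
# hidden inside the module; other modules use only deposit()
# and balance(), so the internals can change safely later.
class Account:
    def __init__(self):
        self._balance = 0       # hidden data
        self._log = []          # hidden design decision (audit log)

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount
        self._log.append(("deposit", amount))

    def balance(self):
        return self._balance

a = Account()
a.deposit(100)
a.deposit(50)
assert a.balance() == 150
```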
4)
Strategy of Design
A good system design strategy is to organize the program modules in a way that makes them easy to develop and, later, easy to change. Structured design methods help developers to deal with the size and complexity of programs. Analysts generate instructions for the developers about how code should be written and how pieces of code should fit together to form a program.
To design a system, there are two possible approaches:
1. Top-down Approach
2. Bottom-up Approach
1. Top-down Approach: This approach starts with the identification of the main components
and then decomposing them into their more detailed sub-components.
2. Bottom-up Approach: A bottom-up approach begins with the lower-level details and moves up the hierarchy, as shown in fig. This approach is suitable when a system already exists.
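A top-down decomposition can be sketched as follows (the payroll functions and the flat 10% deduction are invented for illustration): the main component is written first, in terms of sub-components that are refined afterwards.

```python
# Top-down: the main component is identified first, expressed in
# terms of sub-components, which are then decomposed in detail.
def net_pay(hours, rate):
    """Main component: pay after deductions."""
    gross = gross_pay(hours, rate)
    return gross - deductions(gross)

def gross_pay(hours, rate):
    """Sub-component: pay before deductions."""
    return hours * rate

def deductions(gross):
    """Sub-component: assume a flat 10% deduction for illustration."""
    return gross * 10 // 100

assert net_pay(40, 20) == 720
```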
Q 3) The product of the requirements stage of the software development process
is the Software Requirements Specification (SRS) (also called a requirements
document). This report lays a foundation for software engineering activities and is
constructed once all the requirements have been elicited and analyzed. The SRS is a formal
report which acts as a representation of the software and enables the customers to review
whether it (the SRS) meets their requirements. It also comprises the user requirements for a
system as well as detailed specifications of the system requirements.
The SRS is a specification for a specific software product, program, or set of applications
that perform particular functions in a specific environment. It serves several goals
depending on who is writing it. First, the SRS may be written by the client of a system.
Second, the SRS may be written by a developer of the system. The two approaches create
entirely different situations and establish different purposes for the document. In the first
case, the SRS is used to define the needs and expectations of the users. In the second
case, the SRS is written for a different purpose and serves as a contract document between
customer and developer.
1. Correctness: User review is used to ensure the accuracy of the requirements stated in the SRS. The SRS is said to be correct if it covers all the requirements that are actually expected of the system.
2. Completeness: The SRS is complete if, and only if, it includes the following elements:
(1). All essential requirements, whether relating to functionality, performance, design,
constraints, attributes, or external interfaces.
(2). Definition of their responses of the software to all realizable classes of input data in all
available categories of situations.
Note: It is essential to specify the responses to both valid and invalid values.
(3). Full labels and references to all figures, tables, and diagrams in the SRS and definitions of all
terms and units of measure.
3. Consistency: The SRS is consistent if, and only if, no subset of the individual requirements described in it conflict. There are three types of possible conflict in the SRS:
(1). The specified characteristics of real-world objects may conflict. For example,
(a) The format of an output report may be described in one requirement as tabular but in another
as textual.
(b) One condition may state that all lights shall be green while another states that all lights shall
be blue.
(2). There may be a logical or temporal conflict between two specified actions. For example,
(a) One requirement may state that the program will add two inputs, and another may state that the program will multiply them.
(b) One requirement may state that "A" must always follow "B," while another requires that "A" and "B" occur together.
(3). Two or more requirements may define the same real-world object but use different terms for
that object. For example, a program's request for user input may be called a "prompt" in one requirement and a "cue" in another. The use of standard terminology and descriptions promotes consistency.
4. Unambiguousness: The SRS is unambiguous when every stated requirement has only one interpretation. This means that each element is uniquely interpreted. If a term is used with multiple meanings, the requirements document should clarify the intended meaning in the SRS so that it is clear and simple to understand.
5. Ranking for importance and stability: The SRS is ranked for importance and stability if
each requirement in it has an identifier to indicate either the significance or stability of that
particular requirement.
Typically, all requirements are not equally important. Some requirements may be essential, especially for life-critical applications, while others may be desirable. Each requirement should be identified to make these differences clear and explicit. Another way to rank requirements is to distinguish classes of requirements as essential, conditional, and optional.
6. Modifiability: The SRS should be made as modifiable as possible and should be capable of quickly accommodating changes to the system to some extent. Modifications should be properly indexed and cross-referenced.
7. Verifiability: The SRS is verifiable when the specified requirements can be checked by a cost-effective process to determine whether the final software meets them. The requirements are verified with the help of reviews.
8. Traceability: The SRS is traceable if the origin of each of the requirements is clear and if it facilitates the referencing of each requirement in future development or enhancement documentation.
There are two types of Traceability:
1. Backward Traceability: This depends upon each requirement explicitly referencing its
source in earlier documents.
2. Forward Traceability: This depends upon each element in the SRS having a unique name or
reference number.
The forward traceability of the SRS is especially crucial when the software product enters the
operation and maintenance phase. As code and design documents are modified, it is necessary to be able to ascertain the complete set of requirements that may be affected by those modifications.
9. Design Independence: There should be an option to select from multiple design alternatives
for the final system. More specifically, the SRS should not contain any implementation details.
10. Testability: An SRS should be written in such a method that it is simple to generate test
cases and test plans from the report.
11. Understandable by the customer: An end user may be an expert in his/her specific domain but might not be trained in computer science. Hence, the use of formal notations and symbols should be avoided as far as possible. The language should be kept simple and clear.
12. The right level of abstraction: If the SRS is written for the requirements stage, the details should be explained explicitly. Whereas, for a feasibility study, less detail can be used. Hence, the level of abstraction varies according to the objective of the SRS.
Properties of a good SRS document
The essential properties of a good SRS document are the following:
Concise: The SRS report should be concise and at the same time, unambiguous, consistent, and
complete. Verbose and irrelevant descriptions decrease readability and also increase error
possibilities.
Structured: It should be well-structured. A well-structured document is simple to understand and modify. In practice, the SRS document undergoes several revisions to cope with the user requirements. Often, user requirements evolve over a period of time. Therefore, to make modifications to the SRS document easy, it is vital to make the report well-structured.
Black-box view: It should only define what the system should do and refrain from stating how to do it. This means that the SRS document should define the external behavior of the system
and not discuss the implementation issues. The SRS report should view the system to be
developed as a black box and should define the externally visible behavior of the system. For this
reason, the SRS report is also known as the black-box specification of a system.
Conceptual integrity: It should show conceptual integrity so that the reader can easily understand it.
Response to undesired events: It should characterize acceptable responses to undesired events. These are called the system's responses to exceptional conditions.
Verifiable: All requirements of the system, as documented in the SRS document, should be
correct. This means that it should be possible to decide whether or not requirements have been
met in an implementation.
Q 4) Software development models are the various methods that may be chosen for
developing an application, depending on the application's goals. There are many
development life cycle models that have been built to accomplish different required
objectives. The models specify the various phases of the process and the order in which
they are performed.
Software development models describe the stages of the software life cycle and the order
in which those stages are carried out. There are plenty of models, and many
organizations adopt their own, yet all contain very similar patterns.
Each stage produces deliverables required by the next stage in the life
cycle. Requirements are translated into a design. Code is produced during implementation,
driven by the design.
Testing verifies the deliverables of the implementation stage against the
requirements.
The main types of software development model are as follows:
1. Waterfall model
2. V model
3. Incremental model
4. RAD model
5. Agile model
6. Iterative model
7. Spiral model
Waterfall Model
This was the first development model to be introduced. It is also referred to as a linear-sequential life cycle model. The waterfall model is very easy to understand and implement. Each stage must be finished completely before the next stage can begin. At the end of every stage an assessment takes place to establish whether the application is on the right track and whether to continue or discard the project. In the waterfall model, stages do not go backward.
Advantages:
Simple and easy to use.
Easy to manage due to the rigidity of the model.
Stages are developed and completed one at a time.
Works well for smaller projects where requirements are very well understood.
Disadvantages:
Once a project is in the testing phase, it is very difficult to go back and change anything that was not well thought out in the concept phase.
No working software is produced until late in the life cycle.
High amounts of risk and uncertainty.
Not a good model for complex and object-oriented projects.
A poor model for large and long-lasting projects.
Not suitable for software whose requirements are at a moderate to high risk of changing.
Use of the Waterfall Model:
Incremental model
In this type of model the entire requirement is divided into various builds. Multiple development cycles take place here, making the life cycle a "multi-waterfall" sequence. Cycles are divided into smaller, more easily managed parts. Each part goes through the requirements, design, implementation and testing phases. A working version of the application is produced during the first part, so we have working software early in the software life cycle. Each subsequent release of a part adds functionality to the previous release. The process continues until the complete requirement is achieved.
Advantages:
Produces working software quickly and early during the life cycle of the application.
More flexible - less costly to change scope and requirements.
Easier to test and debug within a smaller iteration.
The customer can respond to each build.
Lower initial delivery cost.
Easier to manage risk, because risky pieces are identified and handled within their own iteration.
Drawbacks:
Needs good planning and design up front.
Needs a clear and complete definition of the whole system before it can be broken down and built incrementally.
Total cost is higher than the waterfall model.
Use of the Incremental model:
Requirements of the complete system are clearly defined and understood.
Major requirements must be defined; however, some details can evolve with time.
There is a need to get a product to the market early.
A new technology is being used.
Resources with the needed skill set are not available.
There are some high-risk features and goals.
RAD Model
RAD stands for Rapid Application Development model. It is a type of incremental model. In the RAD model the components or functions are developed in parallel as if they were mini projects. The developments are time-boxed, delivered, and then assembled into a working prototype. The RAD model can quickly give the customer something to see and use and to provide feedback regarding the delivery and their requirements.
Agile model
This model is also a type of incremental model. Software is developed in incremental, rapid cycles. This results in small incremental releases, with each release building on previous functionality. Each release is thoroughly tested to ensure software quality is maintained. It is used for time-critical applications. Extreme Programming (XP) is currently one of the best-known agile development life cycle models.
Advantages:
Customer satisfaction through rapid, continuous delivery of useful software.
People and interactions are emphasized rather than processes and tools. Customers, developers and testers constantly interact with each other.
Working software is delivered frequently, in weeks rather than months.
Face-to-face conversation is the best form of communication.
Close, daily cooperation between business people and developers.
Continuous attention to technical excellence and good design.
Regular adaptation to changing circumstances.
Even late changes in requirements are welcomed.
Drawbacks:
In the case of some software deliverables, especially large ones, it is difficult to assess the effort required at the beginning of the software development life cycle.
There is a lack of emphasis on necessary design and documentation.
The project can easily be taken off track if the customer representative is not clear about the final outcome they want.
Only experienced programmers are capable of taking the kinds of decisions required during the development process. Hence it has no place for novice programmers unless they are combined with experienced resources.
Use of the agile model:
When new changes need to be implemented. The freedom agile gives to change is very important. New changes can be implemented at very little cost because of the frequency of the new increments that are produced.
To implement a new feature the developers need to lose only the work of a few days, or even only hours, to roll back and implement it.
This model requires far less planning than the waterfall model before getting started with the application. It assumes that the end users' needs are ever changing in a dynamic business world. Changes can be discussed and features can be newly added or removed based on feedback. This effectively gives the customer the finished system they want or need.
Both developers and end users alike find they also get more freedom of time and options than if the software were developed in a more rigid, sequential way. Having options gives them the ability to leave important decisions until more or better information, or even complete hosting programs, are available, meaning the project can continue to move forward without fear of coming to a sudden standstill.
Iterative model
This type of life cycle model does not attempt to start with a full specification of requirements. Instead, development begins by specifying and implementing just part of the software, which can then be reviewed in order to identify further requirements. This process is then repeated, producing a new version of the software for each cycle of the model.
Advantages:
In the iterative model we can only create a high-level design of the software before we actually begin to build the product and define the design solution for the entire product. Later on we can design and build a skeleton version of it, and then evolve the design based on what has been built.
Here we are building and improving the product step by step. Hence we can track the defects at early stages. This avoids the downward flow of defects.
In this model we can get reliable user feedback. When presenting sketches and blueprints of the product to users for their feedback, we are effectively asking them to imagine how the product will work.
In the iterative model less time is spent on documenting and more time is given to designing.
Drawbacks:
Each phase of an iteration is rigid, with no overlaps.
Costly system architecture or design issues may arise because not all requirements are gathered up front for the entire lifecycle.
Use of the iterative model:
Requirements of the complete system are clearly defined and understood.
The iterative model is used when the application is large.
Major requirements must be defined; however, some details can evolve with time.
Spiral model
This model is similar to the incremental model, with more emphasis placed on risk analysis. The spiral model has four phases: planning, risk analysis, engineering and evaluation. A software project repeatedly passes through these phases in iterations called spirals. In the baseline spiral, starting in the planning phase, requirements are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral. Requirements are gathered during the planning phase. In the risk analysis phase, a process is undertaken to identify risks and alternative solutions. A prototype is produced at the end of the risk analysis phase.
Software is produced in the engineering phase, along with testing at the end of the phase. The evaluation phase allows the customer to evaluate the output of the project to date before the project proceeds to the next spiral.
Benefits:
A high amount of risk analysis, hence avoidance of risk is enhanced.
Good for large and mission-critical projects.
Strong approval and documentation control.
Additional functionality can be added at a later date.
Software is produced early in the software life cycle.
Drawbacks:
Can be a costly model to use.
Risk analysis requires highly specific expertise.
The project's success is highly dependent on the risk analysis phase.
Does not work well for smaller projects.
Use of the Spiral model:
When cost and risk evaluation is important.
For medium to high-risk projects.
When long-term project commitment is unwise because of potential changes to economic priorities.
When users are unsure of their needs.
When requirements are complex.
For a new product line.
When significant changes are expected.
Q 5) A data flow diagram (or DFD) is a graphical representation of the flow of data through
an information system. It shows how information is input to and output from the system,
the sources and destinations of that information, and where that information is stored.
It shows how data enters and leaves the system, what changes the information, and where
data is stored.
The objective of a DFD is to show the scope and boundaries of a system as a whole. It
may be used as a communication tool between a system analyst and any person who
plays a part in the system, and it acts as a starting point for redesigning a system. The DFD is
also called a data flow graph or bubble chart.
Symbols of DFD:
Circle: A circle (bubble) shows a process that transforms data inputs into data outputs.
Data Flow: A curved line shows the flow of data into or out of a process or data store.
Data Store: A set of parallel lines shows a place for the collection of data items. A data
store indicates that data is stored and can be used at a later stage or by other
processes in a different order. The data store can contain an element or a group of elements.
Source or Sink: Source or Sink is an external entity and acts as a source of system inputs
or sink of system outputs.
The DFD may be used to represent a system or software at any level of abstraction. In fact,
DFDs may be partitioned into levels that represent increasing information flow and
functional detail. Levels in a DFD are numbered 0, 1, 2, or beyond. Here, we will see
primarily three levels of data flow diagram: 0-level DFD, 1-level DFD,
and 2-level DFD.
0-level DFD
It is also known as the fundamental system model or context diagram. It represents the entire
software requirement as a single bubble, with input and output data denoted by incoming
and outgoing arrows. The system is then decomposed and described as a DFD with
multiple bubbles. Parts of the system represented by each of these bubbles are then
decomposed and documented as more and more detailed DFDs. This process may be
repeated at as many levels as necessary until the program at hand is well understood. It is
essential to preserve the number of inputs and outputs between levels; this concept is
called leveling by DeMarco. Thus, if bubble "A" has two inputs x1 and x2 and one
output y, then the expanded DFD that represents "A" should have exactly two external
inputs and one external output, as shown in fig:
2-Level DFD
2-level DFD goes one step deeper into parts of the 1-level DFD. It can be used to plan
or record the specific/necessary detail about the system's functioning.
Short Answers:
Q 1) Coding Standards
Different modules specified in the design document are coded in the coding phase
according to the module specifications. The main goal of the coding phase is to translate
the design document prepared after the design phase into code through a high-level
language and then to unit test this code.
Good software development organizations want their programmers to adhere to a
well-defined and standard style of coding called coding standards. They usually make
their own coding standards and guidelines depending on what suits their organization best
and on the types of software they develop. It is very important for programmers
to follow the coding standards; otherwise the code will be rejected during code review.
Purpose of Having Coding Standards:
A coding standard gives a uniform appearance to the codes written by different engineers.
It improves readability, and maintainability of the code and it reduces complexity also.
It helps in code reuse and helps to detect errors easily.
It promotes sound programming practices and increases efficiency of the programmers.
Some of the coding standards are given below:
1. Limited use of globals:
These rules specify which types of data can be declared global and which cannot.
3. Naming conventions for local variables, global variables, constants and functions:
Some of the naming conventions are given below:
Meaningful and understandable variable names help anyone to understand the
reason for using them.
Local variables should be named using camel case lettering starting with a small
letter (e.g. localData), whereas global variable names should start with a capital
letter (e.g. GlobalData). Constant names should be formed using capital letters
only (e.g. CONS_DATA).
It is better to avoid the use of digits in variable names.
The names of functions should be written in camel case starting with small
letters.
The name of a function must describe the reason for using the function clearly
and briefly.
4. Indentation:
Proper indentation is very important to increase the readability of the code. For making
the code readable, programmers should use White spaces properly. Some of the spacing
conventions are given below:
There must be a space after giving a comma between two function arguments.
Each nested block should be properly indented and spaced.
Proper Indentation should be there at the beginning and at the end of each block
in the program.
All braces should start from a new line, and the code following the end of braces
should also start from a new line.
5. Error return values and exception handling conventions:
All functions that encounter an error condition should return a 0 or 1, to
simplify debugging.
On the other hand, coding guidelines give some general suggestions regarding the coding style
to be followed for the betterment of understandability and readability of the code.
Some of the coding guidelines are given below:
Q 2) Software maintenance is a part of the Software Development Life Cycle. Its main purpose is
to modify and update a software application after delivery to correct faults and to improve
performance. Software is a model of the real world; when the real world changes, the
software requires alteration wherever possible.
Software maintenance is a vast activity which includes optimization, error correction,
deletion of discarded features, and enhancement of existing features. Since these changes
are necessary, a mechanism must be created for estimating, controlling, and making
modifications. The essential part of software maintenance is the preparation of an
accurate plan during the development cycle.
Typically, maintenance takes up about 40-80% of the project cost, usually closer to the
higher end. Hence, a focus on maintenance definitely helps keep costs down.
• The software maintenance process includes a maintenance plan which covers software
preparation, problem identification, and product configuration management.
• The problem analysis process includes checking a request's validity, examining it, coming up
with a solution, and finally getting all the required support to apply the modification.
• The acceptance process confirms the changes with the individual who raised the request.
• The platform migration process is used if the software needs to be ported to another
platform without any change in functionality.
• Programming Language
Q 4) Software Metrics
A software metric is a measure of software characteristics which are measurable or
countable. Software metrics are valuable for many reasons, including measuring software
performance, planning work items, and measuring productivity, among many other uses.
Within the software development process there are many metrics that are all connected.
Software metrics are similar to the four functions of management: planning, organization,
control, and improvement.
Classification of Software Metrics
Software metrics can be classified into two types as follows:
1. Product Metrics: These are the measures of various characteristics of the software product.
The two important software characteristics are:
1. Size and complexity of software.
2. Quality and reliability of software.
These metrics can be computed for different stages of SDLC.
2. Process Metrics: These are the measures of various characteristics of the software
development process. For example, the efficiency of fault detection. They are used to measure
the characteristics of methods, techniques, and tools that are used for developing software.
Types of Metrics
Internal metrics: Internal metrics are the metrics used for measuring properties that are viewed
to be of greater importance to a software developer. For example, Lines of Code (LOC) measure.
External metrics: External metrics are the metrics used for measuring properties that are viewed
to be of greater importance to the user, e.g., portability, reliability, functionality, usability, etc.
Hybrid metrics: Hybrid metrics are the metrics that combine product, process, and resource
metrics. For example, cost per FP where FP stands for Function Point Metric.
Project metrics: Project metrics are the metrics used by the project manager to check the
project's progress. Data from past projects are used to collect various metrics, like time and
cost; these estimates are used as a baseline for new software. Note that as the project proceeds,
the project manager will check its progress from time to time and will compare the effort, cost,
and time with the original effort, cost, and time. These metrics are used to decrease the
development cost, time, effort, and risks. The project quality can also be improved: as quality
improves, the number of errors and the time and cost required are also reduced.
Advantage of Software Metrics
Comparative study of various design methodologies of software systems.
For analysis, comparison, and critical study of different programming languages concerning
their characteristics.
In comparing and evaluating the capabilities and productivity of people involved in software
development.
In the preparation of software quality specifications.
In the verification of compliance of software systems with requirements and specifications.
In making inferences about the effort to be put into the design and development of software
systems.
In getting an idea about the complexity of the code.
In deciding whether further division of a complex module is required or not.
In guiding resource managers toward proper utilization of resources.
In comparison and making design tradeoffs between software development and maintenance
cost.
In providing feedback to software managers about the progress and quality during various phases
of the software development life cycle.
In the allocation of testing resources for testing the code.
Disadvantage of Software Metrics
The application of software metrics is not always easy, and in some cases, it is difficult and
costly.
The verification and justification of software metrics are based on historical/empirical data
whose validity is difficult to verify.
These are useful for managing software products but not for evaluating the performance of the
technical staff.
The definition and derivation of software metrics are usually based on assumptions which are not
standardized and may depend upon the tools available and the working environment.
Most of the predictive models rely on estimates of certain variables which are often not known
precisely.
Module Coupling
A good design is one that has low coupling. Coupling is measured by the number of relations
between modules. That is, the coupling increases as the number of calls between modules
increases or as the amount of shared data grows. Thus, it can be said that a design with high
coupling will have more errors.
Types of Module Coupling
1. No Direct Coupling: In this case, modules are subordinate to different modules. Therefore,
there is no direct coupling.
2. Data Coupling: When data of one module is passed to another module, this is called data
coupling.
3. Stamp Coupling: Two modules are stamp coupled if they communicate using composite data
items such as structure, objects, etc. When the module passes non-global data structure or entire
structure to another module, they are said to be stamp coupled. For example, passing structure
variable in C or object in C++ language to a module.
4. Control Coupling: Control coupling exists between two modules if data from one module is
used to direct the order of instruction execution in another.
5. External Coupling: External Coupling arises when two modules share an externally imposed
data format, communication protocols, or device interface. This is related to communication to
external tools and devices.
6. Common Coupling: Two modules are common coupled if they share information through
some global data items.
7. Content Coupling: Content coupling exists between two modules if they share code, e.g., a
branch from one module into another module.
Module Cohesion
In computer programming, cohesion refers to the degree to which the elements of a module
belong together. Thus, cohesion measures the strength of the relationships between pieces of
functionality within a given module. For example, in highly cohesive systems, functionality is
strongly related.
Cohesion is an ordinal type of measurement and is generally described as "high cohesion" or
"low cohesion."
Coupling shows the relationships between modules, whereas cohesion shows the relationship
within a module.
Coupling shows the relative independence between modules, whereas cohesion shows the
module's relative functional strength.
While creating, you should aim for low coupling, i.e., dependency among modules should be
less; whereas you should aim for high cohesion, i.e., a cohesive component/module focuses on a
single function (i.e., single-mindedness) with little interaction with other modules of the system.
In coupling, modules are linked to other modules, whereas in cohesion the module focuses on a
single thing.