
Introduction to Software Engineering

Outlines:
 Introduction to computer-based-system engineering
 Project Management
 Software Specification
 Requirement Specification
 Requirement Engineering
 System Modelling
 UML Modelling
 Software Prototyping
 Software Design
 Architectural Design
 Object-Oriented Design
 Function-Oriented Design
 User Interface Design
 Quality Assurance
 Processes and Configuration Management
 Introduction to Advanced Issues
 Reusability
 SDLC

1. Introduction to Computer-based-system Engineering


 Software engineering occurs as a consequence of system engineering
 System engineering may take on two different forms depending on the application domain
 “Business process” engineering – conducted when the context of the work focuses on a business enterprise
 Product engineering – conducted when the context of the work focuses on a product that is to be built
 Both forms bring order to the development of computer-based systems
 Both forms work to allocate a role for computer software and to establish the links that tie software to other elements of a
computer-based system
System:
 A purposeful collection of inter-related components working together towards some common objective.
 A system may include software, mechanical, electrical and electronic hardware and be operated by people.
 System components are dependent on other system components
Computer-based System
 A set or arrangement of elements that are organized to accomplish some predefined goal by processing information
 The goal may be to support some business function or to develop a product that can be sold to generate business revenue
 A computer-based system makes use of system elements
 Elements constituting one system may represent one macro element of a still larger system
 Example
o A factory automation system may consist of a numerical control machine, robots, and data entry devices; each can
be its own system
o At the next lower hierarchical level, a manufacturing cell is its own computer-based system that may integrate other
macro elements
 The role of the system engineer is to define the elements of a specific computer-based system in the context of the overall
hierarchy of systems
 A computer-based system makes use of the following four system elements that combine in a variety of ways to transform
information
o Software: computer programs, data structures, and related work products that serve to effect the logical method,
procedure, or control that is required
o Hardware: electronic devices that provide computing capability, interconnectivity devices that enable flow of data,
and electromechanical devices that provide external functions
o People: Users and operators of hardware and software
o Database: A large, organized collection of information that is accessed via software and persists over time
 In addition, the following elements support the use of the system:
o Documentation: Descriptive information that portrays the use and operation of the system
o Procedures: The steps that define the specific use of each system element or the procedural context in which the
system resides
System Engineering Process
 The system engineering process begins with a world view; the business or product domain is examined to ensure that the
proper business or technology context can be established
 The world view is refined to focus on a specific domain of interest
 Within a specific domain, the need for targeted system elements is analyzed
 Finally, the analysis, design, and construction of a targeted system element are initiated
 At the world view level, a very broad context is established
 At the bottom level, detailed technical activities are conducted by the relevant engineering discipline
Business Process Engineering
 “Business process” engineering defines architectures that will enable a business to use information effectively
 It involves the specification of the appropriate computing architecture and the development of the software architecture for
the organization's computing resources
 Three different architectures must be analyzed and designed within the context of business objectives and goals
o The data architecture provides a framework for the information needs of a business (e.g., ERD)
o The application architecture encompasses those elements of a system that transform objects within the data
architecture for some business purpose
o The technology infrastructure provides the foundation for the data and application architectures
 It includes the hardware and software that are used to support the applications and data
Product Engineering
 Product engineering translates the customer's desire for a set of defined capabilities into a working product
 It achieves this goal by establishing a product architecture and a support infrastructure
o Product architecture components consist of people, hardware, software, and data
o Support infrastructure includes the technology required to tie the components together and the information to support
the components
 Requirements engineering elicits the requirements from the customer and allocates function and behavior to each of the four
components
 System component engineering happens next as a set of concurrent activities that address each of the components separately
o Each component takes a domain-specific view but maintains communication with the other domains
o The actual activities of the engineering discipline take on an element view
 Analysis modeling allocates requirements into function, data, and behavior
 Design modeling maps the analysis model into data/class, architectural, interface, and component design

2. Project Management
A project is a well-defined task, which is a collection of several operations done in order to achieve a goal (for example, software
development and delivery). A project can be characterized as follows:
 Every project has a unique and distinct goal.
 A project is not a routine activity or day-to-day operation.
 A project comes with a start time and an end time.
 A project ends when its goal is achieved; hence it is a temporary phase in the lifetime of an organization.
 A project needs adequate resources in terms of time, manpower, finance, material and knowledge-bank.

Software Project
A Software Project is the complete procedure of software development from requirement gathering to testing and maintenance,
carried out according to the execution methodologies, in a specified period of time to achieve intended software product.
Need of software project management
Software is said to be an intangible product. Software development is a relatively new stream in world business, and there is very
little experience in building software products. Most software products are tailor-made to fit the client's requirements. Most
importantly, the underlying technology changes and advances so frequently and rapidly that the experience of one product may not
apply to the next one. All such business and environmental constraints bring risk to software development; hence it is essential
to manage software projects efficiently.
Software projects are governed by the triple constraints of time, cost, and quality. It is an essential part of a software organization to
deliver a quality product, keeping the cost within the client's budget constraint and delivering the project as per schedule. There are
several factors, both internal and external, which may impact this triple-constraint triangle. Any one of the three factors can severely
impact the other two. Therefore, software project management is essential to incorporate user requirements along with budget and time
constraints.
Software Project Manager
A software project manager is a person who undertakes the responsibility of executing the software project. The software project manager
is thoroughly aware of all the phases of the SDLC that the software will go through. The project manager may never be directly involved in
producing the end product, but he controls and manages the activities involved in production.
A project manager closely monitors the development process, prepares and executes various plans, arranges necessary and adequate
resources, and maintains communication among all team members in order to address issues of cost, budget, resources, time, quality
and customer satisfaction.
Let us see a few responsibilities that a project manager shoulders -
Managing People
 Act as project leader
 Liaison with stakeholders
 Managing human resources
 Setting up reporting hierarchy etc.
Managing Project
 Defining and setting up project scope
 Managing project management activities
 Monitoring progress and performance
 Risk analysis at every phase
 Taking necessary steps to avoid or come out of problems
 Acting as project spokesperson
Software Management Activities
Software project management comprises a number of activities, which include planning of the project, deciding the scope of the software
product, estimation of cost in various terms, scheduling of tasks and events, and resource management. Project management activities
may include:
 Project Planning
 Scope Management
 Project Estimation
Project Planning
Software project planning is a task which is performed before the production of software actually starts. It is there for the software
production but involves no concrete activity that has any direct connection with software production; rather, it is a set of multiple
processes which facilitate software production. Project planning may include the following:
Scope Management
It defines the scope of the project; this includes all the activities and processes that need to be done in order to make a deliverable
software product. Scope management is essential because it creates the boundaries of the project by clearly defining what would be done
in the project and what would not be done. This makes the project contain limited and quantifiable tasks, which can easily be documented
and in turn avoid cost and time overruns.
During project scope management, it is necessary to -
 Define the scope
 Decide its verification and control
 Divide the project into various smaller parts for ease of management
 Verify the scope
 Control the scope by incorporating changes to the scope
Project Estimation
For effective management, accurate estimation of various measures is a must. With correct estimation, managers can manage and
control the project more efficiently and effectively.
Project estimation may involve the following:
 Software size estimation
Software size may be estimated either in terms of KLOC (Kilo Lines of Code) or by calculating the number of function points in
the software. Lines of code depend upon coding practices, and function points vary according to the user or software
requirements.
 Effort estimation
The managers estimate effort in terms of personnel requirements and the man-hours required to produce the software. For effort
estimation, the software size should be known. This can be derived from the managers' experience or the organization's historical data,
or the software size can be converted into effort by using some standard formulae.
 Time estimation
Once size and effort are estimated, the time required to produce the software can be estimated. The effort required is
segregated into sub-categories as per the requirement specifications and the interdependency of various components of the software.
Software tasks are divided into smaller tasks, activities or events using a Work Breakdown Structure (WBS). The tasks are
scheduled on a day-to-day basis or in calendar months.
The sum of time required to complete all tasks in hours or days is the total time invested to complete the project.
 Cost estimation
This might be considered as the most difficult of all because it depends on more elements than any of the previous ones. For
estimating project cost, it is required to consider -
o Size of software
o Software quality
o Hardware
o Additional software or tools, licenses etc.
o Skilled personnel with task-specific skills
o Travel involved
o Communication
o Training and support
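The time-estimation roll-up described above can be sketched as a simple traversal over a work breakdown structure. The WBS below, with its task names and hour figures, is a hypothetical example, not data from the text:

```python
# Summing estimated durations over a (hypothetical) work breakdown
# structure: leaves carry hour estimates, internal nodes sum their children.
wbs = {
    "design": {"ui": 16, "database": 24},
    "coding": {"backend": 80, "frontend": 60},
    "testing": 40,
}

def total_hours(node):
    if isinstance(node, dict):
        return sum(total_hours(child) for child in node.values())
    return node  # leaf: estimated hours for one task

total = total_hours(wbs)  # 220 hours for the sample tree above
```

The sum of the leaf estimates is the total time invested to complete the project, exactly as the time-estimation step describes.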
Project Estimation Techniques
We discussed various parameters involving project estimation such as size, effort, time and cost.
Project manager can estimate the listed factors using two broadly recognized techniques –
Decomposition Technique
This technique assumes the software as a product of various compositions.
There are two main models -
 Line of Code: Estimation is done on the basis of the number of lines of code in the software product.
 Function Points: Estimation is done on the basis of the number of function points in the software product.
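The function-point model can be sketched as follows. This is a minimal illustration using the standard average complexity weights and the usual value adjustment factor; the component counts and the degree-of-influence rating are hypothetical example values:

```python
# Minimal function-point sketch using standard average complexity weights.
# FP = UFP * (0.65 + 0.01 * TDI), where TDI is the total degree of
# influence (sum of 14 general system characteristics, each rated 0-5).
AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def function_points(counts, total_degree_of_influence):
    ufp = sum(AVERAGE_WEIGHTS[name] * n for name, n in counts.items())
    vaf = 0.65 + 0.01 * total_degree_of_influence  # value adjustment factor
    return ufp * vaf

# Hypothetical system: 20 inputs, 15 outputs, 10 inquiries, 5 ILFs, 2 EIFs
counts = {
    "external_inputs": 20,
    "external_outputs": 15,
    "external_inquiries": 10,
    "internal_logical_files": 5,
    "external_interface_files": 2,
}
fp = function_points(counts, total_degree_of_influence=35)
```

Because function points count what the software does rather than how it is coded, the same system yields the same count regardless of language, which is why the text notes that lines of code depend on coding practices while function points track requirements.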
Empirical Estimation Technique
This technique uses empirically derived formulae to make estimation. These formulae are based on LOC or FPs.
 Putnam Model
This model was put forward by Lawrence H. Putnam and is based on Norden's frequency distribution (Rayleigh curve). The Putnam
model maps the time and effort required with software size.
 COCOMO
COCOMO stands for COnstructive COst MOdel, developed by Barry W. Boehm. It divides the software product into three
categories of software: organic, semi-detached and embedded.
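As a rough illustration of how such a formula-based estimate works, here is a small sketch of Basic COCOMO, the simplest published form of the model, with Boehm's coefficients for the three categories; the 32-KLOC input is a hypothetical example:

```python
# Basic COCOMO sketch: effort E = a * KLOC^b (person-months) and
# development time T = c * E^d (months), using Boehm's published
# Basic COCOMO coefficients for the three project categories.
COEFFICIENTS = {
    #                 a     b     c     d
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, category):
    a, b, c, d = COEFFICIENTS[category]
    effort = a * kloc ** b   # person-months
    time = c * effort ** d   # months
    return effort, time

# Hypothetical 32-KLOC organic project
effort, time = basic_cocomo(32, "organic")
```

Note how an embedded project of the same size yields a larger effort estimate than an organic one, reflecting the model's assumption that tight hardware and operational constraints make development costlier.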
Project Scheduling
Project scheduling in a project refers to the roadmap of all activities to be done in a specified order and within the time slot allotted to
each activity. Project managers tend to define the various tasks and project milestones and arrange them keeping various factors in mind.
They look for tasks that lie on the critical path in the schedule, which must be completed in a specific manner (because of task
interdependency) and strictly within the time allocated. Tasks that lie outside the critical path are less likely to impact the
overall schedule of the project.
For scheduling a project, it is necessary to -
 Break down the project tasks into smaller, manageable form
 Find out various tasks and correlate them
 Estimate time frame required for each task
 Divide time into work-units
 Assign adequate number of work-units for each task
 Calculate total time required for the project from start to finish
Resource management
All elements used to develop a software product may be regarded as resources for that project. These may include human resources,
productive tools and software libraries.
The resources are available in limited quantity and stay in the organization as a pool of assets. A shortage of resources hampers the
development of the project, and it can lag behind schedule. Allocating extra resources increases the development cost in the end. It is
therefore necessary to estimate and allocate adequate resources for the project.
Resource management includes -
 Defining a proper project organization by creating a project team and allocating responsibilities to each team member
 Determining the resources required at a particular stage and their availability
 Managing resources by generating resource requests when they are required and de-allocating them when they are no longer
needed.
Project Risk Management
Risk management involves all activities pertaining to identifying, analyzing and making provision for predictable and non-
predictable risks in the project. Risks may include the following:
 Experienced staff leaving the project and new staff coming in.
 Change in organizational management.
 Requirement change or misinterpreting requirement.
 Under-estimation of required time and resources.
 Technological changes, environmental changes, business competition.
Risk Management Process
There are following activities involved in risk management process:
 Identification - Make note of all possible risks, which may occur in the project.
 Categorize - Categorize known risks into high, medium and low risk intensity as per their possible impact on the project.
 Manage - Analyze the probability of occurrence of risks at various phases. Make plan to avoid or face risks. Attempt to
minimize their side-effects.
 Monitor - Closely monitor the potential risks and their early symptoms. Also monitor the effects of steps taken to mitigate or
avoid them.
Project Execution & Monitoring
In this phase, the tasks described in project plans are executed according to their schedules.
Execution needs monitoring in order to check whether everything is going according to plan. Monitoring means observing to check the
probability of risks, taking measures to address them, and reporting the status of various tasks.
These measures include -
 Activity Monitoring - All activities scheduled within some task can be monitored on day-to-day basis. When all activities in
a task are completed, it is considered as complete.
 Status Reports - The reports contain status of activities and tasks completed within a given time frame, generally a week.
Status can be marked as finished, pending or work-in-progress etc.
 Milestones Checklist - Every project is divided into multiple phases where major tasks are performed (milestones) based on
the phases of SDLC. This milestone checklist is prepared once every few weeks and reports the status of milestones.
Project Communication Management
Effective communication plays vital role in the success of a project. It bridges gaps between client and the organization, among the
team members as well as other stake holders in the project such as hardware suppliers.
Communication can be oral or written. Communication management process may have the following steps:
 Planning - This step includes the identifications of all the stakeholders in the project and the mode of communication among
them. It also considers if any additional communication facilities are required.
 Sharing - After determining the various aspects of planning, the manager focuses on sharing the correct information with the correct
person at the correct time. This keeps everyone involved in the project up to date with the project's progress and status.
 Feedback - Project managers use various measures and feedback mechanism and create status and performance reports. This
mechanism ensures that input from various stakeholders is coming to the project manager as their feedback.
 Closure - At the end of each major event, the end of a phase of the SDLC or the end of the project itself, administrative closure is
formally announced to update every stakeholder by sending email, by distributing a hardcopy of the document or by other means
of effective communication.
After closure, the team moves to the next phase or project.
Configuration Management
Configuration management is a process of tracking and controlling the changes in software in terms of the requirements, design,
functions and development of the product.
IEEE defines it as “the process of identifying and defining the items in the system, controlling the change of these items throughout
their life cycle, recording and reporting the status of items and change requests, and verifying the completeness and correctness of
items”.
Generally, once the SRS is finalized, there is less chance of changes being requested by the user. If they occur, the changes are
addressed only with the prior approval of higher management, as there is a possibility of cost and time overrun.
Baseline
A phase of the SDLC is assumed to be over once it is baselined; i.e., a baseline is a measurement that defines the completeness of a
phase. A phase is baselined when all activities pertaining to it are finished and well documented. If it is not the final phase, its output
is used in the next immediate phase.
Configuration management is a discipline of organization administration which takes care of the occurrence of any change (process,
requirement, technological, strategic etc.) after a phase is baselined. CM keeps a check on any changes made to the software.
Change Control
Change control is a function of configuration management which ensures that all changes made to the software system are consistent and
made as per organizational rules and regulations.
A change in the configuration of product goes through following steps -
 Identification - A change request arrives from either internal or external source. When change request is identified formally,
it is properly documented.
 Validation - Validity of the change request is checked and its handling procedure is confirmed.
 Analysis - The impact of change request is analyzed in terms of schedule, cost and required efforts. Overall impact of the
prospective change on system is analyzed.
 Control - If the prospective change either impacts too many entities in the system or it is unavoidable, it is mandatory to take
approval of high authorities before change is incorporated into the system. It is decided if the change is worth incorporation
or not. If it is not, change request is refused formally.
 Execution - If the previous phase determines to execute the change request, this phase takes appropriate actions to execute the
change and does a thorough revision if necessary.
 Close request - The change is verified for correct implementation and merged with the rest of the system. This newly
incorporated change in the software is documented properly and the request is formally closed.
Project Management Tools
Risk and uncertainty rise multifold with the size of the project, even when the project is developed according to set
methodologies.
There are tools available which aid effective project management. A few are described below -
Gantt chart
Gantt charts were devised by Henry Gantt (1917). A Gantt chart represents the project schedule with respect to time periods. It is a
horizontal bar chart with bars representing the activities and the time scheduled for the project activities.

PERT Chart
A PERT (Program Evaluation & Review Technique) chart is a tool that depicts the project as a network diagram. It is capable of
graphically representing the main events of the project in both a parallel and a consecutive way. Events which occur one after another
show the dependency of the later event on the previous one.

Events are shown as numbered nodes. They are connected by labeled arrows depicting sequence of tasks in the project.
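The text does not give the PERT arithmetic, but the technique's classic three-point estimate is easy to sketch: each activity gets an optimistic, most likely, and pessimistic duration, combined as below. The activity durations are hypothetical:

```python
# PERT three-point estimate: expected duration (o + 4m + p) / 6,
# with standard deviation (p - o) / 6, computed per activity.
def pert_estimate(optimistic, most_likely, pessimistic):
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# Hypothetical activity: 2 days best case, 5 most likely, 10 worst case
expected, std_dev = pert_estimate(2, 5, 10)
```

The expected durations label the arrows of the network diagram, and the standard deviations quantify how uncertain each path's total duration is.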
Resource Histogram
This is a graphical tool that uses bars to represent the number of resources (usually skilled staff) required over time for a
project event (or phase). The resource histogram is an effective tool for staff planning and coordination.

Critical Path Analysis


This tool is useful in recognizing interdependent tasks in the project. It also helps to find the critical path, the chain of dependent
tasks that determines the minimum time needed to complete the project successfully. Like a PERT diagram, each event is allotted a
specific time frame. This tool shows the dependency of events, assuming an event can proceed to the next only if the previous one is
completed.
The events are arranged according to their earliest possible start time. The path between the start and end nodes whose duration cannot
be further reduced is the critical path, and all events on it are required to be executed in the same order.
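The earliest-start computation described above can be sketched as a forward pass over the dependency graph, then a walk back along the predecessors that dictated each start. The four tasks, durations, and dependencies below are hypothetical:

```python
# Critical path sketch: earliest-finish forward pass over a dependency
# graph, then walk back along the determining predecessors.
durations = {"A": 3, "B": 2, "C": 4, "D": 2}          # hypothetical tasks
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

def critical_path(durations, predecessors):
    finish = {}        # memoized earliest finish time per task
    came_from = {}     # predecessor that dictates each task's start

    def earliest_finish(task):
        if task not in finish:
            preds = predecessors[task]
            start = max((earliest_finish(p) for p in preds), default=0)
            came_from[task] = max(preds, key=earliest_finish) if preds else None
            finish[task] = start + durations[task]
        return finish[task]

    end = max(durations, key=earliest_finish)  # task finishing last
    path = []
    while end is not None:
        path.append(end)
        end = came_from[end]
    return path[::-1], finish[path[0]]

path, total = critical_path(durations, predecessors)
# For the data above: path is ['A', 'C', 'D'] with total duration 9
```

Here D must wait for both B (finishing at 5) and C (finishing at 7), so C dictates its start; shortening B would not shorten the project, which is exactly why only critical-path tasks affect the overall schedule.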

3. Software Requirement Specification


The product of the requirements stage of the software development process is the Software Requirements Specification (SRS) (also
called a requirements document). This report lays a foundation for software engineering activities and is constructed once the entire set
of requirements has been elicited and analyzed. The SRS is a formal report which acts as a representation of the software, enabling the
customers to review whether it (the SRS) is according to their requirements. It also comprises the user requirements for a system as well
as detailed specifications of the system requirements.
The SRS is a specification for a specific software product, program, or set of applications that perform particular functions in a
specific environment. It serves several goals depending on who is writing it. First, the SRS could be written by the client of a system.
Second, the SRS could be written by a developer of the system. The two cases create entirely different situations and establish
different purposes for the document altogether. In the first case, the SRS is used to define the needs and expectations of the users. In
the second case, the SRS is written for various purposes and serves as a contract document between customer and developer.

Characteristics of good SRS


Following are the features of a good SRS document:
1. Correctness: User review is used to ensure the accuracy of the requirements stated in the SRS. The SRS is said to be correct if it
covers all the needs that are truly expected from the system.
2. Completeness: The SRS is complete if, and only if, it includes the following elements:
(1). All essential requirements, whether relating to functionality, performance, design, constraints, attributes, or external interfaces.
(2). Definition of their responses of the software to all realizable classes of input data in all available categories of situations.
(3). Full labels and references to all figures, tables, and diagrams in the SRS and definitions of all terms and units of measure.
3. Consistency: The SRS is consistent if, and only if, no subset of the individual requirements described in it conflict. There are three
types of possible conflict in the SRS:
(1). The specified characteristics of real-world objects may conflict. For example,
(a) The format of an output report may be described in one requirement as tabular but in another as textual.
(b) One condition may state that all lights shall be green while another states that all lights shall be blue.
(2). There may be a reasonable or temporal conflict between the two specified actions. For example,
(a) One requirement may determine that the program will add two inputs, and another may determine that the program will multiply
them.
(b) One condition may state that "A" must always follow "B," while another requires that "A" and "B" co-occur.
(3). Two or more requirements may define the same real-world object but use different terms for that object. For example, a program's
request for user input may be called a "prompt" in one requirement and a "cue" in another. The use of standard terminology and
descriptions promotes consistency.
4. Unambiguousness: The SRS is unambiguous when every stated requirement has only one interpretation. This suggests that each element
is uniquely interpreted. In case a term is used with multiple definitions, the requirements report should clarify the intended
meaning in the SRS so that it is clear and simple to understand.
5. Ranking for importance and stability: The SRS is ranked for importance and stability if each requirement in it has an identifier to
indicate either the significance or stability of that particular requirement.
Typically, all requirements are not equally important. Some prerequisites may be essential, especially for life-critical applications,
while others may be desirable. Each element should be identified to make these differences clear and explicit. Another way to rank
requirements is to distinguish classes of items as essential, conditional, and optional.
6. Modifiability: The SRS should be made as modifiable as possible and should be capable of quickly incorporating changes to the system
to some extent. Modifications should be properly indexed and cross-referenced.
7. Verifiability: The SRS is verifiable when the specified requirements can be checked with a cost-effective process to determine whether
the final software meets those requirements. The requirements are verified with the help of reviews.
8. Traceability: The SRS is traceable if the origin of each of the requirements is clear and if it facilitates the referencing of each
condition in future development or enhancement documentation.
There are two types of Traceability:
1. Backward Traceability: This depends upon each requirement explicitly referencing its source in earlier documents.
2. Forward Traceability: This depends upon each element in the SRS having a unique name or reference number.
The forward traceability of the SRS is especially crucial when the software product enters the operation and maintenance phase. As
code and design documents are modified, it is necessary to be able to ascertain the complete set of requirements that may be affected
by those modifications.
9. Design Independence: There should be an option to select from multiple design alternatives for the final system. More specifically,
the SRS should not contain any implementation details.
10. Testability: An SRS should be written in such a method that it is simple to generate test cases and test plans from the report.
11. Understandable by the customer: An end user may be an expert in his/her own domain but might not be trained in computer
science. Hence, the use of formal notations and symbols should be avoided as much as possible. The language should
be kept simple and clear.
12. The right level of abstraction: If the SRS is written for the requirements stage, the details should be explained explicitly,
whereas for a feasibility study, less detail can be used. Hence, the level of abstraction varies according to the objective of the
SRS.
Properties of a good SRS document
The essential properties of a good SRS document are the following:
Concise: The SRS report should be concise and at the same time, unambiguous, consistent, and complete. Verbose and irrelevant
descriptions decrease readability and also increase error possibilities.
Structured: It should be well-structured. A well-structured document is simple to understand and modify. In practice, the SRS
document undergoes several revisions to cope with the user requirements. Often, user requirements evolve over a period of time.
Therefore, to make the modifications to the SRS document easy, it is vital to make the report well-structured.
Black-box view: It should only define what the system should do and refrain from stating how to do it. This means that the SRS
document should define the external behavior of the system and not discuss the implementation issues. The SRS report should view
the system to be developed as a black box and should define the externally visible behavior of the system. For this reason, the SRS
report is also known as the black-box specification of a system.
Conceptual integrity: It should show conceptual integrity so that the reader can easily understand it.
Response to undesired events: It should characterize acceptable responses to undesired events. These are called system responses to
exceptional conditions.
Verifiable: All requirements of the system, as documented in the SRS document, should be correct. This means that it should be
possible to decide whether or not requirements have been met in an implementation.

4. Requirement Engineering
Requirement Engineering
The process to gather the software requirements from client, analyze and document them is known as requirement engineering.
The goal of requirement engineering is to develop and maintain a sophisticated and descriptive 'System Requirements Specification'
document.
Requirement Engineering Process
It is a four step process, which includes –
 Feasibility Study
 Requirement Gathering
 Software Requirement Specification
 Software Requirement Validation

1. Feasibility Study:
The objective of the feasibility study is to establish the reasons for developing software that is acceptable to users, adaptable to
change and conformable to established standards.
Types of Feasibility:
1. Technical Feasibility - Technical feasibility evaluates the current technologies, which are needed to accomplish customer
requirements within the time and budget.
2. Operational Feasibility - Operational feasibility assesses the extent to which the required software performs a series of steps
to solve business problems and customer requirements.
3. Economic Feasibility - Economic feasibility decides whether the necessary software can generate financial profits for an
organization.
2. Requirement Elicitation and Analysis:
This is also known as the gathering of requirements. Here, requirements are identified with the help of customers and existing
systems processes, if available.
Analysis of requirements starts with requirement elicitation. The requirements are analyzed to identify inconsistencies, defects,
omissions, etc. We describe requirements in terms of relationships and also resolve conflicts, if any.
Problems of Elicitation and Analysis
o Getting all, and only, the right people involved.
o Stakeholders often don't know what they want.
o Stakeholders express requirements in their own terms.
o Stakeholders may have conflicting requirements.
o Requirements change during the analysis process.
o Organizational and political factors may influence system requirements.
3. Software Requirement Specification:
Software requirement specification is a document created by a software analyst after the requirements have been collected from
various sources; the requirements received from the customer are written in ordinary language. It is the job of the analyst to write the
requirements in technical language so that they can be understood by, and be useful to, the development team.
The models used at this stage include ER diagrams, data flow diagrams (DFDs), function decomposition diagrams (FDDs), data
dictionaries, etc.
o Data Flow Diagrams: Data Flow Diagrams (DFDs) are used widely for modeling the requirements. DFD shows the flow of
data through a system. The system may be a company, an organization, a set of procedures, a computer hardware system, a
software system, or any combination of the preceding. The DFD is also known as a data flow graph or bubble chart.
o Data Dictionaries: Data Dictionaries are simply repositories to store information about all data items defined in DFDs. At
the requirements stage, the data dictionary should at least define customer data items, to ensure that the customer and
developers use the same definition and terminologies.
o Entity-Relationship Diagrams: Another tool for requirement specification is the entity-relationship diagram, often called an
"E-R diagram." It is a detailed logical representation of the data for the organization and uses three main constructs i.e. data
entities, relationships, and their associated attributes.
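A data dictionary like the one described above can be sketched as a simple repository keyed by the data items that appear in the DFDs. The item names and fields below are assumed examples for illustration, not taken from the text.

```python
# A minimal sketch of a data dictionary: a repository of agreed definitions
# so that the customer and developers use the same terminology.
# Item names and fields are illustrative assumptions.

data_dictionary = {
    "customer_id": {"type": "integer", "description": "Unique customer key"},
    "invoice_total": {"type": "decimal", "description": "Sum of line items"},
}

def define(item):
    """Look up the agreed definition of a data item from the DFDs."""
    return data_dictionary[item]["description"]

print(define("invoice_total"))  # Sum of line items
```

In practice the dictionary would be maintained in a CASE tool or database rather than in code, but the idea is the same: one shared source of definitions.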
4. Software Requirement Validation:
After the requirement specifications are developed, the requirements discussed in this document are validated. The user might demand
an illegal or impossible solution, or experts may misinterpret the needs. Requirements can be checked against the following conditions -
o If they can be practically implemented
o If they are valid and as per the functionality and domain of the software
o If there are any ambiguities
o If they are complete
o If they can be demonstrated
Requirements Validation Techniques
o Requirements reviews/inspections: systematic manual analysis of the requirements.
o Prototyping: Using an executable model of the system to check requirements.
o Test-case generation: Developing tests for requirements to check testability.
o Automated consistency analysis: checking for the consistency of structured requirements descriptions.
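The test-case generation technique above can be sketched in code. As a hedged example, take the requirement "the user can search invoices": the invoice data and the `search_invoices` function below are invented for illustration, not from the text.

```python
# Illustrative sketch of test-case generation: turning a requirement into
# executable checks. The function and data are assumed examples.

def search_invoices(invoices, keyword):
    """Return invoices whose description contains the keyword (case-insensitive)."""
    return [inv for inv in invoices if keyword.lower() in inv["description"].lower()]

invoices = [
    {"id": 1, "description": "Office supplies"},
    {"id": 2, "description": "Server hosting"},
]

# Tests derived from the requirement: a match must be found regardless of
# case, and an unknown keyword must return nothing.
assert search_invoices(invoices, "OFFICE") == [{"id": 1, "description": "Office supplies"}]
assert search_invoices(invoices, "travel") == []
```

If a requirement cannot be turned into a test like this, that is itself a signal the requirement is not verifiable.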
Software Requirements
We should try to understand what sort of requirements may arise in the requirement elicitation phase and what kinds of requirements
are expected from the software system.
Broadly software requirements should be categorized in two categories:
Functional Requirements
Requirements, which are related to functional aspect of software fall into this category.
They define functions and functionality within and from the software system.
Examples -
 A search option given to the user to search from various invoices.
 The user should be able to mail any report to management.
 Users can be divided into groups, and groups can be given separate rights.
 The software should comply with business rules and administrative functions.
 The software is developed keeping downward compatibility intact.
Non-Functional Requirements
Requirements, which are not related to functional aspect of software, fall into this category. They are implicit or expected
characteristics of software, which users make assumption of.
Non-functional requirements include -
 Security
 Logging
 Storage
 Configuration
 Performance
 Cost
 Interoperability
 Flexibility
 Disaster recovery
 Accessibility
Requirements are categorized logically as
 Must Have: Software cannot be said operational without them.
 Should have: Enhancing the functionality of software.
 Could have: Software can still properly function without these requirements.
 Wish list: These requirements do not map to any objectives of software.
While developing software, ‘Must have’ requirements must be implemented, ‘Should have’ requirements are a matter of debate and
negotiation with stakeholders, whereas ‘Could have’ and ‘Wish list’ requirements can be kept for software updates.
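The priority categories above can be sketched as a simple backlog filter. The requirement names below are invented for illustration.

```python
# Hedged sketch: filtering a requirements backlog by the priority
# categories above ('must', 'should', 'could', 'wish'). All entries
# are assumed examples.

backlog = [
    ("login", "must"),
    ("export to PDF", "should"),
    ("dark mode", "could"),
    ("voice control", "wish"),
]

def for_release(backlog):
    """'Must have' items are mandatory before the software is operational."""
    return [name for name, priority in backlog if priority == "must"]

print(for_release(backlog))  # ['login']
```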
User Interface requirements
UI is an important part of any software or hardware or hybrid system. A software is widely accepted if it is -
 easy to operate
 quick in response
 effectively handling operational errors
 providing simple yet consistent user interface
User acceptance majorly depends upon how the user can use the software. The UI is the only way for users to perceive the system. A well-
performing software system must also be equipped with an attractive, clear, consistent and responsive user interface; otherwise the
functionalities of the software system cannot be used in a convenient way. A system is said to be good if it provides the means to use it
efficiently. User interface requirements are briefly mentioned below -
 Content presentation
 Easy navigation
 Simple interface
 Responsive
 Consistent UI elements
 Feedback mechanism
 Default settings
 Purposeful layout
 Strategical use of color and texture
 Provide help information
 User-centric approach
 Group-based view settings
Software System Analyst
A system analyst in an IT organization is a person who analyzes the requirements of a proposed system and ensures that the requirements are
conceived and documented properly and correctly. The role of an analyst starts during the Software Analysis Phase of the SDLC. It is the
responsibility of the analyst to make sure that the developed software meets the requirements of the client.
System Analysts have the following responsibilities:
 Analyzing and understanding requirements of intended software
 Understanding how the project will contribute in the organization objectives
 Identify sources of requirement
 Validation of requirement
 Develop and implement requirement management plan
 Documentation of business, technical, process and product requirements
 Coordination with clients to prioritize requirements and remove any ambiguity
 Finalizing acceptance criteria with client and other stakeholders
Software Metrics and Measures
Software Measures can be understood as a process of quantifying and symbolizing various attributes and aspects of software.
Software Metrics provide measures for various aspects of software process and software product.
Software measures are a fundamental requirement of software engineering. They not only help to control the software development
process but also help to keep the quality of the final product excellent.
According to the software engineer Tom DeMarco, “You cannot control what you cannot measure.” This saying makes it very clear
how important software measures are.
Let us see some software metrics:
 Size Metrics - LOC (Lines of Code), mostly calculated in thousands of delivered source code lines, denoted as KLOC.
Function Point Count is a measure of the functionality provided by the software; it defines the size of the
functional aspect of the software.
 Complexity Metrics - McCabe’s Cyclomatic complexity quantifies the upper bound of the number of independent paths in a
program, which is perceived as complexity of the program or its modules. It is represented in terms of graph theory concepts
by using control flow graph.
 Quality Metrics - Defects, their types and causes, consequence, intensity of severity and their implications define the quality
of product.
The number of defects found in development process and number of defects reported by the client after the product is
installed or delivered at client-end, define quality of product.
 Process Metrics - In various phases of SDLC, the methods and tools used, the company standards and the performance of
development are software process metrics.
 Resource Metrics - Effort, time and various resources used, represents metrics for resource measurement.
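The complexity metric above can be computed directly. McCabe's cyclomatic complexity is V(G) = E - N + 2P, where E is the number of edges of the control flow graph, N the number of nodes, and P the number of connected components (1 for a single program). The example graph below is an assumed illustration.

```python
# Sketch of McCabe's cyclomatic complexity from a control flow graph:
# V(G) = E - N + 2P. The graph below (an if/else inside a loop) is an
# illustrative assumption, not from the text.

def cyclomatic_complexity(edges, num_nodes, components=1):
    return len(edges) - num_nodes + 2 * components

# 7 nodes, 8 edges: entry -> loop test -> branch (if/else) -> join -> back
# to the loop test -> exit.
edges = [(1, 2), (2, 3), (3, 4), (3, 5), (4, 6), (5, 6), (6, 2), (2, 7)]
print(cyclomatic_complexity(edges, 7))  # 8 - 7 + 2 = 3 independent paths
```

A straight-line program (one edge, two nodes) gives 1 - 2 + 2 = 1, the minimum complexity.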

5. System Modelling
System Modelling
System modeling is the process of developing abstract models of a system, with each model presenting a different view or
perspective of that system.
System modeling has now come to mean representing a system using some kind of graphical notation, which is now almost
always based on notations in the Unified Modeling Language (UML).
System modelling helps the analyst to understand the functionality of the system and models are used to communicate with
customers.
Models of the existing system are used during requirements engineering. They help clarify what the existing system does and can
be used as a basis for discussing its strengths and weaknesses. These then lead to requirements for the new system.
Models of the new system are used during requirements engineering to help explain the proposed requirements to other system
stakeholders. Engineers use these models to discuss design proposals and to document the system for implementation.
In a model-driven engineering process, it is possible to generate a complete or partial system implementation from the system
model.
Models can represent a system from different perspectives:
 An external perspective, where you model the context or environment of the system.
 An interaction perspective, where you model the interactions between a system and its environment, or between the components
of a system.
 A structural perspective, where you model the organization of a system or the structure of the data that is processed by the
system.
 A behavioral perspective, where you model the dynamic behavior of the system and how it responds to events.

6. UML Modelling
UML, short for Unified Modeling Language, is a standardized modeling language consisting of an integrated set of diagrams,
developed to help system and software developers for specifying, visualizing, constructing, and documenting the artifacts of
software systems, as well as for business modeling and other non-software systems. The UML represents a collection of best
engineering practices that have proven successful in the modeling of large and complex systems. The UML is a very important
part of developing object oriented software and the software development process. The UML uses mostly graphical notations
to express the design of software projects. Using the UML helps project teams communicate, explore potential designs, and
validate the architectural design of the software.
Some terms of UML
 Abstract syntax compliance Users can move models across different tools, even if they use different notations
 Common Warehouse Metamodel (CWM) Standard interfaces that are used to enable interchange of warehouse and business
intelligence metadata between warehouse tools, warehouse platforms and warehouse metadata repositories in distributed
heterogeneous environments
 Concrete syntax compliance Users can continue to use a notation they are familiar with across different tools
 Core In the context of UML, the core usually refers to the "Core package" which is a complete metamodel particularly designed
for high reusability
 Language Unit Consists of a collection of tightly coupled modeling concepts that provide users with the power to represent
aspects of the system under study according to a particular paradigm or formalism
 Level 0 (L0) Bottom compliance level for UML infrastructure - a single language unit that provides for modeling the kinds of
class-based structures encountered in most popular object-oriented programming languages
 Meta Object Facility (MOF) An OMG modeling specification that provides the basis for metamodel definitions in OMG's family
of MDA languages
 Metamodel Defines the language and processes from which to form a model
 Metamodel Constructs (LM) Second compliance level in the UML infrastructure - an extra language unit for more advanced
class-based structures used for building metamodels (using CMOF) such as UML itself. UML only has two compliance levels
 Model Driven Architecture (MDA) An approach and a plan to achieve a cohesive set of model-driven technology specifications
 Object Constraint Language (OCL) A declarative language for describing rules that apply to Unified Modeling Language. OCL
supplements UML by providing terms and flowchart symbols that are more precise than natural language but less difficult to
master than mathematics
 Object Management Group (OMG) Is a not-for-profit computer industry specifications consortium whose members define and
maintain the UML specification
 UML 1 First version of the Unified Modeling Language
 Unified Modeling Language (UML) A visual language for specifying, constructing, and documenting the artifacts of systems
 XMI An XML-based specification of corresponding model interchange formats
Modeling concepts specified by UML
System development focuses on three overall different system models:
 Functional: These are Use Case diagrams, which describe system functionality from the point of view of the user.
 Object: These are Class Diagrams, which describe the structure of the system in terms of objects, attributes, associations, and
operations.
 Dynamic: Interaction Diagrams, State Machine Diagrams, and Activity Diagrams are used to describe the internal behavior of the
system.
These system models are visualized through two different types of diagrams: structural and behavioral.
Object-oriented concepts in UML
The objects in UML are real world entities that exist around us. In software development, objects can be used to describe, or model,
the system being created in terms that are relevant to the domain. Objects also allow the decomposition of complex systems into
understandable components that allow one piece to be built at a time.
Here are some fundamental concepts of an object-oriented world:
 Objects Represent an entity and the basic building block.
 Class Blue print of an object.
 Abstraction Representing the essential behavior of a real world entity while hiding unnecessary detail.
 Encapsulation Mechanism of binding the data together and hiding them from outside world.
 Inheritance Mechanism of making new classes from existing one.
 Polymorphism It defines the mechanism to exist in different forms.
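The object-oriented concepts listed above can be shown in a short sketch. The Shape/Circle/Square classes below are invented examples for illustration, not taken from the text.

```python
# Minimal sketch of the OO concepts: class, encapsulation, abstraction,
# inheritance, polymorphism. Class names are illustrative assumptions.
import math

class Shape:                          # class: blueprint of an object
    def __init__(self, name):
        self._name = name             # encapsulation: data kept inside the object
    def area(self):                   # abstraction: behavior exposed, details hidden
        raise NotImplementedError

class Circle(Shape):                  # inheritance: a new class from an existing one
    def __init__(self, radius):
        super().__init__("circle")
        self._radius = radius
    def area(self):                   # polymorphism: same operation, different form
        return math.pi * self._radius ** 2

class Square(Shape):
    def __init__(self, side):
        super().__init__("square")
        self._side = side
    def area(self):
        return self._side ** 2

shapes = [Circle(1.0), Square(2.0)]   # objects: instances of the classes
print([round(s.area(), 2) for s in shapes])  # [3.14, 4.0]
```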
Types of UML
Structure Diagram
Structure diagrams show the static structure of the system and its parts on different abstraction and implementation levels and how
they are related to each other. The elements in a structure diagram represent the meaningful concepts of a system, and may include
abstract, real world and implementation concepts.
What is a Class Diagram?
The class diagram is a central modeling technique that runs through nearly all object-oriented methods. This diagram describes the
types of objects in the system and various kinds of static relationships which exist between them.
Relationships
There are three principal kinds of relationships which are important:
1. Association - represents relationships between instances of types (a person works for a company, a company has a number
of offices).
2. Inheritance - the most obvious addition to ER diagrams for use in OO. It has an immediate correspondence to inheritance
in OO design.
3. Aggregation - Aggregation, a form of object composition in object-oriented design.
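The three relationships above can be sketched in code. The Person/Company/Office names are assumed examples for illustration.

```python
# Hedged sketch of class-diagram relationships; all names are assumptions.

class Office:
    pass

class Company:
    def __init__(self, offices):
        self.offices = offices        # aggregation: a company *has* offices

class Person:
    def __init__(self, employer):
        self.employer = employer      # association: a person works for a company

class Manager(Person):                # inheritance: a manager *is a* person
    pass

company = Company(offices=[Office(), Office()])
boss = Manager(employer=company)
print(len(boss.employer.offices))  # 2
```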
What is Component Diagram?
In the Unified Modeling Language, a component diagram depicts how components are wired together to form larger components or
software systems. It illustrates the architectures of the software components and the dependencies between them. These software
components include run-time components, executable components, and source code components.
What is a Deployment Diagram?
The Deployment Diagram helps to model the physical aspect of an Object-Oriented software system. It is a structure diagram
which shows the architecture of the system as the deployment (distribution) of software artifacts to deployment targets. Artifacts represent
concrete elements in the physical world that are the result of a development process. It models the run-time configuration in a static
view and visualizes the distribution of artifacts in an application. In most cases, it involves modeling the hardware configurations
together with the software components that live on them.
What is an Object Diagram?
An object diagram is a graph of instances, including objects and data values. A static object diagram is an instance of a class
diagram; it shows a snapshot of the detailed state of a system at a point in time. The difference is that a class diagram represents an
abstract model consisting of classes and their relationships. However, an object diagram represents an instance at a particular
moment, which is concrete in nature. The use of object diagrams is fairly limited, namely to show examples of data structure.
What is a Package Diagram?
A package diagram is a UML structure diagram which shows packages and the dependencies between the packages. Package diagrams
can show different views of a system, for example, a multi-layered (aka multi-tiered) application model.
What is a Composite Structure Diagram?
The Composite Structure Diagram is one of the new artifacts added to UML 2.0. A composite structure diagram is similar to a class
diagram and is a kind of component diagram mainly used in modeling a system at a micro point of view, but it depicts individual
parts instead of whole classes. It is a type of static structure diagram that shows the internal structure of a class and the
collaborations that this structure makes possible.
This diagram can include internal parts, ports through which the parts interact with each other or through which instances of the
class interact with the parts and with the outside world, and connectors between parts or ports. A composite structure is a set of
interconnected elements that collaborate at runtime to achieve some purpose. Each element has some defined role in the
collaboration.
What is a Profile Diagram?
A profile diagram enables you to create domain and platform specific stereotypes and define the relationships between them. You
can create stereotypes by drawing stereotype shapes and relate them with composition or generalization through the resource-
centric interface. You can also define and visualize tagged values of stereotypes.
Behavior diagrams
Behavior diagrams show the dynamic behavior of the objects in a system, which can be described as a series of changes to the
system over time
What is a Use Case Diagram?
A use-case model describes a system's functional requirements in terms of use cases. It is a model of the system's intended
functionality (use cases) and its environment (actors). Use cases enable you to relate what you need from a system to how the
system delivers on those needs.
Think of a use-case model as a menu, much like the menu you'd find in a restaurant. By looking at the menu, you know what's
available to you, the individual dishes as well as their prices. You also know what kind of cuisine the restaurant serves: Italian,
Mexican, Chinese, and so on. By looking at the menu, you get an overall impression of the dining experience that awaits you in that
restaurant. The menu, in effect, "models" the restaurant's behavior.
Because it is a very powerful planning instrument, the use-case model is generally used in all phases of the development cycle by
all team members.
What is an Activity Diagram?
Activity diagrams are graphical representations of workflows of stepwise activities and actions with support for choice, iteration
and concurrency. They describe the flow of control of the target system, such as exploring complex business rules and operations,
and describing use cases and business processes. In the Unified Modeling Language, activity diagrams are intended to model both
computational and organizational processes (i.e. workflows).
What is a State Machine Diagram?
A state diagram is a type of diagram used in UML to describe the behavior of systems; it is based on the concept of state
diagrams by David Harel. State diagrams depict the permitted states and transitions as well as the events that trigger these
transitions. They help to visualize the entire lifecycle of objects and thus provide a better understanding of state-based
systems.
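A state machine like the ones such diagrams depict can be sketched as a transition table. The document-lifecycle states and events below are assumed examples, not from the text.

```python
# Illustrative sketch of a state machine: permitted states, events, and
# transitions for a document's lifecycle (assumed example).

TRANSITIONS = {
    ("draft", "submit"): "review",
    ("review", "approve"): "published",
    ("review", "reject"): "draft",
}

def fire(state, event):
    """Return the next state, or raise if the event is not permitted."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")

state = "draft"
for event in ["submit", "reject", "submit", "approve"]:
    state = fire(state, event)
print(state)  # published
```

Events not listed in the table are rejected, which mirrors how a state diagram only permits the drawn transitions.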
What is a Sequence Diagram?
The Sequence Diagram models the collaboration of objects based on a time sequence. It shows how the objects interact with others
in a particular scenario of a use case. With a tool's visual modeling capability, you can create a complex sequence diagram in a
few clicks. Besides, some modeling tools, such as Visual Paradigm, can generate a sequence diagram from the flow of events which
you have defined in the use case description.
What is a Communication Diagram?
Similar to the Sequence Diagram, the Communication Diagram is also used to model the dynamic behavior of a use case. When
compared to the Sequence Diagram, the Communication Diagram is more focused on showing the collaboration of objects rather than
the time sequence. They are actually semantically equivalent, so some modeling tools, such as Visual Paradigm, allow you to
generate one from the other.
What is Interaction Overview Diagram?
The Interaction Overview Diagram focuses on the overview of the flow of control of the interactions. It is a variant of the Activity
Diagram where the nodes are the interactions or interaction occurrences. The Interaction Overview Diagram describes the
interactions where messages and lifelines are hidden. You can link up the "real" diagrams and achieve a high degree of navigability
between diagrams inside the Interaction Overview Diagram.

7. Software Prototyping
What Is Software Prototyping?
Have you ever beta tested a software application? You know, have you played a game or used a program whose publishers said it
wasn't quite up to par and they needed your opinions before developing the final product? If so, you have participated in one form of
software prototyping.
Software prototyping is similar to prototyping in other industries. It is an opportunity for the manufacturer to get an idea of what the
final product will look like before additional resources, such as time and money, are put into finalizing the product. Prototyping gives
the software publisher the opportunity to evaluate the product, ensure it's doing what it's intended to do, and determine if improvements need
to be made.
Often, the software prototype is not complete. Sometimes, only certain aspects of the program are prototyped, such as those elements
the publisher is most concerned about or areas where user interface may be tricky.
The Software Prototyping Process
There is typically a four-step process for prototyping:
1. Identify initial requirements: In this step, the software publisher decides what the software will be able to do. The publisher
considers who the user will likely be and what the user will want from the product, then the publisher sends the project and
specifications to a software designer or developer.
2. Develop initial prototype: In step two, the developer will consider the requirements as proposed by the publisher and begin
to put together a model of what the finished product might look like. An initial prototype may be as simple as a drawing on a
whiteboard, or it may consist of sticky notes on a wall, or it may be a more elaborate working model.
3. Review: Once the prototype is developed, the publisher has a chance to see what the product might look like; how the
developer has envisioned the publisher's specifications. In more advanced prototypes, the end consumer may have an
opportunity to try out the product and offer suggestions for improvement. This is what we know of as beta testing.
4. Revise: The final step in the process is to make revisions to the prototype based on the feedback of the publisher and/or beta
testers.
Models of Prototyping
There are two main models for prototypes. The throwaway model is designed to be thrown away once the review process has been
completed. It is just a look at what the end product may look like, and it's typically not well defined and may only have a few of the
publisher's requirements mapped out. The evolutionary model for prototyping is more complete and is incorporated into the final
product. The revisions in step four are made directly to the prototype in order to get it to the final stage.
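A throwaway prototype as described above can be as small as a faked feature that lets reviewers try the interaction before anything real exists. The canned data and function name below are illustrative assumptions.

```python
# Minimal sketch of a throwaway prototype: the "search" behavior is faked
# with canned data so reviewers can evaluate the flow. Names are assumed.

CANNED_RESULTS = {
    "invoices": ["INV-001", "INV-002"],
    "reports": ["Q1 summary"],
}

def prototype_search(query):
    """Fake search: returns hardcoded results, to be thrown away later."""
    return CANNED_RESULTS.get(query, [])

print(prototype_search("invoices"))  # ['INV-001', 'INV-002']
```

Because nothing here is meant to survive review, it is cheap to build and cheap to discard, which is the point of the throwaway model.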
Advantages & Disadvantages
Of course, there are advantages and disadvantages to using prototypes in the development of software. Some of the advantages
include:
 The developer can use feedback from the publisher and/or consumer to make improvements and fix glitches.
 The publisher can determine if the software application does what it was intended to do and meets all of the requirements.
 The development of the prototype can give the publisher an idea of how much time and money may be needed for the
manufacture of the product.
 During the course of developing the prototype, the developer might find additional uses for the product the publisher hadn't
thought of, thus making the product even more valuable.
Some of the disadvantages include:
 If the prototype is being beta tested, the user may not be aware he or she is using a prototype, which could result in negative
feelings about the product.
 Sometimes the development and testing of the prototype takes longer than expected.
 The developers may be resistant to revising the prototype they have worked hard to develop.
Tools
There are a variety of tools available to assist in the development of prototypes:
 Screen generators show the publisher what the user interface screen will look like. Note that they are not functioning
software applications; rather, they are used for demonstration purposes.
 Scripting languages such as Visual Basic are easy to use and can create a basic look at how the software will function.
 Simulated software allows the publisher to quickly see their requirements turned into a usable product.

8. Software Design
Software design is a process to transform user requirements into some suitable form, which helps the programmer in software coding
and implementation.
For assessing user requirements, an SRS (Software Requirement Specification) document is created, whereas for coding and
implementation there is a need for more specific and detailed requirements in software terms. The output of this process can be used
directly in implementation in programming languages.
Software design is the first step in the SDLC (Software Development Life Cycle) that moves the concentration from the problem domain to the
solution domain. It tries to specify how to fulfill the requirements mentioned in the SRS.
Software Design Levels
Software design yields three levels of results:
 Architectural Design - The architectural design is the highest abstract version of the system. It identifies the software as a
system with many components interacting with each other. At this level, the designers get an idea of the proposed solution
domain.
 High-level Design- The high-level design breaks the ‘single entity-multiple component’ concept of architectural design into
less-abstracted view of sub-systems and modules and depicts their interaction with each other. High-level design focuses on
how the system along with all of its components can be implemented in forms of modules. It recognizes modular structure of
each sub-system and their relation and interaction among each other.
 Detailed Design- Detailed design deals with the implementation part of what is seen as a system and its sub-systems in the
previous two designs. It is more detailed towards modules and their implementations. It defines logical structure of each
module and their interfaces to communicate with other modules.
Modularization
Modularization is a technique to divide a software system into multiple discrete and independent modules, which are expected to be
capable of carrying out task(s) independently. These modules may work as basic constructs for the entire software. Designers tend to
design modules such that they can be executed and/or compiled separately and independently.
Modular design naturally follows the ‘divide and conquer’ problem-solving strategy, and there are many
other benefits attached to the modular design of a software.
Advantage of modularization:
 Smaller components are easier to maintain
 Program can be divided based on functional aspects
 Desired level of abstraction can be brought in the program
 Components with high cohesion can be re-used again
 Concurrent execution can be made possible
 Desirable from a security aspect
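Modularization as described above can be sketched as separately testable units. The module split below (parse, compute, report) is an assumed example for illustration.

```python
# Minimal sketch of modularization: each function is an independent unit
# that could live in its own module and be tested in isolation. Names
# are illustrative assumptions.

def parse(raw):          # input module: turns raw text into data
    return [int(x) for x in raw.split(",")]

def compute(values):     # processing module: works only on its inputs
    return sum(values)

def report(total):       # output module: formats the result
    return f"total={total}"

print(report(compute(parse("1,2,3"))))  # total=6
```

Because each unit depends only on its parameters, any one of them can be replaced or reused without touching the others.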
Concurrency
In the past, all software was meant to be executed sequentially. By sequential execution we mean that the coded instructions are
executed one after another, implying that only one portion of the program is active at any given time. If a software has multiple
modules, then only one of those modules is active at any time of execution.
In software design, concurrency is implemented by splitting the software into multiple independent units of execution, like modules
and executing them in parallel. In other words, concurrency provides capability to the software to execute more than one part of code
in parallel to each other.
It is necessary for the programmers and designers to recognize those modules which can be executed in parallel.
Example
The spell check feature in word processor is a module of software, which runs alongside the word processor itself.
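The spell-check example above can be sketched with a background thread: the checker runs alongside the "editor", which keeps feeding it words through a queue. The word list and queue-based design are assumptions for illustration.

```python
# Hedged sketch of concurrent modules: a spell checker running in parallel
# with the editor that produces words. Dictionary and words are assumed.
import queue
import threading

DICTIONARY = {"the", "quick", "brown", "fox"}
typed = queue.Queue()
misspelled = []

def spell_checker():
    """Background module: consumes typed words and flags unknown ones."""
    while True:
        word = typed.get()
        if word is None:          # sentinel: the editor has closed
            break
        if word not in DICTIONARY:
            misspelled.append(word)

checker = threading.Thread(target=spell_checker)
checker.start()
for word in ["the", "quikc", "brown", "fxo"]:  # the editor "types"
    typed.put(word)
typed.put(None)
checker.join()
print(misspelled)  # ['quikc', 'fxo']
```

The queue decouples the two modules: the editor never waits for the checker, which is exactly the independence concurrency requires.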
Coupling and Cohesion
When a software program is modularized, its tasks are divided into several modules based on some characteristics. As we know,
modules are sets of instructions put together in order to achieve some task. Though they are considered single entities, modules may refer
to each other to work together. There are measures by which the quality of the design of modules and the interaction among them can
be measured. These measures are called coupling and cohesion.
Cohesion
Cohesion is a measure that defines the degree of intra-dependability within elements of a module. The greater the cohesion, the better
is the program design.
There are seven types of cohesion, namely –
 Co-incidental cohesion - It is unplanned and random cohesion, which might be the result of breaking the program into
smaller modules for the sake of modularization. Because it is unplanned, it may cause confusion for the programmers and is
generally not accepted.
 Logical cohesion - When logically categorized elements are put together into a module, it is called logical cohesion.
 Temporal Cohesion - When elements of module are organized such that they are processed at a similar point in time, it is
called temporal cohesion.
 Procedural cohesion - When elements of module are grouped together, which are executed sequentially in order to perform a
task, it is called procedural cohesion.
 Communicational cohesion - When elements of module are grouped together, which are executed sequentially and work on
same data (information), it is called communicational cohesion.
 Sequential cohesion - When elements of module are grouped because the output of one element serves as input to another
and so on, it is called sequential cohesion.
 Functional cohesion - It is considered to be the highest degree of cohesion, and it is highly expected. Elements of module in
functional cohesion are grouped because they all contribute to a single well-defined function. It can also be reused.
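Functional cohesion, the highest degree above, can be sketched as a module whose every element contributes to one well-defined job. The statistics functions below are an assumed example.

```python
# Sketch of a functionally cohesive module: every element contributes to
# the single task of computing simple statistics. Example is assumed.

def mean(values):
    return sum(values) / len(values)

def variance(values):
    m = mean(values)
    return sum((v - m) ** 2 for v in values) / len(values)

print(mean([1, 2, 3]))               # 2.0
print(round(variance([1, 2, 3]), 3)) # 0.667
```

Nothing here logs, formats, or reads input; a module that mixed those concerns in would slide down toward logical or coincidental cohesion.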
Coupling
Coupling is a measure that defines the level of inter-dependability among modules of a program. It tells at what level the modules
interfere and interact with each other. The lower the coupling, the better the program.
There are five levels of coupling, namely -
 Content coupling - When a module can directly access or modify or refer to the content of another module, it is called
content level coupling.
 Common coupling- When multiple modules have read and write access to some global data, it is called common or global
coupling.
 Control coupling- Two modules are called control-coupled if one of them decides the function of the other module or
changes its flow of execution.
 Stamp coupling- When multiple modules share common data structure and work on different part of it, it is called stamp
coupling.
 Data coupling - When two modules interact with each other by passing data as parameters, it is called data coupling. If a module passes a data structure as a parameter, the receiving module should use all of its components.
Ideally, no coupling at all would be best; in practice, data coupling is the lowest and most desirable level.
Design Verification
The output of the software design process is design documentation, pseudo code, detailed logic diagrams, process diagrams, and a detailed description of all functional and non-functional requirements.
The next phase, which is the implementation of software, depends on all outputs mentioned above.
It then becomes necessary to verify the output before proceeding to the next phase. The earlier a mistake is detected the better, since otherwise it might not be detected until testing of the product. If the outputs of the design phase are in a formal notation, then their associated verification tools should be used; otherwise, a thorough design review can be used for verification and validation.
With a structured verification approach, reviewers can detect defects that might be caused by overlooking certain conditions. A good design review is important for good software design, accuracy, and quality.
9. Architectural Design
Architectural design represents the overall structure of the software. IEEE defines architectural design as "the process of defining a collection of hardware and software components and their interfaces to establish the framework for the development of a computer system." The software built for computer-based systems can exhibit one of many architectural styles.
Each style will describe a system category that consists of: 
 
 A set of components (e.g., a database, computational modules) that perform a function required by the system.
 A set of connectors that enable coordination, communication, and cooperation between the components.
 Constraints that define how components can be integrated to form the system.
 Semantic models that help the designer understand the overall properties of the system.
The use of architectural styles is to establish a structure for all the components of the system.  
Taxonomy of Architectural styles: 
 
1. Data centered architectures: 
 A data store will reside at the center of this architecture and is accessed frequently by the other components that update,
add, delete or modify the data present within the store.
 The figure illustrates a typical data-centered style. The client software accesses a central repository. In a variation of this approach, the repository is transformed into a blackboard that notifies client software when data of interest to a client changes.
 This data-centered architecture promotes integrability: existing components can be changed and new client components can be added to the architecture without concern for other clients.
 Data can be passed among clients using blackboard mechanism.
2. Data flow architectures: 
 This kind of architecture is used when input data is to be transformed into output data through a series of computational or manipulative components.
 The figure represents a pipe-and-filter architecture, which has a set of components called filters connected by pipes.
 Pipes are used to transmit data from one component to the next.
 Each filter works independently and is designed to take data input of a certain form and produce data output of a specified form for the next filter. The filters do not require any knowledge of the workings of neighboring filters.
 If the data flow degenerates into a single line of transforms, it is termed batch sequential. This structure accepts a batch of data and then applies a series of sequential components to transform it.
3. Call and Return architectures: Used to create a program that is easy to scale and modify. Many sub-styles exist within this category. Two of them are explained below.
 Remote procedure call architecture: The components of a main program or subprogram architecture are distributed among multiple computers on a network.
 Main program or Subprogram architectures: The main program structure decomposes into a number of subprograms or functions in a control hierarchy. The main program invokes a number of subprograms, which can in turn invoke other components.
4. Object oriented architecture: The components of the system encapsulate data and the operations that must be applied to manipulate the data. Coordination and communication between the components are established via message passing.
5. Layered architecture:
 A number of different layers are defined, with each layer performing a well-defined set of operations. Progressing inward, each layer's operations become closer to the machine instruction set.
 At the outer layer, components serve user interface operations; at the inner layers, components perform operating system interfacing (communication and coordination with the OS).
 Intermediate layers provide utility services and application software functions.
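The pipe-and-filter (data flow) style described above can be sketched in code, under the assumption that each filter is a simple generator function and the pipe is function composition:

```python
# Hypothetical pipe-and-filter sketch: each filter is an independent
# transformation over a stream; the "pipes" are the connections made
# by applying the filters in sequence.

def strip_blank(lines):
    # Filter 1: drop empty lines.
    return (line for line in lines if line.strip())

def to_upper(lines):
    # Filter 2: transform each line; knows nothing about its neighbors.
    return (line.upper() for line in lines)

def pipeline(lines, filters):
    # The pipe: feed the output of each filter into the next.
    for f in filters:
        lines = f(lines)
    return list(lines)

print(pipeline(["hello", "", "world"], [strip_blank, to_upper]))
# ['HELLO', 'WORLD']
```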
10. Functional Oriented Design


Function-oriented design is an approach to software design in which the model is decomposed into a set of interacting units or modules, where each unit or module has a clearly defined function. Thus, the system is designed from a functional viewpoint.
Design Notations
Design Notations are primarily meant to be used during the process of design and are used to represent design or design decisions. For
a function-oriented design, the design can be represented graphically or mathematically by the following:
Data Flow Diagram


Data-flow design is concerned with designing a series of functional transformations that convert system inputs into the required
outputs. The design is described as data-flow diagrams. These diagrams show how data flows through a system and how the output is
derived from the input through a series of functional transformations.
Data-flow diagrams are a useful and intuitive way of describing a system. They are generally understandable without specialized
training, notably if control information is excluded. They show end-to-end processing. That is the flow of processing from when data
enters the system to where it leaves the system can be traced.
Data-flow design is an integral part of several design methods, and most CASE tools support data-flow diagram creation. Different
ways may use different icons to represent data-flow diagram entities, but their meanings are similar.
The notation which is used is based on the following symbols:

As an example, consider a report generator that produces a report describing all of the named entities in a data-flow diagram. The user inputs the name of the design represented by the diagram. The report generator then finds all the names used in the data-flow diagram, looks each one up in a data dictionary, and retrieves information about it. This is then collated into a report which is output by the system.
Data Dictionaries
A data dictionary lists all data elements appearing in the DFD model of a system. The data items listed include all data flows and the contents of all data stores appearing in the DFDs of the model.
A data dictionary lists the purpose of all data items and defines all composite data elements in terms of their component data items. For example, a data dictionary entry may specify that the data item grossPay consists of the components regularPay and overtimePay.
                  grossPay = regularPay + overtimePay
For the smallest units of data elements, the data dictionary lists their name and their type.
A data dictionary plays a significant role in any software development process because of the following reasons:
 A Data dictionary provides a standard language for all relevant information for use by engineers working in a project. A
consistent vocabulary for data items is essential since, in large projects, different engineers of the project tend to use
different terms to refer to the same data, which unnecessarily causes confusion.
 The data dictionary provides the analyst with a means to determine the definition of various data structures in terms of
their component elements.
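A data dictionary such as the grossPay example above can be sketched as a simple lookup table; the structure below is an illustrative assumption, not a standard format:

```python
# Illustrative sketch: the grossPay entry held as a simple lookup table.
data_dictionary = {
    "grossPay":    {"composition": ["regularPay", "overtimePay"]},
    "regularPay":  {"type": "numeric"},   # smallest unit: name and type only
    "overtimePay": {"type": "numeric"},
}

def components(item):
    # Resolve a composite data item down to its elementary components.
    entry = data_dictionary[item]
    if "composition" not in entry:
        return [item]
    parts = []
    for part in entry["composition"]:
        parts.extend(components(part))
    return parts

print(components("grossPay"))  # ['regularPay', 'overtimePay']
```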
Structured Charts
It partitions a system into black boxes. A black box is a system whose functionality is known to the user without knowledge of its internal design.
Structured Chart is a graphical representation which shows:


o System partitions into modules
o Hierarchy of component modules
o The relation between processing modules
o Interaction between modules
o Information passed between modules
The following notations are used in structured chart:

Pseudo-code
Pseudo-code notation can be used in both the preliminary and detailed design phases. Using pseudo-code, the designer describes system characteristics in short, concise, English-language phrases that are structured by keywords such as If-Then-Else, While-Do, and End.
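For instance, a fragment of pseudo-code in the style described above might read (a hypothetical illustration, not taken from any particular design):

```
IF temperature > threshold THEN
    WHILE alarm is active DO
        notify the operator
    END-WHILE
ELSE
    log the normal reading
END-IF
```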
11. Object-Oriented Design


In the object-oriented design method, the system is viewed as a collection of objects (i.e., entities). State is distributed among the objects, and each object handles its own state data. For example, in a Library Automation Software, each library representative may be a separate object with its own data and functions that operate on that data. The functions defined for one object cannot refer to or change the data of other objects. Objects have their own internal data which represents their state. Similar objects form a class; in other words, each object is a member of some class. Classes may inherit features from a superclass.
The different terms related to object design are:

1. Objects: All entities involved in the solution design are known as objects. For example, person, bank, company, and user are considered objects. Every entity has some attributes associated with it and some methods that operate on those attributes.
2. Classes: A class is a generalized description of an object. An object is an instance of a class. A class defines all the attributes that an object can have and the methods that represent the functionality of the object.
3. Messages: Objects communicate by message passing. A message consists of the identity of the target object, the name of the requested operation, and any other information needed to perform the operation. Messages are often implemented as procedure or function calls.
4. Abstraction: In object-oriented design, complexity is handled using abstraction. Abstraction is the removal of the irrelevant and the amplification of the essentials.
5. Encapsulation: Encapsulation is also called an information hiding concept. The data and operations are linked to a single
unit. Encapsulation not only bundles essential information of an object together but also restricts access to the data and
methods from the outside world.
6. Inheritance: OOD allows similar classes to stack up in a hierarchical manner, where the lower classes or sub-classes can import, implement, and re-use allowed variables and functions from their immediate super classes. This property of OOD is called inheritance. It makes it easier to define specialized classes and to create generalized classes from specific ones.
7. Polymorphism: OOD languages provide a mechanism by which methods performing similar tasks but varying in arguments can be assigned the same name. This is known as polymorphism, and it allows a single interface to serve functions for different types. Depending on how the service is invoked, the respective portion of the code gets executed.
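A hedged sketch of these ideas, using an invented library example (the class and method names are assumptions for illustration):

```python
# Invented library example: encapsulation, inheritance, and polymorphism.

class LibraryMember:
    def __init__(self, name):
        self.name = name
        self._books = []        # encapsulated state, accessed only via methods

    def borrow(self, title):
        self._books.append(title)

    def max_books(self):        # message understood by every member
        return 2

class Faculty(LibraryMember):   # inheritance: Faculty is-a LibraryMember
    def max_books(self):        # polymorphism: same message, different result
        return 10

members = [LibraryMember("Ana"), Faculty("Raj")]
print([m.max_books() for m in members])  # [2, 10]
```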
12. User Interface Design


The user interface is the visual part of a computer application or operating system through which a user interacts with the computer or software. It determines how commands are given to the computer or the program and how information is displayed on the screen.
Types of User Interface
There are two main types of User Interface:
o Text-Based User Interface or Command Line Interface
o Graphical User Interface (GUI)
Text-Based User Interface: This method relies primarily on the keyboard. A typical example of this is UNIX.
Advantages
o More options and easier customization.
o Typically capable of more powerful tasks.
Disadvantages
o Relies heavily on recall rather than recognition.
o Navigation is often more difficult.
Graphical User Interface (GUI): GUI relies much more heavily on the mouse. A typical example of this type of interface is any version of the Windows operating system.
GUI Characteristics

Characteristics Descriptions

Windows Multiple windows allow different information to be displayed simultaneously on the user's screen.

Icons Icons represent different types of information. On some systems, icons represent files; on others, they represent processes.

Menus Commands are selected from a menu rather than typed in a command language.

Pointing A pointing device such as a mouse is used for selecting choices from a menu or indicating items of interest in a window.

Graphics Graphics elements can be mixed with text on the same display.

Advantages
o Less expert knowledge is required to use it.
o Easier to navigate; users can look through folders quickly in a guess-and-check manner.
o The user may switch quickly from one task to another and can interact with several different applications.
Disadvantages
o Typically fewer options.
o Usually less customizable; it is not easy to make one button serve many different variations.
UI Design Principles
Structure: The design should organize the user interface purposefully, in meaningful and useful ways based on clear, consistent models that are apparent and recognizable to users: putting related things together and separating unrelated things, differentiating dissimilar things and making similar things resemble one another. The structure principle is concerned with overall user interface architecture.
Simplicity: The design should make simple, common tasks easy, communicate clearly and directly in the user's language, and provide good shortcuts that are meaningfully related to longer procedures.
Visibility: The design should make all required options and materials for a given function visible without distracting the user with
extraneous or redundant data.
Feedback: The design should keep users informed of actions or interpretations, changes of state or condition, and errors or exceptions that are relevant and of interest to the user, through clear, concise, and unambiguous language familiar to users.
Tolerance: The design should be flexible and tolerant, decreasing the cost of errors and misuse by allowing undoing and redoing, while also preventing errors wherever possible by tolerating varied inputs and sequences and by interpreting all reasonable actions.
13. Quality Assurance


What is Quality?
Quality is extremely hard to define; simply stated, it means "fit for use or purpose." It is all about meeting the needs and expectations of customers with respect to the functionality, design, reliability, durability, and price of the product.
What is Assurance?
Assurance is a positive declaration about a product or service which gives confidence: a certainty that the product or service will work well. It provides a guarantee that the product will work without any problems, as per the expectations or requirements.
Quality Assurance in Software Testing
Quality Assurance in Software Testing is defined as a procedure to ensure the quality of software products or services provided to
the customers by an organization. Quality assurance focuses on improving the software development process and making it efficient
and effective as per the quality standards defined for software products. Quality Assurance is popularly known as QA Testing.
How to do Quality Assurance: Complete Process
Quality Assurance methodology has a defined cycle called PDCA cycle or Deming cycle. The phases of this cycle are:
Plan → Do → Check → Act
These above steps are repeated to ensure that processes followed in the organization are evaluated and improved on a periodic basis.
Let’s look into the above QA Process steps in detail –
 Plan – The organization should plan and establish process-related objectives and determine the processes required to deliver a high-quality end product.
 Do – Develop and test the processes and also "do" changes in the processes.
 Check – Monitor the processes, modify them, and check whether they meet the predetermined objectives.
 Act – A Quality Assurance tester should implement the actions necessary to achieve improvements in the processes.
An organization must use Quality Assurance to ensure that the product is designed and implemented with correct procedures. This helps reduce problems and errors in the final product.
What is Quality Control?
Quality control, popularly abbreviated as QC, is a Software Engineering process used to ensure quality in a product or a service. It does not deal with the processes used to create a product; rather, it examines the quality of the "end products" and the final outcome.
The main aim of Quality control is to check whether the products meet the specifications and requirements of the customer. If an issue
or problem is identified, it needs to be fixed before delivery to the customer.
QC also evaluates people on their quality-level skill sets and imparts training and certifications. This evaluation is required for service-based organizations and helps provide "perfect" service to the customers.
Difference between Quality Control and Quality Assurance?
Sometimes, QC is confused with QA. Quality control examines the product or service and checks the result; Quality Assurance in Software Engineering examines the processes and makes changes to the processes which led to the end product.
Examples of QC and QA activities are as follows:
Quality Control Activities | Quality Assurance Activities
Walkthrough | Quality Audit
Testing | Defining Process
Inspection | Tool Identification and selection
Checkpoint review | Training of Quality Standards and Processes

The above activities are concerned with Quality Assurance and Control mechanisms for any product, not necessarily software. With respect to software:
 QA becomes SQA ( Software Quality Assurance)
 QC becomes Software Testing.
Differences between SQA and Software Testing
The following table explains the differences between SQA and Software Testing:

SQA | Software Testing
Software Quality Assurance is about the engineering process that ensures quality. | Software Testing is to test a product for problems before the product goes live.
Involves activities related to the implementation of processes, procedures, and standards. Example: Audits, Training. | Involves activities concerning verification of the product. Example: Review, Testing.
Process focused. | Product focused.
Preventive technique. | Corrective technique.
Proactive measure. | Reactive measure.
The scope of SQA applies to all products that will be created by the organization. | The scope of Software Testing applies to the particular product being tested.

14. Processes and Configuration Management


Software Configuration Management
When we develop software, the product (software) undergoes many changes during its maintenance phase; we need to handle these changes effectively.
Several individuals (programmers) work together to achieve a common goal. These individuals produce several work products (SC Items), e.g., intermediate versions of modules, test data used during debugging, and parts of the final product.
The elements that comprise all information produced as part of the software process are collectively called a software configuration.
As software development progresses, the number of Software Configuration Items (SCIs) grows rapidly.
These are handled and controlled by SCM; this is where we require software configuration management.
A configuration of the product refers not only to the product's constituents but also to particular versions of the components.
Therefore, SCM is the discipline which:
o Identifies changes
o Monitors and controls changes
o Ensures the proper implementation of changes made to an item
o Audits and reports on the changes made
Configuration Management (CM) is a technique of identifying, organizing, and controlling modifications to software being built by a programming team.
The objective is to maximize productivity by minimizing mistakes (errors).
CM is essential due to the inventory management, library management, and update management of the items needed for the project.
Why do we need Configuration Management?
Multiple people work on software which is constantly being updated. Multiple versions, branches, and authors may be involved in a software project, and the team may be geographically distributed and working concurrently. Changes in user requirements, policy, budget, and schedule need to be accommodated.
Importance of SCM
It is practical in controlling and managing access to various SCIs, e.g., by preventing two members of a team from checking out the same component for modification at the same time.
It provides the tool to ensure that changes are being properly implemented.
It has the capability of describing and storing the various constituents of the software.
SCM keeps a system in a consistent state by automatically producing derived versions upon modification of a component.
SCM Process
The SCM process uses tools to ensure that necessary changes have been implemented adequately in the appropriate components. It defines a number of tasks:
o Identification of objects in the software configuration
o Version Control
o Change Control
o Configuration Audit
o Status Reporting
Identification
Basic Object: a unit of text created by a software engineer during analysis, design, coding, or testing.
Aggregate Object: a collection of basic objects and other aggregate objects; a Design Specification is an aggregate object.
Each object has a set of distinct characteristics that identify it uniquely: a name, a description, a list of resources, and a "realization."
The interrelationships between configuration objects can be described with a Module Interconnection Language (MIL).
Version Control
Version Control combines procedures and tools to handle the different versions of configuration objects that are generated during the software process.
Clemm defines version control in the context of SCM: "Configuration management allows a user to specify alternative configurations of the software system through the selection of appropriate versions. This is supported by associating attributes with each software version, and then allowing a configuration to be specified [and constructed] by describing the set of desired attributes."
Change Control
James Bach describes change control in the context of SCM: "Change control is vital. But the forces that make it essential also make it annoying."
We worry about change because a small confusion in the code can create a big failure in the product. But it can also fix a significant
failure or enable incredible new capabilities.
We worry about change because a single rogue developer could sink the project; yet brilliant ideas originate in the minds of those rogues, and a burdensome change control process could effectively discourage them from doing creative work.
A change request is submitted and evaluated to assess technical merit, potential side effects, the overall impact on other configuration objects and system functions, and the projected cost of the change.
The results of the evaluations are presented as a change report, which is used by a change control authority (CCA) - a person or a
group who makes a final decision on the status and priority of the change.
The "check-in" and "check-out" process implements two necessary elements of change control: access control and synchronization control.
Access Control governs which software engineers have the authority to access and modify a particular configuration object.
Synchronization Control helps to ensure that parallel changes, performed by two different people, don't overwrite one another.
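A simplified sketch of synchronization control via check-out locks (the class and method names are invented for illustration):

```python
# Hypothetical sketch: a check-out lock prevents two engineers from
# modifying the same configuration object in parallel.

class ConfigItem:
    def __init__(self, name):
        self.name = name
        self.locked_by = None

    def check_out(self, engineer):
        if self.locked_by is not None:
            return False            # object already checked out
        self.locked_by = engineer
        return True

    def check_in(self, engineer):
        if self.locked_by == engineer:
            self.locked_by = None   # release the lock

item = ConfigItem("payroll_module")
print(item.check_out("alice"))  # True: alice acquires the lock
print(item.check_out("bob"))    # False: bob must wait for check-in
```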
Configuration Audit
SCM audits verify that the software product satisfies the baseline requirements and ensure that what is built is what is delivered.
SCM audits also ensure that traceability is maintained between all CIs and that all work requests are associated with one or more CI modifications.
SCM audits are the "watchdogs" that ensure that the integrity of the project's scope is preserved.
Status Reporting
Configuration status reporting (sometimes also called status accounting) provides accurate status and current configuration data to developers, testers, end users, customers, and stakeholders through admin guides, user guides, FAQs, release notes, installation guides, configuration guides, etc.
Testing
Software Testing is the evaluation of software against requirements gathered from users and system specifications. Testing is conducted at the phase level in the software development life cycle or at the module level in program code. Software testing comprises validation and verification.
Software Validation
Validation is the process of examining whether or not the software satisfies the user requirements. It is carried out at the end of the SDLC. If the software matches the requirements for which it was made, it is validated.
 Validation ensures the product under development is as per the user requirements.
 Validation answers the question: "Are we developing the product which provides all that the user needs from this software?"
 Validation emphasizes on user requirements.
Software Verification
Verification is the process of confirming if the software is meeting the business requirements, and is developed adhering to the proper
specifications and methodologies.
 Verification ensures the product being developed is according to design specifications.
 Verification answers the question: "Are we developing this product by firmly following all design specifications?"
 Verifications concentrates on the design and system specifications.
Targets of the tests are -
 Errors - These are actual coding mistakes made by developers. In addition, a difference between the output of the software and the desired output is considered an error.
 Fault - When an error exists, a fault occurs. A fault, also known as a bug, is the result of an error and can cause the system to fail.
 Failure - Failure is the inability of the system to perform a desired task. Failure occurs when a fault exists in the system.
Manual Vs Automated Testing
Testing can either be done manually or using an automated testing tool:
 Manual - This testing is performed without the help of automated testing tools. The software tester prepares test cases for different sections and levels of the code, executes the tests, and reports the results to the manager.
Manual testing is time- and resource-consuming. The tester needs to confirm whether or not the right test cases are used. A major portion of testing involves manual testing.
 Automated - This testing is a testing procedure done with the aid of automated testing tools. The limitations of manual testing can be overcome using automated test tools.
For example, a test may need to check if a webpage can be opened in Internet Explorer. This can easily be done with manual testing. But to check if the web server can take the load of 1 million users, it is quite impossible to test manually.
There are software and hardware tools which help testers conduct load testing, stress testing, and regression testing.
Testing Approaches
Tests can be conducted based on two approaches –
 Functionality testing
 Implementation testing
When functionality is tested without taking the actual implementation into account, it is known as black-box testing. The other side is known as white-box testing, where not only the functionality is tested but the way it is implemented is also analyzed.
Exhaustive testing, in which every single possible value in the range of the input and output values is tested, would be the ideal method for perfect testing. However, it is not possible to test each and every value in a real-world scenario if the range of values is large.
Black-box testing
It is carried out to test the functionality of the program. It is also called "behavioral" testing. The tester in this case has a set of input values and the respective desired results. On providing input, if the output matches the desired results, the program is considered "ok", and problematic otherwise.

In this testing method, the design and structure of the code are not known to the tester, and testing engineers and end users conduct
this test on the software.
Black-box testing techniques:
 Equivalence class - The input is divided into similar classes. If one element of a class passes the test, it is assumed that the whole class passes.
 Boundary values - The input is divided into higher and lower end values. If these values pass the test, it is assumed that all
values in between may pass too.
 Cause-effect graphing - In both previous methods, only one input value at a time is tested. Cause (input) – Effect (output) is
a testing technique where combinations of input values are tested in a systematic way.
 Pair-wise Testing - The behavior of software depends on multiple parameters. In pairwise testing, the multiple parameters
are tested pair-wise for their different values.
 State-based testing - The system changes state on provision of input. These systems are tested based on their states and
input.
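As an illustration of the first two techniques, a hypothetical grading function (the function and its pass mark of 40 are invented) can be tested black-box style through its inputs and outputs alone:

```python
# Hypothetical black-box example: a grading function tested purely
# through its inputs and outputs, without looking at its code.

def is_passing(score):
    return 40 <= score <= 100

# Equivalence classes: one representative value per class.
assert is_passing(70) is True       # "pass" class
assert is_passing(20) is False      # "fail" class
assert is_passing(150) is False     # invalid-input class

# Boundary values: the edges of each class.
assert is_passing(39) is False
assert is_passing(40) is True
assert is_passing(100) is True
assert is_passing(101) is False
print("all black-box cases pass")
```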
White-box testing
It is conducted to test the program and its implementation, in order to improve code efficiency or structure. It is also known as "structural" testing.

In this testing method, the design and structure of the code are known to the tester. Programmers of the code conduct this test on the
code.
Below are some white-box testing techniques:
 Control-flow testing - The purpose of control-flow testing is to set up test cases which cover all statements and branch conditions. The branch conditions are tested for both true and false, so that all statements can be covered.
 Data-flow testing - This testing technique emphasizes covering all the data variables included in the program. It tests where the variables were declared and defined and where they were used or changed.
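A small control-flow testing sketch (the function is invented): inputs are chosen so that every branch condition is exercised as both true and false:

```python
# Control-flow testing sketch: cover every branch both ways.

def classify(n):
    if n < 0:            # branch 1
        return "negative"
    if n % 2 == 0:       # branch 2
        return "even"
    return "odd"

assert classify(-3) == "negative"   # branch 1 true
assert classify(4) == "even"        # branch 1 false, branch 2 true
assert classify(7) == "odd"         # branch 1 false, branch 2 false
print("all branches covered")
```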
Testing Levels
Testing itself may be performed at various levels of the SDLC. The testing process runs parallel to software development: before jumping to the next stage, a stage is tested, validated, and verified.
Testing separately at each level is done to make sure that there are no hidden bugs or issues left in the software. Software is tested at various levels -
Unit Testing
While coding, the programmer performs some tests on that unit of the program to know if it is error-free. Testing is performed under the white-box testing approach. Unit testing helps developers verify that individual units of the program work as per requirements and are error-free.
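A minimal unit-testing sketch using Python's standard unittest module; the discount() function under test is a made-up example:

```python
# Unit test of one isolated function with the standard unittest module.
import unittest

def discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return price * (100 - percent) / 100

class TestDiscount(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(discount(200, 10), 180)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(200, 150)

# Run the unit tests without exiting the interpreter.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDiscount)
unittest.TextTestRunner(verbosity=0).run(suite)
```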
Integration Testing
Even if the units of software work fine individually, there is a need to find out whether the units, when integrated together, would also work without errors; for example, argument passing and data updating.
System Testing
The software is compiled as product and then it is tested as a whole. This can be accomplished using one or more of the following
tests:
 Functionality testing - Tests all functionalities of the software against the requirement.
 Performance testing - This test proves how efficient the software is. It tests the effectiveness and average time taken by the software to do a desired task. Performance testing is done by means of load testing and stress testing, where the software is put under high user and data load under various environment conditions.
 Security & Portability - These tests are done when the software is meant to work on various platforms and accessed by
number of persons.
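A rough sketch of how a performance criterion might be checked (the workload and the 10 ms budget are assumptions; real performance testing would use dedicated load-testing tools):

```python
import time

def average_response_time(task, runs=1000):
    """Crude performance measurement: average wall-clock time of `task`
    over `runs` calls. A sketch, not a substitute for real load tools."""
    start = time.perf_counter()
    for _ in range(runs):
        task()
    return (time.perf_counter() - start) / runs

# Hypothetical unit of work whose average time must stay under a budget.
def work():
    sum(range(1000))

avg = average_response_time(work)
assert avg < 0.01  # performance criterion: under 10 ms on average
```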
Acceptance Testing
When the software is ready to be handed over to the customer, it has to go through the last phase of testing, where it is tested for
user interaction and response. This is important because even if the software matches all user requirements, it may be rejected if the
user does not like the way it appears or works.
 Alpha testing - The team of developers themselves perform alpha testing by using the system as if it were being used in a
working environment. They try to find out how a user would react to actions in the software and how the system should
respond to inputs.
 Beta testing - After the software is tested internally, it is handed over to users to use in their production environment, for
testing purposes only. This is not yet the delivered product. Developers expect that users at this stage will uncover minute
problems that were overlooked earlier.
Regression Testing
Whenever a software product is updated with new code, features or functionality, it is tested thoroughly to detect any
negative impact of the added code. This is known as regression testing.
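The practice can be sketched as keeping earlier test cases and rerunning them after every change (the `slugify` function and its history are hypothetical):

```python
# Regression testing sketch: after a change, the full suite of earlier
# test cases is rerun so the new code cannot silently break old behaviour.

def slugify(title):
    """Version 2 (hypothetical): lowercases, replaces spaces with hyphens,
    and now also collapses repeated hyphens."""
    slug = title.lower().replace(" ", "-")
    while "--" in slug:
        slug = slug.replace("--", "-")
    return slug

regression_suite = [
    ("Hello World", "hello-world"),  # case that already passed in version 1
    ("A  B", "a-b"),                 # new case added for the new feature
]
for title, expected in regression_suite:
    assert slugify(title) == expected
```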
Testing Documentation
Testing documents are prepared at different stages -
Before Testing
Testing starts with test case generation. The following documents are needed for reference -
 SRS document - Functional Requirements document
 Test Policy document - This describes how far testing should take place before releasing the product.
 Test Strategy document - This details the make-up of the test team, the responsibility matrix and the rights/responsibilities of
the test manager and test engineers.
 Traceability Matrix document - This is an SDLC document related to the requirement-gathering process. As new
requirements come in, they are added to this matrix. These matrices help testers know the source of a requirement;
requirements can be traced forward and backward.
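A traceability matrix can be sketched as a simple mapping from requirements to the test cases that cover them (all IDs here are hypothetical), supporting both forward and backward tracing:

```python
# Traceability matrix sketch: requirement -> covering test cases.
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # new requirement, not yet covered by any test
}

# Forward trace: which requirements still lack a test case?
uncovered = [req for req, tests in traceability.items() if not tests]
assert uncovered == ["REQ-003"]

# Backward trace: which requirement does test case TC-103 come from?
source = [req for req, tests in traceability.items() if "TC-103" in tests]
assert source == ["REQ-002"]
```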
While Being Tested
The following documents may be required while testing is underway:
 Test Case document - This document contains the list of tests required to be conducted. It includes the unit test plan,
integration test plan, system test plan and acceptance test plan.
 Test description - This document is a detailed description of all test cases and the procedures to execute them.
 Test case report - This document contains the test case report produced as a result of the test.
 Test logs - This document contains the test logs for every test case report.
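One possible shape for a test case report record with attached logs, sketched in Python (the fields are assumptions for illustration, not a standard format):

```python
from dataclasses import dataclass, field

@dataclass
class TestCaseReport:
    """Hypothetical test case report: outcome of one executed test case,
    plus the log entries recorded while it ran."""
    case_id: str
    description: str
    passed: bool
    logs: list = field(default_factory=list)

report = TestCaseReport("TC-101", "login with valid credentials", True)
report.logs.append("step 1: opened login page")
report.logs.append("step 2: submitted valid credentials")
assert report.passed and len(report.logs) == 2
```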
After Testing
The following documents may be generated after testing:
 Test summary - The test summary is a collective analysis of all test reports and logs. It summarizes and concludes whether
the software is ready to be launched. If it is, the software is released under a version control system.
Testing vs. Quality Control, Quality Assurance and Audit
We need to understand that software testing is different from software quality assurance, software quality control and software
auditing.
 Software quality assurance - This is a means of monitoring the software development process to ensure that all measures are
taken as per the standards of the organization. This monitoring is done to make sure that proper software development
methods were followed.
 Software quality control - This is a system for maintaining the quality of the software product. It may cover functional and
non-functional aspects of the software product, which enhance the goodwill of the organization. This system makes sure that
the customer receives a quality product that meets their requirements and that the product is certified as ‘fit for use’.
 Software audit - This is a review of the procedures used by the organization to develop the software. A team of auditors,
independent of the development team, examines the software process, procedures, requirements and other aspects of the
SDLC. The purpose of a software audit is to check that the software and its development process both conform to standards,
rules and regulations.
