
UNIT 3: Coding and Testing

 Coding
Coding is the process of transforming the design of a system into a computer
language format. The coding phase of software development is concerned with
translating the design specification into source code. It is necessary to write
source code and internal documentation so that conformance of the code to its
specification can be easily verified.
Coding is done by coders or programmers, who are often people independent of
the designers. The goal is not merely to reduce the effort and cost of the coding
phase, but to cut the cost of later stages: the cost of testing and maintenance
can be significantly reduced with efficient coding.
 Goals of Coding:
1. To translate the design of the system into a computer language format:
Coding transforms the design of a system into a computer language format which
can be executed by a computer and which performs the tasks specified during the
design phase.
2. To reduce the cost of later phases:
The cost of testing and maintenance can be significantly reduced with efficient
coding.
3. Making the program more readable:
The program should be easy to read and understand. Keeping readability and
understandability as clear objectives of the coding activity can itself help in
producing more maintainable software.
For implementing our design into code, we require a high-level programming
language. A programming language should have the following characteristics:

 Characteristics of Programming Language


Following are the characteristics of Programming Language:
1. Readability: A good high-level language allows programs to be written in a
style that resembles a plain-English description of the underlying functions, so
the coding may be done in an essentially self-documenting way.
2. Portability: High-level languages are virtually machine-independent, so it
should be easy to develop portable software in them.
3. Generality: Most high-level languages allow the writing of a vast collection
of programs, relieving the programmer of the need to become an expert in many
diverse languages.
4. Brevity: The language should allow algorithms to be implemented with a small
amount of code. Programs written in high-level languages are often significantly
shorter than their low-level equivalents.
5. Error checking: A programmer is likely to make many errors during the
development of a computer program. Many high-level languages provide extensive
error checking, both at compile time and at run time.
6. Cost: The ultimate cost of a programming language is a function of many of
its characteristics.
7. Quick translation: It should permit quick translation.
8. Efficiency: It should authorize the creation of an efficient object code.
9. Modularity: It is desirable that programs can be developed in the language
as several separately compiled modules, with the appropriate structure for
ensuring self-consistency among these modules.
10. Widely available: Language should be widely available, and it should be
feasible to provide translators for all the major machines and all the primary
operating systems.
A coding standard lists several rules to be followed during coding, such as the way
variables are to be named, the way the code is to be laid out, error return conventions,
etc.

 Coding Standards
General coding standards refer to how the developer writes code, so here we will
discuss some essential standards regardless of the programming language being used.
The following are some representative coding standards:
 Indentation: Proper and consistent indentation is essential in producing easy to
read and maintainable programs.
Indentation should be used to:
o Emphasize the body of a control structure such as a loop or a select
statement.
o Emphasize the body of a conditional statement
o Emphasize a new scope block
 Inline comments: Inline comments explaining the functioning of a subroutine, or
key aspects of the algorithm, shall be used frequently.
 Rules for limiting the use of globals: These rules specify what types of data
can be declared global and what cannot.
 Structured Programming: Structured (or modular) programming methods shall
be used. "GOTO" statements shall not be used as they lead to "spaghetti" code,
which is hard to read and maintain, except as outlined in the FORTRAN
Standards and Guidelines.
 Naming conventions for global variables, local variables, and constant
identifiers: A possible naming convention can be that global variable names
always begin with a capital letter, local variable names are made of small letters,
and constant names are always capital letters.
 Error return conventions and exception handling system: The way error
conditions are reported by different functions in a program should be standard
within an organization. For example, all functions, on encountering an error
condition, should consistently return either a 0 or a 1.

 Coding Guidelines
General coding guidelines provide the programmer with a set of best practices
which can be used to make programs easier to read and maintain. Most of the
examples use the C language syntax, but the guidelines can be applied to all
languages.
The following are some representative coding guidelines recommended by many
software development organizations.

1. Line Length: It is considered a good practice to keep the length of source code
lines at or below 80 characters. Lines longer than this may not be visible properly on
some terminals and tools. Some printers will truncate lines longer than 80 columns.
2. Spacing: The appropriate use of spaces within a line of code can improve
readability.
3. The code should be well-documented: As a rule of thumb, there should be at
least one comment line, on average, for every three source lines.
4. The length of any function should not exceed 10 source lines: A very
lengthy function is generally very difficult to understand, as it probably carries
out many different tasks. For the same reason, lengthy functions are likely to
have a disproportionately larger number of bugs.
5. Do not use goto statements: Use of goto statements makes a program
unstructured and very tough to understand.
6. Inline Comments: Inline comments promote readability.
7. Error Messages: Error handling is an essential aspect of computer
programming. This not only includes adding the necessary logic to test for and
handle errors but also involves making error messages meaningful.
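A short Python sketch (the function is hypothetical) applying several of these guidelines at once: lines stay under 80 characters, the function is short, comments appear every few source lines, and the error message is meaningful:

```python
def average(values):
    """Return the arithmetic mean of a non-empty list of numbers."""
    # Guard against the error condition before computing.
    if not values:
        # Meaningful error message: it says what failed and why.
        raise ValueError("average: cannot compute the mean of an empty list")
    # The happy path fits in a single short, readable line.
    return sum(values) / len(values)
```

For example, `average([2, 4, 6])` returns 4.0, while `average([])` raises a `ValueError` whose message pinpoints the problem.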

 Code Review:
 What is Code Review?
Code review is a systematic and collaborative process in software development
where one or more developers examine and evaluate another developer’s code to
identify any issues, provide any feedback, and ensure that the code meets quality
standards. The primary goal is to improve the maintainability, quality, and reliability
of the codebase resulting in the overall success of a project while also promoting
knowledge sharing and learning among team members.

 Why is Code Review important?


Code review is crucial in software development due to the following aspects:

 Bug Detection and Prevention: Code review helps detect and address errors, bugs,
and issues in the code before they propagate to production. This early detection
significantly helps in reducing the cost and effort required to fix problems later in the
development process or after deployment.
 Improved Quality of Code: Code review encourages writing of clean,
maintainable, and efficient code. Reviewers can suggest enhancements, recommend
best practices, and ensure that the code aligns with the established coding standards
and guidelines.
 Consistency: Code review enforces consistency in coding style and practices across
the team which makes the codebase easier to maintain and understand.
 Knowledge Sharing and Learning: Code reviews offer opportunities for knowledge
sharing among team members to learn from one another. Discussions during these
reviews can help spread domain knowledge and best practices.
 Code Ownership: Code reviewers take partial ownership of the code they review
thereby encouraging collective responsibility for the software’s quality.
 Collaboration: Reviewers can collaborate during the review process, discussing
design decisions, trade-offs, and alternative solutions which can lead to improved
design decisions and more robust solutions.
 Documentation: Code reviews can also serve as a form of documentation. Review
comments and discussions capture insights and decisions that may not be noticeable
from the code itself.
 Quality Assurance: Code review is a critical part of quality assurance processes. By
thoroughly analyzing the code changes, teams can ensure that the software meets
the functional requirements and user expectations.
 Continuous Improvement: Teams can use code review as a tool for continuous
improvement. By analyzing patterns in code review feedback, teams can identify
recurring issues and address them through training or process improvements.

 Types of Code Review


Code reviews can take various forms depending on the team’s preferences. Some
common types of code review are as follows:
1. Pull Request (PR) Reviews: In Git-based version control systems like GitHub,
etc., developers often create pull or merge requests to propose changes to the
codebase. These pull requests are reviewed by the team members before the changes
are merged.
2. Pair Programming: In pair programming, two developers work together at the
same computer, one writing code while the other reviews it in real time. This form
of code review is highly interactive.
3. Over-the-Shoulder Reviews: A developer may ask a team member to review their
code by physically sitting together and going through the code on the computer screen.
4. Tool-Assisted Reviews: Various tools and platforms are available to facilitate code
reviews, such as GitHub, GitLab, Bitbucket, and code review-specific tools like Crucible
and Review Board.
5. Email-Based Review: In email-based reviews, code changes are sent via email,
and the reviewers provide feedback and comments in response. The discussion occurs
over email threads.
6. Checklist Review: In a checklist review, a checklist is used to evaluate the code.
Reviewers go through the list and check off items as they are reviewed.
7. Ad Hoc Review: Ad hoc reviews are informal and spontaneous. Developers may
spontaneously ask team members to take a quick look at their code or discuss changes
without a formal process.
8. Formal Inspection: Formal inspections are a structured form of code review that
follows a predefined process. They often involve a dedicated inspection team and
detailed documentation.

 How to perform a Code Review?

Performing code review involves a structured and collaborative process. The following
steps are involved in reviewing code:
1. Preparation: The developer who has completed a piece of code or feature initiates
the code review by creating a review request or notifying the team.
2. Reviewer Selection: One or more team members are assigned as reviewers.
Reviewers must have the relevant expertise to assess the code properly.
3. Code Review Environment or Tools: Code reviews are often facilitated using
specialized tools or integrated development environments (IDEs) that allow reviewers to
view the code changes and leave feedback.
4. Code Review Checklist: Reviewers may refer to a code review checklist or coding
standards to ensure that the code follows established guidelines.
5. Code Inspection: Reviewers examine the code thoroughly, looking for issues such
as syntax errors, logic errors, performance bottlenecks, security vulnerabilities,
and coding-standard violations, and also assess the code's overall design,
scalability, and maintainability.
6. Feedback and Comments: Reviewers provide comments within the code review
tool, explaining any issues they find. These comments should be clear, constructive, and
specific, making it easier for the author to understand the feedback.
7. Discussion: The author and reviewers engage in a discussion about the code
changes. This dialogue can involve clarifications, explanations, and suggestions for
improvement.
8. Revisions: The author makes necessary changes based on the feedback received
and continues to engage in discussions until all concerns are addressed. This may
involve multiple review iterations.
9. Approval: Once reviewers are satisfied with the code changes and all concerns have
been resolved, they approve the code for integration.
10. Integration: The approved code changes are integrated into the main codebase,
typically using version control systems.
11. Follow-Up: After integration, it’s crucial to monitor the code in the production
environment to ensure that the changes work as expected and do not introduce new
issues.
12. Documentation and Closure: The code review process is documented for future
reference, and the review is officially closed. This documentation may include the review
comments, decisions made, and any action items.

 Advantages and Limitations of Code Review


Code review has its own advantages and disadvantages. Understanding these can help
teams optimize their code review practices and make better decisions about when and
how to conduct reviews. Here are some of the advantages and limitations of code
review:

 Advantages of Code Review:

 Enhanced Code Quality: Code review helps identify and correct bugs early in the
development process, resulting in higher code quality.
 Knowledge Sharing: Code reviews facilitate knowledge sharing among team members.
 Consistency: Code reviews enforce coding standards and guidelines, ensuring that the
codebase is consistent and easier to maintain.
 Security Improvements: Code review can help identify and mitigate security
vulnerabilities, improving the overall security of the software.
 Team Collaboration: Code reviews encourage collaboration and communication among
team members. They provide a structured forum for discussing code changes and
making collective decisions.
 Ownership: Code ownership becomes collective, reducing dependence on specific
individuals, since reviewers take partial ownership of the code being reviewed.
 Documentation: Code reviews often involve discussions and comments, which can
serve as a form of documentation.
 Code Reusability: By examining code thoroughly, developers can identify opportunities
for code reuse, leading to more efficient and maintainable software development.

 Limitations of Code Review:

 Time-Consuming: Code review can be time-consuming, especially for large or
complex code changes. This may lead to delays in the development process.
 Subjectivity: Code review feedback can be subjective, and different reviewers may
have different opinions on the code style and quality.
 Overhead: Excessive code reviews, especially for trivial changes, can create
unnecessary overhead and slow down development.
 Reviewer Bias: Reviewers may have biases or blind spots that can influence their
feedback. They may focus on certain aspects of the code while overlooking others.
 Communication Challenges: Reviewers and authors may face challenges in effectively
communicating feedback and understanding each other’s perspectives.
 Skill Level: The effectiveness of code reviews can vary depending on the skill level and
experience of the reviewers. Inexperienced reviewers may miss important issues, while
highly experienced ones may be overly critical.
 False Positives: Reviewers may identify issues that are not actually problems, leading
to unnecessary discussions and changes.
 Mental Fatigue: Reviewers can experience mental fatigue, especially during lengthy or
frequent code reviews, which can reduce their ability to provide thorough feedback.
 5 Components of Coding:
The fundamental coding concepts include terminology and a set of basic principles
that programmers follow when writing code, so that the code is modular, efficient,
and simple to understand. Some basic coding concepts are variable declaration,
control structures, data structures, syntax, and tools. These help in building
better code.
1. Variable Declaration:
Variables are basic building blocks of any programming language; they are also
known as containers, because information can be stored in them. A variable is
declared to instruct the operating system to reserve a piece of memory under that
variable name. Variable names are made up of letters, digits, and underscores.
Variables store the standard datatypes such as numbers, strings, lists, tuples,
and dictionaries.
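A minimal Python sketch of variable declaration (all names and values are purely illustrative): each assignment asks the interpreter to reserve storage under that name.

```python
count_1 = 42                          # number; name uses letters and a digit
user_name = "Asha"                    # string; name uses an underscore
scores = [88, 92, 75]                 # list
point = (3, 4)                        # tuple
capitals = {"India": "New Delhi"}     # dictionary
```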
2. Data Structures
Data structures are important in programming languages. They help you organize,
manage, and store data quickly and efficiently. Data structures also include the
operations that can be applied to that data. The programmer can use these data
structures to complete tasks and run applications.
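As a hedged illustration of "data plus the operations on it", the Python list below is used as a stack (last in, first out) through its append/pop operations; the task names are hypothetical.

```python
stack = []              # the stored data starts empty
stack.append("task1")   # push: store an item
stack.append("task2")   # push: store another item
top = stack.pop()       # pop: retrieve the most recently stored item
```

After these operations, `top` holds `"task2"` and only `"task1"` remains on the stack.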
3. Control Structures
A control structure determines the flow of control in a program: where the program
starts, where it ends, and where logic is applied. Control structures make it much
easier to develop algorithms and write programs. There are three basic control
structures: sequential, selection, and iteration.
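The three basic structures can be sketched together in one short Python fragment (the values are illustrative): statements run in sequence, a selection chooses between paths, and an iteration repeats a block.

```python
total = 0                      # sequential: statements execute in order
numbers = [3, -1, 4, -2]

for n in numbers:              # iteration: repeat the body for every element
    if n > 0:                  # selection: take one of two possible paths
        total += n             # only positive values are accumulated
```

Here the positive values 3 and 4 are summed, so `total` ends up as 7.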
4. Syntax
While writing a program, the programmer must follow the set of rules and
instructions laid down by the programming language, known as its syntax. The
computer understands this syntax and can check whether the code is correct or
not; code that violates the syntax will not compile or run.
5. Tools:
In the physical world, tools allow workers to perform tasks that would otherwise
be extremely difficult (think of how a hammer helps drive a nail into a piece of wood and
what this job would be like without tools). Similarly, a tool in computer programming is a
piece of software that helps programmers write code much faster.
For example, one of the most important tools for computer programmers is an
Integrated Development Environment (IDE). An IDE can check the syntax of code for
errors, organize files, autocomplete commonly used code, and help you easily navigate
through your code. Tools are the final crucial element to code, as they streamline
processes and ensure accuracy.

 Good Coding Practices:


In the software engineering community, standardized coding conventions help keep the
code relevant and useful for clients, future developers, and the coders themselves. Any
professional programmer will tell you that the majority of their time is spent
reading code, rather than writing it. You are more likely to work on existing code than to
write something from scratch. So it’s important that your code is easy to understand for
other engineers.
Now that we know what best practices are, let’s delve into 10 techniques that will help
you write production-level code and ultimately make you a better developer.

1. Enhance Code Readability


We can’t emphasize it enough: try to always write code that can be easily
understood by others. Writing highly optimized mathematical routines, or creating
complex libraries, is relatively easy. Writing 200 lines of code that can be instantly
understood by another software engineer is more of a challenge.
It may seem like extra effort at the time, but this additional work will pay
dividends in the future. It will make your life so much easier when returning to
update your code. In addition, the debugging process should be much smoother
for yourself, or for other engineers who need to edit your work.
Professionally written code is clean and modular. It is easily readable, as well
as logically structured into modules and functions. Using modules makes your code
more efficient, reusable, and organized.
Always remember, future-proofing your code in this way should always be
prioritized over finishing quickly. You may think you are saving time by hacking
away, but in fact, you are creating hours of extra work down the line.

2. Ensure Your Code Works Efficiently


In order to optimize your code, you need to make sure it executes quickly. In the
world of software engineering, writing code quickly and correctly is pointless if the
end product is slow and unstable. This is especially true in large, complex
programs. Even the smallest amount of lag can add up considerably, rendering your
application - and all of your engineering work - useless.
Equally important is minimizing the memory footprint of your code. In terms of
performance, working with many functions on a large amount of data can drastically
reduce the efficiency of your code.

3. Refactor Your Code


Refactoring is basically improving the structure of your code, without making
modifications to its actual functionality. If you have multiple blocks of code that do
similar things, it’s useful to refactor them into a single function. This will allow you to
simplify your code. And if you want to change how that function works, you (or any
other engineer) only need to update one function rather than several.
In order to create a high-quality program, devoting time to refactor your code is
essential. In the long run, refactoring will speed up your development time, and make
the software engineering process much smoother.
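As a hedged sketch of the idea above (the function and variable names are hypothetical), two near-identical blocks of formatting code are folded into a single function without changing what the program produces:

```python
# Before the refactor, the same formatting logic appeared twice:
#   "Order total: $" + format(order_total, ".2f")
#   "Tax due: $" + format(tax_due, ".2f")

def format_money(label, amount):
    """Single function replacing the duplicated formatting blocks."""
    return label + ": $" + format(amount, ".2f")

line1 = format_money("Order total", 19.5)
line2 = format_money("Tax due", 1.75)
```

If the currency format ever changes, only `format_money` needs updating, not every call site.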

4. Develop A Professional Coding Style


A professional coding approach is not an exact science. It’s a mindset that can
only be developed over time, by reading and writing a lot of code, and developing your
software engineering knowledge. You’ll realize that expert coders don’t turn out
unstructured blocks of code just so they can get it done quickly. Instead, their code is
almost universally understandable by other engineers, no matter how much time it takes
to write it.
There are a number of principles that will help you develop an effective coding
style. Some of them are:
o Using descriptive names for functions and variables
o Implementing modularity in your code
o Avoiding excessive indentation
Therefore, whether your code is general (for all programming and markup languages) or
production quality (specific to a particular language), it should always follow good
coding conventions.

5. Use Version Control


Version control refers to a software engineering framework that tracks all changes
and synchronizes them with a master file stored on a remote server. An effective version
control system is a key aspect of writing production code. Using version control in a
professional project will give you the following benefits:
o In case your system crashes, the entire code is backed up on the server.
o All members of the software engineering team can sync their changes
regularly by downloading them from the server.
o Numerous developers can work independently to add/remove features or
make changes to a single project, without impacting other members' work.
o If serious bugs are created, you can always return to a previous, stable
version of the codebase that was known to work.

6. Test Your Code


Good testing practices not only ensure quality standards in software engineering, but
also guide and shape the development process itself. Testing ensures the code gives the
desired result and meets all necessary user requirements.
o Unit tests are used to test small, self-contained parts of the code logic. These
engineering tests are designed to cover all possible conditions.
o The process of writing the test is beneficial in itself. It requires you to figure
out the behavior of your code and how you’re going to test it.
o Small and regular tests are better than one large test after completion. This
approach helps maintain a modular structure and will result in a higher quality
end product.
7. The KISS Principle
The popular acronym KISS, coined in the U.S. Navy around 1960, is extremely
relevant in the software engineering community. It stands for “Keep It Simple, Silly”
(some variations replace “Silly” with “Stupid,” but you get the idea). Whatever the
variation, the underlying idea remains the same.
The keyword here is “Simple,” and the idea is to keep your code as concise as
possible. In the context of coding, this means making your code meaningful, to-the-
point, and avoiding unnecessary engineering work.
The KISS Principle ensures that your code has high maintainability. You should be
able to go back and debug it easily, without wasting time and effort.

8. The YAGNI Principle


Another acronym that’s popular among software engineers, YAGNI means “You
Aren’t Gonna Need It”. This principle focuses on eliminating any unnecessary coding and
works in tandem with the KISS principle.
Instinctively, meticulous engineers start by planning and listing everything that
‘might be used’ during the project. However, sometimes they end up with components
that are never used.
So it’s always a good idea to avoid coding something that you don’t need “right
now.” You can always add it later if circumstances change.

9. The DRY Principle


Like the previous two points, the DRY (Don’t Repeat Yourself) Principle aims at
reducing repetition and redundancies within the software engineering process. This is
achieved by replacing repetitions with abstractions or by grouping code into functions.
If your code is performing the same instruction in 6 different places, it can be a
real struggle to manage, even for experienced engineers. Let’s say you decide to change
how that instruction works. You need to update your code 6 times for just 1 change! It’s
therefore much better practice to create a single function that performs the instruction
once, and reference this function each time it’s needed. If you want to change how it
works, you only need to update the code once.
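A minimal Python sketch of the DRY fix just described (the discount rule and the numbers are purely illustrative): the shared instruction lives in one function, so a change to the rule happens in exactly one place.

```python
DISCOUNT_RATE = 0.10   # change the rule here, and every caller follows


def apply_discount(price):
    """The single shared instruction; callers reference it, never copy it."""
    return round(price * (1 - DISCOUNT_RATE), 2)


cart = [100.0, 250.0, 40.0]
discounted = [apply_discount(p) for p in cart]
```

Every price in the cart goes through the same single function, so updating `DISCOUNT_RATE` (or the rounding rule) updates all of them at once.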

 Testing:
Software Testing is a method to assess the functionality of the software program.
The process checks whether the actual software matches the expected requirements and
ensures the software is bug-free. The purpose of software testing is to identify the
errors, faults, or missing requirements in contrast to actual requirements. It mainly aims
at measuring the specification, functionality, and performance of a software program or
application.
 Software testing can be divided into two steps:
o Verification: It refers to the set of tasks that ensure that the software
correctly implements a specific function. It means “Are we building the
product right?”
o Validation: It refers to a different set of tasks that ensure that the
software that has been built is traceable to customer requirements. It
means “Are we building the right product?”

 Importance of Software Testing:


o Defects can be identified early: Software testing is important because
if there are any bugs they can be identified early and can be fixed before
the delivery of the software.
o Improves quality of software: Software Testing uncovers the defects in
the software, and fixing them improves the quality of the software.
o Increased customer satisfaction: Software testing ensures reliability,
security, and high performance which results in saving time, costs, and
customer satisfaction.
o Helps with scalability: Non-functional testing helps to identify
scalability issues and the point at which an application might stop
working.
o Saves time and money: After an application is launched, it is very
difficult to trace and resolve issues, and doing so incurs more cost and
time. Thus, it is better to conduct software testing at regular intervals
during software development.

 Different Types of Software Testing


Software Testing can be broadly classified into 3 types:
1. Functional Testing: Functional testing is a type of software testing that
validates the software systems against the functional requirements. It is
performed to check whether the application is working as per the software’s
functional requirements or not. Various types of functional testing are Unit
testing, Integration testing, System testing, Smoke testing, and so on.
2. Non-functional Testing: Non-functional testing is a type of software testing
that checks the application for non-functional requirements like performance,
scalability, portability, stress, etc. Various types of non-functional testing are
Performance testing, Stress testing, Usability testing, and so on.
3. Maintenance Testing: Maintenance testing is the process of changing,
modifying, and updating the software to keep up with the customer’s needs. It
involves regression testing that verifies that recent changes to the code have not
adversely affected other previously working parts of the software.

 Benefits of Software Testing


o Product quality: Testing ensures the delivery of a high-quality product as the
errors are discovered and fixed early in the development cycle.
o Customer satisfaction: Software testing aims to detect the errors or
vulnerabilities in the software early in the development phase so that the
detected bugs can be fixed before the delivery of the product. Usability testing is
a type of software testing that checks the application for how easily usable it is
for the users to use the application.
o Cost-effective: Testing any project on time helps to save money and time for
the long term. If the bugs are caught in the early phases of software testing, it
costs less to fix those errors.
o Security: Security testing is a type of software testing that is focused on testing
the application for security vulnerabilities from internal or external sources.

 Principles of Testing
o All tests should meet the customer’s requirements.
o To keep testing objective, it should be performed by a third party.
o Exhaustive testing is not possible; we need an optimal amount of
testing based on a risk assessment of the application.
o All tests to be conducted should be planned before being implemented.
o Testing follows the Pareto rule (80/20 rule), which states that 80% of
errors come from 20% of program components.
o Start testing with small parts and extend to larger parts.
 Types of Testing

 Unit Testing:
Unit Testing is a software testing technique in which individual units of
software, i.e. groups of computer program modules, usage procedures, and operating
procedures, are tested to determine whether they are suitable for use. Each
independent module is tested, usually by the developer himself, to determine
whether there is an issue; unit testing is concerned with the functional
correctness of the independent modules. An individual component may be an
individual function or a procedure, and unit testing of a software product is
carried out during the development of the application. In the SDLC or V-Model,
unit testing is the first level of testing, done before integration testing.
Although it is usually performed by developers, quality assurance engineers also
do unit testing when developers are reluctant to test.
 Objective of Unit Testing:
The objective of Unit Testing is:
o To isolate a section of code.
o To verify the correctness of the code.
o To test every function and procedure.
o To fix bugs early in the development cycle and to save costs.
o To help the developers understand the code base and enable them to
make changes quickly.
o To help with code reuse

 Workflow of Unit Testing:

 Unit Testing Techniques:


There are 3 types of Unit Testing Techniques. They are
1. Black Box Testing: This testing technique is used in covering the unit tests for
input, user interface, and output parts.
2. White Box Testing: This technique is used to test the functional behaviour of the
system by giving inputs and checking the outputs, while also examining the
internal design structure and code of the modules.
3. Grey Box Testing: This technique is used to execute the relevant test cases, test
methods, and test functions, and to analyse the code performance of the modules.

 Unit Testing Tools:


Here are some commonly used Unit Testing tools:
1. Jtest
2. Junit
3. NUnit
4. EMMA
5. PHPUnit
 Advantages of Unit Testing:
1. Unit testing allows developers to learn what functionality is provided by a unit and
how to use it to gain a basic understanding of the unit API.
2. Unit testing allows the programmer to refine code and make sure the module
works properly.
3. Unit testing enables testing parts of the project without waiting for others to be
completed.
4. Early Detection of Issues: Unit testing allows developers to detect and fix
issues early in the development process before they become larger and more
difficult to fix.
5. Improved Code Quality: Unit testing helps to ensure that each unit of code
works as intended and meets the requirements, improving the overall quality of
the software.
6. Increased Confidence: Unit testing provides developers with confidence in their
code, as they can validate that each unit of the software is functioning as
expected.
7. Faster Development: Unit testing enables developers to work faster and more
efficiently, as they can validate changes to the code without having to wait for the
full system to be tested.
8. Better Documentation: Unit testing provides clear and concise documentation
of the code and its behaviour, making it easier for other developers to understand
and maintain the software.
9. Facilitation of Refactoring: Unit testing enables developers to safely make
changes to the code, as they can validate that their changes do not break existing
functionality.
10. Reduced Time and Cost: Unit testing can reduce the time and cost required for
later testing, as it helps to identify and fix issues early in the development
process.

 Disadvantages of Unit Testing:


1. The process is time-consuming for writing the unit test cases.
2. Unit testing cannot catch every error in a module, because some errors only
surface when modules interact, during integration testing.
3. Unit testing is not efficient for checking the errors in the UI (User Interface) part
of the module.
4. It requires more time for maintenance when the source code is changed
frequently.
5. It cannot cover the non-functional testing parameters such as scalability, the
performance of the system, etc.
6. Time and Effort: Unit testing requires a significant investment of time and effort
to create and maintain the test cases, especially for complex systems.
7. Dependence on Developers: The success of unit testing depends on the
developers, who must write clear, concise, and comprehensive test cases to
validate the code.
8. Difficulty in Testing Complex Units: Unit testing can be challenging when
dealing with complex units, as it can be difficult to isolate and test individual units
in isolation from the rest of the system.
9. Difficulty in Testing Interactions: Unit testing may not be sufficient for testing
interactions between units, as it only focuses on individual units.
10. Difficulty in Testing User Interfaces: Unit testing may not be suitable for testing
user interfaces, as it typically focuses on the functionality of individual units.
11. Over-reliance on Automation: Over-reliance on automated unit tests can lead
to a false sense of security, as automated tests may not uncover all possible
issues or bugs.
12. Maintenance Overhead: Unit testing requires ongoing maintenance and
updates, as the code and test cases must be kept up-to-date with changes to the
software.
 Black Box Testing:
Black-box testing is a type of software testing in which the tester is not
concerned with the internal knowledge or implementation details of the software but
rather focuses on validating the functionality based on the provided specifications or
requirements.

Black box testing can be done in the following ways:


1. Syntax-Driven Testing – This type of testing is applied to systems that can be
syntactically represented by some language. For example, language can be represented
by context-free grammar. In this, the test cases are generated so that each grammar rule
is used at least once.
2. Equivalence partitioning – It is often seen that many types of inputs work similarly
so instead of giving all of them separately we can group them and test only one input of
each group. The idea is to partition the input domain of the system into several equivalence
classes such that each member of the class works similarly, i.e., if a test case in one class
results in some error, other members of the class would also result in the same error.

The technique involves two steps:


1. Identification of equivalence classes – Partition the input domain into a
minimum of two sets: valid values and invalid values. For example, if the valid
range is 0 to 100 then select one valid input like 49 and one invalid like 104.

2. Generating test cases –
o Assign a unique identification number to each valid and invalid class of input.
o Write test cases covering all valid and invalid classes, making sure that no two
invalid inputs mask each other.

For example, to calculate the square root of a number, the equivalence classes
will be:
(a) Valid inputs:
o A whole number which is a perfect square – the output will be an integer.
o A whole number which is not a perfect square – the output will be a decimal number.
o Positive decimals.
(b) Invalid inputs:
o Negative numbers (integer or decimal).
o Characters other than numbers like “a”, “!”, “;”, etc.

3. Boundary value analysis – Boundaries are very good places for errors to occur.
Hence, if test cases are designed for boundary values of the input domain then the
efficiency of testing improves and the probability of finding errors also increases. For
example – If the valid range is 10 to 100 then test for 10,100 also apart from valid and
invalid inputs.
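The two techniques above can be sketched for the square-root example (safe_sqrt is a hypothetical wrapper around math.sqrt; one representative is chosen per equivalence class, plus the boundary value 0):

```python
import math

def safe_sqrt(value):
    """Hypothetical unit under test for the square-root example."""
    if not isinstance(value, (int, float)):
        raise TypeError("input must be a number")
    if value < 0:
        raise ValueError("input must be non-negative")
    return math.sqrt(value)

# Equivalence partitioning: one representative per class
assert safe_sqrt(49) == 7.0      # perfect square -> integer-valued result
assert safe_sqrt(2.25) == 1.5    # positive decimal -> decimal result
for invalid in (-4, -2.5, "a"):  # invalid classes: negatives, non-numbers
    try:
        safe_sqrt(invalid)
        raise AssertionError("expected an error for %r" % invalid)
    except (ValueError, TypeError):
        pass

# Boundary value analysis: test at the edge of the valid domain
assert safe_sqrt(0) == 0.0
```

Testing one member per class keeps the test suite small while still exercising every kind of input the specification distinguishes.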

4. Cause-effect graphing – This technique establishes a relationship between
logical inputs, called causes, and the corresponding actions, called effects. The
causes and effects are represented using Boolean graphs. The following steps are
followed:
o Identify inputs (causes) and outputs (effect).
o Develop a cause-effect graph.
o Transform the graph into a decision table.
o Convert decision table rules to test cases.
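The four steps can be sketched with a hypothetical two-cause example (C1: user is registered, C2: password is correct; effect E1: login succeeds only when both causes hold):

```python
def login(registered, password_ok):
    """Hypothetical system under test: effect E1 = C1 AND C2."""
    return registered and password_ok

# Decision table derived from the cause-effect graph: one row per rule,
# and each row converts directly into a test case.
decision_table = [
    # C1,    C2,    expected E1
    (True,  True,  True),
    (True,  False, False),
    (False, True,  False),
    (False, False, False),
]
for c1, c2, expected in decision_table:
    assert login(c1, c2) == expected
```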

For example, in the following cause-effect graph:


5. Requirement-based testing – It includes validating the requirements given
in the SRS of a software system.

6. Compatibility testing – The test case results depend not only on the product but
also on the infrastructure used for delivering the functionality. When the infrastructure
parameters are changed, the software is still expected to work properly. Some
parameters that generally affect the compatibility of software are:
o Processor (e.g., Pentium 3, Pentium 4) and number of processors.
o Architecture and characteristics of the machine (32-bit or 64-bit).
o Back-end components such as database servers.
o Operating system (Windows, Linux, etc.).

 Black Box Testing Type


The following are the several categories of black box testing:
o Functional Testing
o Regression Testing
o Non-functional Testing (NFT)

 Functional Testing: It checks whether the software meets the system's
functional requirements.
 Regression Testing: It ensures that the newly added code is compatible
with the existing code. In other words, a new software update has no impact
on the functionality of the software. This is carried out after a system
maintenance operation and upgrades.
 Non-functional Testing: Non-functional testing is also known as NFT. It does
not test the functionality of the software; instead, it focuses on the software’s
performance, usability, and scalability.

 Tools Used for Black Box Testing:


1. Appium
2. Selenium
3. Microsoft Coded UI
4. Applitools
5. HP QTP.

 What can be identified by Black Box Testing?


o Discovers missing functions, incorrect function & interface errors
o Discover the errors faced in accessing the database
o Discovers the errors that occur while initiating & terminating any functions.
o Discovers the errors in performance or behaviour of software.

 Features of black box testing:


o Independent testing: Black box testing is performed by testers who are not
involved in the development of the application, which helps to ensure that
testing is unbiased and impartial.
o Testing from a user’s perspective: Black box testing is conducted from the
perspective of an end user, which helps to ensure that the application meets
user requirements and is easy to use.
o No knowledge of internal code: Testers performing black box testing do
not have access to the application’s internal code, which allows them to focus
on testing the application’s external behaviour and functionality.
o Requirements-based testing: Black box testing is typically based on the
application’s requirements, which helps to ensure that the application meets
the required specifications.
o Different testing techniques: Black box testing can be performed using
various testing techniques, such as functional testing, usability testing,
acceptance testing, and regression testing.
o Easy to automate: Black box testing is easy to automate using various
automation tools, which helps to reduce the overall testing time and effort.
o Scalability: Black box testing can be scaled up or down depending on the
size and complexity of the application being tested.
o Limited knowledge of application: Testers performing black box testing
have limited knowledge of the application being tested, which helps to ensure
that testing is more representative of how the end users will interact with the
application.

 Advantages of Black Box Testing:


o The tester does not need functional knowledge or programming skills to
implement Black Box Testing.
o It is efficient for testing larger systems.
o Tests are executed from the user’s or client’s point of view.
o Test cases are easily reproducible.
o It is used in finding the ambiguity and contradictions in the functional
specifications.

 Disadvantages of Black Box Testing:


o There is a possibility of repeating the same tests while implementing the
testing process.
o Without clear functional specifications, test cases are difficult to implement.
o It is difficult to execute the test cases because of complex inputs at different
stages of testing.
o Sometimes, the reason for the test failure cannot be detected.
o Some programs in the application are not tested.
o It does not reveal the errors in the control structure.
o Working with a large sample space of inputs can be exhaustive and consumes
a lot of time.
 White Box Testing:

White box testing techniques analyse the internal structure of the software: the
data structures used, the internal design, the code structure, and the working of
the software, rather than just the functionality as in black box testing. It is also
called glass box testing, clear box testing, or structural testing, and is sometimes
known as transparent testing or open box testing.
White box testing is a software testing technique that involves testing the
internal structure and workings of a software application. The tester has access to
the source code and uses this knowledge to design test cases that can verify the
correctness of the software at the code level.
White box testing is also known as structural testing or code-based testing, and
it is used to test the software’s internal logic, flow, and structure. The tester creates
test cases to examine the code paths and logic flows to ensure they meet the
specified requirements.

 Process of White Box Testing


o Input: Requirements, Functional specifications, design documents, source
code.
o Processing: Performing risk analysis to guide through the entire process.
o Proper test planning: Designing test cases to cover the entire code. Execute
rinse-repeat until error-free software is reached. Also, the results are
communicated.
o Output: Preparing final report of the entire testing process.

 White Box Testing Types:


1. Statement Coverage:
In this technique, the aim is to traverse all statements at least once.
Hence, each line of code is tested. In the case of a flowchart, every node must be
traversed at least once. Since all lines of code are covered, it helps in pointing out
faulty code. This technique requires every possible statement in the code to be
tested at least once during the testing process of software engineering.
In this technique, test cases are written to ensure that every statement in
the code is executed at least once.
In the code's flow-chart representation, the two arrows represent test
cases required to cover all the statements.

Statement Coverage Example
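As a minimal sketch of statement coverage (absolute is a hypothetical function, not from the text):

```python
def absolute(n):
    """Hypothetical unit under test."""
    result = n
    if n < 0:
        result = -n
    return result

# A single test with a negative input executes all four statements,
# achieving 100% statement coverage:
assert absolute(-7) == 7
# Note: full statement coverage still never exercises the case where
# the if-branch is skipped, which is why branch coverage exists.
assert absolute(3) == 3
```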

2. Branch Coverage:
In this technique, test cases are designed so that each branch from all
decision points is traversed at least once. In a flowchart, all edges must be
traversed at least once. This is like exploring all possible routes on a GPS. If
you’re at an intersection, branch testing involves going straight, turning left, and
turning right to ensure all paths lead to valid destinations. This technique checks
every possible path (if-else and other conditional loops) of a software application.
In a programming language, a "branch" corresponds to an "IF statement", whose
true and false cases form its two branches. In branch coverage, we therefore
verify that each branch is executed at least once. Branch coverage is also known
as decision coverage.
There are two test criteria for an "IF statement":
1. To validate the true branch.
2. To validate the false branch.
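A sketch of the two criteria (classify is a hypothetical function with a single IF statement):

```python
def classify(n):
    """Hypothetical unit under test with one IF statement."""
    if n >= 0:
        return "non-negative"
    return "negative"

# Branch (decision) coverage needs one test per branch of the IF:
assert classify(5) == "non-negative"   # validates the true branch
assert classify(-3) == "negative"      # validates the false branch
```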

3. Path Testing
In path testing, we draw flow graphs and test all independent paths. The flow
graph represents the flow of the program and shows how each part of the program
is connected to the others, as we can see in the image below.
Testing all the independent paths means that if, for example, there is a path from
main() to a function G, we first set the parameters and test whether the program is
correct on that particular path, and then test all the other paths in the same way
and fix the bugs. Path coverage analyses all of the program's paths. This robust
strategy ensures that all program routes are traversed at least once, and its
coverage is more effective than branch coverage. This method is handy for testing
sophisticated applications.
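A sketch of independent-path selection (shipping is a hypothetical function; with two decision points its flow graph has three independent paths):

```python
def shipping(weight_kg, express):
    """Hypothetical unit under test with two decision points."""
    cost = 5 if weight_kg <= 1 else 9
    if express:
        cost += 4
    return cost

# One test per independent path of the flow graph:
assert shipping(0.5, False) == 5   # light parcel, standard delivery
assert shipping(2.0, False) == 9   # heavy parcel, standard delivery
assert shipping(0.5, True) == 9    # light parcel, express delivery
```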

4. Data Flow Testing (DFT): In this approach, you track the specific variables
through each possible calculation, thus defining the set of intermediate paths through
the code. DFT tends to reflect dependencies but it is mainly through sequences of
data manipulation. In short, each data variable is tracked and its use is verified. This
approach tends to uncover bugs such as variables used but never initialized, or
declared but never used. In keeping with proper testing technique, we write methods to
ensure they are testable – most simply by having the method return a value.
Additionally, we predetermine the “answer” that is returned when the method is
called with certain parameters so that our testing returns that predetermined value.

 Tools required for White box testing:


o PyUnit
o Sqlmap
o Nmap
o Parasoft Jtest
o Nunit
o VeraUnit
o CppUnit
o Bugzilla
o Fiddler
o JSUnit.net
o OpenGrok
o Wireshark
o HP Fortify
o CSUnit

 Features of White box Testing


o Code coverage analysis: White box testing helps to analyze the code
coverage of an application, which helps to identify the areas of the code that
are not being tested.
o Access to the source code: White box testing requires access to the
application’s source code, which makes it possible to test individual functions,
methods, and modules.
o Knowledge of programming languages: Testers performing white box
testing must have knowledge of programming languages like Java, C++,
Python, and PHP to understand the code structure and write tests.
o Identifying logical errors: White box testing helps to identify logical errors
in the code, such as infinite loops or incorrect conditional statements.
o Integration testing: White box testing is useful for integration testing, as it
allows testers to verify that the different components of an application are
working together as expected.
o Unit testing: White box testing is also used for unit testing, which involves
testing individual units of code to ensure that they are working correctly.
o Optimization of code: White box testing can help to optimize the code by
identifying any performance issues, redundant code, or other areas that can
be improved.
o Security testing: White box testing can also be used for security testing, as
it allows testers to identify any vulnerabilities in the application’s code.
o Verification of Design: It verifies that the software’s internal design is
implemented in accordance with the designated design documents.
o Check for Accurate Code: It verifies that the code operates in accordance
with the guidelines and specifications.
o Identifying Coding Mistakes: It finds and fixes programming flaws in your
code, including syntactic and logical errors.
o Path Examination: It ensures that each possible path of code execution is
explored and tests various iterations of the code.
o Determining Dead Code: It finds and removes any code that is not used
when the program runs normally (dead code).
 Advantages of White Box Testing:
o Thorough Testing: White box testing is thorough as the entire code and
structures are tested.
o Code Optimization: It results in the optimization of code removing errors
and helps in removing extra lines of code.
o Early Detection of Defects: It can start at an earlier stage as it doesn’t
require any interface as in the case of black box testing.
o Integration with SDLC: White box testing can be easily started in Software
Development Life Cycle.
o Detection of Complex Defects: Testers can identify defects that cannot be
detected through other testing techniques.
o Comprehensive Test Cases: Testers can create more comprehensive and
effective test cases that cover all code paths.
o Testers can ensure that the code meets coding standards and is optimized for
performance.

 Disadvantages of White box Testing


o Programming Knowledge and Source Code Access: Testers need to have
programming knowledge and access to the source code to perform tests.
o Overemphasis on Internal Workings: Testers may focus too much on the
internal workings of the software and may miss external issues.
o Bias in Testing: Testers may have a biased view of the software since they
are familiar with its internal workings.
o Test Case Overhead: Redesigning code and rewriting code needs test cases
to be written again.
o Dependency on Tester Expertise: Testers are required to have in-depth
knowledge of the code and programming language as opposed to black-box
testing.
o Inability to Detect Missing Functionalities: Missing functionalities cannot
be detected as the code that exists is tested.
o Increased Production Errors: High chances of errors in production.

 Differences between Black Box Testing and White Box Testing:

o Black box testing is a way of software testing in which the internal structure,
code, or program of the software is hidden and nothing is known about it;
white box testing is a way of testing in which the tester has knowledge
about the internal structure, code, or program of the software.
o Implementation of code is not needed for black box testing; code
implementation is necessary for white box testing.
o Black box testing is mostly done by software testers; white box testing is
mostly done by software developers.
o For black box testing, no knowledge of implementation is needed; for white
box testing, knowledge of implementation is required.
o Black box testing can be referred to as outer or external software testing;
white box testing is the inner or internal software testing.
o Black box testing is a functional test of the software; white box testing is a
structural test of the software.
o Black box testing can be initiated based on the requirement specifications
document; white box testing is started after a detailed design document.
o Black box testing requires no knowledge of programming; for white box
testing, knowledge of programming is mandatory.
o Black box testing is the behaviour testing of the software; white box testing
is the logic testing of the software.
o Black box testing is applicable to the higher levels of software testing; white
box testing is generally applicable to the lower levels of software testing.
o Black box testing is also called closed box testing; white box testing is also
called clear box testing.
o Black box testing is the least time consuming; white box testing is the most
time consuming.
o Black box testing is not suitable or preferred for algorithm testing; white box
testing is suitable for algorithm testing.
o Black box testing can be done by trial-and-error ways and methods; in white
box testing, data domains along with inner or internal boundaries can be
better tested.
o Example of black box testing: searching something on Google by using
keywords. Example of white box testing: giving inputs to check and verify
loops.
o Black-box test design techniques: decision table testing, all-pairs testing,
equivalence partitioning, error guessing. White-box test design techniques:
control flow testing, data flow testing, branch testing.
o Types of black box testing: functional testing, non-functional testing,
regression testing. Types of white box testing: path testing, loop testing,
condition testing.
o Black box testing is less exhaustive as compared to white box testing; white
box testing is comparatively more exhaustive.

 Program Analysis Tools:


 What is Program Analysis Tool?
Program Analysis Tool is an automated tool whose input is the source code or the
executable code of a program and the output is the observation of characteristics of
the program. It gives various characteristics of the program such as its size,
complexity, adequacy of commenting, adherence to programming standards and
many other characteristics. These tools are essential to software engineering because
they help programmers comprehend, improve and maintain software systems over
the course of the whole development life cycle.

 Importance of Program Analysis Tools


o Finding Faults and Security Vulnerabilities in the Code: Automatic
program analysis tools can find and highlight possible faults, security
flaws and bugs in the code. This lowers the possibility that bugs will get into
production by helping developers identify problems early in the
process.
o Memory Leak Detection: Certain tools are designed specifically to find
memory leaks and inefficiencies. By doing so, developers may make sure that
their software doesn’t gradually use up too much memory.
o Vulnerability Detection: Potential vulnerabilities like buffer overflows,
injection attacks or other security flaws can be found using programme
analysis tools, particularly those that are security-focused. For the
development of reliable and secure software, this is essential.
o Dependency analysis: By examining the dependencies among various
system components, tools can assist developers in comprehending and
controlling the connections between modules. This is necessary in order to
make well-informed decisions during refactoring.
o Automated Testing Support: Program analysis tools are frequently
combined with CI/CD pipelines to automate testing procedures. This
integration helps identify problems early in the development cycle and
ensures that only well-tested, high-quality code is released into production.

 Classification of Program Analysis Tools


Program Analysis Tools are classified into two categories:

1. Static Program Analysis Tools


Static Program Analysis Tool is such a program analysis tool that evaluates and
computes various characteristics of a software product without executing it.
Normally, static program analysis tools analyze some structural representation of a
program to reach a certain analytical conclusion. Basically some structural properties
are analyzed using static program analysis tools.

 The structural properties that are usually analyzed are:


o Whether the coding standards have been fulfilled or not.
o Some programming errors such as uninitialized variables.
o Mismatch between actual and formal parameters.
o Variables that are declared but never used.

Code walkthroughs and code inspections are also considered static analysis
methods, but the term static program analysis tool is used to designate automated
analysis tools. Hence, a compiler can be considered a static program analysis tool.
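A minimal sketch of one such static check, using Python's standard ast module to flag variables that are assigned but never used, without executing the program under analysis:

```python
import ast

# Program under analysis, held as source text (never executed)
SOURCE = """
x = 1
y = 2
print(x)
"""

# Walk the syntax tree and record every name that is written (Store
# context) or read (Load context); names that are written but never
# read are declared-but-unused variables.
tree = ast.parse(SOURCE)
assigned, used = set(), set()
for node in ast.walk(tree):
    if isinstance(node, ast.Name):
        if isinstance(node.ctx, ast.Store):
            assigned.add(node.id)
        else:
            used.add(node.id)

unused = sorted(assigned - used)
print(unused)  # -> ['y']
```

Real static analysis tools apply the same structural idea at much larger scale, checking coding standards, parameter mismatches, and uninitialized variables.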

2. Dynamic Program Analysis Tools


A Dynamic Program Analysis Tool is a type of program analysis tool that requires
the program to be executed and its actual behavior to be observed. A dynamic
program analyzer instruments the code: it adds additional statements to the source
code to collect traces of program execution. When the code is executed, this allows
us to observe the behavior of the software for different test cases. Once the
software has been tested and its behavior observed, the dynamic program analysis
tool performs a post-execution analysis and produces reports which describe the
structural coverage that has been achieved by the complete testing process for the
program.
For example, the post-execution dynamic analysis report may provide data on
the extent of statement, branch, and path coverage achieved. The results of
dynamic program analysis tools are usually presented in the form of a histogram or
a pie chart describing the structural coverage obtained for different modules of the
program. The output of a dynamic program analysis tool can be stored and printed
easily and provides evidence that thorough testing has been done. The result of
dynamic analysis is a measure of the extent of white box testing performed; if the
testing result is not satisfactory, more test cases are designed and added to the
test scenario. Dynamic analysis also helps in the elimination of redundant test cases.
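A minimal sketch of dynamic analysis using Python's sys.settrace hook: execution is instrumented, the lines that actually run for a test case are recorded, and any line missing from the trace reveals an untested path (grade is a hypothetical function):

```python
import sys

executed = set()

def tracer(frame, event, arg):
    # Record each source line of grade() as it executes
    if event == "line" and frame.f_code.co_name == "grade":
        executed.add(frame.f_lineno)
    return tracer

def grade(score):
    if score >= 50:
        return "pass"
    return "fail"       # this line never runs for the test below

sys.settrace(tracer)    # instrument execution
result = grade(80)
sys.settrace(None)      # stop tracing

print(result)           # -> pass
# 'executed' now holds the line numbers covered by this test case;
# the absent 'return "fail"' line signals an untested path.
```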

 Integration testing
Integration testing is the process of testing the interface between two software
units or modules. It focuses on determining the correctness of the interface. The
purpose of integration testing is to expose faults in the interaction between
integrated units. Once all the modules have been unit-tested, integration testing is
performed.
Integration testing is a software testing technique that focuses on verifying the
interactions and data exchange between different components or modules of a
software application. The goal of integration testing is to identify any problems or
bugs that arise when different components are combined and interact with each
other. Integration testing is typically performed after unit testing and before system
testing. It helps to identify and resolve integration issues early in the development
cycle, reducing the risk of more severe and costly problems later on.
Integration testing can be done by picking module by module, following a proper
sequence; following the proper sequence also ensures that no integration scenarios
are missed. The major focus of integration testing is exposing the defects that arise
at the time of interaction between the integrated units.

 Integration test approaches – There are four types of integration testing
approaches:


1. Big-Bang Integration Testing – It is the simplest integration testing
approach, where all the modules are combined and the functionality is verified after
the completion of individual module testing. In simple words, all the modules of the
system are simply put together and tested. This approach is practicable only for very
small systems. If an error is found during the integration testing, it is very difficult to
localize the error as the error may potentially belong to any of the modules being
integrated. As a result, errors reported during Big Bang integration testing are very
expensive to debug and fix.
Big-bang integration testing is a software testing approach in which all
components or modules of a software application are combined and tested at once.
This approach is typically used when the software components have a low degree of
interdependence or when there are constraints in the development environment that
prevent testing individual components. The goal of big-bang integration testing is to
verify the overall functionality of the system and to identify any integration problems
that arise when the components are combined. While big-bang integration testing
can be useful in some situations, it can also be a high-risk approach, as the
complexity of the system and the number of interactions between components can
make it difficult to identify and diagnose problems.
 Advantages:
o It is convenient for small systems.
o Simple and straightforward approach.
o Can be completed quickly.
o Does not require a lot of planning or coordination.
o May be suitable for small systems or projects with a low degree of
interdependence between components.

 Disadvantages:
o There will be quite a lot of delay because you would have to wait for all
the modules to be integrated.
o High-risk critical modules are not isolated and tested on priority since all
modules are tested at once.
o Not Good for long projects.
o High risk of integration problems that are difficult to identify and diagnose.
o This can result in long and complex debugging and troubleshooting efforts.
o This can lead to system downtime and increased development costs.
o May not provide enough visibility into the interactions and data exchange
between components.
o This can result in a lack of confidence in the system’s stability and
reliability.
o This can lead to decreased efficiency and productivity.
o This may result in a lack of confidence in the development team.
o This can lead to system failure and decreased user satisfaction.

2. Bottom-Up Integration Testing – In bottom-up testing, the modules at the
lower levels are tested first and then combined with the higher-level modules until
all modules have been tested. The primary purpose of this integration testing is
that each subsystem tests the interfaces among the various modules making up
the subsystem. This integration testing uses test drivers to drive and pass
appropriate data to the lower-level modules.
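A sketch of such a test driver (parse_amount is a hypothetical low-level module; the driver stands in for a higher-level caller that does not exist yet):

```python
def parse_amount(text):
    """Hypothetical low-level module under test."""
    return round(float(text.strip()), 2)

def driver():
    """Test driver: supplies data in place of the not-yet-built caller."""
    assert parse_amount(" 19.999 ") == 20.0
    assert parse_amount("3") == 3.0
    return "all low-level checks passed"

print(driver())  # -> all low-level checks passed
```

Once the real higher-level module is ready, it replaces the driver and the same low-level module is re-exercised through the real interface.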
 Advantages:
o In bottom-up testing, no stubs are required.
o A principal advantage of this integration testing is that several disjoint
subsystems can be tested simultaneously.
o It is easy to create the test conditions.
o Best for applications that use a bottom-up design approach.
o It is easy to observe the test results.
 Disadvantages:
o Driver modules must be produced.
o In this testing, complexity arises when the system is made up of
a large number of small subsystems.
o Until the top-level modules are integrated, no working model of the
system can be demonstrated.

3. Top-Down Integration Testing – In top-down integration testing, testing takes
place from top to bottom. First, the high-level modules are tested, then the
low-level modules, and finally the low-level modules are integrated with the
high-level ones to ensure the system works as intended. Stubs are used to
simulate the behaviour of the lower-level modules that are not yet integrated.
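A sketch of such a stub (total_price is the hypothetical high-level module under test; tax_rate_stub stands in for a lower-level lookup that is not yet integrated):

```python
def tax_rate_stub(region):
    """Stub: returns a fixed canned value instead of the real lookup."""
    return 0.10

def total_price(net, rate_lookup):
    """Hypothetical high-level module under test."""
    return round(net * (1 + rate_lookup("home")), 2)

# The high-level module is exercised before the real lookup exists:
assert total_price(100, tax_rate_stub) == 110.0
```

When the real rate-lookup module is integrated, it is passed in place of the stub and the same test is repeated against the real interface.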
 Advantages:
o Separately debugged module.
o Few or no drivers needed.
o It is more stable and accurate at the aggregate level.
o Easier isolation of interface errors.
o In this, design defects can be found in the early stages.
 Disadvantages:
o Needs many stubs.
o Modules at the lower levels are tested inadequately.
o It is difficult to observe the test output.
o Stub design becomes difficult.

4. Mixed Integration Testing – Mixed integration testing is also called
sandwiched integration testing. It follows a combination of the top-down and
bottom-up testing approaches. In the top-down approach, testing can start only
after the top-level modules have been coded and unit tested; in the bottom-up
approach, testing can start only after the bottom-level modules are ready. The
sandwich or mixed approach overcomes these shortcomings of the top-down and
bottom-up approaches. It is also called hybrid integration testing. Both stubs and
drivers are used in mixed integration testing.
 Advantages:
o The mixed approach is useful for very large projects having several
sub-projects.
o The sandwich approach overcomes the shortcomings of both the top-down
and bottom-up approaches.
o Tests of the top and bottom layers can be performed in parallel.
 Disadvantages:
o Mixed integration testing has a very high cost because one part follows a
top-down approach while another part follows a bottom-up approach.
o It is not suitable for smaller systems with high interdependence between
the different modules.
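As a hypothetical sketch of the sandwich idea, a middle-layer module can be tested with a stub standing in for the layer below and a driver acting as the layer above (all names are illustrative assumptions):

```python
# Stub for the lower layer (e.g. a storage backend not yet integrated).
def storage_stub(record):
    return True  # pretend the save always succeeds

# Middle-layer module under test.
def save_record(record, store):
    if not record:
        raise ValueError("empty record")
    return store(record)

# Driver acting as the (not yet integrated) top layer: it calls the
# middle layer, which in turn calls the stubbed lower layer.
def driver_test_save_record():
    assert save_record({"id": 1}, storage_stub) is True
    return "sandwich checks passed"
```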

 Applications:
o Identify the components: Identify the individual components of your
application that need to be integrated. This could include the frontend,
backend, database, and any third-party services.
o Create a test plan: Develop a test plan that outlines the scenarios and
test cases that need to be executed to validate the integration points
between the different components. This could include testing data flow,
communication protocols, and error handling.
o Set up test environment: Set up a test environment that mirrors the
production environment as closely as possible. This will help ensure that
the results of your integration tests are accurate and reliable.
o Execute the tests: Execute the tests outlined in your test plan, starting
with the most critical and complex scenarios. Be sure to log any defects or
issues that you encounter during testing.
o Analyze the results: Analyze the results of your integration tests to
identify any defects or issues that need to be addressed. This may involve
working with developers to fix bugs or make changes to the application
architecture.
o Repeat testing: Once defects have been fixed, repeat the integration
testing process to ensure that the changes have been successful and that
the application still works as expected.
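The steps above can be illustrated with a toy example (all class and method names are hypothetical): two components, a service layer and an in-memory stand-in for the database, are integrated and the data flow across their interface is tested.

```python
# In-memory stand-in for the database component.
class FakeDB:
    def __init__(self):
        self._rows = {}

    def put(self, key, value):
        self._rows[key] = value

    def get(self, key):
        return self._rows.get(key)

# Backend service component that depends on the database.
class UserService:
    def __init__(self, db):
        self.db = db

    def register(self, name):
        self.db.put(name, {"name": name})
        return self.db.get(name)

# Integration test: verifies that data flows correctly across the
# service/database interface.
service = UserService(FakeDB())
assert service.register("ada") == {"name": "ada"}
```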

 System Testing
System Testing is a type of software testing performed on a complete,
integrated system to evaluate its compliance with the corresponding
requirements. The components that have passed integration testing are taken as
its input. Whereas the goal of integration testing is to detect irregularities
between the units being integrated, system testing detects defects both within
the integrated units and in the system as a whole. The result of system testing is
the observed behaviour of a component or a system when it is tested. System
testing is carried out on the whole system in the context of the system
requirement specifications, the functional requirement specifications, or both. It
tests the design and behaviour of the system as well as the expectations of the
customer, and may exercise the system beyond the bounds mentioned in the
software requirements specification (SRS). System testing is usually performed
by a testing team that is independent of the development team, so that the
quality of the system can be assessed impartially. It covers both functional and
non-functional testing and is a black-box testing technique, performed after
integration testing and before acceptance testing.

 System Testing Process: System Testing is performed in the following steps:
 Test Environment Setup: Create testing environment for the better quality
testing.
 Create Test Case: Generate test case for the testing process.
 Create Test Data: Generate the data that is to be tested.
 Execute Test Case: After the generation of the test case and the test data,
test cases are executed.
 Defect Reporting: Defects in the system are detected and reported.
 Regression Testing: It is carried out to check that defect fixes have not
introduced side effects elsewhere in the system.
 Log Defects: Detected defects are logged and then fixed.
 Retest: If a test is not successful, it is performed again after the fix.
 Types of System Testing:
o Performance Testing: Performance Testing is a type of software testing that
is carried out to test the speed, scalability, stability and reliability of the
software product or application.
o Load Testing: Load Testing is a type of software Testing which is carried out
to determine the behavior of a system or software product under extreme
load.
o Stress Testing: Stress Testing is a type of software testing performed to
check the robustness of the system under the varying loads.
o Scalability Testing: Scalability Testing is a type of software testing which is
carried out to check the performance of a software application or system in
terms of its capability to scale up or scale down the number of user request
load.
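A crude sketch of a performance check (the timed operation and the time budget here are illustrative assumptions, not a real benchmark harness):

```python
import time

def average_response_time(operation, iterations=1000):
    """Time repeated calls to an operation and return the
    average seconds per call."""
    start = time.perf_counter()
    for _ in range(iterations):
        operation()
    return (time.perf_counter() - start) / iterations

# Performance test: the average call time must stay under an agreed budget.
avg = average_response_time(lambda: sum(range(100)))
assert avg < 0.01  # illustrative 10 ms budget per call
```

Real performance, load, and stress tests are normally run with dedicated tools such as JMeter or LoadRunner (listed below) rather than hand-rolled timers.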

 Tools used for System Testing:
o JMeter
o Galen Framework
o Selenium
o HP Quality Center/ALM
o IBM Rational Quality Manager
o Microsoft Test Manager
o Appium
o LoadRunner
o Gatling
o Apache JServ
o SoapUI

 Advantages of System Testing:
o The testers do not require deep programming knowledge to carry out this
testing.
o It tests the entire product or software, so errors or defects that cannot be
identified during unit testing and integration testing are easily detected.
o The testing environment is similar to that of the real-time production or
business environment.
o It checks the entire functionality of the system with different test scripts
and covers the technical and business requirements of the clients.
o After this testing, almost all possible bugs or errors will have been covered,
so the development team can confidently go ahead with acceptance testing.
o Verifies the overall functionality of the system.
o Detects and identifies system-level problems early in the development cycle.
o Helps to validate the requirements and ensure the system meets user needs.
o Improves system reliability and quality.
o Facilitates collaboration and communication between development and
testing teams.
o Increases user confidence and reduces risks.
o Facilitates early detection and resolution of bugs and defects.
o Supports the identification of system-level dependencies and inter-module
interactions.
o Improves the system’s maintainability and scalability.

 Disadvantages of System Testing:
o It is more time-consuming and expensive than other testing techniques,
since it checks the entire product or software.
o It requires adequate resources and infrastructure.
o It needs good debugging tools, otherwise hidden errors will not be found.
o It can be complex and challenging, especially for large and complex systems.
o It is dependent on the quality of the requirements and design documents.
o There is limited visibility into the internal workings of the system.
o It can be impacted by external factors like hardware and network
configurations.
o It requires proper planning, coordination, and execution.
o It can be impacted by changes made during development.
o It requires specialized skills and expertise.
o It may require multiple test cycles to achieve the desired results.
