
What's the difference between load and stress testing?

One of the most common but unfortunate misuses of terminology is treating "load testing" and "stress testing" as synonymous. The consequence of this semantic abuse is usually that the system is neither properly load tested nor subjected to a meaningful stress test.

1. Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disk, MIPS, interrupts) needed to process that load. The idea is to stress a system to the breaking point in order to find bugs that will make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., without corrupting or losing data). Bugs and failure modes discovered under stress testing may or may not be repaired, depending on the application, the failure mode, the consequences, etc. The load (incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion.

2. Load testing is subjecting a system to a statistically representative (usually) load. The two main reasons for using such loads are in support of software reliability testing and in performance testing. The term "load testing" by itself is too vague and imprecise to warrant use. For example, do you mean "representative load," "overload," "high load," etc.? In performance testing, load is varied from a minimum (zero) to the maximum level the system can sustain without running out of resources or having transactions suffer (application-specific) excessive delay.

3. A third use of the term is as a test whose objective is to determine the maximum sustainable load the system can handle. In this usage, "load testing" is merely testing at the highest transaction arrival rate in performance testing.
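A rough sketch of this third usage, in Python: step the offered transaction rate upward and stop when responses exceed an application-specific latency threshold. The transaction() stub, the 50 ms threshold, and the step sizes are illustrative assumptions, not part of the original answer.

import time

def transaction():
    time.sleep(0.002)                      # stand-in for a real request

def sustains(rate, duration_s=0.2, max_latency_s=0.050):
    # Offer `rate` transactions per second for `duration_s` seconds and
    # report whether every response stayed under the latency threshold.
    interval = 1.0 / rate
    deadline = time.perf_counter() + duration_s
    while time.perf_counter() < deadline:
        start = time.perf_counter()
        transaction()
        if time.perf_counter() - start > max_latency_s:
            return False
        time.sleep(max(0.0, interval - (time.perf_counter() - start)))
    return True

def max_sustainable_rate(step=50, ceiling=1000):
    best = 0
    for rate in range(step, ceiling + step, step):
        if not sustains(rate):
            break
        best = rate
    return best

print(max_sustainable_rate(), "transactions/second sustained")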

What's the difference between QA and testing?


Most testing jobs I see are nevertheless advertised as "QA". Some people
suggest that QC is a better set of initials to describe testing.

In my courses and my practice, I stick to the ANSI/IEEE definitions, which agree with what is generally held *outside* the software arena. The definitions boil down to:
* TESTING means "quality control"
* QUALITY CONTROL measures the quality of a product
* QUALITY ASSURANCE measures the quality of the processes used to create a quality product.

What is black box/white box testing?

Black-box and white-box are test design methods. Black-box test design
treats the system as a "black-box", so it doesn't explicitly use
knowledge of the internal structure. Black-box test design is usually
described as focusing on testing functional requirements. Synonyms for
black-box include: behavioral, functional, opaque-box, and
closed-box. White-box test design allows one to peek inside the "box",
and it focuses specifically on using internal knowledge of the software
to guide the selection of test data. Synonyms for white-box include:
structural, glass-box and clear-box.

While black-box and white-box are terms that are still in popular use,
many people prefer the terms "behavioral" and "structural". Behavioral
test design is slightly different from black-box test design because
the use of internal knowledge isn't strictly forbidden, but it's still
discouraged. In practice, it hasn't proven useful to use a single test
design method. One has to use a mixture of different methods so that
they aren't hindered by the limitations of a particular one. Some call
this "gray-box" or "translucent-box" test design, but others wish we'd
stop talking about boxes altogether.

It is important to understand that these methods are used during the test design phase, and their influence is hard to see in the tests once they're implemented. Note that any level of testing (unit testing, system testing, etc.) can use any test design method. Unit testing is usually associated with structural test design, but this is because testers usually don't have well-defined requirements at the unit level to validate.
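As a small illustration (Python, with a hypothetical triangle-classification function), the first group of checks below is designed purely from the functional requirement (black-box/behavioral), while the second group is chosen by reading the code and covering each branch (white-box/structural):

def classify(a, b, c):
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box / behavioral: cases derived from the requirement alone.
assert classify(3, 3, 3) == "equilateral"
assert classify(3, 3, 4) == "isosceles"
assert classify(3, 4, 5) == "scalene"

# White-box / structural: one case per comparison inside the code.
assert classify(5, 5, 2) == "isosceles"   # a == b
assert classify(2, 5, 5) == "isosceles"   # b == c
assert classify(5, 2, 5) == "isosceles"   # a == c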

What do you like and dislike in this job?


Like: finding faults in the code and driving an application toward perfection. Carrying out testing with limited resources, limited time, and limited effort, and still uncovering bugs, is a very challenging and creative task. It is also interesting and challenging to think about the application from the end user's perspective. We are doing destructive work, but in the long term it is constructive work.
Dislike: there is nothing I can state as a dislike about my role as a tester.

Define quality for me as you understand it


Quality software is reasonably bug-free, delivered on time and within budget, meets the requirements and expectations, and is maintainable.

Describe the difference between validation and verification

Verification takes place before validation, and not vice versa. Verification evaluates documents, plans, code, requirements, and specifications; validation evaluates the product itself. The inputs to verification are checklists, issue lists, walkthroughs, inspection meetings, and reviews; the input to validation is actual testing of the actual product. The output of verification is a nearly perfect set of documents, plans, specifications, and requirements.

Have you ever created a test plan?

I have never created a test plan myself; I developed and executed test cases. But I participated actively with my team leader while test plans were being created.

How do you determine what to test?


The duty of the software tester is to go through the requirements documents and the functional specification and, based on those documents, focus on writing test cases that cover all the functionality to be tested. The tester should carry out this preparation while the application is under development, so that once the build is ready for testing we know what to test and how to proceed. Make sure the main functionality is tested first, so that all the other testers can be ready to test their modules/functionality.

How do you test if you have minimal or no documentation about the product?

Based on previous experience with that particular product, we start testing; this is mainly called ad hoc testing. It is conducted by someone who has knowledge of the domain, the framework, and the use cases, and it also covers sanity testing. In this situation I try to test the application from the perception of the end user, drawing on my own (or someone else's) previous experience. We do not prepare formal test plans or test case documents; most of the time we perform ad hoc testing on this type of application.

What is the purpose of the testing?


The purpose of testing can be quality assurance, verification and validation, or reliability estimation. Testing can be used as a generic metric as well. Correctness testing and reliability testing are two major areas of testing. Software testing is a trade-off between budget, time, and quality. The main course of testing is to check for the existence of defects or errors in a program, project, or product, based upon some predefined instructions or conditions. We can also say that the purpose of testing is to analyse whether the product or project meets its requirements.

What are unit, component and integration testing?

Unit: the smallest compilable component. A unit typically is the work of one programmer (at least in principle). As defined, it does not include any called sub-components (for procedural languages) or communicating components in general.

Unit testing: in unit testing, called components (or communicating components) are replaced with stubs, simulators, or trusted components. Calling components are replaced with drivers or trusted super-components. The unit is tested in isolation.
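A minimal sketch of that isolation, in Python with the standard unittest/unittest.mock modules; the price_with_tax unit and the tax-service interface are hypothetical examples, not from the original text:

import unittest
from unittest.mock import Mock

def price_with_tax(order_total, tax_service):
    # Unit under test: it calls a sub-component (the tax service).
    rate = tax_service.rate_for("NY")
    return round(order_total * (1 + rate), 2)

class PriceWithTaxTest(unittest.TestCase):
    def test_uses_rate_from_stubbed_service(self):
        stub = Mock()                      # stub replaces the called component
        stub.rate_for.return_value = 0.10
        self.assertEqual(price_with_tax(100.00, stub), 110.00)

if __name__ == "__main__":
    unittest.main()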

Component: a unit is a component. The integration of one or more components is a component.

Note: the reason for "one or more," as contrasted with "two or more," is to allow for components that call themselves recursively.

Component testing: the same as unit testing except that all stubs and simulators are replaced with the real thing.

Two components (actually one or more) are said to be integrated when:
a. They have been compiled, linked, and loaded together.
b. They have successfully passed the integration tests at the interface between them.

Thus, components A and B are integrated to create a new, larger component (A,B). Note that this does not conflict with the idea of incremental integration -- it just means that A is a big component and B, the component added, is a small one.

Integration testing:
This is easily generalized for OO languages by using the equivalent
constructs for message passing. In the following, the word "call"
is to be understood in the most general sense of a data flow and is
not restricted to just formal subroutine calls and returns -- for
example, passage of data through global data structures and/or the
use of pointers.

Let A and B be two components in which A calls B.
Let Ta be the component-level tests of A.
Let Tb be the component-level tests of B.
Let Tab be the tests in A's suite that cause A to call B.
Let Tbsa be the tests in B's suite for which it is possible to sensitize A -- the inputs are to A, not B.
Then Tbsa + Tab == the integration test suite (+ = union).

Note: sensitize is a technical term. It means inputs that will cause a routine to go down a specified path. The inputs are to A. Not every input to A will cause A to traverse a path in which B is called. Tbsa is the set of tests which do cause A to follow a path in which B is called. The outcome of the test of B may or may not be affected.

There have been variations on these definitions, but the key point is
that it is pretty darn formal and there's a goodly hunk of testing
theory, especially as concerns integration testing, OO testing, and
regression testing, based on them.
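As a small concrete illustration of Ta, Tb, and Tab (Python; the report/total components are hypothetical):

def total(items):                          # component B
    return sum(price for _, price in items)

def report(items):                         # component A, which calls B
    return f"{len(items)} items, total {total(items):.2f}"

# Tb: a component-level test of B in isolation.
assert total([("pen", 1.50), ("pad", 2.00)]) == 3.50

# Ta would drive report() with total() stubbed out; Tab is the subset of
# A's tests whose inputs cause A to call the real B, exercising the
# interface between them -- i.e., part of the integration test suite.
assert report([("pen", 1.50), ("pad", 2.00)]) == "2 items, total 3.50"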

As to the difference between integration testing and system testing: system testing specifically goes after behaviors and bugs that are properties of the entire system, as distinct from properties attributable to components (unless, of course, the component in question is the entire system). Examples of system testing issues: resource loss bugs, throughput bugs, performance, security, recovery, and transaction synchronization bugs (often misnamed "timing bugs").

What kinds of testing should be considered?

Black box testing - not based on any knowledge of internal design or code. Tests are based on
requirements and functionality.
White box testing - based on knowledge of the internal logic of an application's code. Tests are
based on coverage of code statements, branches, paths, conditions.
unit testing - the most 'micro' scale of testing; to test particular functions or code modules.
Typically done by the programmer and not by testers, as it requires detailed knowledge of the
internal program design and code. Not always easily done unless the application has a well-
designed architecture with tight code; may require developing test driver modules or test
harnesses.
incremental integration testing - continuous testing of an application as new functionality is
added; requires that various aspects of an application's functionality be independent enough to
work separately before all parts of the program are completed, or that test drivers be developed
as needed; done by programmers or by testers.
integration testing - testing of combined parts of an application to determine if they function
together correctly. The 'parts' can be code modules, individual applications, client and server
applications on a network, etc. This type of testing is especially relevant to client/server and
distributed systems.
functional testing - black-box type testing geared to functional requirements of an application;
this type of testing should be done by testers. This doesn't mean that the programmers shouldn't
check that their code works before releasing it (which of course applies to any stage of testing.)
system testing - black-box type testing that is based on overall requirements specifications;
covers all combined parts of a system.
end-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing
of a complete application environment in a situation that mimics real-world use, such as
interacting with a database, using network communications, or interacting with other hardware,
applications, or systems if appropriate.
sanity testing or smoke testing - typically an initial testing effort to determine if a new software
version is performing well enough to accept it for a major testing effort. For example, if the new
software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting
databases, the software may not be in a 'sane' enough condition to warrant further testing in its
current state.
regression testing - re-testing after fixes or modifications of the software or its environment. It
can be difficult to determine how much re-testing is needed, especially near the end of the
development cycle. Automated testing tools can be especially useful for this type of testing.
acceptance testing - final testing based on specifications of the end-user or customer, or based
on use by end-users/customers over some limited period of time.
load testing - testing an application under heavy loads, such as testing of a web site under a
range of loads to determine at what point the system's response time degrades or fails.
stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used
to describe such tests as system functional testing while under unusually heavy loads, heavy
repetition of certain actions or inputs, input of large numerical values, large complex queries to a
database system, etc.
performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally
'performance' testing (and any other 'type' of testing) is defined in requirements documentation or
QA or Test Plans.
usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the
targeted end-user or customer. User interviews, surveys, video recording of user sessions, and
other techniques can be used. Programmers and testers are usually not appropriate as usability
testers.
install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
recovery testing - testing how well a system recovers from crashes, hardware failures, or other
catastrophic problems.
failover testing - typically used interchangeably with 'recovery testing'
security testing - testing how well the system protects against unauthorized internal or external
access, willful damage, etc; may require sophisticated testing techniques.
compatibility testing - testing how well software performs in a particular
hardware/software/operating system/network/etc. environment.
exploratory testing - often taken to mean a creative, informal software test that is not based on
formal test plans or test cases; testers may be learning the software as they test it.
ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have
significant understanding of the software before testing it.
context-driven testing - testing driven by an understanding of the environment, culture, and
intended use of software. For example, the testing approach for life-critical medical equipment
software would be completely different than that for a low-cost computer game.
user acceptance testing - determining if software is satisfactory to an end-user or customer.
comparison testing - comparing software weaknesses and strengths to competing products.
alpha testing - testing of an application when development is nearing completion; minor design
changes may still be made as a result of such testing. Typically done by end-users or others, not
by programmers or testers.
beta testing - testing when development and testing are essentially completed and final bugs
and problems need to be found before final release. Typically done by end-users or others, not by
programmers or testers.
mutation testing - a method for determining if a set of test data or test cases is useful, by
deliberately introducing various code changes ('bugs') and retesting with the original test
data/cases to determine if the 'bugs' are detected (see the sketch after this list). Proper
implementation requires large computational resources.
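A minimal sketch of the mutation-testing idea in Python; real tools generate the mutants automatically, and the discount function and test data below are invented for illustration:

def discount(total):                        # original code
    return total * 0.9 if total > 100 else total

def discount_mutant(total):                 # mutant: '>' changed to '>='
    return total * 0.9 if total >= 100 else total

tests = [(50, 50), (200, 180)]              # existing test data: (input, expected)
killed = any(discount_mutant(inp) != expected for inp, expected in tests)
print("mutant detected" if killed
      else "test data too weak: add a case at the boundary, total == 100")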

What is Configuration management? Tools used?

Software configuration management (SCM) is the control and recording of changes that are made to the software and documentation throughout the software development life cycle (SDLC). SCM covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches and the changes made to them, and to keep track of who makes the changes.
Tools include Rational ClearCase, Doors, PVCS, CVS and many others.
Management of the configuration of the software is called SCM; here configuration means the arrangement of the parts of the software. Software configuration management is the discipline for systematically controlling the changes that take place during development:

1. SCM is the process of identifying and defining the ITEMS in the system.
2. Controlling the change of these ITEMS throughout their life cycle.
3. Recording and reporting the status of the ITEMS and change requests.
4. Verifying the completeness and correctness of ITEMS.

When should testing be stopped?


There can be no absolute end to testing; it is a continuous process. But there are factors that bound the span of the testing process:
1. The exit criteria are met.
2. Project deadlines arrive.
3. All the core functionality has been tested.
4. The bug rate falls to a certain level.
5. The alpha and beta testing periods expire.
6. All the test cases have been executed, with a certain percentage passing.
7. All high-severity bugs are closed.

What is good code?


Code which is:
1. bug free
2. reusable
3. independent
4. of low complexity
5. well documented
6. easy to change

What are basic, core practices for a QA specialist?


QA monitors the project from beginning to end. If a process violation takes place, QA takes a CAPA (corrective action and preventive action). The work is preventive, and is done by conducting reviews, meetings and walkthroughs, using checklists and issue lists.

Should we test every possible combination/scenario for a program?

It is not mandatory. There are several points to be considered here: the delivery time available, personnel availability, the complexity of the application under test, the available resources, the budget, etc. But we should cover the functionality to the maximum extent, starting with the primary functions that are important and most visible from the customer's point of view. Exhaustive testing is not possible, and is in any case a time-consuming process, and hence is not recommended. Using the boundary value analysis technique and equivalence classes we can reduce the number of possible combinations.
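A minimal illustration of those two techniques in Python, using a hypothetical "age must be between 18 and 65" rule: one representative value per equivalence class plus the values around each boundary, instead of every possible input:

def eligible(age):
    return 18 <= age <= 65

# Equivalence classes: below the range, inside it, above it.
representatives = [10, 40, 80]

# Boundary value analysis: values at and around each boundary.
boundaries = [17, 18, 19, 64, 65, 66]

for age in representatives + boundaries:
    print(age, eligible(age))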

Is an "A fast database retrieval rate" a testable requirement?


It depends on the end-user requirement. If the end user requires that data must be retrieved from the database within a stated time, then it is a testable requirement. We can test database retrieval time by using some complex and simple queries.
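One way to make the requirement concrete, sketched in Python with sqlite3 standing in for the real database; the 0.5 second target and the orders table are assumptions for illustration, to be replaced by whatever figure is agreed with the end user:

import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                 [(i * 1.5,) for i in range(100000)])

start = time.perf_counter()
conn.execute("SELECT COUNT(*), AVG(amount) FROM orders WHERE amount > 1000").fetchone()
elapsed = time.perf_counter() - start
assert elapsed < 0.5, f"retrieval took {elapsed:.3f}s, which exceeds the agreed target"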

What issues come up in test automation, and how do you manage them?
The main issue is frequent change requests. If there are frequent changes in the system, then as automation engineers we need to take care of the changing objects and functionalities and update the scripts accordingly.

What are 5 common problems in the software development process?

poor requirements - if requirements are unclear, incomplete, too general, and not testable, there
will be problems.
unrealistic schedule - if too much work is crammed in too little time, problems are inevitable.
inadequate testing - no one will know whether or not the program is any good until the customer
complains or systems crash.
featuritis - requests to pile on new features after development is underway; extremely common.
miscommunication - if developers don't know what's needed or customers have erroneous
expectations, problems are guaranteed.

What are 5 common solutions to software development problems?

solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that
are agreed to by all players. Use prototypes to help nail down requirements. In 'agile'-type
environments, continuous coordination with customers/end-users is necessary.
realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing,
changes, and documentation; personnel should be able to complete the project without burning
out.
adequate testing - start testing early on, re-test after fixes or changes, plan for adequate time for
testing and bug-fixing. 'Early' testing ideally includes unit testing by developers and built-in testing
and diagnostic capabilities.
stick to initial requirements as much as possible - be prepared to defend against excessive
changes and additions once development has begun, and be prepared to explain consequences.
If changes are necessary, they should be adequately reflected in related schedule changes. If
possible, work closely with customers/end-users to manage expectations. This will provide them
a higher comfort level with their requirements decisions and minimize excessive changes later on.
communication - require walkthroughs and inspections when appropriate; make extensive use
of group communication tools - e-mail, groupware, networked bug-tracking tools and change
management tools, intranet capabilities, etc.; ensure that information/documentation is available
and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use
prototypes if possible to clarify customers' expectations.

What is software 'quality'?


Quality software is reasonably bug-free, delivered on time and within budget, meets requirements
and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will
depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle
view of the 'customers' of a software development project might include end-users, customer
acceptance testers, customer contract officers, customer management, the development
organization's management/accountants/testers/salespeople, future software maintenance
engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have their own
slant on 'quality' - the accounting department might define quality in terms of profits while an end-
user might define quality as user-friendly and bug-free.

What is SEI? CMM? CMMI? ISO? IEEE? ANSI? Will it help?

SEI = 'Software Engineering Institute' at Carnegie-Mellon University; initiated by the U.S. Defense
Department to help improve software development processes.
CMM = 'Capability Maturity Model', now called the CMMI ('Capability Maturity Model Integration'),
developed by the SEI. It's a model of 5 levels of process 'maturity' that determine effectiveness in
delivering quality software. It is geared to large organizations such as large U.S. Defense
Department contractors. However, many of the QA processes involved are appropriate to any
organization, and if reasonably applied can be helpful. Organizations can receive CMMI ratings
by undergoing assessments by qualified auditors.
Level 1 - characterized by chaos, periodic panics, and heroic efforts required by individuals to successfully complete projects. Few if any processes in place; successes may not be repeatable.

Level 2 - software project tracking, requirements management, realistic planning, and configuration management processes are in place; successful practices can be repeated.

Level 3 - standard software development and maintenance processes are integrated throughout an organization; a Software Engineering Process Group is in place to oversee software processes, and training programs are used to ensure understanding and compliance.

Level 4 - metrics are used to track productivity, processes, and products. Project performance is predictable, and quality is consistently high.

Level 5 - the focus is on continuous process improvement. The impact of new processes and technologies can be predicted and effectively implemented when required.

Perspective on CMM ratings: during 1997-2001, 1018 organizations were assessed. Of those, 27% were rated at Level 1, 39% at 2, 23% at 3, 6% at 4, and 5% at 5. (For ratings during the period 1992-96, 62% were at Level 1, 23% at 2, 13% at 3, 2% at 4, and 0.4% at 5.) The median size of organizations was 100 software engineering/maintenance personnel; 32% of organizations were U.S. federal contractors or agencies. For those rated at Level 1, the most problematical key process area was Software Quality Assurance.

ISO = 'International Organization for Standardization' - The ISO 9001:2000 standard (which
replaces the previous standard of 1994) concerns quality systems that are assessed by outside
auditors, and it applies to many kinds of production and manufacturing organizations, not just
software. It covers documentation, design, development, production, testing, installation,
servicing, and other processes. The full set of standards consists of: (a)Q9001-2000 - Quality
Management Systems: Requirements; (b)Q9000-2000 - Quality Management Systems:
Fundamentals and Vocabulary; (c)Q9004-2000 - Quality Management Systems: Guidelines for
Performance Improvements. To be ISO 9001 certified, a third-party auditor assesses an
organization, and certification is typically good for about 3 years, after which a complete
reassessment is required. Note that ISO certification does not necessarily indicate quality
products - it indicates only that documented processes are followed. Also see http://www.iso.ch/
for the latest information. In the U.S. the standards can be purchased via the ASQ web site at
http://e-standards.asq.org/
IEEE = 'Institute of Electrical and Electronics Engineers' - among other things, creates standards
such as 'IEEE Standard for Software Test Documentation' (IEEE/ANSI Standard 829), 'IEEE
Standard of Software Unit Testing (IEEE/ANSI Standard 1008), 'IEEE Standard for Software
Quality Assurance Plans' (IEEE/ANSI Standard 730), and others.
ANSI = 'American National Standards Institute', the primary industrial standards body in the U.S.;
publishes some software-related standards in conjunction with the IEEE and ASQ (American
Society for Quality).
Other software development/IT management process assessment methods besides CMMI and
ISO 9000 include SPICE, Trillium, TickIT, Bootstrap, ITIL, MOF, and CobiT.

Integration Testing :-
Integration testing is a logical extension of unit testing. In its simplest form, two units that have
already been tested are combined into a component and the interface between them is tested. A
component, in this sense, refers to an integrated aggregate of more than one unit. In a realistic
scenario, many units are combined into components, which are in turn aggregated into even
larger parts of the program. The idea is to test combinations of pieces and eventually expand the
process to test your modules with those of other groups. Eventually all the modules making up a
process are tested together. Beyond that, if the program is composed of more than one process,
they should be tested in pairs rather than all at once.

Integration testing identifies problems that occur when units are combined. By using a test plan
that requires you to test each unit and ensure the viability of each before combining units, you
know that any errors discovered when combining units are likely related to the interface between
units. This method reduces the number of possibilities to a far simpler level of analysis.

You can do integration testing in a variety of ways but the following are three common strategies:

The top-down approach to integration testing requires that the highest-level modules be tested and
integrated first. This allows high-level logic and data flow to be tested early in the process and it
tends to minimize the need for drivers. However, the need for stubs complicates test
management and low-level utilities are tested relatively late in the development cycle. Another
disadvantage of top-down integration testing is its poor support for early release of limited
functionality.
The bottom-up approach requires the lowest-level units be tested and integrated first. These units
are frequently referred to as utility modules. By using this approach, utility modules are tested
early in the development process and the need for stubs is minimized. The downside, however, is
that the need for drivers complicates test management and high-level logic and data flow are
tested late. Like the top-down approach, the bottom-up approach also provides poor support for
early release of limited functionality.
The third approach, sometimes referred to as the umbrella approach, requires testing along
functional data and control-flow paths. First, the inputs for functions are integrated in the bottom-
up pattern discussed above. The outputs for each function are then integrated in the top-down
manner. The primary advantage of this approach is the degree of support for early release of
limited functionality. It also helps minimize the need for stubs and drivers. The potential
weaknesses of this approach are significant, however, in that it can be less systematic than the
other two approaches, leading to the need for more regression testing.

Subject: What's the 'spiral model'?

Basically, the idea is evolutionary development, using the waterfall model for
each step; it's intended to help manage risks. Don't define in detail the
entire system at first. The developers should only define the highest
priority features. Define and implement those, then get feedback from
users/customers (such feedback distinguishes "evolutionary" from "incremental"
development). With this knowledge, they should then go back to define and
implement more features in smaller chunks.

Subject: What's a 'function point'?

Function points and feature points are methods of estimating the "amount of functionality" required for a program, and are thus used to estimate project completion time. The basic idea involves counting inputs, outputs, and other features of a description of functionality.
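A toy sketch of that counting idea in Python; the categories follow the usual function-point style (inputs, outputs, inquiries, files), but the weights and counts below are illustrative assumptions only, not the official tables:

# Illustrative weights and counts -- real function-point analysis uses
# published complexity tables, not these made-up numbers.
weights = {"inputs": 4, "outputs": 5, "inquiries": 4,
           "internal_files": 10, "external_interfaces": 7}
counts  = {"inputs": 12, "outputs": 8, "inquiries": 5,
           "internal_files": 3, "external_interfaces": 2}

unadjusted_fp = sum(counts[k] * weights[k] for k in weights)
print(unadjusted_fp)   # 12*4 + 8*5 + 5*4 + 3*10 + 2*7 = 152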

Web based Application Security Testing


1. Introduction
Security is one of the prime concerns for all Internet applications. It is often noticed that during testing of an application, security does not get due focus, and whatever security testing is done is mainly limited to testing the security functionality captured in the requirements document. Any mistake at the requirement-gathering stage can leave the application vulnerable to potential attacks. This paper presents an approach to handling the different factors that affect application security testing.
Application security is under the developers' control, but other security aspects such as IT security, network security, infrastructure security, etc. are not under the developers' control. Most organizations already have some degree of online security infrastructure such as firewalls, intrusion detection systems, operating system hardening procedures, etc. The problem is that they often overlook the need to secure and verify the integrity of internally developed applications and code pages against external attacks. This white paper talks about application security for web-based applications.
2. Why Web Application Security Should be Part of Your Web Risk
Management Program
There are many reasons your organization should identify and address web application
security vulnerabilities as part of your web risk management program:
• Reduce Cost of Recovery and Fixes -- Computer security attacks cost as much as $10 billion a year.
• Ensure Customer Trust -- Trust is a key component of customer adoption and retention.
• Encourage Website Adoption -- Consumers are still not adopting websites as a preferred channel for doing business. The Tower Group cited that 26% of customers don't use online banking for security fears and another 6% do not due to privacy issues.
• Maintain Competitive Advantage -- Many organizations are using trust as a key competitive advantage and are leveraging customer fears to proactively implement security and privacy programs to ease the uncertainty.
• Reduce Cost of Manual and Outsourced Security Testing -- Many organizations today, especially ones in regulated industries, test security using costly manual processes that cannot address all potential website risks. Other organizations spend millions on outsourced security assessment and ethical hacking resources.
Consider these statistics:

• On average, there are anywhere from 5 to 15 defects per 1,000 lines of code.
• A 5-year Pentagon study concluded that it takes an average of 75 minutes to track down one defect, and fixing one of these defects takes 2 to 9 hours. That translates to 150 hours, or roughly $30,000, to clean every 1,000 lines of code.
• Researching each of the 4,200 vulnerabilities published by CERT last year for just 10 minutes would have required 1 staffer to research for 17.5 full workweeks, or 700 hours.
• Gartner Group estimates that a company with 1,000 servers can spend $300,000 to test and deploy a patch; most companies deploy several patches a week.
3. Why Web Application Security is Important
The Internet has forever changed the way business gets done. Web-based
applications are enabling interaction among customers, prospects, and partners.
Unfortunately, many Web-based applications have inherent vulnerabilities and security-
oriented design flaws. No one on the Internet is immune from security threats. Internet-
based attacks exploit these weaknesses to compromise sites and gain access to critical
systems. In the race to develop online services, web applications have been developed
and deployed with minimal attention given to security risks, resulting in a surprising
number of corporate sites that are vulnerable to hackers. Prominent sites from a number
of regulated industries including financial services, government, healthcare, and retail,
are probed daily. The consequences of a security breach are great: loss of revenues,
damage to credibility, legal liability, and loss of customer trust.
Web applications are used to perform most major tasks or website functions. They
include forms that collect personal, classified, and confidential information such as
medical history, credit and bank account information and user satisfaction feedback.
Gartner Inc. has noted that almost 75 percent of attacks are tunneling through web
applications. Web application security is a significant privacy and risk compliance concern
that remains largely unaddressed.
Security awareness for Web-based applications is, therefore, essential to an
organization’s overall security posture.
Examples of vulnerabilities

S No  Hacking attack                         What hackers use it for
1     Cookie Poisoning                       Identity Theft / Session Hijacking
2     Hidden Field Manipulation              eShoplifting
3     Parameter Tampering                    Fraud
4     Buffer Overflow                        Denial of Service / Closure of Business
5     Cross-Site Scripting                   Identity Theft
6     Backdoor and Debug Options             Trespassing
7     Forceful Browsing                      Breaking and Entering
8     HTTP Response Splitting                Phishing, Identity Theft, and eGraffiting
9     Stealth Commanding                     Concealing Weapons
10    3rd Party Misconfiguration             Debilitating a Site
11    XML and Web Services Vulnerabilities   New layers of attack vectors & malicious use
12    SQL Injection                          Manipulation of DB information

3. Possible Security Threats


3.1. Threat One: Input Validation
The baseline is "do not trust user input" and "do not trust client-side processing results".
• Attacker can easily change any part of the HTTP request before submitting
o URL
o Cookies
o Form fields
o Hidden fields
o Headers
• Encoding is not encrypting
• Inputs must be validated on the server (not just the client)
Countermeasures
• Tainting (Perl)
• Code reviews (check variable against list of allowed values, not vice-versa)
• Application firewalls
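A minimal sketch (Python) of the code-review rule above -- check the variable against a list of allowed values rather than looking for bad ones; the field name and allowed values are hypothetical:

ALLOWED_SORT_FIELDS = {"name", "date", "amount"}

def validated_sort_field(value):
    # Server-side check: reject anything that is not explicitly allowed,
    # regardless of what the client-side code claims to have validated.
    if value not in ALLOWED_SORT_FIELDS:
        raise ValueError("invalid sort field")
    return value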

3.2. Threat Two: Access Control


Access control is how an application grants access to content and functions to some users and not others. A web application's access control model is closely tied to the content and functions that the site provides. If access control rules are implemented in various locations all over the code, with each user belonging to groups or roles with different abilities or privileges, access control flaws are very likely to happen.
• Usually inconsistently defined/applied
• Examples
o Insecure session IDs or keys
o Forced browsing past access control checks
o Path traversal
o File permissions – may allow access to config/password files
o Client-side caching
3.3. Threat Three: Broken Account and Session Management
If the application uses a specification-approved "user authentication infrastructure", then password change control, password strength, password storage, protecting credentials in transit, session ID protection, and trust relationships are all taken care of for you.

• Weak authentication
o Password-only
o Easily guessable usernames (admin, etc.)
o Unencrypted secrets are also insecure
• How to break in
o Guess password
o Reset password
o New password emailed by application
o Sniff password
• Backend authentication
o How database passwords are stored
o Trust relationships between hosts (IP address can be spoofed, etc.)

3.4. Threat Four: Cross-Site Scripting (XSS) Flaws


Cross-Site Scripting (XSS) attacks require the execution of Client-Side Languages
(JavaScript, Java, VBScript, ActiveX, Flash, etc.). XSS occurs when an attacker uses a
web application to send malicious code to a different end user. The first defense is input
validation, and the second defense is to HTML-encode all input when it is used as output.
This will reduce dangerous HTML tags to more secure escape characters.
Cross-site scripting is a relatively new approach that is being used to attack websites. It involves disguising a script as a URL variable within a URL from a legitimate site, and tricking a user into clicking on that URL, perhaps by sending it in an official-looking e-mail or using some other approach. If the site does not check URL variables, the script will then execute in the user's browser. While this does not attack the website directly, it can be used to run arbitrary code in a user's browser, collect userids and passwords, or anything else that a script can be used to do.
Common sources of these kinds of attacks are web mail applications and web-based bulletin boards, but in some cases even web servers are also directly affected.
• Attacker uses a trusted application/company to send malicious code to the end user
• Attacker can "hide" the malicious code
o Unicode encoding
• 2 types of attacks
o Stored
o Reflected
• Wide-spread problem!
• Countermeasure: input validation
o Positive
o Negative: "< > ( ) # &"
o Don't forget these: "&lt &gt &#40 &#41 &#35 &#38"
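A minimal sketch (Python) of the second defense named above -- HTML-encode input before it is written back out, so injected markup is rendered as inert text:

import html

user_input = '<script>alert("xss")</script>'
safe_output = html.escape(user_input)
print(safe_output)   # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;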

3.5. Threat Five: Buffer Overflow


A buffer overflow occurs when a program or process tries to store more data in a buffer (temporary data storage area) than it was intended to hold. Since buffers are created to contain a finite amount of data, the extra information - which has to go somewhere - can overflow into adjacent buffers, corrupting or overwriting the valid data held in them.

Buffer overflow is an increasingly common type of security attack on data integrity. In buffer overflow attacks, the extra data may contain code designed to trigger specific actions, in effect sending new instructions to the attacked computer that could, for example, damage the user's files, change data, or disclose confidential information. Buffer overflow attacks are said to have arisen because the C programming language supplied the framework, and poor programming practices supplied the vulnerability.
• Mostly affects web/application servers
• Can affect apps/libraries too
• Goal: crash the target application and get a shell
• Buffer overflow example
o echo "vrfy `perl -e 'print "a" x 1000'`" | nc www.targetsystem.com 25
o Replace this with something like this...
o char shellcode[] = "\xeb\x1f\x5e\x89\x76\x08..."
• Countermeasures
o Keep up with bug reports
o Code reviews
o Use Java

3.6. Threat Six: Command Injection flaws


Most programming languages have system commands to aid their functionality. When user input is not adequately checked, it may be possible to execute operating system commands. To test for this vulnerability, exercise the input with the OS commands that the server box is set up to support; a defect can be filed for the developers' investigation if the server returns HTTP 200 OK for an input containing an OS command.
• Allows attacker to relay malicious code in form variables or URL
o System commands
o SQL
o Interpreted code (Perl, Python, etc.)
• Many applications use calls to external programs
o Send mail
• Examples
o Path traversal: "../"
o Add more commands: "; rm -r *"
o SQL injection
• Countermeasures
o Taint all input
o Avoid system calls (use libraries instead)
o Run application with limited privileges
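A minimal sketch (Python) of the "avoid system calls, use libraries" countermeasure: the user-supplied value is passed as a separate argument rather than spliced into a shell command line, so metacharacters such as "; rm -r *" are not interpreted. The ping example is hypothetical:

import subprocess

def ping(host):
    # No shell=True and no string concatenation: `host` travels as one
    # argument to the program, never through a shell.
    return subprocess.run(["ping", "-c", "1", host],
                          capture_output=True, text=True, check=False)

print(ping("example.com").returncode)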

3.7. Threat Seven: Error Handling Problems

All current vulnerability-scanning tools identify known vulnerabilities through known web site responses or error messages. Giving generic error messages takes the application off the vulnerability scanners' radar screens, which can greatly reduce script kiddies' attacks.
• Examples: stack traces, DB dumps
• Helps attacker know how to target the application
• Inconsistencies can be revealed too ("File not found" vs. "Access denied", file-open errors)
• Need to give enough information to the user without giving too much information to the attacker
• Countermeasures
o Code review
o Modify default error pages (404, 401, etc.)


3.8. Threat Eight: Insecure use of Cryptography

Web applications often use cryptographic functions to protect information and credentials. These functions, along with the code to integrate them, have proven difficult to code properly, often resulting in weak protection.
• Insecure storage of credit cards, passwords, etc.
• Poor choice of algorithm (or inventing your own)
• Poor randomness
o Session IDs
o Tokens
o Cookies
• Improper storage in memory
• Countermeasures
o Store only what you must
o Store a hash instead of the full value (SHA-1)
o Use only vetted, public cryptography
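A minimal sketch (Python) of "store a hash instead of the full value". The text above names SHA-1; the sketch uses the standard library's salted PBKDF2 instead, which is the more common current choice for stored secrets -- an assumption on my part, not a statement from the original paper:

import hashlib, os

def hash_secret(secret):
    # Store the salt and digest; the clear-text secret is never stored.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100000)
    return salt, digest

salt, digest = hash_secret("s3cret-value")
print(len(salt), len(digest))   # 16, 32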

3.9. Threat Nine: Web and Application server Misconfigurations

Most networks have deployed some sort of security patch update server, for example SUS or SMS. As part of deployment testing, a test engineer needs to walk through all the boxes to ensure that the security patch update client software is installed and started. It's also a good idea to run a patch update scan to ensure no security patches are missing, which could mean a patch is missing on the patch server itself.
• Tension between "work out of the box" and "use only what you need"
• Developers are not web masters
• Examples
o Un-patched security flaws (BID example)
o Misconfigurations that allow directory traversal
o Administrative services accessible
o Default accounts/passwords
• Countermeasures
o Create and use hardening guides
o Turn off all unused services
o Set up and audit roles, permissions, and accounts
o Set up logging and alerts

3.10. Threat Ten: Database Testing

The root cause of SQL injection is that an application uses dynamic SQL calls, which are
generated from users' input. SQL Injection is the ability to inject SQL commands into the
database engine through an existing application. SQL injection mainly exists in the
context of Web applications where:
(1) Web application has no, or poorly implemented, input validation

(2) The web application communicates with a database server and

(3) An attacker has access to the application through the Internet.
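
A minimal sketch (Python, with sqlite3 standing in for the real database driver) contrasting the root cause named above -- dynamic SQL built from user input -- with a parameterized query:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "x' OR '1'='1"

# Vulnerable: the input is spliced into the SQL text, so the OR clause
# becomes part of the query and matches every row.
dynamic_sql = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(dynamic_sql).fetchall())        # [('admin',)]

# Safe: the driver binds the value as data, not as SQL.
print(conn.execute("SELECT role FROM users WHERE name = ?",
                   (user_input,)).fetchall())      # []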

3.11. Threat Eleven: Privilege Elevation

Privilege elevation is a class of attacks where a hacker is able to increase his/her system privileges to a higher level than they should be. If successful, this type of attack can result in a hacker gaining privileges as high as root on a UNIX system. An example of such an attack can be found at http://www.ciac.org/ciac/bulletins/m-026.shtml. In this example, authorized users of versions of OpenSSH earlier than 3.0.2 could gain the ability to run arbitrary code with super-user privileges. Once a hacker is able to run code with this level of privilege, the entire system is effectively compromised.

3.12. Threat Twelve: Identity Spoofing

Identity spoofing is a technique where a hacker uses the credentials of a legitimate user to gain access to an application or system. This can be a result of: a) users being careless with their IDs and passwords, b) IDs and passwords being transmitted over a network or stored on a server without encryption, or c) users setting easy-to-guess passwords. Once a hacker has possession of a user's credentials, he/she can log in to the application with all of the privileges normally assigned to that user. This threat can be reduced or eliminated by requiring the use of strong passwords and forcing frequent password changes, by not storing or transmitting clear-text passwords, or by the use of hardware tokens. The approach to be taken depends on the value of the data protected by the ID and password.

4. How do these Vulnerabilities Affect Your Customers


Your customers can be affected in a variety of ways: from identity theft to session hijacking to the compromise of confidential and private customer data. Cross-site scripting is one of the leading methods used in identity theft (and an obvious concern to financial and healthcare institutions); it attacks the user via a flaw in the website that enables the attacker to gain access to login and account data from the user. Many of the phishing email-based schemes use cross-site scripting and other application-layer attacks to trick users into giving up their credentials.
SQL injection is one of the main attacks used when backend databases are compromised. General consensus has pegged SQL injection as the method used behind the massive compromise of credit card numbers in February of last year. We still see many cases where cookies aren't properly secured, allowing an attacker to 'poison' the cookie, hijack active sessions or manipulate hidden fields to defraud e-commerce sites.
As web applications become more pervasive and more complex, so do the techniques and attacks hackers are using against them. Recent new vulnerabilities and attack methods discovered or reported show an alarming trend toward attacks with multi-faceted damage and even anti-forensics capabilities. This means hackers are using more powerful attacks to cause significantly more damage, while at the same time covering their tracks is becoming easier.

5. Security Testing: Approach


Identify security threats and map with specific application
Create Security Test Plan
Write Security Test Cases
Execute Security Test Cases after functional testing is completed
Analyze test results
Report security flaws to the developers
Track to Closure

6. Some common mistakes in application security


System Admin password is hard-coded in the application
Password is sent over the wire in unencrypted form
Database roles are not used
Permissions are given in an ad hoc manner
Application roles are not used
Multiple applications run in a single machine
Stored procedure level security is almost missing
Stored procedures are not encrypted
Errors triggering sensitive information leak
SQL injections
Back doors and debug options
Cross-site scripting
Weak session management
Insecure use of cryptography

7. Recommended areas of security testing

S No  Test Area                              Description

1     Authentication                         User ID complexity, susceptibility to brute forcing, lockout after x number of incorrect login attempts, username harvesting, saving of passwords locally, login error messages, password policy

2     Authorization                          Cookie use and complexity, caching, tracking logic, susceptibility to session hijacking / session replay attacks

3     Information Leakage                    Review HTML page source code for: revision history, comments, email addresses, internal host information, hidden form fields, error messages

4     Field Variable Control                 Buffer overflow, SQL injection, cross-site scripting, system calls, URL re-writing

5     Session Time-out and Log-out           No back button, cookie validation, multiple logins allowed for a single user, reusing older credentials to gain access

6     Cache Control                          No sensitive information stored locally, no tracking material stored that could replay a session

7     Server Side Application Logic          Web application execution environment, database interaction, internal proxying, APIs, unauthorized command execution

8     Client Side Software Vulnerabilities   Code obfuscation, client connection DoS, client code reverse-engineering susceptibility, client function manipulation, client privilege escalation

9     Error Handling                         No 404 errors, no database or middleware errors that reveal the software used, no customized error pages revealing useful information

10    Security of 3rd Party Applications     Single sign-on, middleware or content-control applications with published security flaws, communication protocols used between applications, secure configuration of software, secure integration of software into the Web environment

11    Application Administration             Policy procedures, ports, login credentials, remote access security

12    Use of Encryption                      Obfuscation, password storing, key management/revocation, scalability

8. Web Application Security Action Plan


A Web Application Security Process can be implemented using 3 key guidelines:
1. Understand: Perform security audits and defect testing throughout the application lifecycle. Production applications are an obvious first place to implement regular audits and analysis to determine security and compliance risk to an organization. At the same time, one must not forget that the application development lifecycle is the breeding ground for the defects that cause the risks. Performing security testing during the application development lifecycle, at key points during the various stages from development to QA to staging, will reduce costs and significantly reduce your online risk.
2. Communicate: After risks and security defects have been identified, it is imperative to get the
right information to the right stakeholder. Development needs to understand
what these vulnerabilities and compliance exposures are in development terms. This
means providing details around how the attack works and guidance on remediation.
There are several good sources both online and in security testing tools for developers.
Similarly, QA must be able to perform delta, trend and regression analysis on the
security defects just like they do for performance and functionality flaws. Using their
existing methods and metrics they, along with Product Management, can properly
prioritize and monitor the defect remediation process as well as accurately assess
release candidacy. Finally, with the ever-increasing number and scope of Government
and internal regulations and policies, teams from Security, Risk, Compliance and R&D
need to communicate and validate application risks against these very real business
drivers.
3. Measure: For any process to be successful, there needs to be criteria by which to
measure the successes or failures of the procedures implemented. Organizations use
trending and defect remediation analysis metrics to identify areas and issues to focus
on, i.e. there may be a certain security defect type that keeps cropping up which can
then be identified and dealt with through targeted education and training to recognize
repeated risks with a particular infrastructure product or vendor. Ultimately,
measuring and analyzing scan results will contribute to a reduction in liability and
risk brought about by implementing a web application security plan.
