Software Testing for Web-Applications Non-Functional Requirements

Breno Lisi Romano¹, Gláucia Braga e Silva¹, Henrique Fernandes de Campos¹, Ricardo Godoi
Vieira¹, Adilson Marques da Cunha¹, Fábio Fagundes Silveira² and Alexandre Carlos Brandão Ramos³
Brazilian Aeronautics Institute of Technology¹
São José dos Campos, São Paulo, Brazil
Federal University of São Paulo²
São José dos Campos, São Paulo, Brazil
Federal University of Itajubá³
Itajubá, Minas Gerais, Brazil

Abstract

This paper tackles the use of software testing techniques for Web-Application non-functional requirements. It also shows the application of load and performance testing in a case study. Finally, a new approach to navigability testing for Web-Applications is proposed.

Key Words: load and performance testing, navigability testing, test metric, Web-Application.

1. Introduction

The broad coverage of Web-Applications highlights the importance of verifying non-functional requirements during testing. This paper addresses two software testing categories for Web-Applications based on non-functional requirements and their corresponding test metrics. The first describes the load and performance testing technique applied in a case study. The second tackles the navigability testing technique, proposing an approach for its execution and suggesting some metrics to be used.

The remainder of this article is organized as follows. The second section describes the background on Web-Applications. The third presents related works. Load and performance testing is addressed in the fourth section. The fifth describes a new approach to navigability testing. Finally, the sixth and last section presents some conclusions.

2. Background

Originally, Web-Applications provided information through text documents containing static Hypertext Markup Language (HTML) files [1]. Because these applications were not used in critical sectors, failures were not considered important. Nowadays, Web-Applications can be found in areas such as e-commerce, information diffusion, and cooperative work [2].

Their complexity has increased due to their use in critical areas, which demands considerable attention to defect prevention in order to avoid losses for companies. This, in turn, demands methods, techniques, and tools to support Web-Application testing, providing quality and reliability [3].

Therefore, it is important to consider software testing for Web-Application non-functional requirements. These non-functional requirements were classified into two different categories, as shown in Table 1.

Table 1. Categories of non-functional requirements
Load and Performance: answer user requests in an appropriate time; support simultaneous access; depend on the server and technologies.
Navigability: do not have broken links; all pages should be reachable from the main page.

3. Related Works

The use of representation models makes software systems easier to understand. Conallen [1] has proposed a UML (Unified Modeling Language) extension for mapping Web pages.

Ricca and Tonella [4] have dealt with performing software testing on Web-Applications based on information derived from representation models. Their work suggests a test approach based on the class diagram, exploring test case generation using structural and functional testing for Web-Applications.

Another approach to software testing for Web-Applications, called interaction mutation, is proposed by Lee and Offutt [5], who apply mutation analysis to validate data interactions through eXtensible Markup Language (XML) messages exchanged among Web-Application components.

Ruffo et al. [6] present a tool set and a process for Web-Application testing, including load and performance testing. However, they do not report the metrics they used.

The next section presents the load and performance testing applied in a specific case study.

4. Load and Performance Testing


An e-commerce application, considered the System Under Test (SUT), was used in the load and performance testing, covering the non-functional requirements of this category (Table 1). This test simulated simultaneous accesses to the Web-Application by virtual clients, with the sole purpose of checking infrastructure, hardware, and server bandwidth issues [7].

The execution environment of this test allowed the tester to create a load generator that simulates multiple virtual clients simultaneously accessing the SUT. This load generator supplied an excessive number of virtual clients to the SUT, creating an information overload in order to complete the stress and scalability testing. Figure 1 shows this test architecture.

Figure 1. Load and Performance Testing Architecture [8]

The defined test script (the use case test) was prepared to verify the application behavior under simultaneous insertions of items into shopping carts by virtual clients.

This test was produced with the WebLOAD 8.1.0 open-source tool from RadView, using the following load generator configuration: test duration equal to 10 minutes, and a linear progression of virtual clients increasing from 30 to 60. Figure 2 shows the SUT behavior during the load and performance testing.

Figure 2. SUT behavior
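
The case study drives this configuration through WebLOAD's own scripts and reports. Purely as an illustration of the kind of load generator described above (and not WebLOAD code), the following minimal Python sketch ramps the number of virtual clients linearly from 30 to 60 over a 10-minute run against a hypothetical add-to-cart URL, recording per-request data from which metrics such as those in Table 2 could later be derived:

    # Minimal, illustrative load-generator sketch (not a WebLOAD agenda).
    import threading
    import time
    import urllib.error
    import urllib.request

    SUT_URL = "http://sut.example.com/cart/add?item=42"   # hypothetical endpoint
    TEST_DURATION = 10 * 60                                # 10-minute run, as in the case study
    RAMP_START, RAMP_END = 30, 60                          # virtual clients, linear ramp

    results = []                          # one record per completed request
    results_lock = threading.Lock()
    stop_event = threading.Event()

    def virtual_client():
        # Repeatedly hit the SUT until the run ends, recording simple per-request data.
        while not stop_event.is_set():
            started = time.time()
            record = {"start": started, "ok": False, "bytes": 0, "elapsed": 0.0}
            try:
                with urllib.request.urlopen(SUT_URL, timeout=30) as response:
                    body = response.read()
                    record["ok"] = (response.status == 200)
                    record["bytes"] = len(body)
            except (urllib.error.URLError, OSError):
                pass                      # left as ok=False; counted later as a failed hit
            record["elapsed"] = time.time() - started
            with results_lock:
                results.append(record)

    def run_load_test():
        clients = []
        test_start = time.time()
        while time.time() - test_start < TEST_DURATION:
            # Linear ramp: how many virtual clients should be alive at this moment?
            progress = (time.time() - test_start) / TEST_DURATION
            target = int(RAMP_START + progress * (RAMP_END - RAMP_START))
            while len(clients) < target:
                thread = threading.Thread(target=virtual_client, daemon=True)
                thread.start()
                clients.append(thread)
            time.sleep(1)
        stop_event.set()

    if __name__ == "__main__":
        run_load_test()
        print("collected", len(results), "request records")

A real run would, of course, rely on the tool's own virtual-client scheduling and reporting rather than raw threads; the sketch only fixes the idea of ramping simultaneous clients against the SUT.
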
The main load and performance metrics used in this Use Case Test are listed in Table 2.

Table 2. Metrics for load and performance testing [8]

Load Size: the number of Virtual Clients running during the last reporting interval.

Throughput: the average number of bytes per second transmitted from the SUT to the Virtual Clients running this Test Script during the last reporting interval. In other words, this is the Response Data Size (sum) divided by the number of seconds in the reporting interval.

Receive Time: the elapsed time between receiving the first byte and the last byte.

Connection Time: the time it takes for a Virtual Client to connect to the SUT, in seconds; in other words, the time from the beginning of the HTTP request to the establishment of the Transmission Control Protocol / Internet Protocol (TCP/IP) connection.

Failed Hits: the total number of times Virtual Clients made HTTP requests but did not receive the correct HTTP response from the SUT during the test.

DNS Lookup Time: the time it takes to resolve the host name and convert it to an IP address by calling the DNS server.

Failed Rounds: the total number of times Virtual Clients started but did not complete the Test Script during the test.

Failed Connections: the total number of times Virtual Clients tried to connect to the SUT but were unsuccessful during the test.
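
Assuming per-request records shaped like those collected by the sketch above (the field names and the 10-second reporting interval are illustrative assumptions, not WebLOAD internals), the following snippet indicates how a few of the Table 2 metrics could be computed for one reporting interval:

    REPORT_INTERVAL = 10.0    # seconds; an assumed reporting interval

    def interval_metrics(records, interval_start):
        # Summarize the requests whose start time falls inside one reporting interval.
        interval_end = interval_start + REPORT_INTERVAL
        window = [r for r in records if interval_start <= r["start"] < interval_end]

        total_bytes = sum(r["bytes"] for r in window)
        return {
            # Throughput: response bytes received divided by the interval length.
            "throughput_bytes_per_s": total_bytes / REPORT_INTERVAL,
            # Failed Hits: requests that did not get the expected HTTP response.
            "failed_hits": sum(1 for r in window if not r["ok"]),
            # Average time to receive a complete response within this interval.
            "avg_response_time_s": (sum(r["elapsed"] for r in window) / len(window)
                                    if window else 0.0),
            # Requests completed in the interval (a rough activity figure,
            # not the same thing as WebLOAD's Load Size).
            "requests_in_interval": len(window),
        }

    # Example: metrics for the first reporting interval of a run.
    # print(interval_metrics(results, interval_start=results[0]["start"]))
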
The behavior analysis based on the established metrics in this case study has shown that the Web-Application remained stable with simultaneous shopping by 30 clients. On the other hand, when the number of clients was increased to 60, the application became unstable.

This behavior demands adjustments in the infrastructure, hardware, and server bandwidth where the application is hosted.

5. Navigability Testing
The second non-functional software testing category for Web-Applications (Table 1) considers navigability as another important factor for these applications.

Algorithms used by search engines to rank Web-Applications take navigability into account when sorting Web-Application positions.

Research by Ribeiro [9] emphasizes the importance of an e-commerce site being ranked in the first position of a search, obtaining profits of up to 400% above a site in fifth place.

Within this context, this paper proposes a navigability testing approach based on the specification models reported by Ricca and Tonella [4], using information provided by class diagrams.

After navigability testing has been completed, the results are used to verify the Web-Application behavior through the analysis of the proposed metrics shown in Table 3. These metrics were defined by the authors, based on their experience, as best practices for software testing.
Table 3. Proposed metrics for Navigability Testing

Unreachable Pages: quantity of pages available on the server that cannot be reached.

Not Found Pages: quantity of pages that return a Not Found error (404 error).

Pages Relevance vs. Hits Sum in Pages: relationship between page relevance and the sum of hits in the pages that were necessary to reach the target page.

Reachable Pages through Main Page: quantity of pages available on the server that can be reached from the main page.

Identifying Closed Cycles: identification of a cyclic sequence of steps in which it is possible to return to the initial page.
The proposed Navigability Testing includes the generation of a navigability tree and the application of a pre-defined metrics set (Table 3), based on an input file in XML format containing all Web-Application pages on the server, as illustrated in Figure 3.

The input file in XML format must include the following information for each Web-Application page: an identifier, with the number one used for the main page; the page's Uniform Resource Locator (URL); and the page's relevance to the Web-Application, ranging from 1 to 5.
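
The paper does not specify the XML schema of this input file. As an illustration only, the sketch below assumes a hypothetical <page> element carrying the three pieces of information listed above (identifier, URL, and relevance) and parses it with Python's standard library:

    import xml.etree.ElementTree as ET

    # Hypothetical input-file layout; the paper only states that each page entry
    # carries an identifier (1 = main page), a URL, and a relevance from 1 to 5.
    SAMPLE_INPUT = """
    <pages>
      <page id="1" url="http://sut.example.com/index.html" relevance="5"/>
      <page id="2" url="http://sut.example.com/catalog.html" relevance="4"/>
      <page id="3" url="http://sut.example.com/old-promo.html" relevance="1"/>
    </pages>
    """

    def load_pages(xml_text):
        # Return a dict mapping page id -> (url, relevance).
        root = ET.fromstring(xml_text)
        return {int(p.get("id")): (p.get("url"), int(p.get("relevance")))
                for p in root.findall("page")}

    pages = load_pages(SAMPLE_INPUT)
    MAIN_PAGE_ID = 1          # the paper fixes identifier 1 as the main page
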
Figure 3. Navigability Testing Architecture

From this input file, the test performs a procedure to build the navigability tree and compute the previously proposed navigability metrics, using graph theory.

Although the development of an automated tool was not within the scope of this research, such a tool is essential for the successful application of the proposed Navigability Testing. Manual procedures for building the navigability tree and calculating the proposed metrics are not practicable, mainly for complex Web-Applications.
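
Since the automated tool is left as future work, the following rough Python sketch merely indicates how such automation might derive some of the Table 3 metrics from the pages loaded from the input file; fetch_links() is a hypothetical helper (an HTTP client plus an HTML link extractor) that is not part of the paper:

    from collections import deque

    def fetch_links(url):
        # Hypothetical helper: return (HTTP status code, outgoing link URLs).
        raise NotImplementedError

    def navigability_metrics(pages, main_page_id=1):
        # Compute a few Table 3 metrics over the dict produced by load_pages().
        url_to_id = {url: pid for pid, (url, _rel) in pages.items()}
        status = {}    # page id -> HTTP status code
        graph = {}     # page id -> set of linked page ids (navigability graph)

        for pid, (url, _rel) in pages.items():
            code, links = fetch_links(url)
            status[pid] = code
            graph[pid] = {url_to_id[link] for link in links if link in url_to_id}

        # Breadth-first search from the main page (the graph-theory step).
        reachable = set()
        queue = deque([main_page_id])
        while queue:
            pid = queue.popleft()
            if pid in reachable:
                continue
            reachable.add(pid)
            queue.extend(graph.get(pid, ()))

        return {
            "not_found_pages": sorted(p for p, c in status.items() if c == 404),
            "reachable_from_main_page": sorted(reachable),
            "unreachable_pages": sorted(set(pages) - reachable),
            # A closed cycle back to the initial page exists iff some page that is
            # reachable from the main page links back to the main page.
            "has_closed_cycle_to_main": any(
                main_page_id in graph.get(pid, set()) for pid in reachable
            ),
        }
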

6. Conclusion

Given the importance of measuring non-functional requirements in Web-Application testing, this research is considered essential mainly because it represents a starting point for improving the quality, reliability, and acceptance of this type of product.

Results from the load and performance testing have provided an assessment of the hosting infrastructure for Web-Applications, evidencing their behavior under variations in simultaneous client access.

This paper has also presented a proposal for navigability testing of Web-Applications, together with related metrics, in order to optimize and improve their ranking in search engines.

As future work, the authors suggest developing an automated tool to perform the proposed navigability testing and applying it in a case study. They also propose the use of Experimental Software Engineering for appropriate result measurements.

7. References

[1] J. Conallen. Building Web Applications with UML. Addison-Wesley, Boston, MA, USA, 2nd ed., Oct. 2002.

[2] A. J. Offutt. Quality attributes of Web software applications. IEEE Software, 19(2): 25-32, Mar. 2002.

[3] M. E. Delamaro, J. C. Maldonado, and M. Jino. Introdução ao Teste de Software. Campus, 2007.

[4] F. Ricca and P. Tonella. Analysis and testing of Web applications. In: XXIII International Conference on Software Engineering (ICSE'01), pp. 25-34, Washington, DC, USA, May 2001. IEEE Computer Society.

[5] S. C. Lee and J. Offutt. Generating test cases for XML-based Web component interactions using mutation analysis. In: XII International Symposium on Software Reliability Engineering (ISSRE'01), p. 200, Washington, DC, USA, Nov. 2001. IEEE Computer Society.

[6] G. Ruffo, R. Schifanella, and M. Sereno. WALTy: a user behavior tailored tool for evaluating Web application performance. In: Third IEEE International Symposium on Network Computing and Applications (NCA 2004), pp. 77-86, 2004.

[7] Y. Wu and A. J. Offutt. Modeling and testing Web-based applications. Technical Report ISE-TR-02-08, George Mason University, Fairfax, VA, USA, Nov. 2002.

[8] WebLOAD - Community for Web application performance testing. http://www.webload.org/ - accessed June 19, 2008.

[9] M. Ribeiro. Quanto vale o primeiro lugar no Google. Mídia Digital - http://www.midiadigital.com.br/index.php/2007/05/24/quanto-vale-o-primeiro-lugar-no-google/ - accessed June 19, 2008.
