
A Test Strategy for

Enterprise Integration
Points Testing
Author: Sai Krishna
Creation Date:
Last Updated:
Version: Initial Draft
Introduction
Integration is a topic that can't be ignored in enterprise applications, not only because
integrations with external systems can be error prone, but also because they are hard to test.
This article introduces a broadly applicable testing strategy for integration points, which
improves the coverage, speed, reliability, and reproducibility of testing, and can therefore serve
as a reference for implementing and testing integration-heavy applications.

Types of Integration Testing:

1. Contract Tests
An API represents a contract between two or more applications. The contract describes how to
interact with the interface, what services are available, and how to invoke them. This contract is
important because it serves as the basis for communication.

The first and most basic type of API test is the contract test, which tests the service contract itself
(Swagger, PACT, WSDL, or RAML). This type of test validates that the contract is written correctly and can
be consumed by a client. It works by creating a series of tests that pull in the contract and
validate that:

 the service contract is written according to specification
 a message request and response are semantically correct (schema validation)
 the endpoint is valid (HTTP, MQ/JMS Topic/Queue, etc.)
 the service contract hasn't changed
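The checks above can be sketched in a few lines. The following is an illustrative Python example (not the Groovy used later in this document); the sample contract and field names are hypothetical, and a real contract test would pull the published Swagger/OpenAPI file instead. The fingerprint comparison is one simple way to detect that the contract has changed.

```python
import hashlib
import json

def validate_contract(contract: dict) -> list:
    """Return a list of problems found in a Swagger/OpenAPI-style contract."""
    problems = []
    # The contract must declare a spec version and at least one endpoint.
    if not ("swagger" in contract or "openapi" in contract):
        problems.append("missing spec version field")
    if not contract.get("paths"):
        problems.append("no endpoints (paths) defined")
    # Every operation should document its responses (a schema-validation hook).
    for path, ops in contract.get("paths", {}).items():
        for verb, op in ops.items():
            if "responses" not in op:
                problems.append(f"{verb.upper()} {path}: no responses documented")
    return problems

def contract_fingerprint(contract: dict) -> str:
    """Stable hash of the contract, used to detect unannounced changes."""
    canonical = json.dumps(contract, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Hypothetical contract for illustration:
contract = {
    "openapi": "3.0.0",
    "paths": {"/orders": {"get": {"responses": {"200": {"description": "OK"}}}}},
}
assert validate_contract(contract) == []
baseline = contract_fingerprint(contract)

# Later, an operation is added without notice -- the fingerprint flags it:
contract["paths"]["/orders"]["delete"] = {"responses": {"200": {"description": "OK"}}}
assert contract_fingerprint(contract) != baseline
```

In practice the baseline fingerprint would be stored with the test assets, so any change to the published contract fails the build and triggers a review.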

2. Component Tests
Component tests are like unit tests for the API: you take the individual methods available in the
API and test each one in isolation. You create these tests by making a test step for each method
or resource that is available in the service contract.

The easiest way to create component tests is to consume the service contract and let it generate the
clients. You can then data-drive each individual test case with positive and negative data to validate
that:

 The request payload is well-formed (schema validation)
 The response payload is well-formed (schema validation)
 The response status is as expected (200 OK, a SQL result set returned, or even an error if that's
what you're testing for)
 The response error payloads contain the correct error messages
 The response matches the expected baseline. This can take two forms:
o Regression/diff: the response payload looks exactly the same from call to call (a top-down
approach where you essentially take a snapshot of the response and verify it
every time). This is also a great way to identify API change (more about that
later).
o Assertion: the individual elements in the response match your expectations (a
more surgical, bottom-up approach targeted at a specific value in the response).
 The service responds within an expected timeframe
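A minimal sketch of the data-driven idea, again in illustrative Python: the `get_order` client is a hypothetical stand-in for a client generated from the contract, and each row of test data states the expected status and payload shape, covering both positive and negative cases.

```python
# Hypothetical stand-in for a client generated from the service contract.
def get_order(order_id):
    """Fake endpoint: returns (status, payload) like a generated client would."""
    if not isinstance(order_id, int) or order_id <= 0:
        return 400, {"error": "order_id must be a positive integer"}
    return 200, {"id": order_id, "status": "SHIPPED"}

# Data-driven cases: positive and negative inputs with expected outcomes.
cases = [
    {"input": 42,   "want_status": 200, "want_keys": {"id", "status"}},
    {"input": -1,   "want_status": 400, "want_keys": {"error"}},
    {"input": "xx", "want_status": 400, "want_keys": {"error"}},
]

for case in cases:
    status, payload = get_order(case["input"])
    # Response status is as expected (an expected error also counts as a pass).
    assert status == case["want_status"], case
    # Response payload is well-formed (a stand-in for schema validation).
    assert set(payload) == case["want_keys"], case
```

New cases are added by appending rows to the data table, not by writing new test logic, which is what makes the approach scale.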

3.Scenario Tests
Scenario testing tends to be what most people think about when they think about API testing. In this
testing technique, you assemble the individual component tests into a sequence, much like the example
I described above for the Amazon service.

There are two great techniques for obtaining the sequence:

1. Review the user story to identify the individual API calls that are being made.
2. Exercise the UI and capture the traffic being made to the underlying APIs.

Scenario tests allow you to discover defects that are only introduced when different data
points are combined.
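A scenario chains the calls and passes data between steps. In this illustrative Python sketch the three endpoints are hypothetical in-memory stand-ins; the point is the shape of the test, where the id returned by one step feeds the next.

```python
# In-memory stand-in for an API under test (hypothetical endpoints).
_db = {}
_next_id = [1]

def create_order(item):
    order_id = _next_id[0]
    _next_id[0] += 1
    _db[order_id] = {"item": item, "status": "NEW"}
    return 201, {"id": order_id}

def get_order(order_id):
    if order_id not in _db:
        return 404, {"error": "not found"}
    return 200, _db[order_id]

def cancel_order(order_id):
    if order_id not in _db:
        return 404, {"error": "not found"}
    del _db[order_id]
    return 204, {}

# Scenario: component tests chained in the order the user story dictates,
# passing data (the order id) from one step to the next.
status, body = create_order("book")
assert status == 201
order_id = body["id"]

status, body = get_order(order_id)
assert status == 200 and body["item"] == "book"

status, _ = cancel_order(order_id)
assert status == 204

status, _ = get_order(order_id)
assert status == 404  # the cancelled order is really gone
```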

4. Performance Tests

Performance testing is usually relegated to the end of the testing process, in a performance-specific test
environment. This is because performance testing solutions tend to be expensive, require specialized skill
sets, and require specific hardware and environments. This is a big problem because APIs have service
level agreements (SLAs) that must be met in order to release an application. If you wait until the very last
moment to do your performance testing, failures to meet the SLAs can cause huge release delays.
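A basic SLA check can, however, be run continuously alongside functional tests. This Python sketch is illustrative: `call_api` is a hypothetical stand-in, and the 200 ms p95 SLA is an example figure, not a recommendation.

```python
import statistics
import time

def call_api():
    """Hypothetical stand-in for a real API call."""
    time.sleep(0.001)  # simulate ~1 ms of service latency
    return 200

def measure_latency(call, runs=50):
    """Time repeated calls and return the observed latencies in milliseconds."""
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        call()
        latencies.append((time.perf_counter() - start) * 1000)
    return latencies

SLA_MS = 200  # example SLA: 95th percentile under 200 ms
latencies = measure_latency(call_api)
p95 = statistics.quantiles(latencies, n=20)[-1]  # 95th percentile cut point
assert p95 < SLA_MS, f"SLA breached: p95={p95:.1f} ms"
```

This does not replace a full load test in a dedicated environment, but it catches gross latency regressions long before the end of the release cycle.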

5. Security Tests
 Authentication: identifying the end user via a key.
 Authorization: giving the identified user access to the correct resources/data through an access
token, using private and public keys.
 Encryption: hiding information from unauthorized access.
 Signatures: ensuring information integrity, i.e., checking that API requests or responses have not
been tampered with in transit. Signatures are typically short-lived, expiring automatically after a
few seconds or after first use.
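The signature and expiry points can be illustrated with a common HMAC request-signing scheme. This Python sketch is one typical pattern, not the scheme of any particular provider; the secret, paths, and 30-second window are illustrative values.

```python
import hashlib
import hmac
import time

SECRET = b"shared-secret"  # illustrative only; never hard-code real keys

def sign(method, path, body, timestamp, secret=SECRET):
    """HMAC-SHA256 over the request plus a timestamp (a common scheme)."""
    message = f"{method}\n{path}\n{body}\n{timestamp}".encode()
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify(method, path, body, timestamp, signature, max_age_s=30):
    """Reject tampered requests and signatures older than max_age_s."""
    if time.time() - timestamp > max_age_s:
        return False  # short-lived: the signature expires automatically
    expected = sign(method, path, body, timestamp)
    return hmac.compare_digest(expected, signature)

ts = time.time()
sig = sign("POST", "/orders", '{"item":"book"}', ts)
assert verify("POST", "/orders", '{"item":"book"}', ts, sig)
# Any tampering in transit invalidates the signature:
assert not verify("POST", "/orders", '{"item":"BOMB"}', ts, sig)
```

Security tests then exercise both directions: valid signatures must be accepted, while tampered, replayed, or expired ones must be rejected.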

6. Omni-Channel Tests

Because applications interact through multiple interfaces (mobile, web, APIs,
databases, ...), testing any one of these in isolation leaves gaps in test coverage and
misses the subtleties of the complex interactions between these interfaces.

7. Managing Change
Change is one of the most important indicators of risk to your application. Change can occur in many
forms, including:

 Protocol message format change for a service
 Elements added to or removed from an API
 Underlying code change affecting the data format returned
 Re-architecture of a service to break it into multiple parts (extremely prevalent as
organizations move to microservices)
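Added or removed elements can be detected mechanically by diffing two versions of the contract's element lists. The Python sketch below is illustrative; the endpoints and field names are hypothetical.

```python
def api_diff(old: dict, new: dict):
    """Compare two versions of a contract's elements per endpoint."""
    changes = []
    for path in sorted(set(old) | set(new)):
        before = set(old.get(path, []))
        after = set(new.get(path, []))
        for field in sorted(after - before):
            changes.append(f"ADDED   {path}.{field}")
        for field in sorted(before - after):
            changes.append(f"REMOVED {path}.{field}")
    return changes

# Hypothetical contract versions: a renamed field and a new endpoint.
v1 = {"/orders": ["id", "item", "status"]}
v2 = {"/orders": ["id", "item", "state"], "/orders/{id}/history": ["events"]}

assert api_diff(v1, v2) == [
    "ADDED   /orders.state",
    "REMOVED /orders.status",
    "ADDED   /orders/{id}/history.events",
]
```

Running such a diff in the pipeline turns silent contract drift into an explicit, reviewable change list.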
Approach to JMS Testing
The JMS API supports two messaging models (programming models) for asynchronous
messaging between heterogeneous systems:

 Point-To-Point Model (P2P Model)
 Publish-Subscribe Model (Pub/Sub Model)

Point-to-Point Messaging Model

The Point-to-Point messaging model is also known as the P2P model. [Diagram: a typical
Point-to-Point messaging flow in a messaging system]

Point-To-Point Testing:

 The P2P model uses a Queue as the JMS destination.
 In the P2P model, a JMS Sender (Producer) creates and sends messages to a Queue.
 A JMS Queue is an administered object, created in the JMS provider by an administrator.
 In the P2P model, a JMS Receiver (Consumer) receives and reads messages from a Queue.
 In the P2P model, a JMS message is delivered to one and only one consumer.
 You can configure any number of senders and receivers for a particular queue; however,
any given message is delivered to one and only one receiver.
 There is no timing dependency between sender and receiver: the receiver can consume a
message regardless of whether it was running when the sender sent it.
 In this model, the destination stores messages until they are consumed by a receiver.
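The exactly-once delivery guarantee is the key thing a P2P test must assert. The following Python sketch models it with an in-memory queue purely for illustration; a real test would drive the provider's client library against an administered queue.

```python
import queue
import threading

# In-memory stand-in for a JMS Queue (illustrative only).
q = queue.Queue()
received = {"A": [], "B": []}

def receiver(name):
    while True:
        msg = q.get()
        if msg is None:       # sentinel: stop this receiver
            break
        received[name].append(msg)
        q.task_done()

# Two competing receivers on the same queue.
threads = [threading.Thread(target=receiver, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()

for i in range(100):          # the sender needs no knowledge of the receivers
    q.put(f"msg-{i}")
q.join()                      # wait until every message has been consumed

for _ in threads:
    q.put(None)
for t in threads:
    t.join()

# The P2P guarantee: every message is consumed exactly once, by one receiver.
assert sorted(received["A"] + received["B"]) == sorted(f"msg-{i}" for i in range(100))
assert not set(received["A"]) & set(received["B"])
```

The two final assertions are exactly what a P2P integration test verifies: no message is lost, and no message is delivered twice.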
Publish/Subscribe Messaging Model

In a publish/subscribe (pub/sub) product or application, clients address messages to a topic, which
functions somewhat like a bulletin board. Subscribers receive information, in the form of messages,
from publishers. Topics retain messages only as long as it takes to distribute them to current
subscribers.

The Pub/Sub messaging model is divided into two categories:

 Durable Messaging Model: the durable model is also known as the persistent messaging model. In
this model, messages are stored in the JMS server until they are properly delivered to the
destination.
 Non-Durable Messaging Model: the non-durable model is also known as the non-persistent
messaging model. In this model, messages are not stored in the JMS server.
 Each message can have multiple consumers.
 Publishers and subscribers have a timing dependency. A client that subscribes to a topic can
consume only messages published after it has created its subscription, and (unless the
subscription is durable) the subscriber must remain active in order to consume messages.

Pub and Sub Model Testing:

 The Pub/Sub model uses a Topic as the JMS destination.
 The JMS administrator uses the JMS provider's admin console to configure all required
ConnectionFactory and Topic objects in the JMS provider.
 A JMS Publisher creates and publishes messages to Topics.
 A JMS Subscriber subscribes to the Topics it is interested in and consumes their messages.
 The Pub/Sub messaging model has a timing dependency: a subscriber can consume only
messages published to a Topic after it has subscribed to that Topic. Any messages
posted before its subscription, or while it is inactive, cannot be delivered to
that consumer.
 Unlike the P2P model, in this model the destination does not store messages.
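The fan-out and timing-dependency behaviours above can be sketched with an in-memory topic. This Python model is illustrative of a non-durable subscription only; topic and subscriber names are hypothetical.

```python
class Topic:
    """In-memory stand-in for a (non-durable) JMS Topic."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, name):
        self.subscribers[name] = []

    def publish(self, msg):
        # Delivered to every CURRENT subscriber; nothing is stored afterwards.
        for inbox in self.subscribers.values():
            inbox.append(msg)

topic = Topic()
topic.publish("early")          # no subscribers yet: the message is lost

topic.subscribe("sub1")
topic.publish("news-1")
topic.subscribe("sub2")         # a late subscriber
topic.publish("news-2")

# Fan-out: each message reaches all current subscribers...
assert topic.subscribers["sub1"] == ["news-1", "news-2"]
# ...but a subscriber never sees messages published before it subscribed.
assert topic.subscribers["sub2"] == ["news-2"]
```

A pub/sub test therefore checks both directions: every active subscriber received the message, and late or inactive subscribers correctly did not.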

Note: In total, the publisher side needs more of the testing time (about 70%), while the subscriber
side is tested according to the target system's business rules (about 30%).

8. Bugs that Occur in API Integration

 Missing or duplicate functionality
 Failure to handle error conditions gracefully
 Unused flags
 "Not implemented" errors
 Inconsistent error handling
 Performance issues
 Multi-threading issues
 Improper errors

9. Framework Implementation Prototype


The API automation framework is divided into three layers:

 The SOAPUI project is the user layer, from which the test request is invoked; it internally
utilizes the data source for the different request inputs.

 The framework layer is the logical layer, where all the business logic, reusable
libraries, and the validation mechanism are built; it applies to all requests fired
from the presentation layer.

 Result reporting is the physical layer, where test outcomes are stored.
The Ready API Automation Framework provides:

 An easy-to-configure, reusable Ready API project file.
 Groovy: the base project includes a Groovy script that generates the signature for
authentication and executes the API. It also reads and writes data to and from Excel/text
files and executes the test cases using the Ready API tool.
 Assertions: the test cases validate the functionality of the web service(s) by sending
the XML-formatted requests as specified in the test case and verifying the response through
proper checkpoints (assertions), with the type of assertion chosen according to the nature
of the request parameters and response.
 Properties: properties allow users to configure and use specific values in different test
steps within a Test Suite. To specify the configuration values used within the Groovy test
scripts, users can create properties at the Test Suite and test-case level within a base
project. The values can be referenced using either the property-transfer or the
property-expansion mechanism.

The automation framework has pre-defined properties that help the Groovy scripts pick up the
configuration values that test engineers create in the Test Suite.
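To make the property-expansion idea concrete, here is an illustrative Python sketch of resolving `${#Scope#name}` style references (Ready API's Groovy engine does this natively; the property names and values here are hypothetical):

```python
import re

# Hypothetical property store, mirroring suite- and case-level properties.
properties = {
    "TestSuite": {"baseUrl": "https://api.example.com"},
    "TestCase": {"orderId": "42"},
}

def expand(text):
    """Resolve ${#Scope#name} references, in the style of property expansion."""
    def resolve(match):
        scope, name = match.group(1), match.group(2)
        return properties[scope][name]
    return re.sub(r"\$\{#(\w+)#(\w+)\}", resolve, text)

url = expand("${#TestSuite#baseUrl}/orders/${#TestCase#orderId}")
assert url == "https://api.example.com/orders/42"
```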

 JDBC: using JDBC, the data in the response is also checked and validated against the database.
 Test runner: the test runner allows you to run Ready API tests and export results from the
command line as well.
 Reports/Logs: for every test run, a log is generated and stored in the location specified in
the configuration. The log contains every request and response from each run, stored with a
timestamp. At the end, an Excel report confirms the coverage of the total APIs
executed and their overall pass/fail/not-run status.
Techniques:
 Groovy script for data transfer.
 Running the test suite using testrunner.bat.
 Storing the results dynamically using timestamps.
 Data checking through JDBC.
 Generating the execution log and test summary report through custom code (Groovy script).
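The timestamped-storage technique can be sketched as follows. This is an illustrative Python version of what the framework's Groovy script does; the folder layout and file names are hypothetical.

```python
import datetime
import pathlib
import tempfile

def store_run_log(base_dir, request, response):
    """Store each run's request/response pair in its own timestamped folder."""
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S_%f")
    run_dir = pathlib.Path(base_dir) / f"run_{stamp}"
    run_dir.mkdir(parents=True)
    (run_dir / "request.xml").write_text(request)
    (run_dir / "response.xml").write_text(response)
    return run_dir

# Demo against a temporary directory:
base = tempfile.mkdtemp()
run_dir = store_run_log(base, "<req/>", "<resp/>")
assert (run_dir / "response.xml").read_text() == "<resp/>"
```

Because every run lands in its own timestamped folder, earlier results are never overwritten, which is what makes runs reproducible and comparable.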
