
WEB SERVICE SECURITY AND PERFORMANCE

Securing Web Services

1. Secure the transport layer


XML Web Services rely on IP and HTTP as a transport layer to
connect applications and associated resources to one another.
Robust XML Web Services security is built on a strong foundation of
transport layer security so that sensitive information cannot be
intercepted and read in transit. SSL VPNs are easy to deploy and
provide a flexible security model for securing extranets. Additionally,
the use of server certificates and client certificates is recommended
during authentication. Hardware-based accelerators are the preferred
way to secure the transport layer while maintaining high performance
for transactions.

2. Implement XML filtering


XML requires sophisticated processing to ensure that transactions
are known to be good before they penetrate deep into the enterprise.
XML filtering provides managers with a variety of functionality as
complex rule sets can be built around network level information,
message size, message content and other variables. Because filters
are XML-based, they are easily updated as new threats are detected.
Setting up simple filters based on message size or XML Digital
Signatures is an easy place to start. As application usage increases,
filtering based on content and other parameters enables the security
staff to implement sophisticated and granular business rules.

3. Mask internal resources


One sound security practice deployed by many today is the use of
Network Address Translation (NAT) to obscure internal IP addresses.
In addition to using NAT, one effective way to mask and protect
internal resources from external parties is to disallow direct TCP
connections between application servers and outside parties. By
using an XML proxy to rewrite URLs and other information otherwise
exposed by Web services, enterprises can quickly and simply hide a
significant amount of their internal configuration.

4. Protect against XML denial-of-service attacks


XML Denial of Service (XDoS) attacks may not be as well known as the
SYN-flood attacks of the dotcom era, but they are more easily
launched and capable of far more damage. To protect against
XDoS, implement reasonable constraints for all incoming messages.
With the use of an XML security gateway as a proxy, network
managers can configure simple settings on message size, frequency
and connection duration. The goal is to allow access to resources
while simultaneously using XML filtering rules to reduce the “aperture
of entry” into the corporate network.
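The constraints described above (message size, request frequency, connection duration) can be sketched as a small gate in front of the XML parser. The class name, limits, and per-client bookkeeping below are illustrative assumptions, not taken from any product:

```kotlin
// Rejects oversized payloads and clients that exceed a request-rate
// budget, before any expensive XML parsing takes place.
class XdosGate(
    private val maxMessageBytes: Int = 1_000_000,
    private val maxRequestsPerWindow: Int = 100,
    private val windowMillis: Long = 60_000,
    private val clock: () -> Long = System::currentTimeMillis
) {
    private val requestLog = mutableMapOf<String, MutableList<Long>>()

    /** Returns true if the message may proceed to XML processing. */
    fun allow(clientId: String, messageBytes: Int): Boolean {
        if (messageBytes > maxMessageBytes) return false
        val now = clock()
        val recent = requestLog.getOrPut(clientId) { mutableListOf() }
        recent.removeAll { now - it > windowMillis }  // drop expired entries
        if (recent.size >= maxRequestsPerWindow) return false
        recent.add(now)
        return true
    }
}
```

The point of placing such checks first is that parsing an attack payload is itself expensive; cheap size and rate checks narrow the "aperture of entry" before the costly work begins.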

5. Validate all messages


Because XML is text-based and in many instances generated by
humans, there is significant room for error in message creation. One
simple step to prevent this problem is to use XML Schema Definitions
(XSD) to validate both inbound and outbound data. XSD supersedes
Document Type Definitions (DTDs) because it is more expressive and
extensible. This best practice reduces the risk of security holes from
unknown or undocumented fields and protocol features that might
otherwise compromise resources. In addition to performing Schema
Validation, managers should also check messages for XML
well-formedness (during parsing), improper or missing identity and
resource references, protocol (e.g. SOAP) validity and other
message validity checks.
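As a concrete sketch, the JDK's built-in javax.xml.validation API can validate a message against an XSD. The toy schema and element names below are our own invention for illustration:

```kotlin
import java.io.StringReader
import javax.xml.XMLConstants
import javax.xml.transform.stream.StreamSource
import javax.xml.validation.SchemaFactory
import org.xml.sax.SAXException

// A toy schema: an <order> element containing a single integer <id>.
val xsd = """
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="order">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="id" type="xs:int"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>
""".trimIndent()

// Returns false both for messages that are not well-formed and for
// well-formed messages that do not match the schema.
fun isValid(message: String): Boolean = try {
    val factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
    val validator = factory.newSchema(StreamSource(StringReader(xsd))).newValidator()
    validator.validate(StreamSource(StringReader(message)))
    true
} catch (e: SAXException) {
    false
}
```

A gateway would apply the same check to outbound traffic as well, so that malformed internal responses never leave the enterprise.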

6. Transform all messages


By transforming all outbound XML messages, network managers
enable “XML Address Translation”: mapping between the private
internal data layout and the external one. This kind of application-
layer protection is easily implemented today using XSLT, one of the
most mature XML technologies. Using XSLT, businesses can
obscure internal schemas and object layouts from outside parties. As
the number of XML dialects and vocabularies increases, message
translation will become a key first step in processing any application
request. Because standards are nascent, XSLT is a key asset as it
enables an enterprise to simultaneously support varying message
formats and standards.
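A minimal sketch of this "XML Address Translation" idea with the JDK's built-in XSLT engine: an internal element name is renamed before the message leaves the enterprise. The element names and stylesheet are invented for the demo:

```kotlin
import java.io.StringReader
import java.io.StringWriter
import javax.xml.transform.OutputKeys
import javax.xml.transform.TransformerFactory
import javax.xml.transform.stream.StreamResult
import javax.xml.transform.stream.StreamSource

// Renames the internal <internalCustomerRecord> element to the
// external <customer> name; everything else is copied verbatim
// by the identity template.
val stylesheet = """
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="internalCustomerRecord">
        <customer><xsl:apply-templates/></customer>
      </xsl:template>
      <xsl:template match="@*|node()">
        <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
      </xsl:template>
    </xsl:stylesheet>
""".trimIndent()

fun transform(xml: String): String {
    val transformer = TransformerFactory.newInstance()
        .newTransformer(StreamSource(StringReader(stylesheet)))
    transformer.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes")
    val out = StringWriter()
    transformer.transform(StreamSource(StringReader(xml)), StreamResult(out))
    return out.toString()
}
```

The internal schema name never appears in the output, which is exactly the masking property the section describes.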

7. Sign all messages


By signing each outgoing message, the sender can create a secure
audit trail by logging each message with a signature that can be
verified post-transaction. Because each log entry is signed, its
contents cannot be altered, and the sender gains non-repudiation
protection. While signing and verifying every incoming
and outgoing message may seem processing-intensive, use of a
hardware appliance avoids the performance bottlenecks that
accompany software-based solutions.

8. Timestamp all messages


Enterprises can augment non-repudiation capabilities by using the
Network Time Protocol (NTP) to synchronise all XML network nodes
to a single authoritative reference time source. This simple step adds
timestamps to all incoming and outgoing messages. When used with
XML Digital Signatures, network managers now have a
cryptographically secure timestamp that enhances non-repudiation
capabilities by being able to definitively prove at what time a given
transaction took place.

9. Encrypt message fields


XML Encryption requires one to parse the XML transaction, and then
select the section(s) to encrypt/decrypt and finally perform a set of
processing-intensive XML and crypto operations. Because both
crypto and XML processing are very resource-intensive, deploying
both XML encryption and its companion, XML digital signature, can
have a significant performance impact on high-transaction
applications. Consolidating some of the functions onto an easy-to-
manage secure network device that can encrypt/decrypt or sign/verify
XML transactions on their way through the network helps centralise
control and reduce administrative hassles.

10. Implement secure auditing


The importance of auditing cannot be overstated. While many
network managers rely on syslog for creating audit trails, syslog alone
is not secure. By using a combination of XML Digital
Signatures and time stamping, a manager can quickly and easily
create secure e-business transaction logs that can be used for non-
repudiation. In many instances, legal requirements demand that the
logging technology used is secure and verifiable.

Custom Token Authentication

In order to achieve our security goals, such as authorization and
authentication, using a specifically designed framework
like Spring Security may be the best solution.

“Spring Security is a framework that focuses on providing both
authentication and authorization to Java applications. Like all
Spring projects, the real power of Spring Security is found in
how easily it can be extended to meet custom requirements.” –
Spring Security

However, sometimes implementing specific authentication
logic might be necessary to keep the application simple.

The system we are going to present will allow us to choose
whether or not to protect an API. Moreover, we assume that
every valid authentication token identifies a particular user. The
token is in a specific header or cookie and is used by
authentication logic to extract a user whose data will be
automatically passed to a protected API’s function body.

Let’s see how custom token-based authentication can be
achieved in Spring Boot and Kotlin.

1. Defining a Custom Annotation


To choose whether or not an API should be protected by the
authentication system, we are going to use a custom-defined
annotation. This annotation will be used to mark a parameter of
type User to define whether or not the API is protected. The
instance of the particular user identified by the token is
automatically retrieved and can be used inside the API function
body.

Let’s see how a custom Auth annotation can be defined:
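A minimal sketch of such an annotation (the name Auth and its targets follow the article's description; the exact declaration is our assumption). It targets function parameters and is retained at runtime so an argument resolver can detect it via reflection:

```kotlin
// Marks a controller-function parameter of type User; its presence
// signals that the API is protected and that the authenticated user
// should be injected into the function body.
@Target(AnnotationTarget.VALUE_PARAMETER)
@Retention(AnnotationRetention.RUNTIME)
annotation class Auth
```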

2. Defining Authentication Logic


Authentication logic should be placed in a specific component,
which we are going to call AuthTokenHandler. The purpose of this
class is to verify if the token is valid and extract its related user.
This can be achieved in many ways. We are going to show two
different verification approaches.

Using a custom DAO:
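A hedged sketch of the DAO approach; User, UserDao, and the custom AuthenticationException are hypothetical types chosen for illustration:

```kotlin
data class User(val id: Long, val name: String)

class AuthenticationException(message: String) : RuntimeException(message)

// Hypothetical data-access interface; a real implementation would
// query the token store in the database.
interface UserDao {
    fun findByToken(token: String): User?
}

class AuthTokenHandler(private val userDao: UserDao) {
    // Returns the user identified by the token, or throws when the
    // token is missing or unknown.
    fun verifyToken(token: String?): User =
        token?.let(userDao::findByToken)
            ?: throw AuthenticationException("Invalid or missing auth token")
}
```

The same pattern works with any storage backend, since only the DAO changes.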

Calling an external API:
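A sketch of the external-API variant. The verification endpoint and all names are assumptions; the HTTP call is injected as a function so the handler stays testable (in production it could be implemented with java.net.http.HttpClient):

```kotlin
data class User(val id: Long, val name: String)

class AuthenticationException(message: String) : RuntimeException(message)

class RemoteAuthTokenHandler(
    // Performs the remote check, e.g. a GET to a hypothetical
    // https://auth.example.com/verify endpoint with the token,
    // returning the user on success or null when rejected.
    private val verifyRemotely: (String) -> User?
) {
    fun verifyToken(token: String?): User =
        token?.let(verifyRemotely)
            ?: throw AuthenticationException("Invalid or missing auth token")
}
```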

In both cases, when the token is missing or not valid, a
custom AuthenticationException is thrown. In this case, the protected
API should respond with “401 Unauthorized.”

“The HTTP 401 Unauthorized client error status response code
indicates that the request has not been applied because it lacks
valid authentication credentials for the target resource.” —
MDN web docs

To achieve this, a class marked with @ControllerAdvice can be
used as follows:
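A possible sketch, assuming the custom AuthenticationException mentioned above (class and handler names are illustrative):

```kotlin
import org.springframework.http.HttpStatus
import org.springframework.http.ResponseEntity
import org.springframework.web.bind.annotation.ControllerAdvice
import org.springframework.web.bind.annotation.ExceptionHandler

// Translates the custom AuthenticationException into a
// 401 Unauthorized response for every controller.
@ControllerAdvice
class AuthExceptionHandler {

    @ExceptionHandler(AuthenticationException::class)
    fun handleAuthenticationException(e: AuthenticationException): ResponseEntity<String> =
        ResponseEntity(e.message ?: "Unauthorized", HttpStatus.UNAUTHORIZED)
}
```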

3. Retrieving the Token


To allow Spring Boot to automatically look for the token in the
headers or cookies when the custom Auth annotation is identified,
an AuthTokenWebResolver implementing HandlerMethodArgumentResolver
has to be defined.

Let’s assume that the authentication token can be placed in a
header or cookie called authToken. The retrieving logic can be
implemented as follows:
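One way to sketch the resolver, assuming a javax.servlet-based Spring Boot version and the AuthTokenHandler, Auth, and User types described earlier (all custom names, not Spring APIs):

```kotlin
import org.springframework.core.MethodParameter
import org.springframework.stereotype.Component
import org.springframework.web.bind.support.WebDataBinderFactory
import org.springframework.web.context.request.NativeWebRequest
import org.springframework.web.method.support.HandlerMethodArgumentResolver
import org.springframework.web.method.support.ModelAndViewContainer
import javax.servlet.http.HttpServletRequest

@Component
class AuthTokenWebResolver(
    private val authTokenHandler: AuthTokenHandler
) : HandlerMethodArgumentResolver {

    // Applies only to User parameters marked with @Auth.
    override fun supportsParameter(parameter: MethodParameter): Boolean =
        parameter.hasParameterAnnotation(Auth::class.java) &&
            parameter.parameterType == User::class.java

    // Looks for the token in the "authToken" header first, then in a
    // cookie of the same name, and delegates verification.
    override fun resolveArgument(
        parameter: MethodParameter,
        mavContainer: ModelAndViewContainer?,
        webRequest: NativeWebRequest,
        binderFactory: WebDataBinderFactory?
    ): User {
        val request = webRequest.getNativeRequest(HttpServletRequest::class.java)
        val token = webRequest.getHeader("authToken")
            ?: request?.cookies?.firstOrNull { it.name == "authToken" }?.value
        return authTokenHandler.verifyToken(token)
    }
}
```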

4. Configuring Spring Boot


Now, we have to define a custom class for the configurations.
This way, Spring Boot will be able to use the
custom Auth annotation as designed.

For everything to work, we need to add the previously
defined AuthTokenWebResolver to the default argument resolvers. This
can be achieved by harnessing
the WebMvcConfigurationSupport class.
“[WebMvcConfigurationSupport] is typically imported by
adding @EnableWebMvc to an application @Configuration class. An
alternative more advanced option is to extend directly from
this class and override methods as necessary, remembering to
add @Configuration to the subclass and @Bean to
overridden @Bean methods.” — Spring’s official documentation

We are going to define a @Configuration class that
extends WebMvcConfigurationSupport:
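A sketch of such a configuration class (the class name WebConfig is our choice); it registers the AuthTokenWebResolver so Spring can resolve @Auth-annotated User parameters:

```kotlin
import org.springframework.context.annotation.Configuration
import org.springframework.web.method.support.HandlerMethodArgumentResolver
import org.springframework.web.servlet.config.annotation.CorsRegistry
import org.springframework.web.servlet.config.annotation.WebMvcConfigurationSupport

@Configuration
class WebConfig(
    private val authTokenWebResolver: AuthTokenWebResolver
) : WebMvcConfigurationSupport() {

    override fun addArgumentResolvers(
        argumentResolvers: MutableList<HandlerMethodArgumentResolver>
    ) {
        argumentResolvers.add(authTokenWebResolver)
    }

    // Extending WebMvcConfigurationSupport replaces the default MVC
    // configuration, so CORS must be configured here as well; this
    // permissive mapping is only a placeholder.
    override fun addCorsMappings(registry: CorsRegistry) {
        registry.addMapping("/**")
    }
}
```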

When using WebMvcConfigurationSupport, do not forget that we have to
deal with CORS configurations. Otherwise, our APIs might not
be reachable as expected.

5. Putting It All Together


Now, it is time to see how the Auth annotation can be used to make
an API work only with authenticated users. This can be easily
achieved by adding a User type parameter marked
with the Auth annotation to the chosen Controller API function:
An API protected by custom token-based authentication
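A hedged sketch of such a protected endpoint (controller name, route, and response are illustrative; User and Auth are the custom types from earlier in this article):

```kotlin
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController

@RestController
class ProfileController {

    // The User parameter marked with @Auth is filled in by the
    // AuthTokenWebResolver; without a valid token the request fails
    // with 401 before this body runs.
    @GetMapping("/me")
    fun me(@Auth user: User): String =
        "Hello, ${user.name}"
}
```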

HTTP Basic Authentication


HTTP authentication
HTTP provides a general framework for access control and authentication. This page
is an introduction to the HTTP framework for authentication, and shows how to restrict
access to your server using the HTTP "Basic" scheme.
The general HTTP authentication framework
RFC 7235 defines the HTTP authentication framework, which can be used by a
server to challenge a client request, and by a client to provide authentication
information.

The challenge and response flow works like this:

1. The server responds to a client with a 401 (Unauthorized) response status and
provides information on how to authorize with a WWW-Authenticate response header
containing at least one challenge.
2. A client that wants to authenticate itself with the server can then do so by including
an Authorization request header with the credentials.
3. Usually a client will present a password prompt to the user and will then issue the
request including the correct Authorization header.

In the case of "Basic" authentication, as shown in the figure, the
exchange must happen over an HTTPS (TLS) connection to be secure.

Proxy authentication
The same challenge and response mechanism can be used for proxy authentication.
As both resource authentication and proxy authentication can coexist, a different set
of headers and status codes is needed. In the case of proxies, the challenging status
code is 407 (Proxy Authentication Required), the Proxy-Authenticate response header
contains at least one challenge applicable to the proxy, and the Proxy-
Authorization request header is used for providing the credentials to the proxy server.
Access forbidden
If a (proxy) server receives valid credentials that are inadequate to access a given
resource, the server should respond with the 403 Forbidden status code.
Unlike 401 Unauthorized or 407 Proxy Authentication Required, authentication is
impossible for this user.

Authentication of cross-origin images


A potential security hole that browsers have recently fixed is authentication of cross-
site images. From Firefox 59 onwards, image resources loaded from different origins
to the current document are no longer able to trigger HTTP authentication dialogs
(bug 1423146), preventing user credentials being stolen if attackers were able to
embed an arbitrary image into a third-party page.

Character encoding of HTTP authentication


Browsers use utf-8 encoding for usernames and passwords.

Firefox once used ISO-8859-1, but changed to utf-8 for parity with other browsers and
to avoid potential problems as described in bug 1419658.

WWW-Authenticate and Proxy-Authenticate headers


The WWW-Authenticate and Proxy-Authenticate response headers define the
authentication method that should be used to gain access to a resource. They must
specify which authentication scheme is used, so that the client that wishes to
authorize knows how to provide the credentials.

The syntax for these headers is the following:

WWW-Authenticate: <type> realm=<realm>

Proxy-Authenticate: <type> realm=<realm>


Here, <type> is the authentication scheme ("Basic" is the most common scheme
and introduced below). The realm is used to describe the protected area or to
indicate the scope of protection. This could be a message like "Access to the staging
site" or similar, so that the user knows which space they are trying to access.

Authorization and Proxy-Authorization headers


The Authorization and Proxy-Authorization request headers contain the credentials to
authenticate a user agent with a (proxy) server. Here, the <type> is needed again
followed by the credentials, which can be encoded or encrypted depending on which
authentication scheme is used.
Authorization: <type> <credentials>

Proxy-Authorization: <type> <credentials>


Authentication schemes
The general HTTP authentication framework is used by several authentication
schemes. Schemes can differ in security strength and in their availability in client or
server software.

The most common authentication scheme is the "Basic" authentication scheme,
which is introduced in more detail below. IANA maintains a list of authentication
schemes, but there are other schemes offered by host services, such as Amazon
AWS. Common authentication schemes include:

Basic
See RFC 7617, base64-encoded credentials. More information below.

Bearer
See RFC 6750, bearer tokens to access OAuth 2.0-protected resources

Digest
See RFC 7616, only md5 hashing is supported in Firefox, see bug 472823 for
SHA encryption support

HOBA
See RFC 7486, Section 3, HTTP Origin-Bound Authentication, digital-signature-
based

Mutual
See RFC 8120

AWS4-HMAC-SHA256
See AWS docs

Basic authentication scheme


The "Basic" HTTP authentication scheme, defined in RFC 7617, transmits
credentials as user-ID/password pairs encoded using base64.
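As a sketch, the header value can be constructed with the JDK's Base64 encoder; the credentials below are purely illustrative:

```kotlin
import java.util.Base64

// Builds the value of an Authorization header for Basic auth.
// Note: base64 is a reversible encoding, not encryption.
fun basicAuthHeader(user: String, password: String): String {
    val encoded = Base64.getEncoder()
        .encodeToString("$user:$password".toByteArray(Charsets.UTF_8))
    return "Basic $encoded"
}
```

For example, basicAuthHeader("aladdin", "opensesame") produces "Basic YWxhZGRpbjpvcGVuc2VzYW1l", matching the user shown in the .htpasswd example below.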

Security of basic authentication


As the user ID and password are passed over the network as clear text (it is base64
encoded, but base64 is a reversible encoding), the basic authentication scheme is
not secure. HTTPS/TLS should be used with basic authentication. Without these
additional security enhancements, basic authentication should not be used to protect
sensitive or valuable information.

Restricting access with Apache and basic authentication


To password-protect a directory on an Apache server, you will need a .htaccess and
a .htpasswd file.

The .htaccess file typically looks like this:

AuthType Basic

AuthName "Access to the staging site"

AuthUserFile /path/to/.htpasswd

Require valid-user

The .htaccess file references a .htpasswd file in which each line consists of a
username and a password separated by a colon (:). You cannot see the actual
passwords as they are hashed (using MD5-based hashing, in this case). Note that
you can name your .htpasswd file differently if you like, but keep in mind this file
shouldn't be accessible to anyone. (Apache is usually configured to prevent access
to .ht* files).

aladdin:$apr1$ZjTqBB3f$IF9gdYAGlMrs2fuINjHsz.

user2:$apr1$O04r.y2H$/vEkesPhVInBByJUkXitA/

Restricting access with nginx and basic authentication


For nginx, you will need to specify a location that you are going to protect and
the auth_basic directive that provides the name to the password-protected area.
The auth_basic_user_file directive then points to a .htpasswd file containing the
encrypted user credentials, just like in the Apache example above.

location /status {

auth_basic "Access to the staging site";

auth_basic_user_file /etc/apache2/.htpasswd;

}
Access using credentials in the URL
Many clients also let you avoid the login prompt by using an encoded URL containing
the username and the password like this:

https://username:password@www.example.com/

The use of these URLs is deprecated. In Chrome, the username:password@ part in
URLs is even stripped out for security reasons. In Firefox, it is checked whether the site
actually requires authentication; if not, Firefox will warn the user with a prompt:
"You are about to log in to the site “www.example.com” with the username
“username”, but the website does not require authentication. This may be an attempt
to trick you."

See also
• WWW-Authenticate
• Authorization
• Proxy-Authorization
• Proxy-Authenticate
• 401, 403, 407
OAuth-Performance
• As larger providers started using OAuth 1.0, the community realized
that the protocol had several limitations that made it difficult to scale
to large systems. OAuth 1.0 requires state management across
different steps and often across different servers. It requires
generating temporary credentials which are often discarded unused,
and typically requires issuing long-lasting credentials, which are less
secure and harder to manage.

• In addition, OAuth 1.0 requires that the protected resource
endpoints have access to the client credentials in order to validate the
request. This breaks the typical architecture of most large providers,
in which a centralized authorization server is used for issuing
credentials, and a separate server is used for handling API calls.
Because OAuth 1.0 requires the use of the client credentials to verify
the signatures, it makes this separation very hard.

• OAuth 2.0 addresses this by using the client credentials only when
the application obtains authorization from the user. After the
credentials are used in the authorization step, only the resulting
access token is used when making API calls. This means the API
servers do not need to know about the client credentials since they
can validate access tokens themselves.
