
HELSINKI UNIVERSITY OF TECHNOLOGY

Department of Electrical and Communications Engineering

Raymond Philip Causton

Smart card usage for authentication in web single sign-on systems

This thesis has been submitted for official examination for a Master of Science
degree in electrical engineering on February 25, 2002. Espoo, Finland.

Supervisor __________________________________________
Professor Teemupekka Virtanen

Instructor __________________________________________
Kari Lehtinen MSc
TEKNILLINEN KORKEAKOULU (Helsinki University of Technology)
Sähkö- ja tietoliikennetekniikan osasto (Department of Electrical and Communications Engineering)

Raymond Philip Causton

Toimikorttien käyttö tunnistautumisessa web-palvelujen kertakirjautumisjärjestelmiin
(Smart card usage for authentication in web single sign-on systems)

Master's thesis submitted for examination for the degree of Master of Science in Technology.

Supervisor __________________________________________
Professor Teemupekka Virtanen

Instructor __________________________________________
Kari Lehtinen, MSc

HELSINKI UNIVERSITY OF TECHNOLOGY
ABSTRACT OF MASTER'S THESIS

Author: Raymond Philip Causton

Name of the Thesis: Smart card usage for authentication in web single sign-on systems

Date: February 25, 2002 Number of pages: 110

Faculty: Department of Electrical and Communications Engineering

Professorship: T-110 Communications software

Supervisor: Professor Teemupekka Virtanen

Instructor: Kari Lehtinen MSc


The misuse of the Internet has increased the need for centralised access
control and strong authentication methods. As web services have become
increasingly distributed, the need for both centralised administration and a
single sign-on solution has become even more relevant. The instigator of this
study, Elisa Communications Corporation (Elisa), participates in the Pro FINEID
project, which promotes the use of FINEID certificate-based digital identity in
dealing with the authorities and e-business services.

The aim of this study is to select the best access control and administration
solution for protecting the extranet of the Elisa Research Centre, based on a
requirement specification drawn up by the author. Support for both single
sign-on and smart cards is required of all candidate products.

This study contains a market review of available commercial solutions, from
which some were selected for further testing by the author in the Elisa Research
Centre's laboratory. The winning candidate is described in more detail later,
accompanied by the evaluation details as per the requirement specification.

All of the tested products, and many others, were able to satisfy the single
sign-on requirement, but only one of the tested products was able to successfully
fulfil the basic requirement of using FINEID smart cards for strong
authentication.

Keywords:
Authentication, single sign-on, security, smart card, certificate, X.509

TEKNILLINEN KORKEAKOULU
DIPLOMITYÖN TIIVISTELMÄ (ABSTRACT OF MASTER'S THESIS)

Author: Raymond Philip Causton

Name of the Thesis: Toimikorttien käyttö tunnistautumisessa web-palvelujen kertakirjautumisjärjestelmiin (Smart card usage for authentication in web single sign-on systems)

Date: February 25, 2002    Number of pages: 110

Faculty: Department of Electrical and Communications Engineering

Professorship: T-110 Communications software

Supervisor: Professor Teemupekka Virtanen

Instructor: Kari Lehtinen, MSc


As misuse of the Internet has increased, a need has emerged for centralised
access control and strong authentication. As web services have become
increasingly distributed, a need has also arisen for single sign-on and a
centralised administration mechanism. The commissioner of this study, Elisa
Communications, participates in the ProHST project, whose purpose is to promote
the possibilities of electronic identification of persons and electronic
transactions.

The aim of this study is to select the most suitable extranet access control and
administration system for the use of the Elisa Research Centre, based on a
requirement specification drawn up by the author. The basic requirements for the
system are that it is based on single sign-on and that it is able to utilise
smart cards to achieve strong identity authentication.

The study contains a review of the commercial products available on the market,
from which the products best matching the requirement specification were
selected for testing by the author in the Elisa Research Centre laboratory. A
more detailed assessment against the stated requirements is presented only for
the selected product.

All of the tested products, and many of the other commercial access management
products as well, are capable of single sign-on, but only one of the tested
products is in practice able to fulfil the set threshold requirement, namely
utilising the Finnish HST (FINEID) card and the certificates stored on it.

Keywords:
Authentication, single sign-on, security, smart card, certificate, X.509

FOREWORD

This thesis was carried out at the Elisa Communications Corporation’s Research
Centre.

I would like to thank my fiancée Katri Talja and my mother Pirkko for their
tolerance and encouragement, which have enabled me to complete this thesis. I
would also like to extend my thanks to my father, Darryl Causton, for his
suggestions and corrections, which considerably improved the final manuscript.

Espoo, February 25, 2002

__________________________________________
Raymond Causton

TABLE OF CONTENTS

Abstract................................................................................................................. iii

Tiivistelmä ............................................................................................................ iv

Foreword.................................................................................................................v

Table Of Contents ................................................................................................ vi

List Of Figures...................................................................................................... ix

List Of Tables ....................................................................................................... xi

Abbreviations And Acronyms ........................................................................... xii

1 Introduction .......................................................................................................1
1.1 Background and objectives ..........................................................................1
1.2 Problem statement........................................................................................2
1.3 Thesis organisation.......................................................................................3
1.4 Definition of central terms and concepts .....................................................4
2 Background Infrastructure ..............................................................................7
2.1 Cryptography ...............................................................................................7
2.1.1 Hashing and message authentication .......................................................7
2.1.2 Symmetric cryptography..........................................................................9
2.1.3 Asymmetric cryptography .....................................................................10
2.1.4 Public-key digital signatures..................................................................11
2.1.5 Key exchange algorithms.......................................................................12
2.2 Public-key infrastructure............................................................................12
2.2.1 Smart cards ............................................................................................13
2.2.2 The X.509v3 certificate .........................................................................15
2.2.3 The LDAP directory ..............................................................................22
2.2.4 PKI structure ..........................................................................................23
2.2.5 Trust models ..........................................................................................25
2.2.6 Cross-certification..................................................................................26
3 Authentication Protocols And Methods ........................................................28
3.1 User ID / password authentication .............................................................28
3.1.1 Basic authentication ...............................................................................28

3.1.2 Digest authentication .............................................................................29
3.1.3 One-time passwords...............................................................................30
3.2 Biometric authentication ............................................................................32
3.3 Symmetric key based cryptographic authentication ..................................33
3.3.1 ISO/IEC 9798-2 timestamp based unilateral authentication..................33
3.3.2 ISO/IEC 9798-2 nonce based mutual authentication.............................34
3.4 Public-key certificate based cryptographic authentication ........................35
3.4.1 Unilateral authentication protocols ........................................................35
3.4.2 Mutual authentication protocols ............................................................36
3.5 Network authentication systems ................................................................39
3.5.1 Kerberos system architecture.................................................................40
3.5.2 Operation of the Kerberos environment ................................................41
3.5.3 Pitfalls of Kerberos V ............................................................................43
3.5.4 Public-key cryptography extensions to Kerberos V ..............................43
4 Single Sign-On Architecture ..........................................................................45
4.1 Different single sign-on architectures ........................................................45
4.1.1 Native plug-in SSO agent ......................................................................46
4.1.2 Helper application agent ........................................................................47
4.1.3 Reverse proxy architecture ....................................................................47
4.2 Reduced sign-on architectures ...................................................................48
4.3 Single sign-on on the web ..........................................................................49
4.3.1 Credential passing methods ...................................................................50
4.3.2 Simple WebSSO scenario......................................................................51
4.3.3 Rejection points .....................................................................................52
4.4 Generalised single sign-on model ..............................................................52
5 Risk-Analysis Based Requirement specification ..........................................54
5.1 Raw requirements.......................................................................................55
5.2 Confidentiality and integrity ......................................................................55
5.2.1 Encrypted communication paths............................................................56
5.2.2 Secure key management ........................................................................56
5.2.3 Cryptographic access tickets..................................................................56
5.2.4 Alternative method of ticketing .............................................................56
5.2.5 Pluggable authentication method support..............................................57
5.2.6 Strong authentication support ................................................................57
5.2.7 Transaction non-repudiation ..................................................................57
5.2.8 Fine grained access policy enforcement ................................................58
5.3 Availability.................................................................................................58
5.3.1 Transaction atomicity ............................................................................58
5.3.2 Component redundancy .........................................................................59
5.3.3 Disaster recovery ...................................................................................59
5.3.4 Incremental scalability...........................................................................59
5.3.5 Load balancing features .........................................................................60
5.3.6 Multi-platform availability ....................................................................60
5.4 Accountability and audit ............................................................................60
5.4.1 Centralised management framework integration ...................................61
5.4.2 Delegated administration .......................................................................61
5.5 Other features .............................................................................................62
5.5.1 External application server support .......................................................62

5.5.2 Future support for global authenticators ................................................62
5.5.3 Remote administrations .........................................................................63
5.5.4 Graphical UI for administration.............................................................63
5.5.5 Database locations and supported formats.............................................63
5.6 Summary of requirements ..........................................................................64
6 Commercial single sign-on product survey...................................................65
6.1 Computer Associates International: eTrust Single Sign-On ......................65
6.2 CyberSafe: TrustBroker .............................................................................65
6.3 DataLynx: Guardian...................................................................................66
6.4 Entrust: GetAccess .....................................................................................66
6.5 Evidian: AccessMaster SSO ......................................................................66
6.6 Tivoli: SecureWay Policy Director............................................................66
6.7 Netegrity: SiteMinder.................................................................................66
6.8 RSA: ClearTrust.........................................................................................67
6.9 Proginet: SecurPass Sync...........................................................................67
6.10 Unisys: Single Point Security..............................................................67
6.11 Feature summary of surveyed products...............................................67
6.12 Selection process of the products to be tested.....................................68
7 Evaluation Of The Selected Product .............................................................69
7.1 Evaluation background...............................................................................69
7.2 Netegrity SiteMinder 4.51..........................................................................69
7.2.1 Supported platforms...............................................................................70
7.2.2 SiteMinders functional components ......................................................70
7.2.3 SiteMinder’s architecture.......................................................................72
7.3 The SiteMinder test bench .........................................................................75
7.3.1 Structure of the sites to be protected......................................................77
7.3.2 The SiteMinder graphical administration user interface........................79
7.3.3 Product evaluation to requirement specification....................................83
7.3.4 Evaluation results in table format ..........................................................87
7.3.5 Problems encountered in testing ............................................................88
8 Conclusions ......................................................................................................90
8.1 Evaluation results .......................................................................................90
8.2 Future trends ..............................................................................................92
Bibliography .........................................................................................................94

LIST OF FIGURES

Figure 2.1 The hashing process................................................................................8


Figure 2.2 MAC constructed from a hash-function .................................................9
Figure 2.3 Symmetric encryption process..............................................................10
Figure 2.4 Asymmetric encryption/decryption process .........................................11
Figure 2.5 Digital signature creation and verification process ..............................12
Figure 2.6 Depiction of a smart card [13]..............................................................13
Figure 2.7 The beginning of a certificate, DN shown............................................17
Figure 2.8 The middle section of the same certificate, CRL Distribution Point
shown................................................................................................18
Figure 2.9 The end section of a certificate, Key Usage attributes shown..............19
Figure 2.10 The FINEID CRL on 5.12.2001 .........................................................21
Figure 2.11 Windows 2000 certificate path validation logic [18] .........................21
Figure 2.12 A sample PKI environment ................................................................24
Figure 2.13 Two adjacent PKI domains.................................................................25
Figure 2.14 Cross-certified PKI domains with multi-path trust.............................26
Figure 3.1 Simplest occurrence of basic authentication ........................................29
Figure 3.2 Simplified Digest Authentication .........................................................30
Figure 3.3 Unilateral authentication with pre-shared key K and timestamp .........34
Figure 3.4 Mutual authentication with nonces and pre-shared key .......................34
Figure 3.5 Simplified public-key based client authentication [25, p.384].............35
Figure 3.6 Unilateral authentication with PKC and synchronised time.................36
Figure 3.7 Mutual authentication with PKC’s .......................................................37
Figure 3.8 X.509 mutual authentication with synchronised time ..........................38
Figure 3.9 X.509 mutual authentication with nonces ............................................39
Figure 3.10 The Kerberos general architecture [10, p.81] .....................................40
Figure 3.11 Kerberos V authentication steps .........................................................42
Figure 4.1 Native plug-in architecture ...................................................................46
Figure 4.2 External agent architecture ...................................................................47
Figure 4.3 Reverse proxy architecture ...................................................................48
Figure 4.4 Reduced sign-on architecture ...............................................................49
Figure 4.5 Cookie handling in the WebSSO environment.....................................50
Figure 4.6 Illustration of multi-host SSO in the web environment........................51

Figure 7.1 Netegrity SiteMinder components and interactions .............................71
Figure 7.2 The SiteMinder architecture overview .................................................72
Figure 7.3 SiteMinder X.509 client authentication process...................................75
Figure 7.4 SiteMinder test setup ............................................................................76
Figure 7.5 The public jump page from netra3.rc.elisa.fi........................................78
Figure 7.6 SiteMinder administration console front page......................................79
Figure 7.7 The SiteMinder administration console................................................79
Figure 7.8 The agent configuration menu..............................................................80
Figure 7.9 The user directories ..............................................................................80
Figure 7.10 Directory configuration ......................................................................81
Figure 7.11 Realms in the S/SSO policy domain...................................................81
Figure 7.12 Rules for the ptiaa-realm ....................................................................82
Figure 7.13 SiteMinder policy setup dialog...........................................................82
Figure 7.14 Certificate mapping in SiteMinder .....................................................83
Figure 8.1 Suggested S/SSO architecture ..............................................................91

LIST OF TABLES

Table 1.1 Abbreviations and acronyms................................................................ xiii


Table 2.1 ASN.1 encoded X.509v3 certificate syntax [16] ...................................16
Table 2.2 Possible values for the KeyUsage attribute [16]....................................19
Table 2.3 ASN.1 encoded X.509v2 CRL syntax [16] ...........................................20
Table 3.1 A typical one-time password sheet ........................................................31
Table 3.2 Alternative hexadecimal OTP-key representations ...............................32
Table 5.1 Raw requirement specification ..............................................................55
Table 6.1 Supported features in surveyed products ...............................................68
Table 7.1 Correspondence of the requirement specification to SiteMinders
features..............................................................................................87

ABBREVIATIONS AND ACRONYMS
AAA Authentication, Authorisation and Auditing
ACF/2 Access Control Facility 2
ACL Access Control List
AES Advanced Encryption Standard
ANSI American National Standards Institute
API Application Program Interface
AS Authentication Server
ASN.1 Abstract Syntax Notation One
ASP Active Server Pages
ATM Automatic Teller Machine
CA Certification Authority
CBC Cipher Block Chaining
CCITT Consultative Committee for International Telegraph and Telephone
CGI Common Gateway Interface
CRL Certificate Revocation List
CVS Concurrent Versions System
DCE Distributed Computing Environment
DER Distinguished Encoding Rules
DES Data Encryption Standard
DN X.509 Distinguished Name
DSA Digital Signature Algorithm
DSS Digital Signature Standard, see also DSA
EJB Enterprise Java Bean
EMV Europay, Mastercard and Visa
EU European Union
FINEID Finnish Electronic Identity
GSS-API Generic Security Service API
GUI Graphical User Interface
HMAC Hash-based Message Authentication Code
HTTP Hypertext Transfer Protocol
IAA Identification, Authentication and Authorisation
IEC International Electrotechnical Commission
IETF Internet Engineering Task Force
ISO International Organization for Standardization
ITSEC Information Technology Security Evaluation Criteria

ITU-T International Telecommunication Union – Telecommunication Standardization Sector
JSP Java Server Pages
LAN Local Area Network
LDAP Lightweight Directory Access Protocol
MAC Message Authentication Code
MD5 Message Digest 5
MIT Massachusetts Institute of Technology
NIST National Institute of Standards and Technology
ODBC Open Database Connectivity
OS Operating System
OTP One-Time Password
PIN Personal Identification Number
PKC Public-Key Certificate
PKI Public-Key Infrastructure
PKIX Public Key Infrastructure for X.509 Certificates (IETF)
PSE Personal Secure Environment
RA Registration Authority
RACF Resource Access Control Facility
RFC Request For Comments
RIPEMD-160 RIPE Message Digest 160
RPC Remote Procedure Call
RSA Rivest-Shamir-Adleman, a public-key crypto algorithm
RSO Reduced Sign-On
S/SSO Secure Single Sign-On
SC Smart Card
SDK Software Development Kit
SHA Secure Hash Algorithm
SIM Subscriber Identity Module
SSL Secure Sockets Layer
SSO Single Sign-On
TCB Trusted Computing Base
TGS Ticket Granting Server
TGT Ticket Granting Ticket
TLS Transport Layer Security
URI Uniform Resource Identifier
URL Uniform Resource Locator
USB Universal Serial Bus
WAP Wireless Application Protocol
WebSSO Single Sign-On for WWW-Services
VPN Virtual Private Network
WWW World Wide Web
X.500 The ITU Specified Directory
X.509 Certificate Structure Standard
Table 1.1 Abbreviations and acronyms

1 INTRODUCTION

1.1 Background and objectives

Originally, computer systems were very open in the sense that they were stand-
alone machines, with physical access control deciding who could access the data
stored within the computing environment. With the arrival of terminal
connections to mainframes, it became necessary to develop the multi-user
environment, and simple access control solutions to limit user access to various
resources were subsequently developed. This worked well with a plain user-
id/password pair for logging in to systems, because computers were still scarce
and there were only a limited number of users. In time, local area networks began
to connect the computers to each other, resulting in multiple systems that
required authentication. The number of passwords one had to memorise began to grow.

Today, with the Internet and global connectivity to various computing systems,
together with the abundance of computers in a typical corporate network, the
number of different user-ids and passwords has grown tremendously. People
using these systems are faced with the difficulty of learning multiple credentials
for different systems by heart. To complicate matters further, every user
normally has to change their passwords at least twice a year. The passwords are
made long and difficult to remember, because well-administered computer systems
enforce strict password quality requirements. Passwords are easily misplaced or
forgotten as the number of credentials a user has to manage grows. Logging in to
multiple systems manually also consumes precious working time, because it takes
some seconds to remember and type in the user-id/password combination on each
system when access is needed. It has also been noticed that login time increases
with every failed authentication attempt [1].

Another problem with multiple computer systems is that of management. When
the number of systems and users grows, the task of keeping track of authorised
users and removing persons who are no longer authorised becomes unbearable
without good tools to automate the process of adding users to, and deleting users
from, all of these systems. A single sign-on infrastructure provides a solution to
these two problems.

Single sign-on is an enabling technology for reducing the number of passwords one
has to use daily when working with heterogeneous computing platforms and services.
One may think of single sign-on as a safe that holds the keys to all the other
resources one needs to access. This safe is special in the sense that one only
has to open it with a single key, after which all the other keys automatically
open the locks they have access to, without user intervention.

An ideal SSO solution tries to achieve a single authentication to the entire
computing environment, and preferably also central user management and access
control.

Unfortunately, a single sign-on infrastructure is useless if the authentication
process is flawed or weak. Only if the users are authenticated using so-called
"strong authentication methods", i.e. cryptographically enhanced authentication
processes such as public-key based authentication, can one be sufficiently certain
of the identity of the communicating subject. To enhance the security of the
certificate based authentication method, it is suggested that only smart cards are
used for storage of the private key. If there is a possibility that the SSO
infrastructure is communicating with a rogue party impersonating a legitimate
user, then all the enforced policies and access control measures are useless. It is
exactly as the mantra of computer security says: "Every system's security is as
weak as its weakest link [2]."

Therefore, one has to keep in mind the layered approach to computer security
infrastructure, where every successive layer of security is built on top of a
lower layer that is assumed to be invulnerable. This is extremely relevant,
because the complexity of systems has nowadays grown so much that it has become
impossible to verify the security of an entire system. Therefore, today all pieces
of software are built somewhat modularly, to allow quality assurance to focus on
one well-defined sub-component of the whole at a time. This leads to the
aforementioned layered approach.

1.2 Problem statement

In a traditional environment, every service requires authentication, and since
there is no central authority, each service is trusted to make the authentication
decision on its own behalf. In a pure SSO environment, the user is issued a
"certificate of identity" validated once by the SSO infrastructure, and all
subsequent identifications are done by presenting this certificate to the services
that request validation. This means a considerable reduction in the number of
times one needs to type in one's authentication credentials during a typical
computing session, in which one uses two to three applications that require some
sort of identification.

Unfortunately, there is no international SSO infrastructure standard as of
November 2001. Because of this, every operating system has its own proprietary
authentication and authorisation methods and processes. This is unfortunate, as
the most natural and secure place for single sign-on functionality is the lowest
possible level of an operating system – the kernel.

Most modern operating systems use username/password combinations; the best
ones use pluggable authentication modules, which enable just about any sort of
authentication method to be used; and finally, some legacy systems use proprietary
mainframe security packages such as RACF, ACF/2 or Top Secret for authentication [1].

Due to the workload that legacy systems still carry, any general SSO
infrastructure needs to support and trust these non-native authentication methods
and implement an RSO-like system for mapping SSO access certificates to the legacy
systems' authentication methods, while at the same time appearing transparent to
the end-user. This functionality may be obtained using application proxies,
plug-ins or scripting hosts.

The objective of this thesis is to evaluate the current state of strong
authentication in single sign-on platforms, and to formulate a recommendation for
an AAA architecture for use in the Elisa Research Centre. The target computing
environment is heterogeneous, and there is a strong need for centralised extranet
user management.

Therefore, the primary task addressed in this study is to identify the requirements
necessary for a successful S/SSO infrastructure, to identify and evaluate
conforming products, and to select the best product for deployment.

1.3 Thesis organisation


In chapter 1, the background for this study is introduced and the objectives are
laid down. The problem is also described, together with the definitions of some
central terms and concepts.

Technical background information is provided to the reader in chapter 2. In this
chapter, technologies that are infrastructure components of single sign-on
solutions are briefly presented.

In chapter 3, different authentication methods are explained. This includes basic
authentication, both symmetric and asymmetric cryptography based strong
authentication methods as well as the most widespread network authentication
model, Kerberos.

In chapter 4, the general single sign-on agent architectures are discussed, and the
distinction between web and legacy single sign-on and reduced sign-on is
explained, with an emphasis on the possibilities the web model offers to
facilitate strong authentication and access ticket operations.

In chapter 5, the requirements for an S/SSO application, specified by the author
at Elisa's Research Centre, are presented with accompanying justifications for the
criteria.

In chapter 6, a survey of single sign-on products currently on the market is
presented.

In chapter 7, the best product and its test set-up are described in detail, and its
compliance with the requirement criteria from chapter 5 is evaluated.

Finally, in chapter 8, this thesis work is summarised and conclusions are made.

1.4 Definition of central terms and concepts

Here the meanings of some central terms that I will use in this thesis are
explained.

Software architecture
The architecture of a software system is the set of interfaces through which its
functions are accessed, and the set of protocols with which it communicates with
other systems. [3, p.5]

Platform
The term is used to mean the service or computing environment: the collection
of computing resources that runs the single sign-on services and the host
operating system.

Protected service
A service whose resources are protected by the secure single sign-on
infrastructure.

Client
A client is a program that establishes connections for the purpose of sending
requests. [4]

Client environment
This is the computing platform of the person requesting access to protected
resources from the authentication and single sign-on infrastructure. This
includes all applications and devices that are required to access the protected
services.

Server
A server is an application program that accepts connections in order to service
requests by sending back responses to the clients. Any given program may be
capable of being both a client and a server; in this thesis, the use of these terms
refers only to the role being performed by the program for a particular
connection, rather than to the program's capabilities in general. Likewise, any
server may act as an origin server, proxy, gateway, or tunnel, switching
behaviour depending on the nature of each request. [4]

Proxy
Proxy is an intermediary program that acts as both a server and a client for
making requests on behalf of other clients. Requests are serviced internally or by
passing them on, with possible translation, to other servers. A proxy MUST
implement both the client and server requirements of this specification. A
"transparent proxy" is a proxy that does not modify the request or response
beyond what is required for proxy authentication and identification. A "non-
transparent proxy" is a proxy that modifies the request or response in order to
provide some added service to the user agent, such as group annotation services,
media type transformation, protocol reduction, or anonymity filtering. Except
where either transparent or non-transparent behaviour is explicitly stated, the
HTTP proxy requirements apply to both types of proxies. [4]

User agent
The user agent is the client which initiates a request. These are often browsers,
editors, spiders (web-traversing robots), or other end user tools. [4]

Single sign-on
SSO is the concept of using a single credential to gain access to all computing
resources, both locally and on the network. The credential used for the initial
authentication may be a password, token, certificate or another authentication
method.

Reduced sign-on
RSO is the concept of reducing the burden of signing on to multiple systems with
different credentials. This, however, is not single sign-on, as the user typically
still has multiple credentials, even though the reduced sign-on environment
manages those credentials in everyday use.

Authentication
Authentication is the verification process in which an electronically stored set
of identification data, supposedly unique to a given user, is compared with the
same data that the user inputs as their unique identifier. If the comparison
succeeds, the user is authenticated as genuine and can then be granted access
rights (i.e. given authorisation) appropriate to that user. [4]

Authorisation
A generally accepted definition of Authorisation is "the granting of access rights
to a subject (for example a user or a program)." [5, p.1]

Audit
Audit in this paper means the process of collecting transaction data into log
files for later analysis in case of a system malfunction, breach or other unusual
circumstance. Detailed data on the system's usage history may be required for
such analysis.

Identification, Authentication, Authorisation
The IAA abbreviation describes what an SSO system is supposed to provide to
the network – a service where the user is identified and authenticated, and
subsequent access control is based on the resulting authorisation.

Authentication, Authorisation, Accounting
The AAA abbreviation describes the complete computing platform in which
SSO participates. The accounting part takes care of logging and auditing
functions, and possibly of gathering billing information.

2 BACKGROUND INFRASTRUCTURE
In this chapter, technologies that play a significant role in building and operating a
single sign-on infrastructure and certificate based authentication systems are
described. These include short introductions to cryptography, the public-key
infrastructure and its subcomponents. The content of this chapter strives to build
on each preceding topic. The relevant cryptographic methods are described first,
because these terms are used throughout the entire study. Next, building on this
general understanding of cryptography, the public-key infrastructure and its
components are described. PKI is essential for smart card authentication to
function, because certificates would not exist without it.

2.1 Cryptography

Cryptography plays a central role in modern authentication and single sign-on
systems. It is used in various ways, from generating certificates and signatures
and protecting data and traffic, to storing credentials locally in a secure
manner. The most common cryptographic transformations are briefly explained below.

2.1.1 Hashing and message authentication

Hash functions are used for generating unique "fingerprints" of an arbitrary
amount of data. This fingerprint is then utilised in digital signatures as the
representative of the actual document that is signed. These fingerprints are also
used for integrity checking purposes in the form of message authentication codes.

Hashing is the process of applying a one-way mathematical function to a message M
to render it into a unique, fixed-size hash code, as shown in "Figure 2.1 The
hashing process", from which the original object cannot be deduced in any way
easier than simply guessing the original. The hash code is a function of every
single bit of M, H(M), and therefore if any bit in M changes, the hash code will
change. [6, p.253, 7, p.429]

Figure 2.1 The hashing process

Mathematically, the significant property of hash functions is collision resistance
[6, p.259-260, 7, p.429]. This means that for a collision-resistant hash function
it is computationally infeasible to find two different objects that hash to an
identical hash value. Because of this, the hash of a message M, H(M), can act as a
"fingerprint"-like unique identifying value of the message M.

Hashes are typically used for storing passwords, for message authentication and
for verifying the signatures of electronic documents. Three typical hash
algorithms in use are the Secure Hash Algorithm (SHA), Message Digest 5 (MD5) and
RIPEMD-160 [6, p.272-293, 7, p.436-439,442-445].
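
To make the fingerprint property concrete, the following minimal Python sketch
(using only the standard hashlib module; the message text is an arbitrary example)
computes digests with two of the algorithms mentioned above and shows that changing
a single character of the input produces a completely different hash value:

    import hashlib

    message = b"Pay Alice 100 euros"
    tampered = b"Pay Alice 900 euros"   # a single-character change

    # The digest is a fixed-size fingerprint computed over every bit of the input.
    print(hashlib.sha1(message).hexdigest())    # 160-bit SHA-1 digest
    print(hashlib.md5(message).hexdigest())     # 128-bit MD5 digest

    # Any modification of the message yields an unrelated digest value.
    print(hashlib.sha1(tampered).hexdigest())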

A hash code can be used to provide message authentication, i.e. integrity checking,
by appending the hash code as redundant data to the message M. Using a plain
hash of the data is in itself insufficient, and therefore a secret component needs
to be added to the hashed message. Such hash functions are called keyed hashes or
message authentication functions [6, p.243, 7, p.455]. This is displayed in "Figure
2.2 MAC constructed from a hash-function" below.

Figure 2.2 MAC constructed from a hash-function

This differs from conventional hash functions in that a secret key K is appended
to the message M, so that MAC = H(M, K); an attacker cannot re-compute the MAC of
a modified message M' simply as MAC' = H(M'), because without the secret key K
this is impossible. MAC algorithms can be built from conventional hash functions
or from symmetric encryption algorithms with modifications to make them one-way.
Popular MAC algorithms are HMAC [7, p.293, 8] and the Data Authentication
Algorithm, FIPS PUB 113 [7, p.252].
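
The keyed-hash construction can be sketched with Python's standard hmac module;
the key and message below are arbitrary placeholders, and SHA-1 is used only
because it is one of the hash functions named above:

    import hashlib
    import hmac

    key = b"shared-secret-key"                        # the secret K known to both parties
    message = b"Transfer 100 euros to account 12345"  # the message M

    # Sender computes MAC = H(M, K) and sends it along with the message.
    mac = hmac.new(key, message, hashlib.sha1).digest()

    # Receiver recomputes the MAC with the same key and compares the values;
    # without K an attacker cannot produce a valid MAC for a modified message.
    expected = hmac.new(key, message, hashlib.sha1).digest()
    print(hmac.compare_digest(mac, expected))         # True when message and key match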

2.1.2 Symmetric cryptography

With symmetric cryptography, the cryptographic function uses the same key for
encryption and decryption as visualised in “Figure 2.3 Symmetric encryption
process” and thus it is said to be symmetric. Symmetric cryptography is
sometimes called shared-secret or secret key cryptography because the
encryption/decryption key has to be shared between all parties authorised to
access the enciphered data.

Symmetric cryptography is used for the encryption of bulk traffic because it is
less demanding on computer resources than public-key based cryptography [6,
p.164, 7, p.416, 469]. In this study, symmetric cryptography is utilised for all
traffic encryption tasks between clients and servers, and in access ticket
encryption.

Figure 2.3 Symmetric encryption process

The downside of symmetric cryptography is the problem of key management [6,
p.164, 7, p.48] and the trustworthiness of those involved with the shared secret.
William Stallings [6] remarks that public-key cryptography also requires a set of
protocols for key distribution, and is therefore not a panacea for key distribution
problems.

The most common symmetric algorithms in use today are the Data Encryption
Standard (DES) and 3DES – a variant of DES with a triple-length encryption key.
More recent symmetric encryption algorithms include IDEA, Blowfish, Twofish and
Rijndael. Rijndael won the contest held by the US National Institute of Standards
and Technology to become the Advanced Encryption Standard [9]. All of these newer
algorithms use longer encryption keys than DES, which makes them much harder to
compromise, provided that no vulnerabilities in the algorithms themselves are
uncovered.
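
As a simple illustration of the shared-key idea, the sketch below uses the
third-party Python cryptography package (an assumption made for this example, not
a tool used in the evaluated products); its Fernet recipe encrypts with AES and
protects integrity with an HMAC, and the same key must be available to both the
encrypting and the decrypting party:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # the single shared secret key
    cipher = Fernet(key)

    # Anyone holding the key can encrypt; only holders of the same key can decrypt.
    ciphertext = cipher.encrypt(b"Lorem ipsum dolor sit amet")
    plaintext = cipher.decrypt(ciphertext)
    print(plaintext)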

2.1.3 Asymmetric cryptography

Asymmetric cryptography, also known as public-key cryptography, is fundamentally
different from symmetric cryptography. The difference between the two lies in the
level of secrecy of the encryption keys.

In symmetric cryptography, it is of primary concern to keep the encryption key
secret. In asymmetric cryptography, one has two different keys: the public key,
which can be revealed to anyone, and the secret key, which is personal and must be
kept secret [6, p.173, 7, p.467], as seen in "Figure 2.4 Asymmetric
encryption/decryption process".

Figure 2.4 Asymmetric encryption/decryption process

The main difference is that anyone can now send you securely encrypted material,
which can be decrypted only with your private key – the public key can only
encrypt the data, not decrypt it.

These two keys are mathematically connected in such a way that one cannot be
deduced from the other without knowledge of the prime numbers originally used to
generate the key pair. The RSA algorithm, named after its inventors Ron Rivest,
Adi Shamir and Leonard Adleman, is the most popular and widely used public-key
algorithm on the market. [6, p.173, 7, p.467]
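
A minimal sketch of the public/private key roles, again assuming the third-party
Python cryptography package; in the FINEID setting the private key would never be
generated or kept in application memory like this, but would reside on the smart
card described in section 2.2.1:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Key pair generation: the public key may be published (e.g. in a certificate),
    # while the private key must be kept secret.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Anyone can encrypt with the public key, but only the private key can decrypt.
    ciphertext = public_key.encrypt(b"confidential message", oaep)
    plaintext = private_key.decrypt(ciphertext, oaep)
    print(plaintext)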

2.1.4 Public-key digital signatures

The Digital Signature Algorithm, DSA, was developed by the NSA and became NIST's
Digital Signature Standard, DSS, in 1994. One of the main reasons DSA was chosen
over the de facto standard RSA was that DSA was royalty-free, whereas RSA was
patented and carried royalty requirements [7, p.486]. It was also originally
thought that DSA could be used only for generating digital signatures and would
therefore be exempt from export restrictions; this assumption was shown to be
false in some cases, as demonstrated in Applied Cryptography [7, p.490-491]. A
digital signature is made by encrypting the hash of the document to be signed with
the signer's secret key and appending the resulting value to the document, as
shown in "Figure 2.5 Digital signature creation and verification process" below.
The signature can be verified by calculating the same hash function of the
document and comparing the result with the signed hash value after decrypting it
with the signer's public key.

Figure 2.5 Digital signature creation and verification process

The RSA algorithm can also be used for digital signatures, and nowadays it may be
used without royalties, so there are few reasons left to use DSA, whose signature
verification is slower than RSA's [7, p.483-486].
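
The signing and verification flow of Figure 2.5 can be sketched with the same
assumed cryptography package; RSA with PKCS#1 v1.5 padding and SHA-256 are
illustrative choices, and a DSA key would be used analogously:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    document = b"Lorem ipsum dolor sit amet"

    # Signing: the library hashes the document and encrypts the hash with the
    # signer's private key; the resulting value is appended to the document.
    signature = private_key.sign(document, padding.PKCS1v15(), hashes.SHA256())

    # Verification: recompute the hash and check it against the signature using
    # the signer's public key; a mismatch raises InvalidSignature.
    try:
        private_key.public_key().verify(signature, document,
                                        padding.PKCS1v15(), hashes.SHA256())
        print("Signature valid")
    except InvalidSignature:
        print("Signature invalid")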

2.1.5 Key exchange algorithms

With key exchange algorithms, one can negotiate a symmetric encryption key for
bulk data encryption in such a way that an eavesdropper cannot deduce the key
from public information transmitted over the network in the key exchange
negotiation. A very successful key exchange algorithm is the Diffie-Hellman key
exchange algorithm [6, p.190, 7, p.513]. There also exist algorithms that
implement this with public-key encryption utilising certificates [10, p.38].
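
To make the idea concrete, the following toy Python sketch uses deliberately
small, illustrative numbers (real deployments use primes of at least 1024–2048
bits): both parties publish only g^x mod p, yet both arrive at the same shared
secret g^(ab) mod p, which an eavesdropper cannot compute from the public values
alone:

    import secrets

    p = 4294967291          # public prime modulus (toy size, illustration only)
    g = 5                   # public generator

    a = secrets.randbelow(p - 3) + 2    # Alice's secret exponent
    b = secrets.randbelow(p - 3) + 2    # Bob's secret exponent

    A = pow(g, a, p)        # Alice transmits g^a mod p over the open network
    B = pow(g, b, p)        # Bob transmits g^b mod p

    # Each side combines its own secret with the other's public value.
    alice_key = pow(B, a, p)
    bob_key = pow(A, b, p)
    assert alice_key == bob_key          # both now share g^(a*b) mod p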

2.2 Public-key infrastructure

The public-key infrastructure is the collection of services needed for the
successful deployment and use of digital X.509 certificates stored in smart cards,
tokens or file containers. "The purpose of a PKI is to support the user services
based on public key certificates for security [11]."

A public-key certificate is primarily used for the generation of digital
signatures, and for identification based on a cryptographic challenge-response
authentication protocol. The usage of PKCs for authentication is a natural
progression from basic username/password authentication towards legally binding
strong identification over a computer network.

This chapter is segmented in the following logical order: first, the smart card,
i.e. the secure container for the secret key issued by the CA, is described. Then
the X.509 certificate and revocation list profiles are explained, in order to
provide the reader with a general understanding of certificates. Once the
certificate and its immediate storage device have been described, it is necessary
to expand the discussion to include the directory. This is used for publishing the
public part of a certificate, which enables other people to use the cryptographic
services provided by the RSA public key stored in the certificate. In addition,
the revocation lists are published in similar public directories for on-line
determination of the validity of a certificate. When combined, these technical
components form a PKI framework. The chapter also includes a brief discussion of
the administrative entities of a PKI, such as the certification and registration
authorities, as well as of inter-domain trust relationship management in the PKI
models.

2.2.1 Smart cards

To provide secure storage for digital identification credentials, it is advisable
to use hardware tokens called smart cards.

A smart card is the size of a credit card or a smaller SIM card, and it is equipped
with an embedded microcircuit as shown below in “Figure 2.6 Depiction of a
smart card [13]”, which contains memory and a microprocessor together with an
operating system for memory control. The smart card is a secure storage location
for secret information. [12] This is called the personal secure environment, PSE.

Figure 2.6 Depiction of a smart card [13]

Contact smart cards must be inserted into a smart card reader. They have a small
gold plate about 13 mm in diameter on the front, instead of a magnetic strip on
the back like a credit card. When the card is inserted into a smart card reader,
it makes contact with electrical connectors that transfer data to and from the
chip. [13]

The card can store data such as personal information, money, or other information
whose alteration or disclosure might be risky. The card may also store encryption
keys, which the card user can use for tasks such as key exchange, network
identification or digital signature. [12, 14]

The smart card can also be used for decoding encrypted messages and for digital
signatures. The card makes it possible to avoid reading the key into the computer
for software encryption, which means that the risk of disclosure is considerably
reduced. [12, 14]

The basic technical properties of smart cards are
• Tamper-resistance
• Random Access Memory for data storage
• Processing capabilities (e.g. cryptographic co-processors)

Tamper-resistance means that if an unauthorised party obtains a smart card, it is very unlikely that he will be able to discover the contents of the microchip
embedded into the card. Resistance to tampering with the card is achieved in
various ways starting from physical obfuscation of data paths [12] on the chip and
ending with evaluated secure card operating systems. Setec’s SetCOS™ was the
world’s first card operating system to receive security evaluation at the E4 High
level in accordance with the international ITSEC Criteria in 1996 [15].

The security of the smart card is based on both logical and physical security.
Logical security means that the card does not leak out information that should be
kept secret. The logical security of the smart card is controlled by a secure
operating system. [12]

Physical security is related to the structure of the smart card chip. The aim is to
make an unauthorised examination of the chip impossible, or at least very
expensive. Address and data lines that logically belong together are intermingled
in different layers. Phantom transistors are embedded in the circuitry to make
examination more difficult. Upper and lower limits for clock frequency hinder the
examination of the circuitry. [12]

Some tokens might have physical self-destruction mechanisms, but these are
devices mainly used by armies and intelligence services [2].

From a usability aspect the most interesting properties of a smart card are
• Personal
• Portable
• Easy for the layman to understand and use

It is worth making the distinction between a smart card and a memory card. While
they are similar in appearance, a memory card is only suitable for disposable data
storage. Memory cards are in general use as phone cards or other disposable
means of payment. [12]

A drawback to memory cards is their weak security. Unlike smart cards, they have
no operating system that controls memory usage, although their use may be protected by a card PIN code. However, the protection of a memory card is not as strong and versatile as that of a smart card. [12]

In this thesis, smart cards are utilised to provide a secure, portable and strong
authentication method, which can be used in a heterogeneous computing
environment.

2.2.2 The X.509v3 certificate

Certificates may be used in a wide range of applications and environments covering a broad spectrum of interoperability goals and a broader spectrum of
operational and assurance requirements. [16]

In this chapter, the IETF PKIX working group’s definition of an Internet-operable profile for X.509v3 certificates and CRLs is introduced, based on RFC 2459:
“PKIX X.509v3 certificate profile and X.509v2 CRL profile”.

2.2.2.1 Definition of a certificate


Users of a public key shall be confident that the associated private key is owned
by the correct remote subject (person or system) with which an encryption or
digital signature mechanism will be used. This confidence is obtained with public
key certificates, which are data structures that bind public key values to subjects.
The binding is certified by having a trusted CA digitally sign each certificate. The
CA may base this decision upon technical proof of possession or by determining
the applicant’s identity by classic identification methods in person. A certificate
has a limited valid lifetime, which is indicated in its signed contents. Because a
certificate-using client can independently check a certificate’s signature and
timeliness, certificates can be distributed via untrustworthy communications and
server systems, and can be cached in unsecured storage in certificate-using
systems. [16]

2.2.2.2 Certificate versions


ITU-T X.509 (formerly CCITT X.509) or ISO/IEC/ITU 9594-8, which was first
published in 1988 as part of the X.500 directory recommendations, defines a
standard certificate format [17]. The certificate format in the 1988 standard is called the version 1 (v1) format. When X.500 was revised in 1993, two more
fields were added, resulting in the version 2 (v2) format. ISO/IEC/ITU and ANSI
X9 developed the X.509 version 3 (v3) certificate format. The v3 format extends
the v2 format by adding a provision for additional extension fields. Particular
extension field types may be specified in standards or may be defined and
registered by any organisation or community. In June 1996, standardisation of the
basic v3 format was completed. [16] As of May 3, 2001 X.509 version 4 (v4) has
been available as a draft specification from ISO/IEC/ITU.

2.2.2.3 Internet certificate profile


The X.509 v3 certificate basic syntax is as follows in “Table 2.1 ASN.1 encoded
X.509v3 certificate syntax [16]” encoded in the 1988 ASN.1 syntax. For signature
calculation, the certificate is encoded using the ASN.1 distinguished encoding
rules (DER) as described in ITU-T X.208.

Certificate ::= SEQUENCE {


tbsCertificate TBSCertificate,
signatureAlgorithm AlgorithmIdentifier,
signatureValue BIT STRING }

TBSCertificate ::= SEQUENCE {


version [0] EXPLICIT Version DEFAULT v1,
serialNumber CertificateSerialNumber,
signature AlgorithmIdentifier,
issuer Name,
validity Validity,
subject Name,
subjectPublicKeyInfo SubjectPublicKeyInfo,
issuerUniqueID [1] IMPLICIT UniqueIdentifier OPTIONAL,
-- If present, version shall be v2 or v3
subjectUniqueID [2] IMPLICIT UniqueIdentifier OPTIONAL,
-- If present, version shall be v2 or v3
extensions [3] EXPLICIT Extensions OPTIONAL
-- If present, version shall be v3
}

Version ::= INTEGER { v1(0), v2(1), v3(2) }

CertificateSerialNumber ::= INTEGER

Validity ::= SEQUENCE {


notBefore Time,
notAfter Time }

Time ::= CHOICE {


utcTime UTCTime,
generalTime GeneralizedTime }

UniqueIdentifier ::= BIT STRING

SubjectPublicKeyInfo ::= SEQUENCE {


algorithm AlgorithmIdentifier,
subjectPublicKey BIT STRING }

Extensions ::= SEQUENCE SIZE (1..MAX) OF Extension

Extension ::= SEQUENCE {


extnID OBJECT IDENTIFIER,
critical BOOLEAN DEFAULT FALSE,
extnValue OCTET STRING }

Table 2.1 ASN.1 encoded X.509v3 certificate syntax [16]

As a concrete example, a certificate obeying this profile issued by the Elisa
Communications Corp. test CA is presented here:

Figure 2.7 The beginning of a certificate, DN shown

In “Figure 2.7 The beginning of a certificate, DN shown” the Subject attribute, also known as the Distinguished Name or DN, is highlighted. It is noteworthy that the DN attribute
is actually an aggregate of multiple attributes as can be seen in this picture.
Namely, it consists of the attributes CN, G, SN, Serial Number, I, OU, O and C.
These stand for
• CN = Common Name
• G = Given Name
• SN = Surname
• Serial Number = a unique serial number inside the CA who issued this
certificate
• I = Initial
• OU = Organisation Unit
• O = Organisation
• C = Country.

There could be other attributes, and the contents of these fields differ from one subject and CA to another.
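
As a hedged illustration of how these attributes can be read programmatically, the sketch below prints the subject DN of a PEM-encoded certificate with the Python “cryptography” package; the file name is a hypothetical example and the library is not one of the products evaluated in this thesis.

from cryptography import x509

# Load a PEM-encoded certificate from disk ("cert.pem" is a hypothetical path).
with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# The subject DN is an aggregate of attributes such as CN, OU, O and C.
print(cert.subject.rfc4514_string())            # e.g. CN=...,OU=...,O=...,C=FI
print("Serial number:", cert.serial_number)
print("Valid:", cert.not_valid_before, "-", cert.not_valid_after)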

Figure 2.8 The middle section of the same certificate, CRL Distribution Point
shown

In “Figure 2.8 The middle section of the same certificate, CRL Distribution Point
shown” the CRL distribution point is highlighted, because it enables the client software to check the certificate’s validity when one attempts to use the certificate.
If the CRL check is omitted or impossible, the trust model of a PKI is broken and
one cannot trust the certificate.

Figure 2.9 The end section of a certificate, Key Usage attributes shown

Lastly, the KeyUsage attribute is highlighted in “Figure 2.9 The end section of a
certificate, Key Usage attributes shown,” to show the constraints for the usage of
this particular certificate. As can be deduced from the above constraints this
certificate is used for client authentication with digitalSignature, keyAgreement
and dataEncipherment usage attributes, as opposed to the second certificate on the example smart card, whose KeyUsage field states nonRepudiation as its only constraint. It is therefore usable only for digital signature generation. Other possible values of the KeyUsage attribute are listed below in “Table 2.2 Possible values for the KeyUsage attribute [16]”:

KeyUsage ::= BIT STRING {


digitalSignature (0),
nonRepudiation (1),
keyEncipherment (2),
dataEncipherment (3),
keyAgreement (4),
keyCertSign (5),
cRLSign (6),
encipherOnly (7),
decipherOnly (8) }

Table 2.2 Possible values for the KeyUsage attribute [16]

2.2.2.4 The X.509v2 certificate revocation list
One goal of this X.509 v2 CRL profile is to foster the creation of an interoperable
and reusable Internet PKI. CRLs may be used in a wide range of applications and
environments covering a broad spectrum of interoperability goals and an even
broader spectrum of operational and assurance requirements. This profile
establishes a common baseline for generic applications requiring broad
interoperability. The profile defines a baseline set of information that can be
expected in every CRL. In addition, the profile defines common locations within
the CRL for frequently used attributes as well as common representations for
these attributes.

The X.509 v2 CRL syntax is as follows in “Table 2.3 ASN.1 encoded X.509v2
CRL syntax [16]”. For signature calculation, the data that is to be signed is ASN.1
DER encoded. ASN.1 DER encoding is a tag, length, and value encoding system
for each element.

CertificateList ::= SEQUENCE {


tbsCertList TBSCertList,
signatureAlgorithm AlgorithmIdentifier,
signatureValue BIT STRING }

TBSCertList ::= SEQUENCE {


version Version OPTIONAL,
-- if present, shall be v2
signature AlgorithmIdentifier,
issuer Name,
thisUpdate Time,
nextUpdate Time OPTIONAL,
revokedCertificates SEQUENCE OF SEQUENCE {
userCertificate CertificateSerialNumber,
revocationDate Time,
crlEntryExtensions Extensions OPTIONAL
-- if present, shall be v2
} OPTIONAL,
crlExtensions [0] EXPLICIT Extensions OPTIONAL
-- if present, shall be v2
}

Table 2.3 ASN.1 encoded X.509v2 CRL syntax [16]

As can be seen from this ASN.1 notation of the syntax of the CRL, a CRL
consists of a signed list of serial numbers of all revoked certificates with
corresponding revocation time stamps and possibly a reason code, if it is of
version 2.

Below in “Figure 2.10 The FINEID CRL on 5.12.2001” the certificate revocation
list of Dec 5 2001 of the FINEID directory is shown as Microsoft Internet
Explorer presents it.

Figure 2.10 The FINEID CRL on 5.12.2001

The CRL may be stored and distributed via a directory (LDAP, X.500) or via a web or FTP server accessible over HTTP or FTP; alternatively, some other mechanism such as OCSP, as defined in [16], may provide the client with equivalent on-line checking capabilities.
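
A minimal sketch of such a check-up follows, assuming a DER-encoded CRL has already been fetched from the distribution point into a file named ca.crl and that the Python “cryptography” package is available; a complete implementation would also verify the CRL signature and its nextUpdate freshness.

from cryptography import x509

with open("ca.crl", "rb") as f:
    crl = x509.load_der_x509_crl(f.read())

serial_to_check = 123456789   # hypothetical certificate serial number

entry = crl.get_revoked_certificate_by_serial_number(serial_to_check)
if entry is None:
    print("Certificate is not on the revocation list")
else:
    print("Certificate was revoked on", entry.revocation_date)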

2.2.2.5 Certification path validation


Certification path validation procedures are based on section 10 of the X.509v3 specification. Certification path validation verifies the validity and the path of trust of the user’s and issuer’s certificates. The path processing proceeds as shown below in “Figure 2.11 Windows 2000 certificate path validation logic [18]”. The ‘basic constraints’ and ‘policy constraints’ extensions allow the certification path processing logic to automate the decision-making process. [16]

Figure 2.11 Windows 2000 certificate path validation logic [18]

With path validation logic, the validity of a certificate is checked against all
possible revocation reasons from multiple sources including CRL-lookups. There
can also be constraints on a certificate imposed by certification authority policies
that can be effective on a certificate from higher up in the certification path.

2.2.2.6 Storage of secret keys


Secret keys can be stored either in an encrypted container file or on hardware tokens like smart cards or USB tokens. These two primary storage methods differ mainly in the PSE offered by the storage container. With storage files, some security is usually offered by using conventional encryption to protect the secret keys. Unfortunately, regardless of this protection, a secret key stored in a file can easily be duplicated, whereas hardware tokens are usually tamper- and replication-resistant.

The natural advantage of file-based certificate storage is the versatility of the format [19]. While it does not require any additional equipment to use and is easily transportable, it is a weak PSE.

A token device, on the other hand, while usually requiring additional reading devices, offers greater protection of the secret keys. This offsets the versatility of a file-based solution [19].

Stealing a secret key is considerably harder with cryptographic tokens, because the keys are stored in a tamper-resistant area of the token [19]. They are stored
in such a way that they cannot be copied or read directly, so one has to steal the
physical token. With file-based containers, one can easily copy the secret-key
container onto a floppy disk without risk of detection. Once the container is
copied, an off-line attack can be mounted against its security measures without
alerting the owner to the loss of the secret keys.

As long as the certificate and its encryption keys are not bound to a person
biometrically, one can only presume that the person using a certificate is the
person who he claims to be, since anyone can use a certificate if the PIN code has
been compromised and the token or container has been stolen. Such biometric
solutions are becoming available on the market.

2.2.3 The LDAP directory

A directory is, in effect, a server or a distributed set of servers that maintains an information database of people. The stored information includes user names, network addresses, titles, addresses and other information about the user. [6, p.341]

The certificates are stored in directories similar to the telephone directories for
easy access. The X.500 series of recommendations defines a very complex
directory protocol-suite and its structure. The LDAP directory was created as a
simpler alternative with support for the central X.500 features.

This imposes a large burden on the directory access protocol since it should be as
universal as possible, while at the same time enabling wide scalability to support
numerous daily enquiries.

The directory itself can consist of various types of databases so long as it supports
the LDAP query protocol. [19] Typical commercial LDAP directories include the
Netscape Directory Server, Oracle and IBM DB2 databases with LDAP front-
ends.

An LDAP directory can be queried with all modern web browsers like the
Microsoft Internet Explorer and Netscape Navigator as well as some LDAP
browsers specially built for this function.
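
The sketch below illustrates such a query with the Python “ldap3” package; the host name, base DN, search filter and attribute are hypothetical examples rather than the parameters of any actual directory.

from ldap3 import ALL, Connection, Server

server = Server("ldap.example.com", get_info=ALL)
conn = Connection(server, auto_bind=True)       # anonymous bind

# Search for a person entry and request the binary certificate attribute.
conn.search(
    search_base="ou=people,o=example,c=FI",
    search_filter="(cn=John Doe)",
    attributes=["userCertificate;binary"],
)

for entry in conn.entries:
    print(entry.entry_dn)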

2.2.4 PKI structure

A typical public-key infrastructure consists of
• A Certificate Authority (CA) who issues the certificates
• A Registration Authority (RA) who authenticates the requestor before it
forwards the request to the CA, who will then issue a certificate for the
requestor.
• An LDAP directory for storage of issued certificates, public-keys and
Certificate Revocation List (CRL) information
• A token manufacturer and a pre-personaliser, which are also required if hardware tokens are used for certificate storage. [11, 20]


Figure 2.12 A sample PKI environment

As can be seen in the above “Figure 2.12 A sample PKI environment”, there is a single root CA, three subordinate CAs and one RA. Here the root CA would normally be operated by a corporation’s IT administration, while the subordinate CAs would represent the various departments, which take care of local user administration on behalf of the corporation’s data administration.

Usually, it is regarded as good practice to refrain from issuing end-entity certificates directly from the root CA. It is better to build a hierarchy of subordinate CAs below the root CA to take care of daily administration. [20]

In particular, for a PKI to be successful, all parts have to be reliable and the root certificate has to be in trustworthy hands, since the PKI model is built on trust and on the assumption that the root certificate is non-corruptible.

2.2.5 Trust models

The two primary trust models in use today are the strictly hierarchical model and
the networked model. These models differ from each other in the fundamentals of
the hierarchy structure.

2.2.5.1 Strictly hierarchical trust model


In the strict trust model, there is only one root of a hierarchy. Here every entity
trusts the integrity of the root certificate and each other because they all have a
trust path available via the trusted root. This is depicted in “Figure 2.12 A sample
PKI environment”. Here the only root is “Root CA A” and all other entities in the
hierarchy below the root are subordinates to the same root. [20, 21]

2.2.5.2 Distributed trust model


In the distributed trust model, there are multiple adjacent PKIs with their respective roots and subordinates. Here every PKI is a strict hierarchical PKI for its own domain, but having multiple independent PKIs makes this a distributed
architecture. This is depicted in “Figure 2.13 Two adjacent PKI domains” below.
[20] Inter-domain trust relationships are established by means of cross-
certification in this distributed trust model.


Figure 2.13 Two adjacent PKI domains

2.2.6 Cross-certification

In the strictly hierarchical trust model, there is a single root for all PKI
participants and there is only one PKI. While this simplifies matters, it is
unfortunately an impractical architecture from a policy perspective. A problem arose from having a distributed trust model with multiple PKIs that did not trust each other by default via a common root. To resolve this problem, the concept of cross-certification was developed.

When two PKIs wish to interoperate with each other, they cross-certify each other: the CAs in question digitally sign a cross-certificate, which in practice is a certificate that contains the counterpart’s public keys [6,
p.343-344, 20].

When a client from below the cross-certification point tries to verify a certificate
from another PKI domain, it seeks a route to check the validity and trust
relationship by finding a trusted path via the cross-certificate.

Figure 2.14 Cross-certified PKI domains with multi-path trust

In “Figure 2.14 Cross-certified PKI domains with multi-path trust” three possible
cross-certification paths are illustrated. In the first version, the root CAs cross-
certify each other and subsequently both PKI domains have a trusted path between
one another. A somewhat limited variation of the previous case can be seen as the
arrow between CA A2 and root CA B. Here all of PKI domain B has a trusted
path to all entities registered below CA A2 and vice versa. The most limited
version of cross-certification described here is the arrow between CA A3 and CA
B1. Now only the entities registered below CA A3 and CA B1 trust each other
and the rest of the PKI domains A and B remain alien to one another.

All of these cross-certification paths may coexist in peace. Usually, though, if the
roots cross-certify each other, all subordinate cross-certificates should be deleted
for the sake of clarity.

3 AUTHENTICATION PROTOCOLS AND
METHODS
This chapter gives the reader a general understanding that there exist multiple methods of authentication, some more secure than others. There are some de facto standards in authentication, such as basic authentication, i.e. using a user-inputted username and password combination. In this chapter,
different authentication methods are described in order of complexity and
strength. The most interesting authentication method concerning this thesis is of
course the public-key certificate based authentication scheme described below.

3.1 User ID / password authentication

Historically, the use of plain username and password combinations has been the
most common method of authentication. Today, it is still the most prevalent
authentication method in existence, and its demise is nowhere in sight. It is an
outdated, insecure method, but easy to implement with minimal requirements on
the user terminals with regard to equipment and software. Due to this and its large legacy base, it will still be in use well into the 21st century.

3.1.1 Basic authentication

Basic authentication means a plaintext username/password pair, which is entered into a dialogue box or a form, or supplied at a prompt. The textual information is then checked against a database of correct answers, which is stored in plaintext or in some hashed form in a text file or in a database within the authorisation system.

Figure 3.1 Simplest occurrence of basic authentication

In “Figure 3.1 Simplest occurrence of basic authentication”, the simplest way of authenticating a user ID and a password is depicted. Here the server asks the user to identify himself and, after the user has supplied his identification information, i.e. his user ID, the server prompts for his password, checks whether the user ID exists on the system, and checks whether the password matches the one stored in its internal password database. If there is a match, the user is granted access to the server.

In a UNIX setting this would be the /etc/passwd or /etc/shadow file, which contains the pairs “username” in plaintext and “password” in hashed form H(passwd, salt). The “salt” is a padding of bytes appended to the real password before hashing; it increases the number of possible hashes for one single password and thus makes dictionary-based password guessing attacks more time consuming.
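
The following minimal sketch illustrates the principle of salted password hashing; SHA-256 is used purely for illustration, whereas a real /etc/shadow entry would use crypt(3) or a dedicated password hashing scheme.

import hashlib
import os

def hash_password(password, salt=None):
    # H(passwd, salt): the salt is stored next to the hash and is not secret.
    salt = salt if salt is not None else os.urandom(8)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

salt, stored = hash_password("secret")

# Verification re-hashes the candidate password with the stored salt.
assert hash_password("secret", salt)[1] == stored
assert hash_password("guess", salt)[1] != stored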

In the web environment the HTTP protocol specifies a challenge-response framework for a web server to request authentication credentials from a client by sending the client a “401 Unauthorized” error message. The user-agent may respond to this challenge by including an Authorization header field in the HTTP request. [22]
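
A minimal sketch of the resulting request header is shown below; the credentials are hypothetical, and the Base64 step merely encodes the pair without protecting it in any way.

import base64

username, password = "alice", "secret"          # hypothetical credentials
token = base64.b64encode(f"{username}:{password}".encode()).decode()

# Header the user-agent attaches when it repeats the HTTP request.
headers = {"Authorization": f"Basic {token}"}
print(headers)                                  # {'Authorization': 'Basic YWxpY2U6c2VjcmV0'}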

This information is then sent unencrypted over the network to the service for
authentication. If the given credentials are correct, the client is given access i.e.
authenticated and authorised to access the service. When the identity of the client is established, access control decisions can be made as dictated by the access
policy of the service.

3.1.2 Digest authentication

Like basic authentication, digest access authentication verifies that both communicating parties share a secret (a password); unlike basic authentication, this verification can be done without sending the password unscrambled, which is the biggest drawback of basic authentication. [22]


Figure 3.2 Simplified Digest Authentication

Digest authentication operates much the same way as basic authentication, as is illustrated in “Figure 3.2 Simplified Digest Authentication” above, but with the
slight modification that only hashes of the password are transported over the
Internet instead of a plaintext password.

The digest scheme issues challenges using a nonce value. A valid response
contains a checksum of the username, password, a given nonce value, the HTTP
method and the requested URI. This way, the password is never transmitted
unscrambled over the Internet. [22]
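
The checksum calculation of the simplest RFC 2617 form (without the qop directive) is sketched below; the realm, nonce and URI values are hypothetical.

import hashlib

def md5_hex(text):
    return hashlib.md5(text.encode()).hexdigest()

username, realm, password = "alice", "example.com", "secret"
nonce = "dcd98b7102dd2f0e8b11d0f600bfb0c0"      # issued by the server in its challenge
method, uri = "GET", "/protected/index.html"

ha1 = md5_hex(f"{username}:{realm}:{password}")
ha2 = md5_hex(f"{method}:{uri}")
response = md5_hex(f"{ha1}:{nonce}:{ha2}")      # returned in the Authorization header
print(response)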

The basic authentication scheme is not considered a secure method of user authentication since the user name and password are passed over the network as plaintext. The authors of RFC 2617 specifically note that while digest
authentication is better than basic authentication, it is not infallible, and much
stronger methods are available with systems like Kerberos and public key based
methods.

3.1.3 One-time passwords

This is a special case of basic authentication where the password changes every
time one authenticates to a service and none of the passwords is reusable. When
doing OTP-authentication the server retains the correct passwords in a secure
index so that when one authenticates only the next unused password is valid. This
protects the authentication process from replay attacks in which an eavesdropper
has recorded previous network traffic and discovered the username/password pair
and attempts to log into the protected service using these credentials.

There are two entities in the operation of the OTP one-time password system. The
generator must produce the appropriate one-time password from the user's secret
pass-phrase and from information provided in the challenge from the server. The
server must send a challenge, which includes the appropriate generation
parameters to the generator. The server must verify the received one-time password, store it, and keep track of the corresponding sequence number. [23]

This requires that the server does not contain any compromising secret information; the seed, sequence number and last used key are all public data and are non-compromising, given that the secure hash function used to generate the password sequence is non-invertible.

The OTP system generator passes the user's secret pass-phrase, along with a seed
received from the server as part of the challenge, through multiple iterations of a
secure hash function, which produces a one-time password. After each successful
authentication, the number of secure hash function iterations is reduced by one,
which generates a sequence of unique passwords. The server verifies the one-time
password received from the generator by computing the secure hash function
once, and comparing the result with the previously accepted one-time password.
[23]

The generator on the other hand must be reliable and secure as it contains the
secret generation key with which it computes the required number of hashes to
generate the correct password.
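
A minimal sketch of the hash-chain principle follows; SHA-1, the pass-phrase and the seed are illustrative assumptions, and a real implementation would follow the exact folding and encoding rules of RFC 2289.

import hashlib

def h(data):
    return hashlib.sha1(data).digest()

secret = b"correct horse" + b"seed1234"   # pass-phrase concatenated with the server's seed

def otp(n):
    # Apply the hash function n times to the secret.
    value = secret
    for _ in range(n):
        value = h(value)
    return value

server_last = otp(100)               # value stored on the server at initialisation

candidate = otp(99)                  # next one-time password produced by the generator
assert h(candidate) == server_last   # the server hashes once and compares
server_last = candidate              # the accepted password becomes the new reference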

A historically successful one-time password scheme is detailed in RFC 1760, “The S/KEY One-Time Password System”. The “One-Time Password System” detailed in RFC 2289 is the successor of S/KEY.

In the OTP system, the password is coded as six human-readable words, as shown in “Table 3.1 A typical one-time password sheet”, to encode the 64-bit password into a more easily typed form [23]. The standard dictionary for this encoding is documented in the S/KEY RFC 1760. The password may also be encoded in hexadecimal, as presented in “Table 3.2 Alternative hexadecimal OTP-key representations”, or a non-intersecting localised dictionary may be generated on a server as defined in [23] appendix B.

81: GAP LUSH BUNT LIAR ARAB BYTE


82: COW HIKE CAM SIN USES ONLY
83: TELL SWIM DIME HUG FORE FINK
84: GALE NOAH DORA BERG TUM SWAY
85: TILE VAIN BUCK KEEN SALT DAB
86: OWN FEND FLEA FOUL HOOD RAIN
87: SOFA BARD KIRK IRE JUDE DAM
88: WILD FOOL FORT CLAD GLIB PHI

Table 3.1 A typical one-time password sheet

The alternative way of representing the keys in hexadecimal is shown below:

3503785b369cda8b 0x3503785b369cda8b
e5cc a1b8 7c13 096b 0xe5cca1b87c13096b
C7 48 90 F4 27 7B A1 CF 0xc74890f4277ba1cf

Table 3.2 Alternative hexadecimal OTP-key representations

In the above hexadecimal representation, the first column illustrates the different
acceptable forms, and the right-hand column the correct interpretation of the
password. This example illustrates the requirement that white space be ignored entirely [23].

There are a few variants of OTP like the time-based variant of SecurID™ from
RSA Security Inc., which is a hardware token that generates new passwords every
60 seconds that are valid only for that period of time. In the SecurID™ scheme
the token and the authentication server are clock-synchronised and seeded with
the same secret start value. [24] Once the system has been activated, the stream of
“random” numbers the token generates is identical on all similarly primed tokens.
Usually, this random number is appended to a static personal secret for added
security, so that the possession of the token in itself does not permit access to
protected resources.

3.2 Biometric authentication

Biometric authentication uses biometric data to form a biometric template of the biological feature measured by the particular biometric method. This could be the
positions of curves and junctions of a fingerprint, the pigment structure of the iris
or voiceprint data of a certain sentence.

If the person can provide a good enough sample of the measured biometric, one which closely matches a previously recorded biometric template, the identity of the authenticated person is considered established.

The best-known biometric identification method is the finger scan, while less
common ones include iris scans, voiceprint, facial images and hand geometry
scans.

Biometrics has a very interesting future but it is not widely used today for various
reasons [25]. A possible application of biometrics could be the replacement of
conventional PIN-codes on smart cards with finger scans. This could occur with
the integration of the card reader device and the finger scanner to build a TCB for
smart card and biometric operations.

3.3 Symmetric key based cryptographic authentication

All cryptographic authentication methods are so-called strong authentication methods. A very secure class of methods is the multi-factor authentication systems, where two or more authentication systems are used in conjunction for the final authentication.

Authentication can be established by using pre-established symmetric cryptographic keys and various ways of exchanging key material for a session
key. These symmetric methods are described here mainly because the Kerberos
system, which is described later, is based on symmetric cryptography.

ISO/IEC 9798-2 standardises multiple mechanisms for entity authentication. Two of the more interesting ones are described here.

There are two major classes of cryptographic protocols based on the reliance on
synchronised time between the communicating parties’ computers. Timestamp
based protocols rely heavily on synchronised time, and subsequently succeed in
creating the required security associations with fewer messages than non-
synchronised algorithms. This is a nice feature, but the price for the simplification
of the protocol is obtaining reliable clock synchronisation between all
communicating parties. This problem is solved by using nonces i.e. random
numbers generated by the communicating parties and passed between the parties
in the protocol messages. Both methods are used to enhance resistance against
replay attacks by an eavesdropper replaying old messages. By tracking the
freshness of issued nonces or the timestamps the communicating parties are able
to deduce if a message is fresh or not.

3.3.1 ISO/IEC 9798-2 timestamp based unilateral authentication

When performing a unilateral authentication within an environment with synchronised clocks, it is possible to use the minimal authentication procedure
shown below in “Figure 3.3 Unilateral authentication with pre-shared key K and
timestamp”:

Personal Terminal A → Server B: EKab(TA, B)

Figure 3.3 Unilateral authentication with pre-shared key K and timestamp

The key Kab must be pre-shared and the clocks of both terminal A and server B
must be synchronised for this to work. Server B can verify the validity of the
authenticator by verifying the freshness of the timestamp TA and that his own
identifier ‘B’ is in the encrypted message. If both conditions are met, B can be
certain that the message originated from A. [10, p.36]
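
A minimal sketch of this exchange follows, with authenticated AES-GCM encryption from the Python “cryptography” package standing in for EKab and a two-minute freshness window; all parameter choices are assumptions made only for illustration.

import json
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

k_ab = AESGCM.generate_key(bit_length=128)        # pre-shared key between A and B

# Terminal A: encrypt (timestamp TA, destination identifier B) under Kab.
nonce = os.urandom(12)
token = json.dumps({"t": time.time(), "dst": "B"}).encode()
message = nonce + AESGCM(k_ab).encrypt(nonce, token, None)

# Server B: decrypt, then check its own identifier and the timestamp freshness.
claim = json.loads(AESGCM(k_ab).decrypt(message[:12], message[12:], None))
assert claim["dst"] == "B"                        # the message is addressed to B
assert abs(time.time() - claim["t"]) < 120        # the timestamp is fresh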

3.3.2 ISO/IEC 9798-2 nonce based mutual authentication

In this example, mutual authentication can be achieved without clock synchronisation due to the use of nonces.

1. Server B → Personal Terminal A: NB
2. Personal Terminal A → Server B: EKab(NA, NB, B)
3. Server B → Personal Terminal A: EKab(NB, NA)

Figure 3.4 Mutual authentication with nonces and pre-shared key

In “Figure 3.4 Mutual authentication with nonces and pre-shared key” the
authentication process does not rely on synchronised time, but still uses pre-
shared symmetric encryption keys. For this to work, server B checks the identity of client A by verifying that the nonce NB matches the one it sent in phase 1, and that its own identifier ‘B’ is in the encrypted message. Client A establishes server B’s identity by checking that the nonces in the phase 3 message appear reversed (NB, NA) and match those previously transmitted. [10, p.36]

3.4 Public-key certificate based cryptographic authentication

Public-Key Certificate (PKC) based authentication is a cryptographically augmented process of exchanging data encrypted with a person’s public key, which can only be decrypted by the corresponding private key of the said person. If this piece of data is returned to the sender encrypted with the sender’s public key, it signifies that the original receiver had to be able to decrypt the particular piece of data before re-encryption, and therefore has possession of the secret key. This is the general class of authentication of interest in this thesis, because it occurs between the client and the server when strong authentication takes place.

Here three different variations of the PKC based authentication protocols are
described. These are a good representation of PKC based authentication protocols
and progress from unilateral authentication with and without synchronised time to
mutual authentication with and without synchronised clocks. The most interesting
version of these protocols is described in the “X.509 mutual authentication with nonces” subchapter.

3.4.1 Unilateral authentication protocols

In client authentication, the client is authenticated unilaterally to the server.

Figure 3.5 Simplified public-key based client authentication [25, p.384]

In ”Figure 3.5 Simplified public-key based client authentication [25, p.384]”, a simple client authentication procedure is depicted. The server sends a random number called a nonce value to the client; this is called the challenge. In response to the challenge, the client encrypts the nonce with its secret key, ‘ECsk(Nonce)’, and returns this to the server. To verify the identity of the client, the server needs to obtain the said user’s public key from a public directory, decrypt the encrypted nonce, ‘DCpk(ECsk(Nonce))’, and verify the value of the decryption. The result should be the same nonce value originally sent to the person to be authenticated. A nonce is a number that is never used more than once [10, p.36].

This process is crude and does not reflect reality in the implementation details. A
more precise version is presented in ISO/IEC 9798-3.
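
The following sketch realises the challenge-response of Figure 3.5 with an RSA signature over the nonce, which is a common way to implement “encryption with the secret key”; it reuses the Python “cryptography” package assumed earlier and generates a throw-away key pair instead of reading one from a smart card.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

client_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
client_public = client_key.public_key()   # normally taken from the client's certificate

nonce = os.urandom(32)                     # the server's challenge

# Client: prove possession of the secret key by signing the nonce.
response = client_key.sign(nonce, padding.PKCS1v15(), hashes.SHA256())

# Server: verification succeeds only if the signer holds the matching private key.
client_public.verify(response, nonce, padding.PKCS1v15(), hashes.SHA256())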

Using timestamps and unilateral authentication, the ISO/IEC 9798-3 method constructs a single data package in client A and sends it to server B as shown in “Figure 3.6
Unilateral authentication with PKC and synchronised time”:

Personal Terminal A → Server B: CA(PKA), TA, B, SPKA-1(TA, B)

Figure 3.6 Unilateral authentication with PKC and synchronised time

Here client A sends his certificate CA, timestamp TA, destination identifier B and
the signature of both TA and B. When server B receives this message, the validity
of the certificate and its signature is verified with the given public-key. If the
signature is valid, it means that the server is communicating with client A since it
is the only entity in possession of the correct secret-key. [10, p.37]

3.4.2 Mutual authentication protocols

When both communicating parties wish to authenticate each other, it is called mutual authentication. This is accomplished by adding two additional steps to the one-sided processes described above, forming a three-way handshake protocol.

3.4.2.1 ISO 9798-3 mutual authentication with nonces


With public-key certificates, mutual authentication without clock synchronisation
is obtainable for example with the ISO 9798-3 suggested method, as shown below
in “Figure 3.7 Mutual authentication with PKC’s”:

1. Server B → Personal Terminal A: NB
2. Personal Terminal A → Server B: CA(PKA), NA, B, SPKA-1(NA, NB, B)
3. Server B → Personal Terminal A: CB(PKB), A, SPKB-1(NB, NA, A)

Figure 3.7 Mutual authentication with PKC’s

With this method server B sends a nonce NB to client A. Client A attaches his
public-key certificate CA to the response message, with a new nonce NA and the
destination identifier B, and digitally signs the triplet (NA, NB, B). This triplet
ensures that server B can check the integrity of the message, and that A is capable
of using the associated private-key. On receipt, B will check the validity of A’s
certificate and subsequently verify the signature SPKa(NA,NB,B) with the public-
key PKA.

To complete the mutual authentication server B must now send its certificate CB,
the destination identifier ‘A’ and the signed triplet (NB, NA, A). Client A now
checks the validity of B’s certificate and verifies the signature SPKb(NB,NA,A).

If both signatures are valid, the communicating parties have authenticated each other, because knowledge of the secret key is required for successful signature operations. [10, p.37]

3.4.2.2 ITU-T X.509 mutual authentication with synchronised clocks


In the protocol depicted in “Figure 3.8 X.509 mutual authentication with
synchronised time”, security is based on the timestamps for validity time and
forced delay detection as well as on the nonces.

1. Personal Terminal A → Server B: CA(PKA), TA, NA, B, data, SPKA-1(TA, NA, B, data)
2. Server B → Personal Terminal A: CB(PKB), TB, NB, A, NA, data', SPKB-1(TB, NB, A, NA, data')

Figure 3.8 X.509 mutual authentication with synchronised time

Client A sends a message containing its certificate CA, a timestamp TA, a nonce
NA, the destination identifier B, some data like a session key and a signature over
all these elements. Server B must verify the certificate’s validity, the timestamp’s freshness and the signature. If all these agree, A is authenticated to B.

To authenticate to A, server B must send it its certificate CB, a timestamp TB, a nonce NB, the destination identifier A, the nonce NA, arbitrary data and a signature
over these data fields. On receiving the reply, A performs the same checks as B in
the previous phase. If the checks are successful, B is authenticated to A.

If the data field is used to transmit an encrypted session key EPKB(kAB), its validity can be confirmed by both parties by decrypting it with their respective private keys and validating it against the signatures. [10, p.37]

3.4.2.3 X.509 mutual authentication with nonces


To enable operation in an environment without trusted and synchronised time
there is a three-way authentication protocol defined in the X.509 specification that
relies only on nonces depicted below in “Figure 3.9 X.509 mutual authentication
with nonces”.

1. Personal Terminal A → Server B: CA(PKA), NA, B, data, SPKA-1(NA, B, data)
2. Server B → Personal Terminal A: CB(PKB), NB, A, NA, data', SPKB-1(NB, A, NA, data')
3. Personal Terminal A → Server B: NB, B, SPKA-1(NB, B)

Figure 3.9 X.509 mutual authentication with nonces

This differs from the timestamp-utilising version in that it requires an extra step in exchange for not relying on timestamps. Otherwise, it operates the same as above, but instead of checking the freshness of the timestamps, A must check that the nonce NA received in phase 2 is the same as that transmitted to B in phase 1, and similarly B must check that the nonce NB received in phase 3 matches the one transmitted in phase 2. [10, p.38]

3.5 Network authentication systems

In this subchapter, the Kerberos V network authentication system is described as a prime example of a network AAA service that enables single sign-on
functionality on a wide scale. Similar systems are OSF Distributed Computing
Environment DCE and Sesame, a semi-commercial European contender to
Kerberos, developed as part of the EU RACE-programme. The development work
was undertaken by ICL, Siemens and Bull, who are the licence holders.

The Massachusetts Institute of Technology (MIT) built a scalable network authentication infrastructure in Project Athena in the mid-1980s [10, p.79]. The
result from this project was called Kerberos. It is currently at version 5 and it is
specified in RFC 1510.

The stated goals of the Kerberos system were 1) to allow a user to sign on to the network once and 2) to protect the authentication information, making it more difficult for an impostor to impersonate a legitimate user. [10, p.80]

The first published version of Kerberos was version 4, but this is now considered
insecure, so only the Kerberos v5 system is described here. Kerberos is a trusted
third party authentication service [7, p.566]. Notice that Kerberos does not attempt
to implement the authorisation or auditing functions.

Kerberos V’s most notable use today is within Windows 2000 as its domain
authentication method of choice. Microsoft has added to the Kerberos V authentication protocol some Windows-specific additions, as well as the two missing heads of Cerberus: authorisation and auditing. [10, p.363-366, p.427] Microsoft
also supports the public-key initial authentication extension to Kerberos V, which
is currently available as an Internet Draft from the IETF.

3.5.1 Kerberos system architecture

The Kerberos system makes some assumptions about the operating environment.
These are
• Synchronised, reliable clocks
• The client computer is trusted by its user
• The security server is always on-line
• The servers are stateless
• Because of the RSA patent only symmetric cryptography is used
• The time that the user client’s password is available must be minimised
[10, p.80-81]

The Kerberos functional components are depicted below in “Figure 3.10 The
Kerberos general architecture [10, p.81]”:


Figure 3.10 The Kerberos general architecture [10, p.81]

These components are described in the following paragraphs.

3.5.1.1 The client computer
Client computers are regarded as insecure, because the user has full control over them. The client is usually a general-purpose computer with a Kerberos-enabled
OS and applications installed. [10, p.81]

3.5.1.2 The authentication server


The authentication server’s role is to decide whether a user is who he claims to be, i.e. to authenticate the user. It also acts as an exchanger, exchanging weak secrets (a user ID and password) for a strong secret (a cryptographic ticket). With this ticket,
the user can prove his identity to the ticket granting server (TGS). [10, p.81]

3.5.1.3 The ticket granting server


Once the user has obtained a ticket granting ticket (TGT) from the AS, he is able
to request an authenticator and a session key for a specific service S from the TGS.
The TGS is able to authenticate the user based on his authenticator received with
the TGT. [10, p.81-82]

3.5.1.4 The application server


The application server can provide a multitude of services to the client, once the
client is authenticated with the application server. The authentication is also performed in the opposite direction if mutual authentication is requested. The messages between them can be protected cryptographically, providing confidentiality and integrity. [10, p.82]

3.5.2 Operation of the Kerberos environment

The operation of the Kerberos V protocol suite is described in “Figure 3.11 Kerberos V authentication steps” with an illustration of the signal sequences.

1. KRB-AS-REQ, C → AS: A, TGS, RLC, NC, EKA(TC)
2. KRB-AS-REP, AS → C: A, EKAS-TGS(A, C, TGS, Ts, Te, kC-TGS), EKA(kC-TGS, Ts, Te, NC, TGS)
3. KRB-TGS-REQ, C → TGS: S, RLC(1), NC(1), EKC-TGS(C, TC(1)), EKAS-TGS(A, C, TGS, Ts, Te, kC-TGS)
4. KRB-TGS-REP, TGS → C: A, EKTGS-S(A, C, TGS, Ts(1), Te(1), kC-S), EKC-TGS(kC-S, Ts(1), Te(1), NC(1), S)
5. KRB-AP-REQ, C → S: EKTGS-S(A, C, TGS, Ts(1), Te(1), kC-S), EkC-S(C, TC(2), SNC-S)
6. KRB-AP-REP, S → C: EkC-S(TC(2), SNC-S)

Figure 3.11 Kerberos V authentication steps

When a user wants access to the application server S, he needs to authenticate to the authentication server AS with the first message exchange. In the authentication request message KRB-AS-REQ (1), the client sends the AS its identity A, the destination identity TGS, the lifetime request RLC, a nonce NC and a pre-authenticator EKA(TC).

The AS replies with the KRB-AS-REP (2) message and provides the client with a
session key package kC-TGS, and a ticket granting ticket for accessing the TGS.

After receipt of the TGT and the client-TGS session key, the client may proceed
to request a ticket for accessing server S from the TGS. This is achieved by
sending the KRB-TGS-REQ (3) message with the appropriate nonce,
timestamps, TGT and authenticator to the TGS.

The client receives the session key for the service S in the KRB-TGS-REP (4) message. With this key, the client is able to form an authenticator for the KRB-AP-REQ (5) message with the ticket obtained previously. It may also transmit a
sequence number SNC-S for use with the KRB-SAFE and KRB-PRIV messages.

The application server S responds to this with an authenticator in the KRB-AP-REP (6) message.

After the conclusion of the initial authentication, client C and application server S may negotiate confidentiality and integrity protection for the communication. This occurs by using the previously mentioned KRB-SAFE and KRB-PRIV messages. [6, p.323-340, 10, p.86-90]

3.5.3 Pitfalls of Kerberos V

There are a few shortcomings in the Kerberos V system. Some of them are listed
below [10, p.89-90]
• Kerberos V is vulnerable to password guessing attacks
• Kerberos relies on client security
• The confidentiality and integrity of Kerberos implementations are
compromised by a known attack, if both DES-CBC and DES-MAC modes are used simultaneously.
• Kerberos is still based on symmetric cryptography. Therefore, it does not
scale well to large inter-realm environments.
• Kerberos does not provide non-repudiation services
• Kerberos lacks access control features completely

3.5.4 Public-key cryptography extensions to Kerberos V

The IETF’s Kerberos working group has proposed an extension to the Kerberos V
authentication service to support the use of public-key certificates in user
authentication in the Internet draft “draft-ietf-cat-kerberos-pk-init-15” and inter-
realm authentication in the Internet draft “draft-ietf-cat-kerberos-pk-cross-08”.

3.5.4.1 The PK Init extension


PKINIT enables access to Kerberos-secured services based on initial
authentication utilising public key cryptography. PKINIT utilises standard public
key signature and encryption data formats within the standard Kerberos messages.
The basic mechanism is as follows: The user sends an AS-REQ message to the
KDC as before, except that if the user uses public key cryptography in the initial
authentication step, his certificate and a signature accompany the initial request in
the pre-authentication fields. Upon receipt of this request, the KDC verifies the
certificate and issues a ticket granting ticket (TGT) as before, except that the
encPart from the AS-REP message carrying the TGT is now encrypted utilising
either a Diffie-Hellman derived key or the user's public key. This message is
authenticated utilising the public key signature of the KDC. [10, p.426-427, 26]

3.5.4.2 The PK Cross-realm extension

The basic operation of the PKCROSS protocol is as follows:


1. The client submits a request to the local KDC for credentials pertaining to the
remote realm. This is just a typical cross realm request that may occur with or
without PKCROSS.
2. The local KDC submits a PKINIT request to the remote KDC to obtain a
"special" PKCROSS ticket. This is a standard PKINIT request, except that
PKCROSS flag (bit 9) is set in the kdc-options field in the AS_REQ. Note
that the service name in the request is for pkcross/realm@REALM instead of
krbtgt/realm@REALM.
3. The remote KDC responds as per PKINIT, except that the ticket contains a
TicketExtension, which contains policy information such as lifetime of cross
realm tickets issued by KDC_l to a client. The local KDC must reflect this
policy information in the credentials it forwards to the client. Call this ticket
XTKT_ (l,r) to indicate that this ticket is used to authenticate the local KDC to
the remote KDC.
4. The local KDC passes a ticket, TGT_(c,r) (the cross realm TGT between the
client and remote KDC), to the client. This ticket contains in its
TicketExtension field the ticket, XTKT_ (l,r), which contains the cross-realm
key. The TGT_(c,r) ticket is encrypted using the key sealed in
XTKT_(l,r). (The TicketExtension field is not encrypted.) The local KDC may
optionally include another TicketExtension type that indicates the hostname
and/or IP address for the remote KDC.
5. The client submits the request directly to the remote KDC, as before.
6. The remote KDC extracts XTKT_ (l,r) from the TicketExtension in order to
decrypt the encrypted part of TGT_(c,r). [27]

4 SINGLE SIGN-ON ARCHITECTURE
Single sign-on is a paradigm in which, by utilising authentication, authorisation and auditing functions as well as protocols for the dissemination of access control information, the client is provided with universal identification through a single authentication event.

The fundamental problem a single sign-on solution solves is that of forwarding the authentication credential from one service to another in a secure manner. This should hold true even though the client is regarded as entirely untrustworthy and as actively attacking the security measures.

This chapter attempts to clarify the different ways an S/SSO solution can be built
accompanied by a discussion of different approaches to the infrastructure when
operating in a pure WebSSO environment vs. a more traditional S/SSO
environment of heterogeneous legacy computing resources.

4.1 Different single sign-on architectures

There are various ways of accomplishing single sign-on functionality on a network of computers. The most prominent way of obtaining SSO capabilities is
by utilising a client-agent-server architecture.

The client is a distrusted component, capable of completing the authentication protocols and storing the access control ticket issued to it by the server if the authentication was successful. A typical client would be a web browser acting on behalf of the user as a user-agent.

The agent is situated near the protected service acting as a gatekeeper, consulting
the server for authentication and authorisation decisions as well as supplying it
with audit-data. The agent is a small piece of code that effectively can say ‘yes’ or
‘no’ to resource requests based on the authorisation information provided by the
server, and forward the acceptable ones to the service and the responses back to
the client.

The heart of the system is the server, which provides the back-end processing
capabilities with support for different authentication methods, user databases,
policy evaluation capabilities and audit logging and processing functions.

The three most common ways of situating the agent will be explained shortly.

4.1.1 Native plug-in SSO agent

In this model, the protected resource, i.e. the software service, is modern enough to natively support authentication method plug-ins. This enables the smooth
addition of the service into the SSO infrastructure. The concept is visualised in
“Figure 4.1 Native plug-in architecture”:


Figure 4.1 Native plug-in architecture

The process that authentication and authorisation follow in this kind of architecture is described below:

1) The resource request is received by the authentication module and it is then passed to an appropriate authentication plug-in for further processing.
2) The authentication module performs its native authentication procedure and if
it finds a valid identity match, the plug-in passes the extracted credentials to
the service with a “valid” response, indicating the status of the authentication
process.
3) After this, the application trusts the plug-in’s evaluation of the authentication, and begins serving the requests made by the client.

If the plug-in used is an SSO-enabled one, it will consult a policy database for the authenticated client’s access rights, and act as a policy enforcer between the
protected service and the client. It also has to be able to propagate the
authorisation credentials from one service to another in order to enable SSO
functionality. [28]

What distinguishes this from the next method is the tight binding with a specific application: the agent integrates directly into the application’s internal authentication and authorisation logic and API as a plug-in.

4.1.2 Helper application agent

This differs from the previous agent with regard to the location of the agent in the system. In this instance, the agent is situated in the same physical computer, but it is not as indigenous to the application as a plug-in. It has to take over the application’s data connection paths and redirect them through itself to be able to
intercept the data connections before they reach the application. After
interception, the agent operates in much the same way as the plug-in agent does,
with the exception of how the authentication data is transmitted to the application.
This difference is depicted below in “Figure 4.2 External agent architecture”:

Figure 4.2 External agent architecture

The most notable difference is the requirement for the ability to pass
authentication information to the application. This approach is usually used with applications that are too old to support plug-ins but are hosted on a common enough OS platform that an agent is available for it. Usually, the external authentication data passing mechanism employed uses the native basic authentication facility, with the agent acting on behalf of the client. [29]

4.1.3 Reverse proxy architecture

With this method shown in “Figure 4.3 Reverse proxy architecture” the
functionality of the agent is situated in an external computer that routes traffic
from the public network to the private network, which is shared between the
proxy and the service. The logic of this model differs only slightly from the above
external agent model. Nevertheless, it has quite a different implementation
structure. [30]

Figure 4.3 Reverse proxy architecture

This solution is required if the service cannot use plug-ins, or if no agent exists that runs on the service's OS platform. In this case, the agent runs in its own computing environment, masquerading as a server to the client and as a client to the service, communicating with its peers only via the networks. The public side of the proxy agent is what the outside world sees, and the real services are mapped to its virtual directory hierarchy.
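
To make the proxy-agent idea concrete, the following minimal sketch (in Python, with purely illustrative host names and an oversimplified access check) shows how such an agent could accept client requests, reject unauthorised ones at the proxy and forward acceptable ones to the hidden back-end service; a real product would of course validate the ticket cryptographically, handle errors and proxy all HTTP methods.

# A minimal reverse-proxy agent sketch (hypothetical names and checks);
# it accepts client requests, performs an access check, and only then
# forwards the request to the real back-end service. Error handling omitted.
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKEND = "http://internal-service.example.local:8080"   # hidden private-network service

def is_authorised(handler):
    """Placeholder policy check: accept requests carrying a session cookie."""
    cookie = handler.headers.get("Cookie", "")
    return "SSO_SESSION=" in cookie            # a real agent would validate the ticket

class ProxyAgent(BaseHTTPRequestHandler):
    def do_GET(self):
        if not is_authorised(self):
            self.send_response(401)            # rejected at the proxy; the service never sees it
            self.end_headers()
            self.wfile.write(b"Authentication required\n")
            return
        # Masquerade as a client towards the protected service.
        with urllib.request.urlopen(BACKEND + self.path) as resp:
            body = resp.read()
        self.send_response(resp.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                 # masquerade as the server towards the client

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8443), ProxyAgent).serve_forever()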

4.2 Reduced sign-on architectures

Reduced sign-on means that an RSO application manages credentials on behalf of the user and presents them when prompted by the various services the client wishes to access. Mechanically this differs only slightly from single sign-on, but the difference is fundamental. [31]

The fundamental difference between these two is that in a pure single sign-on model the chosen authentication method is used for all services, whereas in RSO, the agent residing in the client's computer may know of multiple different authentication methods and apply them when needed. To clarify this distinction, ”Figure 4.4 Reduced sign-on architecture” is provided below:

[Figure content: the RSO agent in the client holds stored basic, SecurID and SSO credentials and presents the appropriate one to each protected service through that service's own authentication plug-in interface, after which each service passes the validated identity to its application via internal interfaces.]

Figure 4.4 Reduced sign-on architecture

The RSO agent is located in the client, not the service. This approach is quite clumsy, as every end-system has to be explicitly supported by the client agent for this architecture to be usable.

In this example, using a different authentication method for every system emphasises how this differs from proper SSO solutions. The user authenticates himself to the RSO agent once, and the RSO agent authenticates him to all subsequently accessed protected resources in the RSO domain.
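
The following minimal sketch illustrates this client-side arrangement; the service names, credential formats and master password check are purely illustrative and stand in for the real, securely stored credential database of an RSO product.

# A client-side RSO agent sketch: the user unlocks the store once, after
# which the agent replays a per-service credential for every protected
# service it knows about (service names and credentials are illustrative).
from dataclasses import dataclass

@dataclass
class Credential:
    method: str        # "basic", "securid", "pki", ...
    secret: str        # password, token reference, key handle, ...

class RsoAgent:
    def __init__(self, master_password):
        self._unlocked = master_password == "correct horse"   # the single user sign-on
        self._store = {
            "mail.example.com": Credential("basic", "alice:p4ssw0rd"),
            "erp.example.com": Credential("securid", "token-serial-0042"),
            "intranet.example.com": Credential("pki", "slot0:cert1"),
        }

    def authenticate_to(self, service):
        if not self._unlocked:
            raise PermissionError("RSO store is locked")
        try:
            return self._store[service]        # replayed on behalf of the user
        except KeyError:
            # The weakness of RSO: services unknown to the agent need explicit support.
            raise LookupError(f"no stored credential for {service}")

agent = RsoAgent("correct horse")
print(agent.authenticate_to("erp.example.com"))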

In an SSO environment, the user authenticates to a central authority and receives a universal (in the SSO domain) credential of identity that all the services are able to check.

4.3 Single sign-on on the web

The web environment is a natural platform for native plug-in agents since most
modern web-servers support pluggable authentication modules and run on a few
OS platforms that are generally well supported. In addition, the web paradigm
provides native support for convenient identity ticket transfer from one service to
another via the cookie mechanism.

4.3.1 Credential passing methods

When the user requests a www-page via his browser, he in effect makes an HTTP request for the resource defined by the URL in his browser. When this request is received at the www-server, it begins the process described below.

If the server is not single sign-on enabled, it simply sends the requested data back to the requestor when the request reaches the server's data port. There might be some active access controls, which may tell the server whether the requestor is permitted to access the requested data based on identity, IP address or hostname. Apart from this, the server has no way of knowing whether the user has previously been authenticated and whether re-identification could therefore be skipped before sending the requested data.

[Figure content: the user enters his authentication information at the personal terminal; the browser requests a resource from Agent/Server #1, which requests authentication, generates an authorisation cookie against the authentication and authorisation database and returns the resource; subsequent requests to Agent/Server #2 are served after cookie validation, and the authorisation cookie is renewed.]

Figure 4.5 Cookie handling in the WebSSO environment

With single sign-on capabilities in the server, the user is authenticated after his initial data request and a single sign-on ticket is either issued to the user's browser in the form of a cookie or encoded into the URLs of the html-document sent to him in response to his initial request. This is clarified in “Figure 4.5 Cookie handling in the WebSSO environment”.

This ticket usually contains an encrypted certificate of validity that the www-
server or the SSO agent software on that server checks every time the user makes
additional requests for data from the server.

With the cookie mechanism, any server in the same cookie-domain can always
check the content of any cookie previously issued by any one of the other servers,
automatically re-authenticating the user for every subsequent request without user
intervention.
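
As an illustration of such a ticket, the following sketch issues and validates an integrity-protected cookie value using an HMAC over the user identity and an expiry time, under the assumption that the signing key is shared only by the S/SSO servers; a commercial product would additionally encrypt the payload and use its own ticket format.

# A sketch of issuing and validating a signed SSO ticket carried in a cookie.
# The shared key is known only to the S/SSO agents; the payload is protected
# against tampering by an HMAC (a real product would also encrypt it).
import base64, hashlib, hmac, time

SHARED_KEY = b"agents-only-secret"            # distributed to every agent in the realm
TICKET_LIFETIME = 15 * 60                     # seconds until re-authentication

def issue_ticket(user_dn):
    payload = f"{user_dn}|{int(time.time()) + TICKET_LIFETIME}".encode()
    mac = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload + b"|" + base64.b64encode(mac)).decode()

def validate_ticket(ticket):
    """Return the user DN if the ticket is genuine and unexpired, otherwise None."""
    try:
        payload, mac_b64 = base64.urlsafe_b64decode(ticket).rsplit(b"|", 1)
        user_dn, expiry = payload.decode().split("|")
    except (ValueError, UnicodeDecodeError):
        return None
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(base64.b64decode(mac_b64), expected):
        return None                           # forged or corrupted cookie
    if time.time() > int(expiry):
        return None                           # expired: force re-authentication
    return user_dn

cookie_value = issue_ticket("cn=Jack Smith,o=Elisa,c=FI")
print(validate_ticket(cookie_value))          # any agent sharing the key can check this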

With URL-embedded tickets, the cryptographic ticket is embedded in every hyperlink of the initial page and of all subsequent pages, so the ticket is passed to the server as part of the data request. This is not as flexible as the cookie-based method because it only permits the user a single sign-on to those resources reachable via the encoded URLs on the page. Large-scale SSO cannot be built on these tickets, because if the user visits another site outside the SSO realm and then returns to a resource in the realm, the authentication information is lost.

Either these tickets have a certain predefined period of validity after which time
the user is required to re-authenticate, or the server may automatically refresh the
ticket by reissuing the cookie or the URL encoding.

4.3.2 Simple WebSSO scenario

The authentication method can vary from one implementation to another, and it
may be anything from simple user-id/password combinations to retinal scan based
biometric authentication information. The only limit is imagination. For
simplicity’s sake, only two protected servers are used in the example described in
“Figure 4.6 Illustration of multi-host SSO in the web environment”.

[Figure content: the client's initial request and authentication traffic go to a web server with an SSO agent; the agent exchanges authentication and authorisation queries and responses with the policy server, which holds the policy and user database; single sign-on to a second agent-protected web server requires only an authentication check, after which additional requests and responses flow directly.]

Figure 4.6 Illustration of multi-host SSO in the web environment

When a client requests access to a protected resource, his authentication status is first checked, and if it is found lacking, he is redirected to the authentication service by the SSO agent. When the authentication procedure is successfully completed, the SSO service issues an access ticket, which is relayed to the client by the SSO agent, and the original request is served only if the client's authorisation is sufficient.

All subsequent access requests are served based on the initial status- and
authorisation check. Usually, the ticket is valid for a specified period and when it
expires, the client must re-authenticate. This is not normally a problem as the
ticket is renewed automatically if the session is actively in use, and therefore, it
does not visibly expire while actively browsing.

The access policy is maintained by the policy service, and usually, the policy
check results are cached at the agent temporarily to lighten the load of the policy
server.

With the single sign-on framework acting in the background, this scheme can
easily be extended to cover hundreds of services by installing these agents on
every www-server that is to be protected centrally.

4.3.3 Rejection points

There are two obvious points of rejection: authentication and authorisation. One must understand that authentication and authorisation are two entirely different things. One may be a legitimate user of one part of a www-service, while at the same time being unauthorised to use another. All S/SSO products that were tested implement authentication and authorisation as two separate services.

4.4 Generalised single sign-on model

In the web environment, SSO is easier to implement than in most other environments because the web paradigm includes the concept of cookies, which act as the perfect ticket issuance/storage mechanism.

For a more general implementation of SSO, a complete array of plug-ins and helper applications must be developed for every platform and application to bring these non-native services to existing applications.

Some products on the market, such as IBM Global Sign-On and RSA Keon Desktop, implement SSO capabilities across various systems. These are unfortunately very dependent on an array of natively supported applications, and are therefore not very flexible to use.

One of the best-integrated SSO-like authentication mechanisms is the Microsoft Windows 2000 domain authentication mechanism, which uses Kerberos V as the basis for network authentication with rudimentary support for PKI-enabled applications. It is unfortunate that the Windows 2000 model is bound too tightly to this operating system for wider acceptance in a heterogeneous computing environment. [10, p.363-366]

5 RISK-ANALYSIS BASED REQUIREMENT
SPECIFICATION
In this chapter, the requirements for an S/SSO infrastructure are developed by the author and the Elisa Research Centre/IT Security R&D team. The Research Centre's opinions are used for guidance by Elisa's Data Administration Division in their decision-making and evaluation process.

This requirement specification is built from a risk-analysis perspective to provide the reader with a greater understanding of the risks associated with a requirement item.

The deployment environment of this infrastructure is a multi-site operation with a completely heterogeneous client-server environment in the web-arena. Because of this, some requirements become more crucial than others. Examples of these are both geographical and logical decentralisation, incremental scalability and requirements enforced by law on telecom operators such as the obligation of diligence and the Population Registration Act.

Because this thesis concentrates on the security of the SSO products themselves, it
is natural to point out here that even the best software security cannot compensate
for non-existent physical security measures. Therefore, I hope the reader
appreciates that something as critical as a centralised authentication and
authorisation system must be physically protected with at least the same care as
other critical systems like payroll computers and e-commerce machinery. In this
thesis, proper location security is assumed.

A proposal has been discussed by the EU parliament to ban or at least limit the
use of cookies [32], which might impose severe limitations on the usability of
current S/SSO solutions. In anticipation of this legislation, additional
requirements for alternative identity dissemination methods were introduced.

5.1 Raw requirements

The raw requirement list is presented in the following table and the risks related to
each requirement are separately discussed in the following subchapters.

Raw requirement specification

Encrypted communication paths
Secure key management
Cryptographic access tickets
Alternative method of ticketing
Pluggable authentication method support
Strong authentication support
Transaction non-repudiation
Fine grained access policy enforcement
Transaction atomicity
Component redundancy
Disaster recovery
Incremental scalability
Load balancing features
Multi-platform availability
Centralised management framework integration
Delegated administration
External application server support
Future support for global authenticators
Remote administration
Graphical UI for administration
Database locations and supported formats
Table 5.1 Raw requirement specification

These requirements cover the most general needs of a networked authentication and authorisation system. A much more complex model could easily be compiled, but for this task these requirements were sufficient.

5.2 Confidentiality and integrity

This subchapter discusses the justification of each raw requirement shown above
which is related to confidentiality and integrity of data transferred over the
network or stored on hosts.

5.2.1 Encrypted communication paths

Risk: An attacker is able to forge, modify or inject false information into the
AAA-process while the data is in transit.

Justification: This becomes a crucial point of failure if the AAA-infrastructure needs to use insecure communications over the Internet to operate.

Solution: All communication paths between the S/SSO system's components, and between the clients and the S/SSO system, must be encrypted using methods that guarantee both confidentiality and integrity.

5.2.2 Secure key management

Risk: An attacker is able to gain access credentials because of insecure key handling in the SSO framework.

Justification: If the encryption keys are compromised, it will ruin the security of
the entire platform.

Solution: All key handling should be done using open and proven key
management methods.

5.2.3 Cryptographic access tickets

Risk: An attacker is able to forge his access ticket in his cookie store to reflect
elevated privileges.

Justification: All data that is stored on a client's computer should be regarded as having originated from an untrustworthy source and being subject to limitless tampering on behalf of the client.

Solution: All cookies submitted to the client for S/SSO credential passing must be
encrypted with strong algorithms and a key known only to the S/SSO system.
Preferably, this key should be randomly selected for every session to minimise the
risk of brute force and dictionary attacks on the key.

5.2.4 Alternative method of ticketing

Risk: EU legislation bans the use of cookies.

Justification: There is currently a proposal [32] before the EU parliament to ban the use of “secret” cookies in European law. If the Finnish lawmakers implement this act in an inappropriate manner, cookies may be banned altogether.

Solution: An alternative method for credential passing is required.

5.2.5 Pluggable authentication method support

Risk: An authentication method that is secure today might become vulnerable in the future.

Justification: If this risk materialises, it should be easy to switch to another secure authentication method without affecting the entire infrastructure or disrupting the service. Re-customising the applications that use this infrastructure is an expensive process if the authentication API is not general enough to support pluggable back-end authentication methods.

Solution: Dynamic, pluggable authentication methods in the back-end system, as well as open APIs for custom authentication method implementation, should be available from the manufacturer.
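
The following sketch illustrates the kind of pluggable back-end interface this requirement calls for: each authentication method is a self-contained plug-in behind a common interface, so a weakened method can be replaced without touching the calling infrastructure. The class names, the example user data and the trusted DN list are illustrative only.

# A sketch of a pluggable authentication back-end: every method implements
# the same small interface, so methods can be added or retired without
# changing the infrastructure that calls them (names are illustrative).
from abc import ABC, abstractmethod

class AuthPlugin(ABC):
    @abstractmethod
    def authenticate(self, credentials):
        """Return a validated user identity, or None on failure."""

class BasicAuthPlugin(AuthPlugin):
    USERS = {"alice": "s3cret"}               # stand-in for a real user directory
    def authenticate(self, credentials):
        user, password = credentials.get("user"), credentials.get("password")
        return user if self.USERS.get(user) == password else None

class CertificateAuthPlugin(AuthPlugin):
    TRUSTED_DNS = {"cn=Jack Smith,o=Elisa,c=FI"}
    def authenticate(self, credentials):
        dn = credentials.get("subject_dn")    # assumed to be SSL-verified upstream
        return dn if dn in self.TRUSTED_DNS else None

REGISTRY = {
    "basic": BasicAuthPlugin(),
    "x509": CertificateAuthPlugin(),
}

def authenticate(method, credentials):
    plugin = REGISTRY.get(method)
    if plugin is None:
        raise ValueError(f"unknown authentication method: {method}")
    return plugin.authenticate(credentials)

print(authenticate("basic", {"user": "alice", "password": "s3cret"}))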

5.2.6 Strong authentication support

Risk: An attacker is able to penetrate the authentication methods either by brute force, dictionary attacks or other penetration methods.

Justification: If the authentication method is susceptible to attack, forgery or impersonation, the whole infrastructure becomes useless since it relies on the imperviousness of the underlying authentication to provide precise identity information.

Solution: The product must support strong two-factor authentication methods, which will foil most conventional attacks on authentication methods like guessing passwords, social engineering, brute force and dictionary attacks.

5.2.7 Transaction non-repudiation

Risk: Without non-repudiation, a rogue customer can claim not to have conducted
the contested transaction.

57
Justification: When implementing an S/SSO infrastructure like this, it is likely to
have commercial services rely on the identification framework provided. When
money is involved in the services, fraudulent usage will always follow.

Solution: This risk can be mitigated by implementing compulsory non-repudiation services into these kinds of services to protect both the S/SSO and the commercial service provider.

5.2.8 Fine grained access policy enforcement

Risk: A valid user is able to gain unauthorised access to data.

Justification: Overly general object controls may result in over-privileging users who need access to some of the controlled information, but not to everything the access privilege grants.

Solution: Fine-grained access policy enforcement is required in order to build access groups of just the right size with a 'least privilege' approach. A sufficiently granular approach could reach down to the level of individual objects, e.g. files or pictures, but not necessarily all the way down to paragraph or method levels.
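
The following sketch illustrates least-privilege, object-level enforcement of the kind required: each protected object pattern is granted to exactly the group that needs it, the most specific rule wins, and anything not explicitly granted is denied. The paths, groups and user entry are illustrative only.

# A sketch of fine-grained, least-privilege policy enforcement: access is
# granted per individual object (here, URL paths) to the smallest possible
# group, and everything not explicitly granted is denied by default.
import fnmatch

POLICY = [                                    # most specific pattern listed first
    ("/extranet/projectX/admin/*", {"projectX-leader"}),
    ("/extranet/projectX/*",       {"projectX-members", "projectX-leader"}),
    ("/extranet/public/*",         {"everyone"}),
]

GROUPS = {
    "cn=Jack Smith,o=Elisa,c=FI": {"everyone", "projectX-members"},
}

def is_allowed(user_dn, resource):
    user_groups = GROUPS.get(user_dn, {"everyone"})
    for pattern, allowed in POLICY:
        if fnmatch.fnmatch(resource, pattern):
            return bool(user_groups & allowed)
    return False                              # no matching rule: default deny

print(is_allowed("cn=Jack Smith,o=Elisa,c=FI", "/extranet/projectX/report.pdf"))  # True
print(is_allowed("cn=Jack Smith,o=Elisa,c=FI", "/extranet/projectX/admin/acl"))   # False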

5.3 Availability

This subchapter discusses the justification of each raw requirement above which
is related to availability features of the authentication framework. In this instance,
“good availability” should be considered to be in the 99.999% annual reliability
window.

5.3.1 Transaction atomicity

Risk: 1) An attacker is capable of disrupting an AAA-transaction and forcing the system into an undefined state. 2) The S/SSO infrastructure crashes for some other reason and a partly committed change to its configuration is stored in the database, resulting in an immediate crash upon restart.

Justification: By being able to drive the system into an undefined state, it might
be possible to exploit the resulting state for access permission elevation or other
misuse of the system. Additionally, if the system fails for any reason atomic
transactions guarantee that the databases are always in a well-defined state.

Solution: All database transactions must be atomic.

5.3.2 Component redundancy

Risk: A component failure will result in denial of service.

Justification: The failure of the S/SSO system will cause system-wide denial of service to all users of the system. If a single component failure can render the entire system useless, it must be possible to add redundancy to this component to avoid the huge risk involved in its failure. Building redundant computers is cheap compared to having all corporate operations halted because of the resulting denial of service.

Solution: Components that are in the high-risk area of functionality must be capable of operating in a clustered fail-over environment. With multiple concurrent systems with fail-over facilities, the risk of complete failure of that subsystem is greatly reduced.

5.3.3 Disaster recovery

Risk: A computing facility is struck by a natural catastrophe or a human-caused incident, rendering it unavailable.

Justification: If the whole system is located in the same physical facility, a risk
exists that the entire platform will be wiped out in one great accident, resulting in
denial of service. Elisa’s Business and Disaster Recovery Plan requires at least a
“Hot Site”-backup system and strongly suggests having fully redundant systems
in geographically separate locations.

Solution: It should be possible to divide the infrastructure into at least two different physical locations for added security and availability.

5.3.4 Incremental scalability

Risk: The number of transactions outgrows the system's capabilities. In practice, this could mean that a single S/SSO infrastructure must be able to handle at least the population of Finland, ~5.000.000 clients, in expanded configurations.

Justification: If the S/SSO system is a commercial success, the user base might
begin to grow at an unexpected rate, possibly resulting in partial denial of service,
because the server back-end simply cannot process the numerous transactions.

Solution: The product must have a well-defined, incremental growth path to extreme user bases and transaction counts to satisfy the requirements for a large user base. Incremental scalability means that the S/SSO platform must be capable of accepting new modules into the existing framework without disrupting the current services on-line.

5.3.5 Load balancing features

Risk: The infrastructure is unable to handle all requests on time.

Justification: When the load on the servers grows, it is more economical to have
all servers operate in a load-balancing configuration instead of having dedicated
backup systems waiting idle for the operational system to fail. This way the best
of both worlds is realised in having on-line backup systems available and at the
same time optimising the load distribution of the infrastructure.

Solution: The servers must be able to operate in a load-balancing configuration, as must the fail-over nodes themselves.

5.3.6 Multi-platform availability

Risk: The solution's platform (OS or hardware) becomes obsolete or its vendor goes out of business, leaving Elisa Communications without support for future growth and services.

Justification: When tied to a single platform, it is possible that a bankrupt vendor could harm the overall operation of the production system. In another scenario, there might be substantial cost savings available if there are alternatives like the free operating systems (Linux and FreeBSD vs. Microsoft Windows and IBM AIX) or cheaper hardware platforms (Intel vs. Sun).

Solution: The product is required to support these cheaper and more widely available platforms before it is accepted into production. In addition, the roadmaps for future product lines should be checked to ensure later supportability.

5.4 Accountability and audit

This subchapter discusses the justification of each raw requirement above which is related to data collection mechanisms that provide adequate data for both internal and external audit purposes.

5.4.1 Centralised management framework integration

Risk: 1) The product uses its own administration tools and user databases and consequently goes out of sync with the main user databases and policies. This also generates extra administration costs by requiring additional administrative staff.

2) If support for the required management infrastructure is absent, it will result in high costs for the customisation work needed to utilise the framework.

3) Becoming part of a centralised management framework also brings unexpected security considerations into the S/SSO system's overall security model, because the framework must also be audited for possible flaws and misinteractions with the S/SSO infrastructure.

Justification: It is unnecessarily complicated to keep two concurrent infrastructures in synchronisation with each other's users, policies and objects. This might result in needless delays in the propagation of policy or user data to both systems if they do not share databases with each other. Sharing also enables the use of centralised audit facilities in the management framework.

Solution: There should be a plug-in to the centralised management framework for administration of the S/SSO infrastructure with its user- and policy-management tools. In addition, the interaction of these two components needs to be audited, and the security of the management framework should be considered in conjunction with the S/SSO security model evaluation.

5.4.2 Delegated administration

Risk: The administrator has too much power and exploits that power maliciously.

Justification: It has been discovered over the years that too much power in too few hands can wreak havoc on the most secure systems. This has been the case in the UNIX and Windows operating systems and has resulted in the ideal of delegated administrative privileges, where no single person has complete power over the system. This is also called duty separation, i.e. the administrator who can modify account information cannot modify the audit trail of those modifications and vice versa.

It can also provide new business opportunities by enabling complex owner-bearer-operator-user relationships and role separation. This affords possibilities of providing the platform as an ASP-solution to customers.

Solution: Require delegated administration capabilities to be supported by the
products. The minimal delegation granularity should be at the user role level, but
a more granular approach would be appreciated.

5.5 Other features

This subchapter discusses the justification of each raw requirement shown above which does not fit into any of the preceding categories.

5.5.1 External application server support

Risk: The solution is too limited, since it can only be used to interoperate with
web servers, and not the back-end application servers.

Justification: If the solution does not include methods to pass the authentication
information to the back-end processing servers, it will limit the application
programmer’s possibilities. In addition, it might trigger unexpected failures in the
back-end applications by interfering in their communication with the web servers.

Solution: Require native support for at least IBM WebSphere, Bea WebLogic,
and Allaire ColdFusion.

5.5.2 Future support for global authenticators

Risk: Ubiquitous mobile computing requires a network-based authentication method, and it will be based on certificates on SWIM-cards. As the S/SSO-infrastructure is going to be used as this authentication service provider, it has to support such duty separation between the mobile handset and the network.

Justification: As the telecommunication market converges with the mobile and PDA markets, this will bring immense pressure to provide global authentication services into every personal communicator device.

Solution: Support plans for both Microsoft Passport and Liberty Alliance global authentication framework integration should be made available. Plans for how the S/SSO product will position itself in this global market – as a service provider itself or as a proxy for these global players – have to be evaluated.

5.5.3 Remote administration

Risk: The system needs attention from the administrator and he is unavailable to
see to it locally.

Justification: Usually, the daily administration does not require console activity. Simple parameter changes should be doable remotely with suitable tools, because it is uneconomical to dispatch administrators to every remote system console. When a proper remote management system is in place, a few administrators can handle the entire system's administrative tasks from a centralised location.

Solution: The product itself provides a user interface that can be securely
accessed from a remote location e.g. a Java-based administration console or a web
front-end to the administration software.

5.5.4 Graphical UI for administration

Risk: The administrator does not understand the logic of the UI provided to him.

Justification: Many software products fail to provide a readable, clear and usable UI. If the UI is too difficult to learn, this usually results in a reluctance to perform the necessary administration tasks on time.

Solution: Make sure the UI is reasonably simple and straightforward for the
administration personnel to understand and use. If the UI is too difficult to use,
collaboration with the solution provider is needed to make it more suitable for the
administrators.

5.5.5 Database locations and supported formats

Risk: The product stores data in a strange format that cannot be understood by
any other system.

Justification: If a product uses a proprietary database for storing its critical information, it could complicate or even prevent the backup process and sharing of the data.

Solution: Require that the system either uses open database access methods like
LDAP or ODBC or has good import/export tools available for automated database
synchronisation between different systems in a common format.

5.6 Summary of requirements

These requirements are mainly based on current issues, the experiences of the
author and his knowledge of general systems failure and security aspects. The
grouping follows the traditional computer security categorisation into
confidentiality/integrity, availability, accountability/audit and miscellaneous
features. This requirement list may need to be augmented after the laboratory
benchmarking to test the requirements in practice.

The most critical feature classes in this list are firstly the confidentiality/integrity requirements, because without both, the entire operation is in danger, and secondly the availability features, as they dictate how trustworthy a service is. The other requirement classes are less sensitive, because the basic operation is secured by the above requirements.

6 COMMERCIAL SINGLE SIGN-ON PRODUCT
SURVEY
In this chapter, many single sign-on products currently available on the market are enumerated, and the way in which they implement capabilities similar to single sign-on is briefly commented on. This part of the study is based entirely on marketing data from the vendors' web sites, so no guarantees as to the accuracy of the claims can be given by the author.

These products were chosen from the results of an extensive web search originally
conducted in December 2000, and updated in January 2002. More than half of the
original products have been withdrawn from the market or have since been
acquired by a rival company. Therefore, only ten currently available products are
introduced in random order below.

6.1 Computer Associates International: eTrust Single Sign-On

This single sign-on module was previously known as the Unicenter TNG Single
Sign-On option. It provides single sign-on functionality to both web and legacy
environments, and is part of the eTrust family of security management products
from CAI. [33]

6.2 CyberSafe: TrustBroker

CyberSafe’s TrustBroker is a legacy single sign-on product that supports Windows NT/2000 and UNIX login functionalities. It includes support for basic, Kerberos and PKI-based authentication methods and it is expandable via GSS-API. UNIX systems supported by the product include Solaris, HP-UX, Tru64, AIX and MVS.

Web single sign-on functionality could be implemented by customising the product with the supplied WebSDK. [34]

6.3 DataLynx: Guardian

Guardian is a reduced sign-on solution which provides password synchronisation across multiple operating systems and applications. The main supported platforms are UNIX and Windows NT. [35]

6.4 Entrust: GetAccess

GetAccess is a single sign-on solution for the web arena. It provides single sign-
on support with both basic and strong authentication for web clients. It is available
for Windows NT/2000 and Solaris on the servers and any web browser for the
client. This product was previously known as EnCommerce Get Access. [36]

6.5 Evidian: AccessMaster SSO

AccessMaster SSO is a reduced sign-on solution which stores and controls the
multiple passwords on behalf of the user. The user is automatically authenticated
to all systems he accesses by the RSO agent. It supports basic and smart card
authentication to the client desktop agent. It supports Windows 9x/NT/2000,
Solaris, AIX and Linux login automation. It was originally known as BullSoft
AccessMaster. [37]

6.6 Tivoli: SecureWay Policy Director

Policy Director is a reverse proxy type web single sign-on solution. It provides
virtual directory mapping of resources at the proxy level, and enables centralised
user management via either its own management console or integration with
Tivoli User Manager of the Tivoli Framework. Supported authentication methods
are basic, DCE, token and smart card authentication. It is supported on Windows
NT/2000, Solaris and AIX platforms. A standard web browser is needed for the
client environment. [38]

6.7 Netegrity: SiteMinder

SiteMinder is a web single sign-on solution which provides authentication and authorisation services to web and application servers. Supported authentication methods include basic, form-, token-based and smart card authentication. It is supported on Windows NT/2000 and Solaris. Any cookie-enabled web browser is required as the client environment. [39]

6.8 RSA: ClearTrust

ClearTrust is a web single sign-on solution, which was previously known as Securant ClearTrust. It is an agent-server model based access control solution, and supports basic, token and smart card authentication. It is supported on Windows NT/2000, HP-UX, Solaris, AIX and Tru64 platforms. [40]

6.9 Proginet: SecurPass Sync

SecurPass is a reduced sign-on solution for password synchronisation between Windows NT/2000 and UNIX environments. It supports Windows 9x/NT/2000, Solaris, HP-UX, AIX, VMS, OS/390, AS/400 and NetWare. [41]

6.10 Unisys: Single Point Security

SPS is more of a toolkit approach to single sign-on, but it still provides both
legacy and web single sign-on capabilities. It supports basic, token and smart card
authentication with integral support for a wealth of different user databases. SPS
is available on Windows NT/2000, VMS, Netware, Tandem Himalaya, Tru64,
AIX, HP-UX and Solaris. [42]

6.11 Feature summary of surveyed products

In the following “Table 6.1 Supported features in surveyed products”, the above
feature descriptions are summarised in tabular form for easier feature comparison.

Supported single sign-on methods: RSO, Legacy SSO, Web SSO
Supported authentication methods: Basic Auth, Token Auth, Smart Card Auth
Supported operating systems: Windows NT/2000, Solaris, Other Unix, Netware

Products (each “X” marks a supported feature):
eTrust SSO X X X X X X X X
TrustBroker X X X X X X X X
Guardian X X X X X X
GetAccess X X X X X X
Access Master X X X X X X
Policy Director X X X X X X X
SiteMinder X X X X X X X
ClearTrust X X X X X X X
SecurPass Sync X X X X X X
SPS X X X X X X X X X
Table 6.1 Supported features in surveyed products

From this table one can see that there are only a few full-featured products in this line-up. The most interesting products concerning WebSSO and smart card support are Policy Director, ClearTrust and SiteMinder. In addition, very feature-rich products like eTrust SSO, TrustBroker and SPS are listed, but they are either prohibitively expensive or much too complex for the current needs of a web single sign-on and access control environment. The other products are of no interest in this thesis, because they only support a subset of the mandatory features identified previously.

6.12 Selection process of the products to be tested


Based on the above discussion of interesting products and their claimed features,
several vendors were contacted for product presentations, talks and further
clarification of the claimed functionalities each product advertises in its marketing
materials.

After a thorough evaluation of these products based on all collected data and
vendor impressions, three products were chosen for actual laboratory evaluations.
These were RSA ClearTrust, Netegrity SiteMinder and IBM Policy Director.

From the products tested, the winner is evaluated below in chapter 7 “Evaluation Of The Selected Product”. The other two were lacking in either their certificate support or their architecture: ClearTrust claimed to support certificates while actually using them only as textual containers for usernames, and Policy Director is tied to the reverse proxy architecture, so it would not be versatile enough for our application.

7 EVALUATION OF THE SELECTED PRODUCT
After evaluating the three products mentioned above, Netegrity SiteMinder was chosen as the example evaluation product because of its merits. Its architecture, user interface and operation are described in detail, and evaluated as an example of an S/SSO product according to the criteria defined in chapter 5.

7.1 Evaluation background

The evaluated service platform is to be deployed into the Elisa Research Centre's extranet setting, with both internal and third-party clients accessing the same information. It is therefore crucial that the site's access control and policies are of the highest quality and that there is clear role separation of administrative duties.

The basis of this evaluation lies in balancing open access for extranet clients against timely information publication to these external clients. It must be possible to store public, confidential and secret documents on the extranet, because the external partners are involved in the projects to different degrees: some partners have only cursory access to old results, while others are active participants in research projects.

There is a clear need to delegate administration, because there are many projects
running concurrently and each project leader needs to be able to add to and
remove users from his project access group.

7.2 Netegrity SiteMinder 4.51

Netegrity SiteMinder (SM) was tested in the Research Centre’s laboratory. The
main platform for testing was the Solaris platform running on Sun hardware.

SiteMinder is the most versatile product with regard to architectural possibilities that range from fully distributed to reverse proxy operation. Typically, SM operates in a native-agent mode with clustered policy services providing fail-over capabilities.

In this sub-chapter, SiteMinder’s features, components, architecture and
operations are described. In later sub-chapters, the administration application is
briefly explained and the test bench setup is discussed. Finally, SiteMinder is
evaluated against the requirement specification introduced in chapter 5, and
problems encountered during the tests are described in detail.

7.2.1 Supported platforms

Netegrity SiteMinder is supported on the following server platforms i.e. the policy
server component can be run on Windows NT/2000 or Solaris.

The web agent platforms include [39]:


• MS Internet Information Server on Windows NT/2000
• Netscape Server on Windows NT/2000, Solaris, HP-UX and AIX
• Apache on Linux, Solaris and HP-UX
• Domino on Solaris and HP-UX
• IBM HTTP on AIX

Supported user directories include [39]:


• iPlanet Directory Server
• Oracle Directory Server
• Microsoft Active Directory
• Novell NDS eDirectory
• IBM SecureWay Directory
• Siemens DirX
• Critical Path
• MS SQL Server
• Oracle
• Windows NT Domain Database

7.2.2 SiteMinders functional components

SiteMinder consists of a few major components. These are


• The policy server (authentication, authorisation, audit, administration)
• The policy store (directory)
• The user database (directory)
• Web- and affiliate agents (web- and application servers)
• External authentication services (custom authentication schemes)

[Figure content: web agents on the web servers communicate with the authentication and authorisation servers of the policy server; the policy server side also comprises the administration and audit servers, the audit database, the user directory, the policy store and optional external authentication services.]

Figure 7.1 Netegrity SiteMinder components and interactions

The policy server component provides four distinct services to the SiteMinder
application agents: authentication, authorisation, audit and administration
services. All of these services are provided by separate daemons and at least
authentication and authorisation need to be present for SM to be operational. The
relationships and components are shown in “Figure 7.1 Netegrity SiteMinder
components and interactions” above.

In addition to these core services, SiteMinder requires two directory repositories – the user data and policy data repositories. These are usually obtained by using an LDAP server with suitable schemas installed to support the SiteMinder structures. It is also possible to use databases such as Oracle or MS SQL Server for the user and policy stores.

With both the core services and databases on-line, the agents can be distributed to
all services that require protection. The standard agents support all of the
aforementioned web server and operating system combinations. It is also possible
to configure an Apache web server running on Solaris to act as a reverse proxy for
the protection of such services that are not running on supported platforms.

7.2.3 SiteMinder’s architecture

SiteMinder’s architecture is quite simple and powerful. The major components, depicted below in “Figure 7.2 The SiteMinder architecture overview”, are briefly explained below:

[Figure content: the S/SSO user accesses web servers with agents; the agents consult the policy server cluster for authentication and authorisation, which in turn uses the policy store, the user directories and optional external authentication logic.]

Figure 7.2 The SiteMinder architecture overview

SiteMinder’s architecture consists of a few central components – the policy server, the user and policy directories, external authentication logics and the agents. Its power lies in the extensive use of external directories and a powerful policy description language.

When a user tries to access the protected resource on a web server, he actually
communicates with the SiteMinder web agent. The web agent takes care of all
authentication and authorisation procedures on behalf of the web server and
application servers. If the resource requires authentication, the web agent prompts the user with the desired authentication dialogs and carries out the authentication process with the policy server. If an application server or the web server requires knowledge of the user, this can be forwarded using HTTP headers that the web agent includes in the requests it forwards to the protected service.
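
The application side of this header-based identity passing could look like the following minimal WSGI sketch; the header name X-Validated-User is a hypothetical stand-in for whatever header the web agent is configured to emit.

# A sketch of the application side: the web agent has already authenticated
# the user and forwards the validated identity as an HTTP request header.
# The header name "X-Validated-User" is a hypothetical stand-in.
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # WSGI exposes the header X-Validated-User as HTTP_X_VALIDATED_USER.
    user = environ.get("HTTP_X_VALIDATED_USER", "anonymous")
    body = f"Personalised content for {user}\n".encode()
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    make_server("127.0.0.1", 8080, app).serve_forever()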

7.2.3.1 Load balancing and fail-over support


Load balancing distributes data traffic across many systems to avoid
overburdening a single system. Load balancing provides faster and more efficient
access to resources, such as policies or user directories. Fail-over is a redundancy
mode that allows an administrator to specify a primary and a set of backup
systems. When the primary system fails, requests are transferred to the backup systems until the primary recovers. SiteMinder supports load balancing and fail-
over between the following:
- Web Agents and Policy Servers
- Policy Servers and LDAP user directories
- Policy Servers and ODBC user databases (fail-over only)

You can select the load-balancing operation mode to distribute user requests
directed from the Web Agents to multiple Policy Servers and from the Policy
Server to replicated LDAP user directories. [43, p.63]
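
The two redundancy modes can be summarised with the following sketch, in which the health probe and server names are illustrative stand-ins for a real connectivity check: in load-balancing mode requests rotate over all healthy policy servers, while in fail-over mode the backups are used only when the primary is unavailable.

# A sketch of the two redundancy modes described above: round-robin load
# balancing over all healthy policy servers, and primary/backup fail-over.
import itertools

SERVERS = ["policy1.example.local", "policy2.example.local", "policy3.example.local"]

def is_healthy(server):
    return server != "policy2.example.local"  # pretend one server is currently down

def load_balanced():
    """Yield healthy servers in rotation so that load is spread evenly."""
    for server in itertools.cycle(SERVERS):
        if is_healthy(server):
            yield server

def fail_over():
    """Always use the first healthy server: backups serve only when needed."""
    for server in SERVERS:
        if is_healthy(server):
            return server
    raise RuntimeError("no policy server available")

picker = load_balanced()
print([next(picker) for _ in range(4)])       # rotates over policy1 and policy3
print(fail_over())                            # sticks to policy1 while it is up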

7.2.3.2 Operation description


SiteMinder’s operation is quite complex. It relies on multiple components, which
operate in concert to provide authentication of the user and authorisation of his
requested actions. In the following paragraphs, the operation of SiteMinder is
explained with the help of a hypothetical login event.

Resource protection
First, the user attempts to access a protected resource by specifying the
application’s URL. The Web Agent will intercept this request at the server, and
determine from its locally cached copy of the policy database whether this
resource is being protected by SiteMinder [44]. If not, then it will exit and the
Web Server will process the request. Otherwise, it will proceed with the
authentication request process described below.

Authentication
If SiteMinder is protecting the resource, the Policy Server will determine which
form of authentication is required based on its policy database and the associated
security levels of the requested URI. SiteMinder supports a wide range of
authentication methods including passwords, certificates and tokens, but also a
combination of these methods. [44]

The Web Agent will send a request to the user's browser, and it will return the
user's credentials. Typically, this is a username/password, but it could also be a
certificate and a token card PIN. The Policy Server then passes this information to
the directory for authentication. [44] With certificates, this involves checking the integrity of the certificate and, once convinced of its integrity and trustworthiness, extracting the specified mapping field from the certificate and comparing it with the corresponding directory entry.

If the user failed to authenticate, then custom pages or actions can be taken, such
as a personalised error page. If the user successfully authenticated to the Policy
Server, then a strongly encrypted cookie is created and stored in the user's
browser. This cookie does not contain any sensitive information like a password.
Instead, it contains the user's full directory name, and a number of timestamps and
other information. Once a user has successfully authenticated to SiteMinder, this
cookie can be used later to allow single sign-on across all the applications on the
Web site. [44] If cookies are not supported by the browser, session tracking can be obtained by following the SSL session IDs, but global single sign-on functionality is lost.

Authorisation
Once the user has been authenticated, SiteMinder must next determine if they
should be granted access to this specific resource. The Policy Server then looks up
all the policies that are related to the requested resource. The Policy Server
consults the directory, and determines whether the user is a member of any of the
groups associated with these policies. If the user is not authorised for this
resource, then custom error pages can be created and presented to the user. If the
authorisation is successful, then the user will be granted access to the application.
[44]

Personalisation
When the application is invoked, SiteMinder passes information concerning this
user directly to the application in the form of header variables. This information
often contains user attributes from the directory or it could be dynamic data from
various data sources. [44]
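
The four phases described above can be condensed into a single decision flow in the agent, sketched below with stub functions standing in for the real agent-to-policy-server calls; the function and header names are illustrative, not SiteMinder's actual API.

# A sketch of the request-handling flow described above, with stub functions
# standing in for the real agent-to-policy-server calls.
def is_protected(url): ...                    # consult the cached policy database
def collect_credentials(url): ...             # challenge the browser as required
def authenticate(credentials): ...            # returns a user identity or None
def authorise(user, url): ...                 # evaluate the policies for this resource
def error_page(reason): ...                   # custom error/redirect page
def forward_to_application(url, headers): ... # hand the request to the web server

def handle_request(url):
    # 1) Resource protection: unprotected resources bypass the agent's checks.
    if not is_protected(url):
        return forward_to_application(url, headers={})
    # 2) Authentication: challenge the browser and verify the credentials.
    user = authenticate(collect_credentials(url))
    if user is None:
        return error_page("authentication failed")
    # 3) Authorisation: check the policies bound to this resource.
    if not authorise(user, url):
        return error_page("not authorised for this resource")
    # 4) Personalisation: pass user attributes to the application as headers.
    return forward_to_application(url, headers={"X-Validated-User": str(user)})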

7.2.3.3 PKI support


SiteMinder has very good support for PKI and X.509v3 certificates. It supports all
certificates issued by the major commercial certificate authorities, and revocation
list processing.

“Figure 7.3 SiteMinder X.509 client authentication process” explains the operation of SiteMinder in the client-certificate authentication process.

[Figure content: the browser performs SSL client authentication with its certificate and issues the HTTP request; the web agent and SSL Credential Collector exchange Azn_Redirect, Azn_Request and Azn_Info messages with the policy server before the request is served.]

Figure 7.3 SiteMinder X.509 client authentication process

When a user authenticates using an X.509v3 certificate, the SiteMinder Web Agent forwards the connection to the SSL Credential Collector (SCC) after the X.509v3 client authentication has successfully been completed. The SCC then extracts the necessary user information from the certificate, such as the user's distinguished name (DN) and the certificate issuer's DN, for further processing. The Web Agent/SCC passes this information to the Policy Server. The Policy Server verifies that the user is listed in the appropriate user directory, and then authenticates the user. [45, p. 230]

After verifying the user’s identity and validity, the Policy Server authorises the
user access to the requested resources. SiteMinder also supports certificate
revocation list (CRL) processing provided by most PKI vendors. Certificate
revocation checking ensures that the certificates in use have not been invalidated
by the owner. If a certificate expires, the PKI system does not accept it, which is
critical for secure transactions. [43, p.59]
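
The two checks mentioned here, expiry and revocation, are illustrated by the following sketch, which uses a simplified in-memory certificate record and a pre-fetched set of revoked serial numbers instead of real X.509 and CRL parsing; a deployed system would obtain and verify the CA's CRL before consulting it.

# A sketch of the expiry and revocation checks described above, using an
# illustrative in-memory certificate record instead of real X.509 parsing.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CertificateInfo:
    subject_dn: str
    serial_number: int
    not_after: datetime

REVOKED_SERIALS = {0x1A2B, 0x99FF}            # extracted from the issuer's CRL

def is_acceptable(cert):
    if datetime.now(timezone.utc) > cert.not_after:
        return False                          # expired certificates are never accepted
    if cert.serial_number in REVOKED_SERIALS:
        return False                          # invalidated by the owner via the CA
    return True

cert = CertificateInfo("cn=Jack Smith,o=Elisa,c=FI", 0x0042,
                       datetime(2026, 1, 1, tzinfo=timezone.utc))
print(is_acceptable(cert))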

7.3 The SiteMinder test bench

The general test setup built by the author in Elisa’s Research Centre laboratory
was constructed as illustrated in “Figure 7.4 SiteMinder test setup” below:

[Figure content: the SM Policy Server with its local policy store serves web agents on an IIS4 server, two IIS5 servers and a Netscape HTTPd server, and consults the FINEID directory, the Portugal Telecom directory and the Elisa Research directory.]

Figure 7.4 SiteMinder test setup

As figure 7.4 describes, Elisa Research Centre had one Policy Server running on Solaris 8, which was accompanied by the Netscape web server. Logically these two are separate, and therefore they are shown as two different entities in the picture. The Policy Server had its local policy store in a flat-file database, and all entitlement information was stored in this database.

There were multiple web servers participating in the test and they were:
• sm-iis.rc.elisa.fi running Windows NT 4.0 Server and Internet Information
Server 4.0
• parsec.rc.elisa.fi running Windows 2000 Server and Internet Information
Server 5.0
• pt-iaa.pki.aveiro-digital.org running Windows 2000 Server and Internet
Information Server 5.0 in Portugal
• netra3.rc.elisa.fi as the Netscape/iPlanet HTTP-server running on Solaris 8
on the same Sun as the Policy Server.

The directories used for this test were the governmental Finnish Electronic Identity (FINEID) public directory, Elisa Research Centre's (RC) internal LDAP directory and the Portugal Telecom Research Centre's (PT) internal LDAP directory. RC runs the iPlanet Directory on Solaris 8, and PT runs a Microsoft LDAP front-end for Active Directory.

After setting the machines up and installing the required software, the testing and
evaluation could commence. In addition to the author, several users participated in
the testing phase of the S/SSO implementation from around Europe, using both
file- and smart card based X.509v3 certificates. The author was able to
authenticate all of the test users with their certificates into the test realm after
some setup related problems were resolved. In addition, single sign-on was
successfully tested between the various servers. For comparison and debugging
purposes, Basic-authentication was also successfully tested.

7.3.1 Structure of the sites to be protected

The sites protected in this test bed were all stand-alone sites with two categories
of pages, protected and public. All protected pages required certificate
authentication, and in most cases, the user certificate resided on a smart card. All
sites had the SiteMinder agent running in native plug-in mode, and only one agent
acted as the cookie provider for the entire authentication realm. This agent resided
on netra3 – the policy server along with the SSL Credential Collector. All sites
belonged to the same authentication and single sign-on realm called ‘Test Realm’.

The sites permitted access to the public front page shown in “Figure 7.5 The public jump page from netra3.rc.elisa.fi”, which contained links to the other sites and to the protected content on the server. When a client accessed the site in
question he was shown the public front page and prompted for authentication if he
placed a request for a protected resource. Once the client had been authenticated
on any one of the server hosts, he was able to access the protected items on every
host in the Test Realm. It would have been possible to specify multiple levels of
authentication but smart card login was deemed secure enough for this test.

Figure 7.5 The public jump page from netra3.rc.elisa.fi

“Figure 7.5 The public jump page from netra3.rc.elisa.fi” shows the front page, where the client could select either to access the local protected resource or to navigate to one of the affiliate sites in the S/SSO realm. Below, in “Figure 7.6 SiteMinder administration console front page”, is the protected resource of netra3.rc.elisa.fi – the SiteMinder administration console front page:

Figure 7.6 SiteMinder administration console front page

From this page, the administration console for SiteMinder is loaded as a Java™ applet application to any host running a Java™ compliant run-time environment and possessing the required three-factor (PIN + certificate, and basic authentication on top of that) authentication information.

7.3.2 The SiteMinder graphical administration user interface

Figure 7.7 The SiteMinder administration console

The graphical SiteMinder administration console in “Figure 7.7 The SiteMinder administration console” is very intuitive in design, and once one has mastered the usage logic, modifying the S/SSO infrastructure's configuration becomes very efficient. This is one of the best administration GUIs seen in this test. “Figure 7.8 The agent configuration menu” illustrates the agents:

Figure 7.8 The agent configuration menu

From this menu, the agent parameters can be adjusted, and the administrator can
revoke agent access when necessary. Next, the directories are introduced to the
S/SSO framework in the following “Figure 7.9 The user directories”:

Figure 7.9 The user directories

SiteMinder needs to know certain details about the directory, which are the
directory’s Internet address, the search base and which field contains the
distinguished name attribute in the directory. The details of the directory
configuration are shown below in “Figure 7.10 Directory configuration”:

Figure 7.10 Directory configuration

Of course, the directories themselves are configured with LDIF-files containing the appropriate directory schemas for SiteMinder to operate on.

Figure 7.11 Realms in the S/SSO policy domain

Then the configuration of the realm is put together in the domain control tab shown in “Figure 7.11 Realms in the S/SSO policy domain” above, where realms are added to policy domains and rules are added to realms, as shown in “Figure 7.12 Rules for the ptiaa-realm” below. A realm can be thought of as consisting of a single resource, which an agent protects with different access control rules. There can be multiple realms per agent, since there may be different access control needs for different parts of a site.

Figure 7.12 Rules for the ptiaa-realm

In this test setup, each site had only one realm, which was associated with the
only protected resource residing in that URL-path. There could be multiple rules
in a realm and these rules could point to the same resource, because the rules are
bound to users in the policy and different rules can exist for different users or
groups to the same resource.

Finally, a single test policy is shown below in “Figure 7.13 SiteMinder policy setup dialog”; it defines acceptable user directories, associated rules and other constraints. The users who are managed by this policy are selected via LDAP queries and are associated with the relevant rules.

Figure 7.13 SiteMinder policy setup dialog

The critical part of SiteMinder's certificate support is the certificate mapping component seen in “Figure 7.14 Certificate mapping in SiteMinder” below, where one can specify which attribute is matched in the given directory. For example, the user profile in the LDAP directory might have the name 'Jack Smith', with the title field of this record equalling the DN field of his certificate. If SiteMinder is instructed to match the title field in LDAP with the DN entry extracted from his certificate and they match, then the rules and other information under his LDAP entry apply to him.

Figure 7.14 Certificate mapping in SiteMinder

For this to be secure, one has to remember that in the first phase of authentication the X.509 client authentication protocol is used to verify the validity of the presented certificate. An additional requirement is that the certifier's public-key certificate has been introduced to the S/SSO environment beforehand by the administrator for certificate path validation purposes. Therefore, the certificate's DN field can be trusted without further cryptographic processing and simply matched as a string with the LDAP entry. Obviously, the LDAP server holding user and policy information must be well protected by other means to prevent modification of the policy records by rogue persons.
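
The mapping step itself amounts to a plain string comparison, as the following sketch shows; the directory entries and the choice of the 'title' attribute as the mapping attribute are illustrative only.

# A sketch of the certificate-mapping step: the DN extracted from an already
# SSL-verified certificate is compared, as a plain string, against a chosen
# attribute of the user's directory entry (attribute name and data are illustrative).
USER_DIRECTORY = [
    {"cn": "Jack Smith", "title": "cn=Jack Smith,o=Elisa,c=FI",
     "groups": ["projectX-members"]},
    {"cn": "Jane Doe", "title": "cn=Jane Doe,o=Elisa,c=FI",
     "groups": ["projectX-leader"]},
]

MAPPING_ATTRIBUTE = "title"                   # the directory field holding the cert DN

def map_certificate(subject_dn):
    """Return the directory entry whose mapping attribute equals the cert DN."""
    for entry in USER_DIRECTORY:
        if entry.get(MAPPING_ATTRIBUTE) == subject_dn:
            return entry                      # the policies under this entry now apply
    return None                               # unknown certificate holder

print(map_certificate("cn=Jack Smith,o=Elisa,c=FI"))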

7.3.3 Product evaluation to requirement specification

In this subchapter, the SiteMinder product is evaluated against the requirements specified by the author previously in chapter 5, and the extent of support for each feature is briefly discussed.

7.3.3.1 Confidentiality and integrity requirements
In this section, the communication security features are evaluated.

7.3.3.1.1 Encrypted communication paths


In SiteMinder, the agent to policy server communication is encrypted with a
symmetric crypto algorithm, but the algorithm used is unknown to us since the
implementation details are hidden behind the Tunnel Service API of the SDK.
This raises some doubt as to the security of the encryption algorithm.

The client to agent path can be encrypted with SSL tunnelling using any mutually
supported encryption algorithm between the client and agent. This is regarded as a
secure approach.

The policy server to directory path can also be encrypted by SSL tunnelling,
using any encryption algorithm mutually supported by the policy server and the
directory. This is also regarded as a secure approach.
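
A minimal sketch of such an SSL-protected directory connection, written with the Python ldap3 library and using placeholder host names and credentials, is shown below.

    import ssl
    from ldap3 import Server, Connection, Tls

    # Require a valid server certificate from the directory before any data is sent.
    tls = Tls(validate=ssl.CERT_REQUIRED, ca_certs_file="directory-ca.pem")
    server = Server("directory.example.com", port=636, use_ssl=True, tls=tls)
    conn = Connection(server, user="cn=policyserver,o=Example",
                      password="secret", auto_bind=True)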

7.3.3.1.2 Secure key management


The agent and policy server are the only two components that share a secret
encryption key. This key is decided upon when the components are set up, and
subsequent key changes are made via the encrypted tunnel between the agent and
the policy server. The policy server is automatically capable of
resynchronising the encryption keys between all agents and policy servers. This
is regarded as secure key handling.

All other encryption keys are negotiated as part of the SSL handshake protocol
and this is regarded as secure.

7.3.3.1.3 Cryptographic access tickets


The shared agent key described above is also used for cookie encryption, and
therefore it has to be synchronised between all agents in the S/SSO realm. The
same doubt as in 7.3.3.1.1 is cast on the encryption algorithm used to secure
the cookies, but it is currently regarded as sufficient.
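
The idea can be sketched as follows with the Python cryptography library; the cookie layout and the Fernet cipher are chosen here purely for illustration and say nothing about Netegrity's actual cookie format or algorithm.

    from cryptography.fernet import Fernet

    shared_agent_key = Fernet.generate_key()   # distributed to every agent by the policy server

    def make_sso_cookie(user, session_id):
        # Encrypt the session information so only agents holding the shared key can read it.
        token = Fernet(shared_agent_key).encrypt(("%s|%s" % (user, session_id)).encode())
        return token.decode()

    def read_sso_cookie(cookie_value):
        plaintext = Fernet(shared_agent_key).decrypt(cookie_value.encode()).decode()
        user, session_id = plaintext.split("|")
        return user, session_id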

7.3.3.1.4 Alternative method of ticketing


In SiteMinder, sessions can also be tracked on a somewhat limited scale in a
reverse proxy setup, by tracking the SSL session ID and maintaining S/SSO
capability at the proxy server. This limits the S/SSO functionality to single
sign-on onto local servers situated behind the reverse proxy, as opposed to
agent-based wide-area single sign-on between different networks. The support
for this requirement is sufficient, though not flexible enough to replace
cookies entirely.

7.3.3.1.5 Pluggable authentication method support

SiteMinder supports external authentication methods and authorisation logics via
well-defined APIs that enable the implementation of any authentication or
authorisation model. This support is adequate.
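
Conceptually, such a pluggable scheme boils down to an interface of the following kind. This is a hypothetical Python sketch of the general idea, not Netegrity's actual Authentication API, whose class and method names differ.

    class AuthenticationScheme:
        """Interface implemented by a custom authentication plug-in."""

        def authenticate(self, credentials):
            # Return the authenticated user name, or None if authentication fails.
            raise NotImplementedError

    class OneTimePasswordScheme(AuthenticationScheme):
        def __init__(self, otp_server):
            self.otp_server = otp_server       # e.g. a token verification back-end

        def authenticate(self, credentials):
            user, otp = credentials["user"], credentials["otp"]
            return user if self.otp_server.verify(user, otp) else None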

7.3.3.1.6 Strong authentication support


SiteMinder supports X.509 client certificate authentication out of the box, as
well as OTP-style tokens such as the CryptoCard RB-1. It can also interface
with RSA SecurID ACE servers, and it can be further extended to support hybrid
authentication schemes. This is reasonably flexible.
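
For the X.509 part, requiring a client certificate at the TLS layer can be sketched with Python's standard ssl module as follows; the file names are placeholders.

    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="server-cert.pem", keyfile="server-key.pem")
    context.load_verify_locations(cafile="fineid-ca.pem")   # trusted certifier introduced beforehand
    context.verify_mode = ssl.CERT_REQUIRED                  # reject clients without a valid certificate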

7.3.3.1.7 Transaction non-repudiation


With strong authentication in use, transaction non-repudiation can be
implemented by requiring digital signatures on transactions before they are
committed. Unfortunately, this is not a default function, and it has to be
implemented by hand. This is flexible enough, but not entirely satisfactory.
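
The underlying principle can be sketched with the Python cryptography library as follows: the client signs the transaction data with its private key, and the service verifies the signature against the public key from the user's certificate and stores it as evidence before committing the transaction. This illustrates the general technique only and is not a SiteMinder feature.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # In practice the key pair lives on the user's smart card; it is generated here for illustration.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    transaction = b"transfer 100 EUR to account 12345"

    signature = private_key.sign(transaction, padding.PKCS1v15(), hashes.SHA256())

    # The service verifies and keeps (transaction, signature) as non-repudiable evidence;
    # verify() raises InvalidSignature if the data or signature has been tampered with.
    private_key.public_key().verify(signature, transaction, padding.PKCS1v15(), hashes.SHA256())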

7.3.3.1.8 Fine grained access policy enforcement


SiteMinder has one of the most flexible policy languages available, and it
enables access control decisions at the granularity of a single URI, CGI-script
parameter, Servlet, JSP or EJB. This is very satisfactory.
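
As a purely hypothetical illustration of such per-URI rules, written in Python and not in SiteMinder's actual policy language, an access decision could be expressed as follows.

    import fnmatch

    # Each rule maps a URI pattern to the set of groups allowed to access it.
    RULES = [
        ("/extranet/reports/*",   {"partners", "staff"}),
        ("/extranet/admin/*.jsp", {"staff"}),
    ]

    def is_allowed(uri, user_groups):
        for pattern, allowed_groups in RULES:
            if fnmatch.fnmatch(uri, pattern):
                return bool(user_groups & allowed_groups)
        return False   # deny by default when no rule matches

For example, is_allowed("/extranet/admin/users.jsp", {"partners"}) evaluates to False, while the same URI is allowed for the "staff" group.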

7.3.3.2 Availability
In this section, the availability features of SiteMinder are evaluated against the
requirement specification presented previously in chapter 5.

7.3.3.2.1 Transaction atomicity


The SiteMinder documentation is not very clear on this point. The back-end
databases behind the LDAP directory may support transaction atomicity, but
SiteMinder itself does nothing to enforce it. This is unsatisfactory.

7.3.3.2.2 Component redundancy


In SiteMinder, all components with the exception of the plug-in agents can be
mirrored with peer fail-over capabilities. The agents can also be installed in
a fail-over configuration where the load is balanced between agents in a
round-robin fashion. All critical components can be made redundant, so the
implementation is satisfactory.

7.3.3.2.3 Disaster recovery


The infrastructure can be mirrored with hot fail-over services at different locations
for disaster recovery functionality. This is implemented in a satisfactory way.

7.3.3.2.4 Incremental scalability


The scaling of SiteMinder is incremental, since more nodes can be brought
on-line simply by adding new hosts with the agent configured to take part in
the load-balancing cluster. Additional policy server mirrors can also be added
to the infrastructure on-line. This is satisfactory.

7.3.3.2.5 Load balancing features


SiteMinder supports round-robin load balancing, which lets agents contact
backup policy servers, and a second-level cache in the agent that offloads
repetitive authorisation decisions from the policy server cluster. In addition,
hardware-based load balancing, such as L2 switches, can be used. This is
satisfactory.
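
The agent-side cache idea can be sketched as follows; the structure and the time-to-live value are illustrative only.

    import time

    class DecisionCache:
        """Caches authorisation decisions in the agent to avoid repeated policy server queries."""

        def __init__(self, ttl_seconds=60):
            self.ttl = ttl_seconds
            self.entries = {}                          # (user, resource) -> (decision, expiry time)

        def get(self, user, resource):
            hit = self.entries.get((user, resource))
            if hit and hit[1] > time.time():
                return hit[0]                          # answered locally, no round trip to the policy server
            return None

        def put(self, user, resource, decision):
            self.entries[(user, resource)] = (decision, time.time() + self.ttl)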

7.3.3.2.6 Multi-platform availability


SiteMinder supports all Microsoft Windows server platforms and the Sun Solaris
platform. This support is satisfactory as there is support for at least one stable
platform on which to run these critical services. Another UNIX platform would be
appreciated.

7.3.3.3 Accountability and auditability


In this section, the accounting and audit features of SiteMinder are evaluated
against the requirement specification laid down previously.

7.3.3.3.1 Centralised management framework integration


Currently, no integration with the user administration of any centralised
management framework is available. Such integration could be implemented
through SiteMinder's extension APIs. This is still unsatisfactory.

7.3.3.3.2 Delegated administration


With SiteMinder Delegated Management Services, the directory administration can
be split into very fine-grained partitions, all the way down to selection at
the level of individual LDAP queries. This is very satisfactory, but the need
for a separate product extension is a slight disappointment.

7.3.3.4 Other features


In this section, all the features that do not fit into any of the above categories are
evaluated against the requirement specification laid down previously.

7.3.3.4.1 External application server support


SiteMinder supports any CGI-compliant scripting system as well as having native
application agents for BEA WebLogic and IBM WebSphere. This is adequate.

7.3.3.4.2 Future support for global authenticators


There are no public roadmaps available concerning this issue. SiteMinder's
architecture supports a wide variety of directories for authentication, so it
is quite possible that the architecture is flexible enough to support future
growth paths. The lack of public information is disappointing.

7.3.3.4.3 Remote administration
SiteMinder's administration is handled both with the graphical Java™-based user
interface and with UNIX command-line tools for policy server management, so
remote administration is well supported. This is satisfactory.

7.3.3.4.4 Graphical user interface for administration


The GUI for administration is very intuitive and quick to use. It is satisfactory.

7.3.3.4.5 Database locations and supported formats


SiteMinder exports all of its internal policy and user objects into standard LDAP,
ODBC and NT domain databases. This is satisfactory.

7.3.4 Evaluation results in table format

The above results can be condensed into "Table 7.1 Correspondence of the
requirement specification to SiteMinder's features" below, where each feature
is marked (X) on a rating scale from 1 (poor) to 5 (good):

Encrypted communication paths X


Secure key management X
Cryptographic access tickets X
Alternative method of ticketing X
Pluggable authentication method support X
Strong authentication support X
Transaction non-repudiation X
Fine grained access policy enforcement X
Transaction atomicity X
Component redundancy X
Disaster recovery X
Incremental scalability X
Load balancing features X
Multi-platform availability X
Centralised management framework integration X
Delegated administration X
External application server support X
Future support for global authenticators X
Remote administration X
Graphical UI for administration X
Database locations and supported formats X
Table 7.1 Correspondence of the requirement specification to SiteMinder's features

In summary, SiteMinder corresponds well to all the requirements laid out for a
successful candidate for the access control task. Handling an extranet site in
particular requires granular access control policies, powerful administration
features, strong authentication support and a flexible architecture to protect
any future investment.

The most significant features concerning this particular case are the security,
integrity and availability features. In this case, since the production site's
transaction volume and traffic are low, transaction atomicity, load balancing
and centralised management framework integration could be dismissed. With this
additional weighting, SiteMinder fulfils the requirements satisfactorily.

The most displeasing aspect of this product is the lack of a clear development
roadmap and schedule for future versions. In addition, the suitability for
extreme customisation and the extensive possibilities for policy formulation
might lead to administration problems if overly fine-grained access policies
are implemented, or if extremely complex environments are built.

The most pleasing feature of this product is its good support for external
standardised database interfaces such as LDAP. This enables easy migration of
potentially large user populations and guarantees data interchangeability. The
reliable certificate handling also merits acknowledgement.

7.3.5 Problems encountered in testing

There were some problems with the product while testing. At first, there were
problems with Elisa’s firewalls that blocked access to the required ports.
Fortunately, SiteMinder has very good documentation so these problems were
resolved quite rapidly.

The next problem was much harder to resolve: certificates issued by the FINEID
CA worked from the beginning, while Elisa Research Centre's test certificates
were not understood by SiteMinder. After weeks of analysis with Netegrity
technical support, it was discovered that SiteMinder parses the certificate's
contents using the comma (',') as an attribute separator. In Elisa Research
Centre's case this was not a correct assumption, because the DN field contained
a comma in the organisation text string. The certificate was a valid X.509v3
certificate, but because of the invalid assumption on the part of Netegrity's
development team, this almost became a critical bug leading to rejection of the
whole product from further testing.
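
The pitfall can be illustrated with a small Python example; the DN value below is hypothetical, but it shows how splitting on every comma tears apart an organisation name that contains an escaped comma.

    import re

    dn = r"CN=Jack Smith,O=Example Research\, Centre,C=FI"   # hypothetical DN with an escaped comma

    naive = dn.split(",")
    # ['CN=Jack Smith', 'O=Example Research\\', ' Centre', 'C=FI']  -- the organisation is torn apart

    correct = re.split(r"(?<!\\),", dn)   # split only on commas not preceded by a backslash
    # ['CN=Jack Smith', 'O=Example Research\\, Centre', 'C=FI']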

Having solved the above, the author had to install an Apache web server on
Linux together with the associated SiteMinder Web Agent. Unfortunately, Linux
support for SiteMinder is not yet stable. The agent was brought up and running
on a RedHat Linux 6.2 server after a few weeks of trial and error, but for some
unknown reason it would not communicate with the Policy Server. The SiteMinder
plug-in for Apache on Linux is still in development and as such must be
regarded as beta software.

The remainder of the problems encountered were generally related to
configuration mistakes and other obvious operator logic failures while building
policies and rules.

8 CONCLUSIONS
In this chapter, the results of this thesis are described. In addition, some future
trends are discussed as the closing remarks of this thesis work.

8.1 Evaluation results

After evaluating multiple commercial Web S/SSO products, it has become clear to
the author that none of them is mature enough to be trusted with protecting a
critical operative service. Netegrity's SiteMinder is the most mature of the
products tested. Consequently, I can recommend it for low- to medium-criticality
environments such as intranets, extranets and customer care sites.

The plug-in based architecture used extensively in SiteMinder is a near-optimal
architectural solution (see Figure 8.1 Suggested S/SSO architecture) for a
web-service single sign-on system. It provides the best flexibility in terms of
support for different web servers, co-location of equipment and independence of
the network structure. As modularity is increasingly becoming the de facto
standard of software development, the plug-in architecture is the most likely
one to succeed in the near future. In addition to plug-in capabilities, an
optimal architecture should include encrypted data paths, redundancy features
for fail-over, load-balancing support and good remote management features. All
of these features are present in SiteMinder. Other, less general requirements
are detailed in chapter 5 of this thesis, and a detailed evaluation of
SiteMinder is presented above in chapter 7.

(Figure sketch: the user reaches web servers equipped with agents; the agents
consult a policy server cluster that performs S/SSO authentication and
authorisation, backed by an external authentication logic, a policy store
holding the policy data and user directories holding the user data.)

Figure 8.1 Suggested S/SSO architecture

As can be seen in the above sketch, the architecture operates on the 'thin
client / heavy server' paradigm, which makes it very compelling to use. The
many benefits of this architecture are realised through savings in both
hardware and administration costs. The centralised management of the access
policy enables efficient administration and easy placement of the services,
because the administrators do not require physical access, and there is no need
for external proxies in front of every physically separate web server requiring
protection.

The extensive use of external directories for authorisation and user data is a
double-edged sword from a security perspective. It enables very flexible user
and policy management with directory-enabled administration tools, as well as
easy data migration from one system to another in upgrade cases. On the other
hand, it also introduces a potentially weak link into the security chain of the
entire S/SSO infrastructure. Directory security is very much neglected, because
administration staff generally regard directories as little more than ordinary
public telephone directories. Therefore, the security of the directory must be
enforced with all available technical aids, such as firewalls, encrypted
directory access protocols and, if possible, strong authentication.

Secure single sign-on using X.509 certificates is very compelling, as the user
only has to remember his PIN code for authentication. Fortunately, this is a
reality today and can be accomplished with both the FINEID card and file-based
X.509 certificates. Of the evaluated products, only SiteMinder was mature
enough to support the certificate profiles used in all tested certificates. All
the other products had problems accepting the FINEID-profile certificates, or
understanding certificates at all. Almost all vendors claim to support PKI, but
the quality of that support varies greatly from vendor to vendor.

Overall, the tests successfully revealed the true current state of certificate
support in the tested products, and they gave new insights into the inner
workings of web single sign-on systems, certificate support implementations and
their pitfalls.

8.2 Future trends

It has also become clear that there exists a huge demand for single sign-on
solutions, because companies are struggling to provide more sensitive
information to their customers and business partners over the Internet in a
safe manner. As web services grow and the world becomes ever more networked, it
will be essential to have fine-grained, centralised access control over the
resources one provides.

Two interesting projects on the Internet strive to provide ubiquitous identity to the
“citizens of the Internet”. These are the Microsoft Passport [46] system and the
Liberty Alliance [47] from a coalition of companies including Sun Microsystems,
Nokia, RSA Security and 37 others. The stated goal of these projects is to provide
a ubiquitous identity and a secure personal information platform, but the most
exciting feature is the creation of a universal single sign-on standard for global
usage. If successful, this will open many new possibilities for services to do
business on the Internet with heightened levels of security and confidence in a
customer’s identity.

Today, neither secure single sign-on nor smart card support is ready for prime
time, but they are getting closer day by day. The standards defining
certificates and smart cards are stable, but unfortunately the related software
components are evolving very rapidly, and there are no standards available for
single sign-on in the web environment. X.509 seems to be the strongest
contender in the field of certificates, and currently Microsoft's Windows
environment is the best environment for smart card usage, although it still has
only rudimentary PKI support. Luckily, the pace of development is increasing,
and we are about to see many fine new products on the market that use smart
cards in one way or another.

Good examples of this sort of development range from the FINEID card, the
French medical cards and the EMV standard of the major credit card
institutions, used for identification and digital signature functions, to the
SSH Communications firewall products, which creatively use smart cards to
distribute configuration and authentication data.

As more applications begin to use smart cards, they will become as common as
the magnetic-stripe cards that we use today as ATM cards.

Eventually, a global authentication system will be beneficial both for the
authentication service provider and for the web-service providers who need to
offer strong authentication services to their customers. It is hoped that in
the future there will exist open international standards for secure single
sign-on, built on PKI at its very core, as well as for authentication-service
access protocols.

Once good security systems have been implemented to protect services and log all
unauthorised activity, the attackers can be stopped and subsequently traced.

BIBLIOGRAPHY
[1] Chinitz, J. Single Sign-On: Is It Really Possible?, Access Control
Systems and Methodology, 2000. p. 32-45.
[2] Schneier, B. Secrets and Lies: Digital Security in a Networked
World, USA: John Wiley & Sons, 2000. ISBN 0-47-125311-1.
[3] The Open Group, Open Group Guide G801: Architecture for Public-
Key Infrastructure, 1998. ISBN 1-85912-221-3.
[4] Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L., Leach,
P., Berners-Lee, T. RFC2616: Hypertext Transfer Protocol --
HTTP/1.1, IETF, June 1999
[5] The Open Group, Open Group Technical Standard C908:
Authorization (AZN) API, 2000. ISBN 1-85912-266-3.
[6] Stallings, W. Cryptography & Network Security: Principles &
Practice 2nd edition, USA: Prentice Hall, 1998. ISBN 0-13-869017-0.
[7] Schneier, B. Applied Cryptography: Protocols, Algorithms, and
Source Code in C 2nd Edition, USA: John Wiley & Sons, 1995.
ISBN 0-471-12845-7.
[8] Krawczyk, H., Bellare, M., Canetti, R. RFC2104: HMAC Keyed-
Hashing for Message Authentication, IETF, February 1997
[9] NIST, FIPS-197: Advanced Encryption Standard,
http://csrc.nist.gov/encryption/aes/ (ref. 1.2.2002), NIST, 2001
[10] Ashley, P., Vandenwauver, M., Practical Intranet Security : Overview
of the State of the Art and Available Technologies, Netherlands:
Kluwer Academic Publishers, 1999. ISBN 0-7923-8354-0.
[11] Eurescom Gmbh., Impact of PKI on the European
Telecommunication Business, Eurescom Gmbh., 1999. EDIN P944-
GI
[12] Setec Oy, Smart Card Basics,
http://www.setec.fi/english/press/material/smartcardbasics.html (ref.
1.2.2002), Setec Oy, 2001
[13] GemPlus Corp., Welcome to smart cards,
http://www.gemplus.com/basics/what.htm (ref. 1.2.2002), GemPlus
Corp., 2000
[14] Smart Card Forum, What's so smart about Smart Cards?,
http://www.gemplus.com/basics/download/scards.pdf (ref. 1.2.2002),
Smart Card Forum, 2000
[15] Setec Oy, SetCOS Operating System used in more than 10 million
smart cards, http://www.setec.fi/english/press/material/setecos.html
(ref. 1.2.2002), Setec Oy, 2001
[16] Housley, R., Ford, W., Polk, W., Solo, D., RFC2459: Internet X.509
Public Key Infrastructure Certificate and CRL Profile, IETF, January
1999
[17] International Telecommunication Union, ITU-T Recommendation
X.509, ISO/IEC 9594-8: Information Technology Open Systems
Interconnection The Directory: Public-key And Attribute Certificate
Frameworks, http://www.itu.int/rec/dologin.asp?lang=e&id=T-REC-
X.509-200003-I!!PDF-E&parent=T-REC-X.509-200003-I (ref.
5.2.2002), International Telecommunication Union, 2000.
[18] Microsoft Corp., Windows 2000 Certificate Validation Logic,
http://www.microsoft.com/windows2000/techinfo/reskit/en-
us/default.asp (ref. 7.12.2001), Microsoft Corp., 2001
[19] Adams, C., Lloyd, S., Kent, S., Understanding the Public-Key
Infrastructure: Concepts, Standards, and Deployment Considerations,
USA: New Riders Publishing, 1999. ISBN 1-57-870166-X.
[20] Eurescom Gmbh., PKI Implementation and Test Suites for Selected
Applications and Services Final Report, Eurescom Gmbh., 2001.
EDIN 0170-1001
[21] Kent, S. RFC1422: Privacy Enhancement for Internet Electronic Mail
Part II: Certificate-Based Key Management, IETF, February 1993
[22] Franks, J., Hallam-Baker, P., Hostetler, J., Lawrence, S., Leach, P.,
Luotonen, A., Stewart, L., RFC2617: HTTP Authentication: Basic
and Digest Access Authentication, IETF, June 1999
[23] Haller, N., Metz, C., Nesser, P., Straw, M., RFC2289: A One-Time
Password System, IETF, February 1998
[24] RSA Security Inc., RSA SecurID Authentication: A Better Value for
Better ROI,
http://www.rsasecurity.com/products/securid/whitepapers/BVBROI_
WP_1201.pdf (ref. 6.1.2002), RSA Security Inc. 2001
[25] Smith, R.E., Authentication: From Passwords to Public Keys, USA:
Addison-Wesley Pub Co, 2001. ISBN 0-201-61599-1.
[26] Tung, B., Hur, M., Medvinsky, A., Medvinsky, S., Wray, J., Trostle,
J., Internet Draft: Public Key Cryptography for Initial Authentication
in Kerberos, http://www.ietf.org/internet-drafts/draft-ietf-cat-
kerberos-pk-init-15.txt (ref. 6.1.2002), IETF, 2002
[27] Hur, M., Tung, B., Ryutov, T., Neuman, C., Medvinsky, A., Tsudik,
G., Sommerfeld, B., Internet Draft: Public Key Cryptography for
Cross-Realm Authentication in Kerberos,
http://www.ietf.org/internet-drafts/draft-ietf-cat-kerberos-pk-cross-08.txt
(ref. 6.1.2002), IETF, 2002
[28] Netegrity Inc., SiteMinder 4.6 Planning Guide, Netegrity Inc., 2001.
[29] Netegrity Inc., SiteMinder 4.6 Deployment Guide, Netegrity Inc., 2001.
[30] Tivoli Inc., Tivoli SecureWay Policy Director Overview White
Paper, Tivoli Inc., 2000.
[31] Carden, P. The New Face of Single Sign-On,
http://www.networkcomputing.com/shared/printArticle?article=nc/10
06/1006f1full.html&pub=nwc (ref. 11.2.2002), Network Computing
Magazine, March 22 1999.
[32] Meller, P. European ministers agree on spam ban, cookie rules,
http://www.computerworld.com/storyba/0,4125,NAV47_STO66411,
00.html (ref. 5.2.2002), IDG News Service, Dec 2001
[33] Computer Associates International, eTrust Single Sign-On,
http://www3.ca.com/Solutions/Product.asp?ID=166 (ref. 16.12.2001),
Computer Associates International, 2001
[34] CyberSafe Inc., TrustBroker,
http://www.cybersafe.com/solutions/trustbroker.html (ref.
16.12.2001), CyberSafe Inc., 2001
[35] DataLynx Inc., Guardian, www.dlxguard.com/gd.html (ref.
16.12.2001), DataLynx Inc., 2001
[36] Entrust Inc., GetAccess, http://www.entrust.com/getaccess/index.htm
(ref. 16.12.2001), Entrust Inc., 2001
[37] Evidian Inc., Access Master SSO,
http://www.evidian.com/accessmaster/about/index.htm (ref.
16.12.2001), Evidian Inc., 2001
[38] Tivoli Inc., SecureWay Policy Director,
http://www.tivoli.com/products/index/secureway_policy_dir/index.ht
ml (ref. 16.12.2001), Tivoli Inc., 2001
[39] Netegrity Inc., SiteMinder,
http://www.netegrity.com/products/index.cfm?leveltwo=SiteMinder
(ref. 16.12.2001), Netegrity Inc., 2001
[40] RSA Security Inc., ClearTrust,
http://www.rsasecurity.com/products/cleartrust/ (ref. 16.12.2001),
RSA Security Inc., 2001
[41] Proginet Inc., Secure Pass Sync,
http://www.proginet.com/products/securpass/securpas.asp (ref.
16.12.2001), Proginet Inc., 2001
[42] Unisys Corp., Single Point Security,
http://www.unisys.com/security/default-02.asp#P48_5006 (ref.
16.12.2001), Unisys Corp., 2000
[43] Netegrity Inc., SiteMinder 4.6 Concepts Guide, Netegrity Inc., 2001.

[44] Netegrity Inc., How SiteMinder Works?,
http://www.netegrity.com/products/index.cfm?leveltwo=SiteMinder
&levelthree=HowItWorks (ref. 4.1.2002), Netegrity Inc., 2001
[45] Netegrity Inc., SiteMinder 4.6 Policy Server Operations Guide,
Netegrity Inc., 2001.
[46] Microsoft Corp., Microsoft .Net Passport,
http://www.passport.com/Consumer/default.asp?lc=1033 (ref.
11.2.2002), Microsoft Corp. 2002
[47] The Liberty Alliance, The Liberty Alliance Project,
http://www.projectliberty.org/ (ref. 1.2.2002), The Liberty Alliance,
2002
