This thesis has been submitted for official examination for a Master of Science
degree in electrical engineering on February 25, 2002. Espoo, Finland.
Supervisor __________________________________________
Professor Teemupekka Virtanen
Instructor __________________________________________
Kari Lehtinen MSc
HELSINKI UNIVERSITY OF TECHNOLOGY
Department of Electrical and Communications Engineering
Supervisor __________________________________________
Professor Teemupekka Virtanen
Instructor __________________________________________
Kari Lehtinen, MSc
HELSINKI UNIVERSITY OF TECHNOLOGY
ABSTRACT OF MASTER’S THESIS
Name of the Thesis: Smart card usage for authentication in web single sign-on
systems
The aim of this study is to select the best access control and administration solution for protecting the extranet of the Elisa Research Centre, based on a requirement specification drawn up by the author. All candidate products are required to meet the basic requirements for both single sign-on and smart card support.
Both the tested products and many others were able to satisfy the single sign-on requirement, but only one of the tested products successfully fulfilled the basic requirement of using FINEID smart cards for strong authentication.
Keywords:
Authentication, single sign-on, security, smart card, certificate, X.509
TEKNILLINEN KORKEAKOULU
ABSTRACT OF MASTER’S THESIS
...single sign-on systems
Keywords:
Authentication, single sign-on, security, smart card, certificate, X.509
FOREWORD
This thesis was carried out at the Elisa Communications Corporation’s Research
Centre.
I would like to thank my fiancée Katri Talja and my mother Pirkko for their tolerance and encouragement, which have enabled me to complete this thesis. I would also like to extend my thanks to my father, Darryl Causton, for his suggestions and corrections, which considerably improved the final manuscript.
__________________________________________
Raymond Causton
TABLE OF CONTENTS
Abstract................................................................................................................. iii
Tiivistelmä ............................................................................................................ iv
Foreword.................................................................................................................v
List Of Figures...................................................................................................... ix
1 Introduction .......................................................................................................1
1.1 Background and objectives ..........................................................................1
1.2 Problem statement........................................................................................2
1.3 Thesis organisation.......................................................................................3
1.4 Definition of central terms and concepts .....................................................4
2 Background Infrastructure ..............................................................................7
2.1 Cryptography ...............................................................................................7
2.1.1 Hashing and message authentication .......................................................7
2.1.2 Symmetric cryptography..........................................................................9
2.1.3 Asymmetric cryptography .....................................................................10
2.1.4 Public-key digital signatures..................................................................11
2.1.5 Key exchange algorithms.......................................................................12
2.2 Public-key infrastructure............................................................................12
2.2.1 Smart cards ............................................................................................13
2.2.2 The X.509v3 certificate .........................................................................15
2.2.3 The LDAP directory ..............................................................................22
2.2.4 PKI structure ..........................................................................................23
2.2.5 Trust models ..........................................................................................25
2.2.6 Cross-certification..................................................................................26
3 Authentication Protocols And Methods ........................................................28
3.1 User ID / password authentication .............................................................28
3.1.1 Basic authentication ...............................................................................28
3.1.2 Digest authentication .............................................................................29
3.1.3 One-time passwords...............................................................................30
3.2 Biometric authentication ............................................................................32
3.3 Symmetric key based cryptographic authentication ..................................33
3.3.1 ISO/IEC 9798-2 timestamp based unilateral authentication..................33
3.3.2 ISO/IEC 9798-2 nonce based mutual authentication.............................34
3.4 Public-key certificate based cryptographic authentication ........................35
3.4.1 Unilateral authentication protocols ........................................................35
3.4.2 Mutual authentication protocols ............................................................36
3.5 Network authentication systems ................................................................39
3.5.1 Kerberos system architecture.................................................................40
3.5.2 Operation of the Kerberos environment ................................................41
3.5.3 Pitfalls of Kerberos V ............................................................................43
3.5.4 Public-key cryptography extensions to Kerberos V ..............................43
4 Single Sign-On Architecture ..........................................................................45
4.1 Different single sign-on architectures ........................................................45
4.1.1 Native plug-in SSO agent ......................................................................46
4.1.2 Helper application agent ........................................................................47
4.1.3 Reverse proxy architecture ....................................................................47
4.2 Reduced sign-on architectures ...................................................................48
4.3 Single sign-on on the web ..........................................................................49
4.3.1 Credential passing methods ...................................................................50
4.3.2 Simple WebSSO scenario......................................................................51
4.3.3 Rejection points .....................................................................................52
4.4 Generalised single sign-on model ..............................................................52
5 Risk-Analysis Based Requirement Specification ..........................................54
5.1 Raw requirements.......................................................................................55
5.2 Confidentiality and integrity ......................................................................55
5.2.1 Encrypted communication paths............................................................56
5.2.2 Secure key management ........................................................................56
5.2.3 Cryptographic access tickets..................................................................56
5.2.4 Alternative method of ticketing .............................................................56
5.2.5 Pluggable authentication method support..............................................57
5.2.6 Strong authentication support ................................................................57
5.2.7 Transaction non-repudiation ..................................................................57
5.2.8 Fine grained access policy enforcement ................................................58
5.3 Availability.................................................................................................58
5.3.1 Transaction atomicity ............................................................................58
5.3.2 Component redundancy .........................................................................59
5.3.3 Disaster recovery ...................................................................................59
5.3.4 Incremental scalability...........................................................................59
5.3.5 Load balancing features .........................................................................60
5.3.6 Multi-platform availability ....................................................................60
5.4 Accountability and audit ............................................................................60
5.4.1 Centralised management framework integration ...................................61
5.4.2 Delegated administration .......................................................................61
5.5 Other features .............................................................................................62
5.5.1 External application server support .......................................................62
5.5.2 Future support for global authenticators ................................................62
5.5.3 Remote administrations .........................................................................63
5.5.4 Graphical UI for administration.............................................................63
5.5.5 Database locations and supported formats.............................................63
5.6 Summary of requirements ..........................................................................64
6 Commercial Single Sign-On Product Survey...................................65
6.1 Computer Associates International: eTrust Single Sign-On ......................65
6.2 CyberSafe: TrustBroker .............................................................................65
6.3 DataLynx: Guardian...................................................................................66
6.4 Entrust: GetAccess .....................................................................................66
6.5 Evidian: AccessMaster SSO ......................................................................66
6.6 Tivoli: SecureWay Policy Director............................................................66
6.7 Netegrity: SiteMinder.................................................................................66
6.8 RSA: ClearTrust.........................................................................................67
6.9 Proginet: SecurPass Sync...........................................................................67
6.10 Unisys: Single Point Security..............................................................67
6.11 Feature summary of surveyed products...............................................67
6.12 Selection process of the products to be tested.....................................68
7 Evaluation Of The Selected Product .............................................................69
7.1 Evaluation background...............................................................................69
7.2 Netegrity SiteMinder 4.51..........................................................................69
7.2.1 Supported platforms...............................................................................70
7.2.2 SiteMinder’s functional components ......................................................70
7.2.3 SiteMinder’s architecture.......................................................................72
7.3 The SiteMinder test bench .........................................................................75
7.3.1 Structure of the sites to be protected......................................................77
7.3.2 The SiteMinder graphical administration user interface........................79
7.3.3 Product evaluation to requirement specification....................................83
7.3.4 Evaluation results in table format ..........................................................87
7.3.5 Problems encountered in testing ............................................................88
8 Conclusions ......................................................................................................90
8.1 Evaluation results .......................................................................................90
8.2 Future trends ..............................................................................................92
Bibliography .........................................................................................................94
LIST OF FIGURES
Figure 7.1 Netegrity SiteMinder components and interactions .............................71
Figure 7.2 The SiteMinder architecture overview .................................................72
Figure 7.3 SiteMinder X.509 client authentication process...................................75
Figure 7.4 SiteMinder test setup ............................................................................76
Figure 7.5 The public jump page from netra3.rc.elisa.fi........................................78
Figure 7.6 SiteMinder administration console front page......................................79
Figure 7.7 The SiteMinder administration console................................................79
Figure 7.8 The agent configuration menu..............................................................80
Figure 7.9 The user directories ..............................................................................80
Figure 7.10 Directory configuration ......................................................................81
Figure 7.11 Realms in the S/SSO policy domain...................................................81
Figure 7.12 Rules for the ptiaa-realm ....................................................................82
Figure 7.13 SiteMinder policy setup dialog...........................................................82
Figure 7.14 Certificate mapping in SiteMinder .....................................................83
Figure 8.1 Suggested S/SSO architecture ..............................................................91
LIST OF TABLES
ABBREVIATIONS AND ACRONYMS
AAA Authentication, Authorisation and Auditing
ACF/2 IBM Access Control Facility 2
ACL Access Control List
AES Advanced Encryption Standard
ANSI American National Standards Institute
API Application Program Interface
AS Authentication Server
ASN.1 Abstract Syntax Notation One
ASP Active Server Pages
ATM Automatic Teller Machine
CA Certification Authority
CBC Cipher Block Chaining
CCITT Consultative Committee for International Telegraph and Telephone
CGI Common Gateway Interface
CRL Certificate Revocation List
CVS Concurrent Versions System
DCE Distributed Computing Environment
DER Distinguished Encoding Rules
DES Data Encryption Standard
DN X.509 Distinguished Name
DSA Digital Signature Algorithm
DSS Digital Signature Standard, see also DSA
EJB Enterprise Java Bean
EMV Europay, Mastercard and Visa
EU European Union
FINEID Finnish Electronic Identity
GSS-API Generic Security Service API
GUI Graphical User Interface
HMAC Hash-based Message Authentication Code
HTTP Hypertext Transfer Protocol
IAA Identification, Authentication and Authorisation
IEC International Electrotechnical Commission
IETF Internet Engineering Task Force
ISO International Organisation for Standardisation
ITSEC Information Technology Security Evaluation Criteria
ITU-T International Telecommunication Union - Telecommunication Standardisation Sector
JSP Java Server Pages
LAN Local Area Network
LDAP Lightweight Directory Access Protocol
MAC Message Authentication Code
MD5 Message Digest 5
MIT Massachusetts Institute of Technology
NIST National Institute of Standards and Technology
ODBC Open Database Connectivity
OS Operating System
OTP One-Time Password
PIN Personal Identification Number
PKC Public-Key Certificate
PKI Public-Key Infrastructure
PKIX Public Key Infrastructure for X.509 Certificates (IETF)
PSE Personal Secure Environment
RA Registration Authority
RACF Resource Access Control Facility
RFC Request For Comments
RIPEMD-160 RIPE Message Digest 160
RPC Remote Procedure Call
RSA Rivest-Shamir-Adleman, a public-key crypto algorithm
RSO Reduced Sign-On
S/SSO Secure Single Sign-On
SC Smart Card
SDK Software Development Kit
SHA Secure Hash Algorithm
SIM Subscriber Identity Module
SSL Secure Sockets Layer
SSO Single Sign-On
TCB Trusted Computing Base
TGS Ticket Granting Server
TGT Ticket Granting Ticket
TLS Transport Layer Security
URI Uniform Resource Identifier
URL Uniform Resource Locator
USB Universal Serial Bus
WAP Wireless Application Protocol
WebSSO Single Sign-On for WWW-Services
VPN Virtual Private Network
WWW World Wide Web
X.500 The ITU Specified Directory
X.509 Certificate Structure Standard
Table 1.1 Abbreviations and acronyms
1 INTRODUCTION
Originally, computer systems were very open in the sense that they were stand-
alone machines with physical access control to decide who may access the data
stored within the computing environment. With the arrival of terminal
connections to mainframes, it became a necessity to develop the multi-user
environment. Simple access control solutions to limit user access to various resources were subsequently developed. A plain user-id/password pair for logging in to systems worked well, because computers were still scarce and there were only a limited number of users. In time, local area networks began to connect the computers to each other, resulting in multiple systems that required authentication. The number of passwords one had to memorise began to grow.
Today, with the Internet and global connectivity to various computing systems
together with the abundance of computers in a typical corporate network, the
number of different user-ids and passwords has grown tremendously. People
using these systems are faced with the difficulty of learning multiple credentials to different systems by heart. To complicate the issue further, every user normally has to change their passwords at least twice a year. The passwords are long and difficult to remember, because well-administered computer systems enforce strict password quality requirements. Passwords are easily misplaced or forgotten as the number of credentials the user has to manage grows. It also consumes precious working time when one has to login to multiple
systems manually, because it requires some seconds to remember and type in the
user-id/password combination on each system when access is needed. It has also
been noticed that login time increases with every failed authentication attempt [1].
A single sign-on infrastructure provides a solution to these two problems.
Single sign-on is an enabling technology for reducing the number of passwords one has to use daily across heterogeneous computing platforms and services. One
may think of single sign-on as a safe that holds the keys to all other resources that
one needs to access. This safe is special, in the sense that one only has to open the
safe with one key and all the other keys will automatically open the locks they
have access to without user intervention.
One has to keep in mind the layered approach of computer security infrastructure, where every successive layer of security is built on top of a lower layer that is assumed to be invulnerable. This is extremely relevant, because nowadays
the complexity of systems has grown so much that it has become impossible to
verify the security of an entire system. Therefore, today all pieces of software are
built somewhat modularly to allow focusing quality assurance on a well-defined
sub-component of the whole one at a time. This leads to the aforementioned,
layered approach.
Considerable time is spent typing in one’s authentication credentials during a typical computing session, where one uses two to three applications that require some form of identification.
Because legacy systems must still be supported, any general SSO infrastructure needs to support and trust these non-native authentication methods and implement an RSO-like system for mapping SSO access certificates to legacy systems’ authentication methods, while at the same time appearing transparent to the end-user. This functionality may be obtained using application proxies, plug-ins or scripting hosts.
The objective of this thesis is to evaluate the current state of strong authentication in single sign-on platforms, and to formulate a recommendation for an AAA architecture for use in the Elisa Research Centre. The target computing environment is heterogeneous, and there is a strong need for centralised extranet user management.
Therefore, the primary issue addressed in this study is to identify the requirements necessary for a successful S/SSO infrastructure, identify and evaluate conforming products, and select the best product for deployment.
In chapter 3, different authentication methods are explained. This includes basic
authentication, both symmetric and asymmetric cryptography based strong
authentication methods as well as the most widespread network authentication
model, Kerberos.
In chapter 4, the general single sign-on agent architectures are discussed and the
distinction between web- and legacy single sign-on and reduced sign-on are
explained with an emphasis on the possibilities the web model has to offer to
facilitate strong authentication and access ticket operations.
In chapter 7, the best product and its test set-up are described in detail, and its compliance with the requirement criteria from chapter 5 is evaluated.
Finally, in chapter 8, this thesis work is summarised and conclusions are made.
The meanings of some central terms used in this thesis are explained here.
Software architecture
The architecture of a software system is the set of interfaces through which its
functions are accessed, and the set of protocols with which it communicates with
other systems. [3, p.5]
Platform
The term is used to mean the service or computing environment: the collection of computing resources that runs the single sign-on services, including the host operating system.
Protected service
A service whose resources are protected by the secure single sign-on infrastructure.
Client
A client is a program that establishes connections for the purpose of sending requests. [4]
Client environment
This is the computing platform of the person requesting access to protected
resources from the authentication and single sign-on infrastructure. This
includes all applications and devices that are required to access the protected
services.
Server
A server is an application program that accepts connections in order to service
requests by sending back responses to the clients. Any given program may be
capable of being both a client and a server; in this thesis, the use of these terms
refers only to the role being performed by the program for a particular
connection, rather than to the program's capabilities in general. Likewise, any
server may act as an origin server, proxy, gateway, or tunnel, switching
behaviour depending on the nature of each request. [4]
Proxy
A proxy is an intermediary program that acts as both a server and a client for
making requests on behalf of other clients. Requests are serviced internally or by
passing them on, with possible translation, to other servers. A proxy MUST
implement both the client and server requirements of this specification. A
"transparent proxy" is a proxy that does not modify the request or response
beyond what is required for proxy authentication and identification. A "non-
transparent proxy" is a proxy that modifies the request or response in order to
provide some added service to the user agent, such as group annotation services, media type transformation, protocol reduction, or anonymity filtering. Except
where either transparent or non-transparent behaviour is explicitly stated, the
HTTP proxy requirements apply to both types of proxies. [4]
User agent
The user agent is the client which initiates a request. These are often browsers, editors, spiders (web-traversing robots), or other end-user tools. [4]
Single sign-on
SSO is the concept of using a single credential to gain access to all computing
resources both locally and on the network. The used credential may include the
use of passwords, tokens, certificates and other authentication methods for initial
authentication.
Reduced sign-on
RSO is the concept of reducing the burden of signing onto multiple systems with
different credentials. This, however, is not single sign-on, as typically the user
has to have multiple credentials even though he is using a reduced sign-on
environment, which controls those credentials in everyday use.
Authentication
Authentication is the process of comparing an electronically stored set of identification data, supposedly unique to a given user, with the same data that the user inputs as their unique identifier. If the comparison succeeds, the user is authenticated and can then be granted access rights (i.e. given authorisation) appropriate to that user. [4]
Authorisation
A generally accepted definition of Authorisation is "the granting of access rights
to a subject (for example a user or a program)." [5, p.1]
Audit
Audit in this paper means the process of collecting transaction data to log files for later analysis in case of a system malfunction, breach or other unusual circumstance. Detailed data on the system’s usage history may be required for later analysis.
2 BACKGROUND INFRASTRUCTURE
In this chapter, technologies that play a significant role in building and operating a
single sign-on infrastructure and certificate based authentication systems are
described. These include short introductions to cryptography, the public-key
infrastructure, and its subcomponents. The content of this chapter strives to build
on each previous topic. The relevant cryptographic methods are described first, because these terms will be used throughout the entire study. Next, building on this general understanding of cryptography, the public-key infrastructure and its
components are described. PKI is essential for smart card authentication to
function, because certificates would not exist without PKI.
2.1 Cryptography
2.1.1 Hashing and message authentication
Hash functions are used for generating unique “fingerprints” of an arbitrary amount of data. This fingerprint is then utilised in digital signatures as the representative of the actual document that is signed. These fingerprints are also used for integrity checking purposes in the form of message authentication.
[Figure: a hash function condenses an arbitrary document into a short, fixed-length fingerprint, HASH(Data) = FF:2A:45:12:DE:12]
Hashes are typically used for storing passwords, message authentication and
verifying the signatures of electronic documents. Three typical hash algorithms
used are Secure Hash Algorithm (SHA), Message Digest 5 (MD5) and RIPEMD-
160 [6, p.272-293, 7, p.436-439,442-445].
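As a concrete illustration of such fingerprints, the digests can be computed with Python's standard hashlib module; this is only a sketch, and the sample data is illustrative:

```python
import hashlib

# A hash function condenses an arbitrary amount of data into a short,
# fixed-length "fingerprint" (message digest).
data = b"Lorem ipsum dolor sit amet"

sha1_fp = hashlib.sha1(data).hexdigest()   # SHA family (here SHA-1)
md5_fp = hashlib.md5(data).hexdigest()     # MD5

# The fingerprint length is fixed regardless of the input size:
# SHA-1 always yields 160 bits (40 hex digits), MD5 128 bits (32 hex digits).
print(len(sha1_fp), len(md5_fp))           # 40 32

# Even a tiny change in the input produces a completely different digest,
# which is what makes hashes useful for integrity checking.
assert hashlib.sha1(b"Lorem ipsum dolor sit amet.").hexdigest() != sha1_fp
```

The same one-way property is what allows systems to store password hashes rather than the passwords themselves.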
A hash code can be used to provide message authentication, i.e. integrity checking, by appending the hash code as redundant data to the message M. Using a plain hash of the data is in itself insufficient; therefore, a secret component needs to be added to the hashed message. These hash functions are called keyed hashes or message authentication functions [6, p.243, 7, p.455]. This is displayed in “Figure 2.2 MAC constructed from a hash-function” below.
[Figure 2.2 MAC constructed from a hash-function: the document and a secret password are hashed together, HASH(Data, password) = FF:2A:45:12:DE:12]
This differs from conventional hash functions in that a secret key K is appended to the message M so that MAC = H(M, K). An attacker cannot re-compute the MAC for a modified message M' simply as MAC' = H(M'), because he does not know the secret key K. MAC algorithms can be built from conventional hash functions or from symmetric encryption algorithms with modifications to make them one-way. Popular MAC algorithms are HMAC [7, p.293, 8] and the Data Authentication Algorithm, FIPS PUB 113 [7, p.252].
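The keyed-hash construction MAC = H(M, K) can be sketched with Python's standard hmac module, which implements HMAC; the key and messages below are illustrative:

```python
import hashlib
import hmac

key = b"secret key K"   # shared secret, never transmitted with the message
message = b"Transfer 100 EUR to account 12345"

# MAC = H(M, K): a keyed hash over the message.
mac = hmac.new(key, message, hashlib.md5).hexdigest()

# The receiver, who also knows K, recomputes the MAC and compares.
expected = hmac.new(key, message, hashlib.md5).hexdigest()
assert hmac.compare_digest(mac, expected)

# An attacker who modifies M cannot produce a valid MAC' = H(M', K)
# without knowing K; the MAC over the tampered message will not match.
tampered = b"Transfer 900 EUR to account 12345"
assert hmac.new(key, tampered, hashlib.md5).hexdigest() != mac
```

Note that HMAC wraps the hash in a specific nested construction rather than naively concatenating key and message, precisely to avoid weaknesses of the plain H(M, K) form.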
With symmetric cryptography, the cryptographic function uses the same key for
encryption and decryption as visualised in “Figure 2.3 Symmetric encryption
process” and thus it is said to be symmetric. Symmetric cryptography is
sometimes called shared-secret or secret key cryptography because the
encryption/decryption key has to be shared between all parties authorised to
access the enciphered data.
[Figure 2.3 Symmetric encryption process: the sender turns the plain text into cipher text and the receiver recovers the plain text, both applying the same symmetric cryptographic algorithm with the same shared password]
The most common symmetric algorithms in use today are the Data Encryption Standard (DES) and 3DES, a variant of DES with a triple-length encryption key. More recent symmetric encryption algorithms include IDEA, Blowfish, Twofish, Rijndael, etc. Rijndael won the US National Institute of Standards and Technology’s contest to become the Advanced Encryption Standard [9]. All of these newer algorithms use longer encryption keys than DES, which makes them much harder to compromise, provided no vulnerabilities are uncovered in the algorithms themselves.
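The defining property of symmetric cryptography, one key for both directions, can be sketched in Python. DES and AES are not in the standard library, so the sketch below uses a toy stream cipher (a SHA-256 keystream in counter mode, XORed with the data); this is an illustration of the symmetric property only, not a vetted algorithm:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream from the shared key (counter mode).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def crypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream: the very same function both encrypts and
    # decrypts, which is exactly the symmetric property described above.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"shared secret"
cipher_text = crypt(key, b"Lorem ipsum dolor sit amet")
assert crypt(key, cipher_text) == b"Lorem ipsum dolor sit amet"
```

All parties authorised to read the data must hold the same `key`, which is the key-distribution problem that public-key cryptography later addresses.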
[Figure: the asymmetric encryption process. The sender encrypts the data with the receiver’s public key, EPKB(data); the receiver, after entering a PIN-code, decrypts the cipher text with the corresponding private key, DPrKB(cipher text)]
The main difference is that anyone can now send you securely encrypted material that can be decrypted only with your private key; the public key can only encrypt the data, not decrypt it.
These two keys are connected to each other mathematically in such a way that one
cannot deduce the other without knowledge of the original generation prime
numbers used to create the key-pair. The RSA algorithm is the most popular and
widely used public-key algorithm on the market, named after its inventors Ron
Rivest, Adi Shamir and Leonard Adleman. [6, p.173, 7, p.467]
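The asymmetry can be demonstrated with textbook RSA (the tiny primes p = 61 and q = 53 are chosen for readability; real keys use primes hundreds of digits long, so this is an illustration only):

```python
# Toy RSA key generation with textbook-sized primes.
p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent, the modular inverse of e

message = 65               # a message encoded as a number smaller than n

cipher = pow(message, e, n)   # anyone can encrypt with the public (e, n)
plain = pow(cipher, d, n)     # only the holder of d can decrypt
assert plain == message
```

One cannot deduce `d` from `(e, n)` without factoring `n` back into `p` and `q`, which is what makes the public key safe to publish.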
The digital signature algorithm, DSA, was developed by the NSA and became the digital signature standard, DSS, of NIST in 1994. One of the main reasons DSA was chosen over the de facto standard RSA was that DSA was royalty-free, whereas RSA carried patents and royalty requirements [7, p.486]. It was originally thought that DSA could only generate digital signatures and would therefore be exempt from export restrictions; this assumption was shown to be false in some cases, as demonstrated in the book Applied Cryptography [7, p.490-491]. A digital signature is made by encrypting the hash of the document to be signed with the signer’s secret key and appending the resulting value to the document, as shown in “Figure 2.5 Digital signature creation and verification process” below. The signature can be verified by calculating the same hash function of the document and comparing the result with the signed hash value after decrypting it with the signer’s public key.
[Figure 2.5 Digital signature creation and verification process: the plain text is hashed (HASH = D3:E4:GF:23:AC:FF), the hash is signed with the signer’s secret key (SIGN = ...) and the signature is appended to produce the signed data]
The RSA algorithm can also be used for digital signatures, and nowadays it may be used without royalties, so there are few reasons left to use DSA, whose signature verification is slower than RSA’s [7, p.483-486].
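The sign-then-verify flow described above can be sketched with the same textbook-sized RSA parameters (SHA-256 hashes the document, and the hash is reduced modulo the toy modulus; real signature schemes use far larger keys and padding such as PKCS #1, so this is only an illustration of the principle):

```python
import hashlib

# Toy RSA key pair (p = 61, q = 53); for illustration only.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)

document = b"I owe Bob 100 euros."

# Sign: hash the document, then "encrypt" the hash with the secret key d.
h = int.from_bytes(hashlib.sha256(document).digest(), "big") % n
signature = pow(h, d, n)

# Verify: "decrypt" the signature with the public key e and compare it
# with a freshly computed hash of the document.
assert pow(signature, e, n) == h
```

A tampered document hashes to a different value, so the comparison fails and the forgery is detected.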
With key exchange algorithms, one can negotiate a symmetric encryption key for
bulk data encryption in such a way that an eavesdropper cannot deduce the key
from public information transmitted over the network in the key exchange
negotiation. A very successful key exchange algorithm is the Diffie-Hellman key
exchange algorithm [6, p.190, 7, p.513]. There also exist algorithms that
implement this with public-key encryption utilising certificates [10, p.38].
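The Diffie-Hellman exchange can be followed step by step with textbook parameters (p = 23, g = 5; real deployments use primes of 2048 bits or more and random secret exponents, so this sketch is illustrative only):

```python
# Public parameters, known to everyone including the eavesdropper.
p, g = 23, 5

a = 6     # A's secret exponent (would be chosen at random in practice)
b = 15    # B's secret exponent

A = pow(g, a, p)   # A transmits g^a mod p over the network
B = pow(g, b, p)   # B transmits g^b mod p over the network

# Each side combines its own secret with the other's public value and
# arrives at the same shared symmetric key.
key_A = pow(B, a, p)
key_B = pow(A, b, p)
assert key_A == key_B
```

The eavesdropper sees only `p`, `g`, `A` and `B`; recovering the key from these requires solving the discrete logarithm problem.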
This chapter is segmented in the following logical order: first, the smart card i.e.
the secure container for the secret key issued by the CA is described. Then the
X.509 certificate and revocation list profiles are explained in order to provide the reader with a general understanding of certificates. Once the certificate and the
immediate storage device have been described, it is necessary to expand the
discussion to include the directory. This is used for publishing the public part of a
certificate, which enables other people to use the cryptographic services provided
by the RSA public-key stored in the certificate. In addition, the revocation lists are
published in similar public directories for on-line determination of the validity of
a certificate. When combined, these technical components form a PKI framework.
The chapter also includes a brief discussion of the administrative entities of a PKI
such as the certificate and registration authorities as well as inter-domain trust
relationship management in the PKI models.
A smart card is the size of a credit card or a smaller SIM card, and it is equipped
with an embedded microcircuit as shown below in “Figure 2.6 Depiction of a
smart card [13]”, which contains memory and a microprocessor together with an
operating system for memory control. The smart card is a secure storage location
for secret information. [12] This is called the personal secure environment, PSE.
The card can store data such as personal information, money, or other information
whose alteration or disclosure might be risky. The card may also store encryption
keys, which the card user can use for tasks such as key exchange, network
identification or digital signature. [12, 14]
The smart card can also be used for decoding encrypted messages and for digital
signatures. The card makes it possible to avoid reading the key into the computer
for software encryption, which means that the risk of disclosure is considerably
reduced. [12, 14]
The security of the smart card is based on both logical and physical security. Logical security means that the card does not leak out information that should be kept secret. The logical security of the smart card is controlled by a secure operating system. [12]
Physical security is related to the structure of the smart card chip. The aim is to
make an unauthorised examination of the chip impossible, or at least very
expensive. Address and data lines that logically belong together are intermingled
in different layers. Phantom transistors are embedded in the circuitry to make
examination more difficult. Upper and lower limits for clock frequency hinder the
examination of the circuitry. [12]
Some tokens might have physical self-destruction mechanisms, but these are
devices mainly used by armies and intelligence services [2].
From a usability aspect, the most interesting properties of a smart card are that it is
• personal
• portable
• easy for the layman to understand and use.
It is worth making the distinction between a smart card and a memory card. While
they are similar in appearance, a memory card is only suitable for disposable data
storage. Memory cards are in general use as phone cards or other disposable
means of payment. [12]
A drawback to memory cards is their weak security. Unlike smart cards, they have no operating system that controls memory usage; access to a memory card may be protected by a PIN code, but this protection is not as strong and versatile as on a smart card. [12]
In this thesis, smart cards are utilised to provide a secure, portable and strong
authentication method, which can be used in a heterogeneous computing
environment.
In this chapter, the IETF PKIX working group’s definition of an Internet-operable profile for X.509v3 certificates and CRLs is introduced, based on RFC 2459, the “Internet X.509 Public Key Infrastructure Certificate and CRL Profile”.
The X.509 certificate format was first defined in 1988; this original format is called the version 1 (v1) format. When X.500 was revised in 1993, two more
fields were added, resulting in the version 2 (v2) format. ISO/IEC/ITU and ANSI
X9 developed the X.509 version 3 (v3) certificate format. The v3 format extends
the v2 format by adding a provision for additional extension fields. Particular
extension field types may be specified in standards or may be defined and
registered by any organisation or community. In June 1996, standardisation of the
basic v3 format was completed. [16] As of May 3, 2001 X.509 version 4 (v4) has
been available as a draft specification from ISO/IEC/ITU.
As a concrete example, a certificate obeying this profile issued by the Elisa
Communications Corp. test CA is presented here:
In “Figure 2.7 The beginning of a certificate, DN shown” the Subject attribute, also known as the Distinguished Name or DN, is highlighted. It is noteworthy that the DN attribute is actually an aggregate of multiple attributes, as can be seen in the picture. Namely, it consists of the attributes CN, G, SN, Serial Number, I, OU, O and C.
These stand for
• CN = Common Name
• G = Given Name
• SN = Surname
• Serial Number = a unique serial number within the CA that issued this certificate
• I = Initial
• OU = Organisation Unit
• O = Organisation
• C = Country.
There could be other attributes, and the contents of these fields differ from one subject and CA to another.
Figure 2.8 The middle section of the same certificate, CRL Distribution Point
shown
In “Figure 2.8 The middle section of the same certificate, CRL Distribution Point shown” the CRL distribution point is highlighted, because it enables the client software to check a certificate’s validity when one attempts to use the certificate. If the CRL check is omitted or impossible, the trust model of a PKI is broken and one cannot trust the certificate.
Figure 2.9 The end section of a certificate, Key Usage attributes shown
Lastly, the KeyUsage attribute is highlighted in “Figure 2.9 The end section of a certificate, Key Usage attributes shown” to show the constraints on the usage of this particular certificate. As can be deduced from these constraints, this certificate is used for client authentication, with the digitalSignature, keyAgreement and dataEncipherment usage attributes, as opposed to the second certificate on the example smart card, whose KeyUsage field states NonRepudiation as its constraint. The latter is therefore usable only for digital signature generation. Other possible values of the KeyUsage attribute are listed below in “Table 2.2 Possible values for the KeyUsage attribute [16]”:
2.2.2.4 The X.509v2 certificate revocation list
One goal of this X.509 v2 CRL profile is to foster the creation of an interoperable
and reusable Internet PKI. CRLs may be used in a wide range of applications and
environments covering a broad spectrum of interoperability goals and an even
broader spectrum of operational and assurance requirements. This profile
establishes a common baseline for generic applications requiring broad
interoperability. The profile defines a baseline set of information that can be
expected in every CRL. In addition, the profile defines common locations within
the CRL for frequently used attributes as well as common representations for
these attributes.
The X.509 v2 CRL syntax is as follows in “Table 2.3 ASN.1 encoded X.509v2
CRL syntax [16]”. For signature calculation, the data that is to be signed is ASN.1
DER encoded. ASN.1 DER encoding is a tag, length, and value encoding system
for each element.
As can be seen from this ASN.1 notation of the CRL syntax, a CRL consists of a signed list of the serial numbers of all revoked certificates with corresponding revocation time stamps and, in a version 2 CRL, possibly a reason code.
Below in “Figure 2.10 The FINEID CRL on 5.12.2001” the certificate revocation
list of Dec 5 2001 of the FINEID directory is shown as Microsoft Internet
Explorer presents it.
Figure 2.10 The FINEID CRL on 5.12.2001
The CRL may be stored and distributed via a directory (LDAP, X.500), via a web or FTP server accessible over HTTP or FTP, or via some other mechanism that provides the client with similar look-up capabilities, such as OCSP [16].
With path validation logic, the validity of a certificate is checked against all
possible revocation reasons from multiple sources including CRL-lookups. There
can also be constraints on a certificate imposed by certification authority policies
that can be effective on a certificate from higher up in the certification path.
As long as the certificate and its encryption keys are not bound to a person
biometrically, one can only presume that the person using a certificate is the
person who he claims to be, since anyone can use a certificate if the PIN code has
been compromised and the token or container has been stolen. Such biometric
solutions are becoming available on the market.
The certificates are stored in directories similar to the telephone directories for
easy access. The X.500 series of recommendations defines a very complex
directory protocol-suite and its structure. The LDAP directory was created as a
simpler alternative with support for the central X.500 features.
This imposes a large burden on the directory access protocol since it should be as
universal as possible, while at the same time enabling wide scalability to support
numerous daily enquiries.
The directory itself can consist of various types of databases so long as it supports
the LDAP query protocol. [19] Typical commercial LDAP directories include the
Netscape Directory Server, Oracle and IBM DB2 databases with LDAP front-
ends.
An LDAP directory can be queried with all modern web browsers like the
Microsoft Internet Explorer and Netscape Navigator as well as some LDAP
browsers specially built for this function.
[Figure 2.12 A sample PKI environment: certification authorities governed by a Certification Policy (CP) and Certification Practice Statements (CPS), a hardware token manufacturer, a registration authority (Authority A2-1), LDAP directories publishing the CRL and public keys, and end entities such as persons, servers and a PDA]
As can be seen in the above “Figure 2.12 A sample PKI environment”, there is a single root CA, three subordinate CAs and one RA. Here the root CA would normally be operated by a corporation’s IT administration, while the subordinate CAs would represent the various departments, which take care of local user administration on behalf of the corporation’s data administration.
In particular, for a PKI to be successful all parts have to be reliable and the root certificate has to be in trustworthy hands, since the PKI model is built on trust and on the assumption that the root certificate is non-corruptible.
2.2.5 Trust models
The two primary trust models in use today are the strictly hierarchical model and
the networked model. These models differ from each other in the fundamentals of
the hierarchy structure.
[Figure: a hierarchical PKI trust model with a single root CA above subordinate CAs, a registration authority (Authority A2-1), LDAP directories for the CRL and public keys, and end entities (persons, servers, a PDA)]
2.2.6 Cross-certification
In the strictly hierarchical trust model, there is a single root for all PKI participants and there is only one PKI. While this simplifies matters, it is unfortunately an impractical architecture from a policy perspective. A problem arises when a distributed trust model contains multiple PKIs that do not, by default, trust each other via a common root. To resolve this problem, the concept of cross-certification was developed.
When two PKIs wish to interoperate, they cross-certify each other: the CAs in question digitally sign cross-certificates, which in practice are certificates containing the counterpart’s public keys [6, p.343-344, 20].
When a client from below the cross-certification point tries to verify a certificate
from another PKI domain, it seeks a route to check the validity and trust
relationship by finding a trusted path via the cross-certificate.
[Figure 2.14 Cross-certified PKI domains with multi-path trust: Root Certification Authority A and Root Certification Authority B, each governed by its own Certification Policy (CP) and Certification Practice Statements (CPS), with subordinate CAs, a registration authority (Authority A2-1), LDAP directories (CRL, public keys) and end entities (persons, servers, a PDA); cross-certification arrows connect the two domains]
In “Figure 2.14 Cross-certified PKI domains with multi-path trust” three possible cross-certification paths are illustrated. In the first version, the root CAs cross-certify each other, and subsequently both PKI domains have a trusted path between one another. A somewhat limited variation of the previous case can be seen as the arrow between CA A2 and root CA B. Here all of PKI domain B has a trusted path to all entities registered below CA A2, and vice versa. The most limited version of cross-certification described here is the arrow between CA A3 and CA B1. Now only the entities registered below CA A3 and CA B1 trust each other, and the rest of the PKI domains A and B remain alien to one another.
All of these cross-certification paths may coexist. Usually, though, if the roots cross-certify each other, all subordinate cross-certificates should be deleted for the sake of clarity.
3 AUTHENTICATION PROTOCOLS AND
METHODS
This chapter gives the reader a general understanding of the multiple methods of authentication that exist, some more secure than others. There are some de facto standards in authentication, such as basic authentication, i.e. using a user-entered username and password combination. In this chapter, different authentication methods are described in order of complexity and strength. The most interesting authentication method for this thesis is of course the public-key certificate based authentication scheme described below.
Historically, the use of plain username and password combinations has been the most common method of authentication. Today, it is still the most prevalent authentication method in existence, and its demise is nowhere in sight. It is an outdated, insecure method, but easy to implement, with minimal equipment and software requirements on the user terminals. Due to this and its large legacy base, it will remain in use well into the 21st century.
[Figure: the basic authentication message flow between the terminal, the server and the password database: Send(UID); Prompt(Password); Send(Password); the server looks up the credentials, Req(password, UID), and answers Auth(OK)]
This information is then sent unencrypted over the network to the service for authentication. If the given credentials are correct, the client is given access, i.e. authenticated and authorised to access the service. Once the identity of the client is established, access control decisions can be made as dictated by the access policy of the service.
With digest authentication, the server instead verifies a hash of the password; this verification can be done without sending the password unscrambled, which is the biggest drawback of basic authentication. [22]
[Figure: the digest authentication message flow between the terminal, the server and the password database: Send(UID); Prompt(Password); Send(H(Password)); the server looks up the hashed credentials, Req(H(password), UID), and answers Auth(OK)]
The digest scheme issues challenges using a nonce value. A valid response
contains a checksum of the username, password, a given nonce value, the HTTP
method and the requested URI. This way, the password is never transmitted
unscrambled over the Internet. [22]
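The response checksum can be sketched as follows (the core RFC 2617 computation without the optional qop fields; the username, realm, password and nonce below are invented for illustration):

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, password, nonce, method, uri):
    # RFC 2617, without the optional qop extension:
    ha1 = md5_hex(f"{user}:{realm}:{password}")   # the secret part
    ha2 = md5_hex(f"{method}:{uri}")              # the request part
    return md5_hex(f"{ha1}:{nonce}:{ha2}")        # value sent to the server

# The server, which knows the password, recomputes the same value;
# the password itself never crosses the network.
resp = digest_response("alice", "extranet", "s3cret",
                       "dcd98b7102dd2f0e", "GET", "/index.html")
```

Because the server's nonce is bound into the checksum, a captured response cannot simply be replayed against a fresh challenge.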
This is a special case of basic authentication in which the password changes every time one authenticates to a service, and none of the passwords is reusable. In OTP authentication, the server keeps track of the password sequence so that only the next unused password is valid. This protects the authentication process from replay attacks, in which an eavesdropper who has recorded previous network traffic and discovered the username/password pair attempts to log into the protected service using these credentials.
There are two entities in the operation of the OTP one-time password system. The generator must produce the appropriate one-time password from the user’s secret pass-phrase and from information provided in the challenge from the server. The server must send a challenge, which includes the appropriate generation parameters, to the generator. It must then verify the received one-time password, store it, and check that it corresponds to the sequence number. [23]
The server therefore need not contain any compromising secret information: the seed, sequence number and last used key are all public, non-compromising data, given that the secure hash function used to generate the password sequence is non-invertible.
The OTP system generator passes the user's secret pass-phrase, along with a seed
received from the server as part of the challenge, through multiple iterations of a
secure hash function, which produces a one-time password. After each successful
authentication, the number of secure hash function iterations is reduced by one,
which generates a sequence of unique passwords. The server verifies the one-time
password received from the generator by computing the secure hash function
once, and comparing the result with the previously accepted one-time password.
[23]
The generator on the other hand must be reliable and secure as it contains the
secret generation key with which it computes the required number of hashes to
generate the correct password.
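The hash-chain mechanics described above can be sketched in a few lines of Python (SHA-1 stands in for the hash function here, and the pass-phrase and seed are invented; this is a minimal illustration of the principle, not the full RFC 2289 procedure):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha1(x).digest()

def otp(passphrase: bytes, seed: bytes, n: int) -> bytes:
    # n iterations of the secure hash over pass-phrase + seed.
    value = passphrase + seed
    for _ in range(n):
        value = h(value)
    return value

# Initialisation: the server stores only the 100th hash (public data).
server_stored = otp(b"secret pass", b"ra11278", 100)

# Authentication: the generator computes the 99th hash...
candidate = otp(b"secret pass", b"ra11278", 99)

# ...and the server hashes it once more and compares.
assert h(candidate) == server_stored
# On success the server stores `candidate` for the next round.
```

An eavesdropper who captures `candidate` cannot derive the next valid password, since that would require inverting the hash function.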
In the OTP system, the password is coded as six human-readable words, as shown in “Table 3.1 A typical one-time password sheet”, to encode the 64-bit password into a more easily typed form [23]. The standard dictionary for this encoding is documented in the S/Key RFC 1760. The password may also be encoded in hexadecimal, as presented in “Table 3.2 Alternative hexadecimal OTP-key representations”, or a non-intersecting localised dictionary may be generated on a server as defined in [23], appendix B.
The alternative way of representing the keys in hexadecimal is shown below:
3503785b369cda8b 0x3503785b369cda8b
e5cc a1b8 7c13 096b 0xe5cca1b87c13096b
C7 48 90 F4 27 7B A1 CF 0xc74890f4277ba1cf
In the above hexadecimal representation, the first column illustrates the different acceptable forms, and the right-hand column the correct interpretation of the password. The example illustrates the requirement that white space be ignored entirely [23].
There are a few variants of OTP, like the time-based SecurID™ from RSA Security Inc., a hardware token that generates a new password every 60 seconds, valid only for that period of time. In the SecurID™ scheme, the token and the authentication server are clock-synchronised and seeded with the same secret start value. [24] Once the system has been activated, the stream of “random” numbers the token generates is identical on all similarly primed tokens. Usually, this random number is appended to a static personal secret for added security, so that possession of the token does not in itself permit access to protected resources.
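A time-windowed code generator in this spirit can be sketched as follows (this is not the proprietary SecurID™ algorithm; the SHA-256 derivation and seed below are invented stand-ins to show how clock-synchronised parties compute the same short-lived code):

```python
import hashlib
import time

def token_code(secret: bytes, t: float) -> str:
    # A new 6-digit code for every 60-second window, derived from the seed.
    window = int(t) // 60
    digest = hashlib.sha256(secret + window.to_bytes(8, "big")).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

secret = b"token-seed"     # primed into the token and the auth server
now = time.time()
# Token and server, being clock-synchronised and sharing the seed,
# independently arrive at the same code for the current window.
assert token_code(secret, now) == token_code(secret, now)
```

Appending a static PIN to the code before submission gives the two-factor property described above: something you have (the token) plus something you know.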
If the person can provide a good enough sample of the measured biometric, one that closely matches a previously recorded biometric template, the identity of the authenticated person is taken to be established.
The best-known biometric identification method is the finger scan, while less
common ones include iris scans, voiceprint, facial images and hand geometry
scans.
Biometrics has a very interesting future but it is not widely used today for various
reasons [25]. A possible application of biometrics could be the replacement of
conventional PIN-codes on smart cards with finger scans. This could occur with
the integration of the card reader device and the finger scanner to build a TCB for
smart card and biometric operations.
3.3 Symmetric key based cryptographic authentication
There are two major classes of cryptographic protocols, distinguished by their reliance on synchronised time between the communicating parties’ computers. Timestamp-based protocols rely heavily on synchronised time, and consequently succeed in creating the required security associations with fewer messages than non-synchronised protocols. This is a nice feature, but the price for the simplified protocol is obtaining reliable clock synchronisation between all communicating parties. The second class avoids this problem by using nonces, i.e. random numbers generated by the communicating parties and passed between them in the protocol messages. Both methods enhance resistance against replay attacks, in which an eavesdropper replays old messages. By tracking the freshness of the issued nonces or timestamps, the communicating parties can deduce whether a message is fresh.
[Figure: unilateral authentication with a timestamp and a pre-shared key: after the user enters the key Kab at the personal terminal, the terminal sends EKab(TA, B) to the server]
The key Kab must be pre-shared and the clocks of both terminal A and server B
must be synchronised for this to work. Server B can verify the validity of the
authenticator by verifying the freshness of the timestamp TA and that his own
identifier ‘B’ is in the encrypted message. If both conditions are met, B can be
certain that the message originated from A. [10, p.36]
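The freshness and identifier checks can be sketched as follows (a keyed HMAC authenticator stands in here for the symmetric encryption EKab, and the skew limit is an invented parameter; the structure of the checks is what matters):

```python
import hmac
import hashlib
import time

K_ab = b"pre-shared key"   # shared by terminal A and server B
MAX_SKEW = 120             # seconds of clock drift the server tolerates

def make_authenticator(key: bytes, server_id: str, t: float):
    # The terminal binds the current time TA and the target 'B' together.
    msg = f"{t}:{server_id}".encode()
    return msg, hmac.new(key, msg, hashlib.sha256).digest()

def verify(key: bytes, server_id: str, msg: bytes,
           tag: bytes, now: float) -> bool:
    t_str, sid = msg.decode().split(":", 1)
    fresh = abs(now - float(t_str)) <= MAX_SKEW   # freshness of TA
    right_target = (sid == server_id)             # our identifier 'B'?
    valid = hmac.compare_digest(
        tag, hmac.new(key, msg, hashlib.sha256).digest())
    return fresh and right_target and valid

msg, tag = make_authenticator(K_ab, "B", time.time())
assert verify(K_ab, "B", msg, tag, time.time())
# A replayed authenticator fails once its timestamp has gone stale:
assert not verify(K_ab, "B", msg, tag, time.time() + 3600)
```

Both conditions from the text appear as explicit checks: the timestamp must be fresh, and the server's own identifier must be bound into the message.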
[Figure 3.4 Mutual authentication with nonces and pre-shared key: personal terminal A and server B exchange the messages EKab(NA, NB, B) and EKab(NB, NA)]
In “Figure 3.4 Mutual authentication with nonces and pre-shared key” the authentication process does not rely on synchronised time, but still uses pre-shared symmetric encryption keys. For this to work, server B checks the identity of client A by first checking that the nonce NB matches the one sent in phase 1, and that its identifier ‘B’ is in the encrypted message. Client A establishes server B’s identity by checking that the nonces NA and NB are reversed in phase 3 and match those previously transmitted. [10, p.36]
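The nonce exchange can be sketched as follows (keyed HMAC proofs again stand in for the symmetric encryption EKab; the point of the sketch is how the nonces and identifier are bound together and then reversed):

```python
import hmac
import hashlib
import secrets

K_ab = b"pre-shared key"

def proof(key: bytes, *parts: bytes) -> bytes:
    # A keyed proof over the concatenated protocol fields.
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

# Phase 1: B challenges A with a fresh nonce NB.
N_b = secrets.token_bytes(16)

# Phase 2: A answers with its own nonce NA and a proof binding
# (NA, NB, "B") together, like the encrypted triple in the figure.
N_a = secrets.token_bytes(16)
msg2 = proof(K_ab, N_a, N_b, b"B")

# B recomputes the proof: its nonce and identifier must be bound in.
assert hmac.compare_digest(msg2, proof(K_ab, N_a, N_b, b"B"))

# Phase 3: B proves itself by returning the nonces in reversed order.
msg3 = proof(K_ab, N_b, N_a)
assert hmac.compare_digest(msg3, proof(K_ab, N_b, N_a))
# Both sides now know the peer holds K_ab, and no clocks were needed.
```

Because each run uses freshly generated nonces, a recorded exchange cannot be replayed in a later session.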
Here, three different variations of the PKC-based authentication protocols are described. These are a good representation of PKC-based authentication protocols, progressing from unilateral authentication with and without synchronised time to mutual authentication with and without synchronised clocks. The most interesting of these protocols is described in the “X.509 mutual authentication with nonces” subchapter.
To authenticate the client, the server must decrypt the returned encrypted nonce, DCpk(ECsk(Nonce)), and verify the value of the decryption. The result should be the same nonce value originally sent to the person being authenticated. A nonce is a number that is never used more than once [10, p.36]. This process is crude and does not reflect real implementation details. A more precise version is presented in ISO/IEC 9798-3.
Here client A sends his certificate CA, a timestamp TA, the destination identifier B and a signature over both TA and B. When server B receives this message, the validity of the certificate and of its signature is verified with the given public key. If the signature is valid, the server knows it is communicating with client A, since A is the only entity in possession of the correct secret key. [10, p.37]
[Figure: X.509 mutual authentication with nonces between personal terminal A and server B: after the PIN is entered, B sends NB; A replies with CA(PKA), NA, B, SPKA-1(NA, NB, B); B completes with CB(PKB), A, SPKB-1(NB, NA, A)]
With this method server B sends a nonce NB to client A. Client A attaches his
public-key certificate CA to the response message, with a new nonce NA and the
destination identifier B, and digitally signs the triplet (NA, NB, B). This triplet
ensures that server B can check the integrity of the message, and that A is capable
of using the associated private-key. On receipt, B will check the validity of A’s
certificate and subsequently verify the signature SPKa(NA,NB,B) with the public-
key PKA.
To complete the mutual authentication server B must now send its certificate CB,
the destination identifier ‘A’ and the signed triplet (NB, NA, A). Client A now
checks the validity of B’s certificate and verifies the signature SPKb(NB,NA,A).
Client A sends a message containing its certificate CA, a timestamp TA, a nonce NA, the destination identifier B, some data such as a session key, and a signature over all these elements. Server B must verify the certificate’s validity, the timestamp’s freshness and the signature. If all these agree, A is authenticated to B.
If the data field is used to transmit an encrypted session key EPKB(kAB), its validity can be confirmed by both parties by decrypting it with their respective private keys and validating it against the signatures. [10, p.37]
[Figure: the nonce-based variant of the protocol between personal terminal A and server B; among the messages shown is NB, B, SPKA-1(NB, B)]
This differs from the timestamp-utilising version by requiring an extra step in exchange for not relying on timestamps. Otherwise, it operates the same as above, but instead of checking the freshness of the timestamps, A must check that the nonce NA received in phase 2 is the same as that transmitted to B in phase 1, and similarly B must check the NB in phase 3 against the one transmitted in phase 2. [10, p.38]
The stated goals of the Kerberos system were 1) to allow a user to sign on once to the network and 2) to protect the authentication information, making it more difficult for an impostor to impersonate a legitimate user. [10, p.80]
The first published version of Kerberos was version 4, but it is now considered insecure, so only the Kerberos v5 system is described here. Kerberos is a trusted third-party authentication service [7, p.566]. Notice that Kerberos does not attempt to implement the authorisation or auditing functions.
Kerberos V's most notable use today is within Windows 2000 as its domain
authentication method of choice. Microsoft has added some Windows-specific
extensions to the Kerberos V authentication protocol, along with the two missing
heads of Cerberus: authorisation and auditing. [10, p.363-366, p.427] Microsoft
also supports the public-key initial authentication extension to Kerberos V, which
is currently available as an Internet Draft from the IETF.
The Kerberos system makes some assumptions about the operating environment.
These are
• Synchronised, reliable clocks
• The client computer is trusted by its user
• The security server is always on-line
• The servers are stateless
• Because of the RSA patent only symmetric cryptography is used
• The time that the user client’s password is available must be minimised
[10, p.80-81]
The Kerberos functional components are depicted below in “Figure 3.10 The
Kerberos general architecture [10, p.81]”:
3.5.1.1 The client computer
Client computers are regarded as insecure, because the user has full control over
them. The client is usually a general-purpose computer with a Kerberos-enabled
OS and applications installed. [10, p.81]
[Figure: the Kerberos V message exchange between the client (C), the Authentication Server (AS), the Ticket Granting Server (TGS) and the Application Server (S)]
The AS replies with the KRB-AS-REP (2) -message and provides the client with a
session key package kC-TGS, and a ticket granting ticket for accessing the TGS.
After receipt of the TGT and the client-TGS session key, the client may proceed
to request a ticket for accessing server S from the TGS. This is achieved by
sending the KRB-TGS-REQ (3) –message with the appropriate nonce,
timestamps, TGT and authenticator to the TGS.
The client receives the session key for the service S in the KRB-TGS-REP (4) –
message. With this key, the client is able to form an authenticator for the KRB-
AP-REQ (5) –message with the ticket obtained previously. It may also transmit a
sequence number SNC-S for use with the KRB-SAFE and KRB-PRIV messages.
After the conclusion of the initial authentication, client C and application server S
may negotiate confidentiality and integrity protection for their communication
using the previously mentioned KRB-SAFE and KRB-PRIV messages. [6, p.323-
340, 10, p.86-90]
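The ticket exchange described above can be modelled end to end. The sketch below is a toy simulation under stated assumptions: the "encryption" is an HMAC-tagged XOR keystream standing in for Kerberos' real symmetric cipher, field names and key derivation are invented, and nonces, lifetimes and options are omitted.

```python
import hashlib
import hmac
import json
import os
import time

def _stream(key: bytes, n: int) -> bytes:
    s = hashlib.sha256(key).digest()
    while len(s) < n:
        s += hashlib.sha256(s).digest()
    return s[:n]

def enc(key: bytes, obj: dict) -> bytes:
    # Toy authenticated encryption; illustration only, not a real cipher.
    data = json.dumps(obj, sort_keys=True).encode()
    ct = bytes(a ^ b for a, b in zip(data, _stream(key, len(data))))
    return hmac.new(key, ct, hashlib.sha256).digest() + ct

def dec(key: bytes, blob: bytes) -> dict:
    tag, ct = blob[:32], blob[32:]
    assert hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest())
    return json.loads(bytes(a ^ b for a, b in zip(ct, _stream(key, len(ct)))))

K_C = hashlib.sha256(b"user-password").digest()  # client's long-term key
K_TGS, K_S = os.urandom(32), os.urandom(32)      # keys the KDC shares with TGS/S

# KRB-AS-REP (2): client-TGS session key, plus a TGT sealed under K_TGS
k_c_tgs = os.urandom(32).hex()
tgt = enc(K_TGS, {"client": "C", "k_c_tgs": k_c_tgs})
as_rep = enc(K_C, {"k_c_tgs": k_c_tgs})
k1 = bytes.fromhex(dec(K_C, as_rep)["k_c_tgs"])  # client opens the reply

# KRB-TGS-REQ (3): TGT plus an authenticator encrypted under k_c_tgs
authenticator = enc(k1, {"client": "C", "ts": time.time()})
k_tgs_view = bytes.fromhex(dec(K_TGS, tgt)["k_c_tgs"])
assert dec(k_tgs_view, authenticator)["client"] == "C"

# KRB-TGS-REP (4): client-S session key, plus a ticket sealed under K_S
k_c_s = os.urandom(32).hex()
tgs_rep = enc(k1, {"k_c_s": k_c_s})
ticket_s = enc(K_S, {"client": "C", "k_c_s": k_c_s})

# KRB-AP-REQ (5): the client proves knowledge of k_c_s to server S
k2 = bytes.fromhex(dec(k1, tgs_rep)["k_c_s"])
ap_auth = enc(k2, {"client": "C", "ts": time.time()})
assert dec(bytes.fromhex(dec(K_S, ticket_s)["k_c_s"]), ap_auth)["client"] == "C"
```

Note how the client's password-derived key K_C is used only once, to open the AS reply; all later steps use the short-lived session keys, which is how Kerberos minimises password exposure.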
There are a few shortcomings in the Kerberos V system. Some of them are listed
below [10, p.89-90]
• Kerberos V is vulnerable to password guessing attacks
• Kerberos relies on client security
• The confidentiality and integrity of Kerberos implementations are
compromised by a known attack, if both the DES-CBC and DES-MAC
modes are used simultaneously.
• Kerberos is still based on symmetric cryptography. Therefore, it does not
scale well to large inter-realm environments.
• Kerberos does not provide non-repudiation services
• Kerberos lacks access control features completely
The IETF’s Kerberos working group has proposed an extension to the Kerberos V
authentication service to support the use of public-key certificates in user
authentication in the Internet draft “draft-ietf-cat-kerberos-pk-init-15” and inter-
realm authentication in the Internet draft “draft-ietf-cat-kerberos-pk-cross-08”.
3.5.4.2 The PK Cross-realm extension
4 SINGLE SIGN-ON ARCHITECTURE
Single sign-on is a paradigm in which, by utilising authentication, authorisation
and auditing functions as well as protocols for the dissemination of access control
information, the client is provided with universal identification through a single
authentication event.
This chapter attempts to clarify the different ways an S/SSO solution can be built
accompanied by a discussion of different approaches to the infrastructure when
operating in a pure WebSSO environment vs. a more traditional S/SSO
environment of heterogeneous legacy computing resources.
The agent is situated near the protected service acting as a gatekeeper, consulting
the server for authentication and authorisation decisions as well as supplying it
with audit-data. The agent is a small piece of code that effectively can say ‘yes’ or
‘no’ to resource requests based on the authorisation information provided by the
server, and forward the acceptable ones to the service and the responses back to
the client.
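The agent's yes/no role can be reduced to a few lines of code. The sketch below assumes a hypothetical per-user grant table supplied by the server; real products push or pull this authorisation data from the policy server.

```python
# Hypothetical authorisation data the server would hand to the agent.
GRANTS = {
    "alice": {"/reports/", "/wiki/"},
    "bob": {"/wiki/"},
}

def agent_decision(user: str, path: str) -> bool:
    """Return True to forward the request to the service, False to reject."""
    allowed = GRANTS.get(user, set())
    return any(path.startswith(prefix) for prefix in allowed)

print(agent_decision("alice", "/reports/q3.html"))  # True
print(agent_decision("bob", "/reports/q3.html"))    # False
```

Everything beyond this decision, such as forwarding accepted requests and returning responses, is plumbing around this single predicate.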
The heart of the system is the server, which provides the back-end processing
capabilities with support for different authentication methods, user databases,
policy evaluation capabilities and audit logging and processing functions.
The three most common ways of situating the agent will be explained shortly.
In this model, the protected resource, i.e. the software service, is modern enough
to natively support authentication method plug-ins. This enables the smooth
addition of the service into the SSO infrastructure. The concept is visualised in
“Figure 4.1 Native plug-in architecture”:
If the used plug-in is an SSO-enabled one, it will consult a policy database for the
authenticated client's access rights, and act as a policy enforcer between the
protected service and the client. It also has to be able to propagate the
authorisation credentials from one service to another in order to enable SSO
functionality. [28]
What distinguishes this from the next method is the tight binding to a specific
application: the plug-in integrates directly into the application's internal
authentication and authorisation logic and API.
This differs from the previous agent with regards to the location of the agent in
the system. In this instance, the agent is situated in the same physical computer,
but it is not as indigenous to the application as a plug-in. It has to take over the
application's data connection paths and redirect them through itself to be able to
intercept the data connections before they reach the application. After
interception, the agent operates in much the same way as the plug-in agent does,
with the exception of how the authentication data is transmitted to the application.
This difference is depicted below in “Figure 4.2 External agent architecture”:
The most notable difference is the requirement for the ability to pass
authentication information to the application. This approach is usually used with
applications that are too old to support plug-ins but are hosted on a common
enough OS platform that an agent is available for it. Usually, the external
authentication data passing mechanism uses the application's native basic
authentication facility, with the agent acting on behalf of the client. [29]
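Passing credentials through the native basic authentication facility boils down to the agent attaching an HTTP Basic Authorization header on the client's behalf. A minimal sketch, with invented credentials:

```python
import base64

def basic_auth_header(username: str, password: str) -> dict:
    # HTTP Basic scheme (RFC 7617): base64 of "username:password".
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

print(basic_auth_header("alice", "s3cret"))
# {'Authorization': 'Basic YWxpY2U6czNjcmV0'}
```

Because Basic credentials are only encoded, not encrypted, the agent-to-application link must itself be protected, which is feasible here since both run on the same host.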
With this method shown in “Figure 4.3 Reverse proxy architecture” the
functionality of the agent is situated in an external computer that routes traffic
from the public network to the private network, which is shared between the
proxy and the service. The logic of this model differs only slightly from the above
external agent model. Nevertheless, it has quite a different implementation
structure. [30]
Figure 4.3 Reverse proxy architecture
This solution is required if the service cannot use plug-ins and no agent exists that
will run on the service's OS platform. In this case, the agent runs in its own
computing environment, masquerading as a server to the client and as a client to
the service, communicating with its peers only via the networks. Here the public
side of the proxy-agent is what is seen by the outside world, and the real services
are mapped to its virtual directory hierarchy.
The fundamental difference between these two is that in a pure single sign-on
model the chosen authentication method is used for all services, whereas in RSO,
the agent residing in the client's computer may know of multiple different
authentication methods and apply these when needed. To clarify this distinction
”Figure 4.4 Reduced sign-on architecture” is provided below:
[Figure 4.4: Reduced sign-on architecture, in which the client-side agent uses stored basic authentication credentials for one application and PKI authentication for another, passing the validated identity to each application through its authentication plug-in interface]
The location of the RSO agent is in the client not the service. This approach is
quite clumsy, as every end-system has to have explicit support in the client agent
to be able to utilise this architecture.
The web environment is a natural platform for native plug-in agents since most
modern web-servers support pluggable authentication modules and run on a few
OS platforms that are generally well supported. In addition, the web paradigm
provides native support for convenient identity ticket transfer from one service to
another via the cookie mechanism.
4.3.1 Credential passing methods
When the user requests a www-page via his browser, he in effect makes an
HTTP request for the resource defined by his browser's URL. When this request is
received at the www-server, it begins a process described below.
If the server is not single sign-on enabled, the server just sends the requested data
back to the requestor when the request reaches the server data port. There might
be some active access controls, which may tell the server whether the requestor is
permitted to access the requested data based on identity, IP-address or hostname.
Apart from this, the server has no way of knowing whether to send the requested
data or to skip re-identification, if the user has previously been authenticated.
[Figure 4.5: Cookie handling in the WebSSO environment, showing the personal terminal's browser issuing a resource request, the agent requesting authentication against the authentication and authorisation database, cookie generation and the setting of the authorisation cookie, and cookie validation on subsequent resource requests]
With single sign-on capabilities in the server, the user is authenticated after his
initial data request, and a single sign-on ticket is either issued to the user's browser
in the form of a cookie or encoded in the URLs of the html-document sent to him
in response to his initial request. This is clarified in “Figure 4.5 Cookie handling in
the WebSSO environment”.
This ticket usually contains an encrypted certificate of validity that the www-
server or the SSO agent software on that server checks every time the user makes
additional requests for data from the server.
With the cookie mechanism, any server in the same cookie-domain can always
check the content of any cookie previously issued by any one of the other servers,
automatically re-authenticating the user for every subsequent request without user
intervention.
Either these tickets have a certain predefined period of validity after which time
the user is required to re-authenticate, or the server may automatically refresh the
ticket by reissuing the cookie or the URL encoding.
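A ticket of this kind can be sketched as a MAC-protected token carrying an embedded expiry time. The key name, lifetime and encoding below are assumptions for illustration; real products use their own, often fully encrypted, formats.

```python
import base64
import hashlib
import hmac
import time

SECRET = b"sso-server-secret"  # hypothetical key known only to the SSO servers
TTL = 1800                     # hypothetical ticket validity in seconds

def issue_ticket(user: str, now: float) -> str:
    payload = f"{user}|{int(now) + TTL}"
    mac = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(f"{payload}|{mac}".encode()).decode()

def check_ticket(ticket: str, now: float):
    user, expires, mac = base64.urlsafe_b64decode(ticket).decode().rsplit("|", 2)
    expected = hmac.new(SECRET, f"{user}|{expires}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected) or now > int(expires):
        return None        # forged or expired: the user must re-authenticate
    return user            # valid: the server may also reissue a fresh ticket

t0 = time.time()
ticket = issue_ticket("alice", t0)
assert check_ticket(ticket, t0 + 60) == "alice"    # still valid
assert check_ticket(ticket, t0 + TTL + 1) is None  # expired
```

Automatic refresh is then just calling issue_ticket again on every successful check while the session is active, so the expiry never becomes visible to the user.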
The authentication method can vary from one implementation to another, and it
may be anything from simple user-id/password combinations to retinal scan based
biometric authentication information. The only limit is imagination. For
simplicity’s sake, only two protected servers are used in the example described in
“Figure 4.6 Illustration of multi-host SSO in the web environment”.
Once the authentication is completed, the SSO service issues an access ticket,
which is relayed to the client by the SSO agent, and the original request is only
served if the client's authorisation is sufficient.
All subsequent access requests are served based on the initial status- and
authorisation check. Usually, the ticket is valid for a specified period and when it
expires, the client must re-authenticate. This is not normally a problem as the
ticket is renewed automatically if the session is actively in use, and therefore, it
does not visibly expire while actively browsing.
The access policy is maintained by the policy service, and usually, the policy
check results are cached at the agent temporarily to lighten the load of the policy
server.
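Such agent-side caching can be sketched as a small time-to-live cache in front of the policy-server call; the TTL value and interfaces below are invented for illustration.

```python
CACHE_TTL = 30.0  # hypothetical seconds to trust a cached decision

class PolicyCache:
    """Agent-side cache so repeated requests skip the policy server."""

    def __init__(self, query_server):
        self.query_server = query_server  # callable (user, path) -> bool
        self._cache = {}

    def is_allowed(self, user: str, path: str, now: float) -> bool:
        key = (user, path)
        hit = self._cache.get(key)
        if hit and now - hit[1] < CACHE_TTL:
            return hit[0]                 # fresh cached verdict
        verdict = self.query_server(user, path)
        self._cache[key] = (verdict, now)
        return verdict

calls = []
cache = PolicyCache(lambda u, p: calls.append((u, p)) or True)
cache.is_allowed("alice", "/doc", now=0.0)
cache.is_allowed("alice", "/doc", now=10.0)  # served from cache
assert len(calls) == 1                       # only one server round trip
```

The TTL is the trade-off knob: a longer TTL lightens the policy server's load but delays the moment a revoked permission actually takes effect at the agent.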
With the single sign-on framework acting in the background, this scheme can
easily be extended to cover hundreds of services by installing these agents on
every www-server that is to be protected centrally.
There are two obvious points of rejection: authentication and authorisation.
One must understand that authentication and authorisation are two entirely
different things. One may be a legal user of one part of a www-service, while at
the same time be unauthorised to use another. All S/SSO products that were tested
implement authentication and authorisation as two separate services.
Some products on the market implement SSO capabilities on various systems like
IBM Global Sign-On and RSA Keon Desktop. These are unfortunately very
dependent on an array of natively supported applications, and therefore are not
very flexible to use.
One of the best-integrated SSO-like authentication mechanisms is the Microsoft
Windows 2000 domain authentication mechanism, which uses Kerberos V as the
basis for network authentication with rudimentary support for PKI enabled
applications. It is unfortunate that the Windows 2000 model is bound too tightly
to this operating system for wider acceptance in a heterogeneous computing
environment. [10, p.363-366]
5 RISK-ANALYSIS BASED REQUIREMENT
SPECIFICATION
In this chapter, the requirements for an S/SSO infrastructure are developed by the
author and Elisa Research Centre/IT Security R&D team. The Research Centre’s
opinions are used for guidance by Elisa's Data Administration Division in
their decision-making and evaluation process.
Because this thesis concentrates on the security of the SSO products themselves, it
is natural to point out here that even the best software security cannot compensate
for non-existent physical security measures. Therefore, I hope the reader
appreciates that something as critical as a centralised authentication and
authorisation system must be physically protected with at least the same care as
other critical systems like payroll computers and e-commerce machinery. In this
thesis, proper location security is assumed.
A proposal has been discussed by the EU parliament to ban or at least limit the
use of cookies [32], which might impose severe limitations on the usability of
current S/SSO solutions. In anticipation of this legislation, additional
requirements for alternative identity dissemination methods were introduced.
5.1 Raw requirements
The raw requirement list is presented in the following table and the risks related to
each requirement are separately discussed in the following subchapters.
This subchapter discusses the justification of each raw requirement shown above
which is related to confidentiality and integrity of data transferred over the
network or stored on hosts.
5.2.1 Encrypted communication paths
Risk: An attacker is able to forge, modify or inject false information into the
AAA-process while the data is in transit.
Solution: All communication paths between the S/SSO systems components and
between the clients and the S/SSO system must be encrypted using methods that
both guarantee confidentiality and integrity.
Justification: If the encryption keys are compromised, it will ruin the security of
the entire platform.
Solution: All key handling should be done using open and proven key
management methods.
Risk: An attacker is able to forge his access ticket in his cookie store to reflect
elevated privileges.
Solution: All cookies submitted to the client for S/SSO credential passing must be
encrypted with strong algorithms and a key known only to the S/SSO system.
Preferably, this key should be randomly selected for every session to minimise the
risk of brute force and dictionary attacks on the key.
Justification: This legislation is currently a proposal [32] in the EU parliament to
ban the use of “secret” cookies, for inclusion into European law. If the Finnish
lawmakers implement this act in an inappropriate manner, cookies may be banned
altogether.
Risk: A secure authentication method today might become volatile in the future.
Risk: Without non-repudiation, a rogue customer can claim not to have conducted
the contested transaction.
Justification: When implementing an S/SSO infrastructure like this, it is likely to
have commercial services rely on the identification framework provided. When
money is involved in the services, fraudulent usage will always follow.
Justification: Overly general object controls may result in over-privileging users
who need access to some of the controlled information, but not to all the
information the access privilege grants.
5.3 Availability
This subchapter discusses the justification of each raw requirement above which
is related to availability features of the authentication framework. In this instance,
“good availability” should be considered to be in the 99.999% annual reliability
window.
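As a quick sanity check of what this window means in practice, 99.999% annual availability leaves only about five minutes of downtime per year:

```python
# "Five nines" availability translated into allowed downtime per year.
availability = 0.99999
minutes_per_year = 365.25 * 24 * 60
downtime_min = (1 - availability) * minutes_per_year
print(round(downtime_min, 2))  # about 5.26 minutes of downtime per year
```

This is why the subsequent requirements focus on redundancy and on-line maintainability: a single reboot of an unmirrored server can consume the entire annual budget.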
Justification: By being able to drive the system into an undefined state, it might
be possible to exploit the resulting state for access permission elevation or other
misuse of the system. Additionally, if the system fails for any reason atomic
transactions guarantee that the databases are always in a well-defined state.
5.3.2 Component redundancy
Justification: The failure of the S/SSO system will cause system wide denial of
service to all users of the system. If a single component failure can render the
entire system useless, it must be possible to add redundancy to this component to
avoid the huge risk involved in its failure. In comparison, building redundant
computers is cheap compared to having all corporate operations halted by the
resulting denial of service.
Justification: If the whole system is located in the same physical facility, a risk
exists that the entire platform will be wiped out in one great accident, resulting in
denial of service. Elisa’s Business and Disaster Recovery Plan requires at least a
“Hot Site”-backup system and strongly suggests having fully redundant systems
in geographically separate locations.
Risk: The number of transactions outgrows the system's capabilities. This could
mean in practice that a single S/SSO infrastructure must be able to handle at least
the population of Finland ~5.000.000 clients in expanded configurations.
Justification: If the S/SSO system is a commercial success, the user base might
begin to grow at an unexpected rate, possibly resulting in partial denial of service,
because the server back-end simply cannot process the numerous transactions.
user base. Incremental scalability means that the S/SSO platform must be capable
of accepting new modules to the existing framework without disrupting the
current services on-line.
Justification: When the load on the servers grows, it is more economical to have
all servers operate in a load-balancing configuration instead of having dedicated
backup systems waiting idle for the operational system to fail. This way the best
of both worlds is realised in having on-line backup systems available and at the
same time optimising the load distribution of the infrastructure.
Risk: The solution's platform (OS or hardware) becomes obsolete or its vendor
goes out of business, leaving Elisa Communications without support for future
growth and services.
Solution: The product is required to support these cheaper and more widely
available platforms before it is accepted into production. In addition, the roadmaps
for these future product lines should be checked to ensure later supportability.
This subchapter discusses the justification of each raw requirement above which
is related to data collection mechanisms, which provide adequate data for both
internal and external audit purposes.
5.4.1 Centralised management framework integration
Risk: 1) The product uses its own administration tools and user databases, and
consequently goes out of sync with the main user databases and policies.
Additional administration costs are also generated by the need for multiple
administrative staff.
Risk: The administrator has too much power and exploits that power maliciously.
Justification: It has been discovered over the years that too much power in too
few hands can wreak havoc on the most secure systems. This has been the case in
the UNIX and Windows operating systems and has resulted in the ideal of
delegated administrative privileges, where no single person has complete power
over the system. This is also called duty separation i.e. the administrator who can
modify account information cannot modify the audit trail of those modifications
and vice versa.
Solution: Require delegated administration capabilities to be supported by the
products. The minimal delegation granularity should be at the user role level, but
a more granular approach would be appreciated.
This subchapter discusses the justification of each raw requirement shown above
that does not fit into any of the preceding categories.
Risk: The solution is too limited, since it can only be used to interoperate with
web servers, and not the back-end application servers.
Justification: If the solution does not include methods to pass the authentication
information to the back-end processing servers, it will limit the application
programmer’s possibilities. In addition, it might trigger unexpected failures in the
back-end applications by interfering in their communication with the web servers.
Solution: Require native support for at least IBM WebSphere, Bea WebLogic,
and Allaire ColdFusion.
Solution: Support plans for both Microsoft Passport and Liberty Alliance global
authentication framework integration should be made available. Plans for how the
S/SSO product will position itself in this global market, as a service provider
itself or as a proxy for these global players, have to be evaluated.
5.5.3 Remote administration
Risk: The system needs attention from the administrator and he is unavailable to
see to it locally.
Justification: Usually, the daily administration does not require console activity.
Simple parameter changes should be doable remotely with suitable tools, because
it is uneconomical to dispatch administrators to every remote system console.
When a proper remote management system is in place, a few administrators can
handle the entire system's administrative tasks from a centralised location.
Solution: The product itself provides a user interface that can be securely
accessed from a remote location e.g. a Java-based administration console or a web
front-end to the administration software.
Risk: The administrator does not understand the logic of the UI provided to him.
Solution: Make sure the UI is reasonably simple and straightforward for the
administration personnel to understand and use. If the UI is too difficult to use,
collaboration with the solution provider is needed to make it more suitable for the
administrators.
Risk: The product stores data in a strange format that cannot be understood by
any other system.
Solution: Require that the system either uses open database access methods like
LDAP or ODBC or has good import/export tools available for automated database
synchronisation between different systems in a common format.
5.6 Summary of requirements
These requirements are mainly based on current issues, the experiences of the
author and his knowledge of general systems failure and security aspects. The
grouping follows the traditional computer security categorisation into
confidentiality/integrity, availability, accountability/audit and miscellaneous
features. This requirement list may need to be augmented after the laboratory
benchmarking to test the requirements in practice.
The most critical feature classes in this list are firstly the confidentiality/integrity
requirements, because without both, the entire operation is in danger. Secondly,
the availability features as they dictate how trustworthy a service is. The other
requirement classes are less sensitive, because the basic operation is secured by
the above requirements.
6 COMMERCIAL SINGLE SIGN-ON PRODUCT
SURVEY
In this chapter, many single sign-on products currently available on the market
are enumerated, and the way in which they implement capabilities similar to
single sign-on will be briefly commented on. This part of the study is based
entirely on marketing data from the vendors’ web sites, so no guarantees as to the
accuracy of the claims can be given by the author.
These products were chosen from the results of an extensive web search originally
conducted in December 2000, and updated in January 2002. More than half of the
original products have been withdrawn from the market or have since been
acquired by a rival company. Therefore, only ten currently available products are
introduced in random order below.
This single sign-on module was previously known as the Unicenter TNG Single
Sign-On option. It provides single sign-on functionality to both web and legacy
environments, and is part of the eTrust family of security management products
from CAI. [33]
6.3 DataLynx: Guardian
GetAccess is a single sign-on solution for the web arena. It provides single sign-
on support with both basic and strong authentication for web clients. It is available
for Windows NT/2000 and Solaris on the servers and any web browser for the
client. This product was previously known as EnCommerce Get Access. [36]
AccessMaster SSO is a reduced sign-on solution which stores and controls the
multiple passwords on behalf of the user. The user is automatically authenticated
to all systems he accesses by the RSO agent. It supports basic and smart card
authentication to the client desktop agent. It supports Windows 9x/NT/2000,
Solaris, AIX and Linux login automation. It was originally known as BullSoft
AccessMaster. [37]
Policy Director is a reverse proxy type web single sign-on solution. It provides
virtual directory mapping of resources at the proxy level, and enables centralised
user management via either its own management console or integration with
Tivoli User Manager of the Tivoli Framework. Supported authentication methods
are basic, DCE, token and smart card authentication. It is supported on Windows
NT/2000, Solaris and AIX platforms. A standard web browser is needed for the
client environment. [38]
6.8 RSA: ClearTrust
SPS is more of a toolkit approach to single sign-on, but it still provides both
legacy and web single sign-on capabilities. It supports basic, token and smart card
authentication with integral support for a wealth of different user databases. SPS
is available on Windows NT/2000, VMS, Netware, Tandem Himalaya, Tru64,
AIX, HP-UX and Solaris. [42]
In the following “Table 6.1 Supported features in surveyed products”, the above
feature descriptions are summarised in tabular form for easier feature comparison.
[Table: feature matrix with columns grouped into supported single sign-on methods (RSO, Legacy SSO, Web SSO), supported authentication methods (Basic Auth, Token Auth) and supported operating systems (Windows NT/2000, Solaris, Other Unix, Netware), with one row of supported-feature marks per product: eTrust SSO, TrustBroker, Guardian, GetAccess, Access Master, Policy Director, SiteMinder, ClearTrust, SecurPass Sync and SPS]
Table 6.1 Supported features in surveyed products
From this table, one can see that there are only a few full-featured products in
this line-up. The most interesting products concerning WebSSO and smart card
support are Policy Director, ClearTrust and SiteMinder. In addition, very feature
rich products like eTrust SSO, TrustBroker and SPS are listed, but they are either
prohibitively expensive or much too complex for the current needs of a web single
sign-on and access control environment. The other products are of no interest in
this thesis, because they only support a subset of the mandatory features identified
previously.
After a thorough evaluation of these products based on all collected data and
vendor impressions, three products were chosen for actual laboratory evaluations.
These were RSA ClearTrust, Netegrity SiteMinder and IBM Policy Director.
From the products tested, the winner is evaluated below in chapter 7 “Evaluation
Of The Selected Product”. The other two were lacking in either their certificate
support or the architecture. ClearTrust claimed to support certificates while
actually using them only as textual containers for usernames, and Policy Director
is tied to the reverse proxy architecture, so it would not be versatile enough for our
application.
7 EVALUATION OF THE SELECTED PRODUCT
After evaluating the three products mentioned above, Netegrity SiteMinder was
chosen as the example evaluation product because of its merits. Its architecture,
user interface and operation are described in detail, and evaluated as an example
of an S/SSO product according to the criteria defined in chapter 5.
The evaluated service platform is to be deployed into the Elisa Research Centre’s
extranet setting with both internal and third party clients accessing the same
information. It is therefore crucial that the site's access controls and policies are of
the highest quality and that there is clear role separation of administrative duties.
The basis of this evaluation lies in balancing the requirement of open access for
extranet clients against timely publication of information to these external clients. It
needs to be possible to store public, confidential and secret documents on the
extranet, because some of the external partners are more involved in the projects.
Some partners have cursory access to old results, while others are active
participants in research projects.
There is a clear need to delegate administration, because there are many projects
running concurrently and each project leader needs to be able to add users to and
remove them from his project access group.
Netegrity SiteMinder (SM) was tested in the Research Centre’s laboratory. The
main platform for testing was the Solaris platform running on Sun hardware.
In this sub-chapter, SiteMinder’s features, components, architecture and
operations are described. In later sub-chapters, the administration application is
briefly explained and the test bench setup is discussed. Finally, SiteMinder is
evaluated against the requirement specification introduced in chapter 5, and
problems encountered during the tests are described in detail.
Netegrity SiteMinder is supported on the following server platforms: the policy
server component can be run on Windows NT/2000 or Solaris.
[Figure 7.1: Netegrity SiteMinder components and interactions, including the administration server, audit server, audit database and external authentication services]
The policy server component provides four distinct services to the SiteMinder
application agents: authentication, authorisation, audit and administration
services. All of these services are provided by separate daemons and at least
authentication and authorisation need to be present for SM to be operational. The
relationships and components are shown in “Figure 7.1 Netegrity SiteMinder
components and interactions” above.
With both the core services and databases on-line, the agents can be distributed to
all services that require protection. The standard agents support all of the
aforementioned web server and operating system combinations. It is also possible
to configure an Apache web server running on Solaris to act as a reverse proxy for
the protection of such services that are not running on supported platforms.
7.2.3 SiteMinder’s architecture
SiteMinder's architecture is quite simple and powerful. The major components,
depicted below in “Figure 7.2 The SiteMinder architecture overview”, are briefly
explained here:
[Figure 7.2: The SiteMinder architecture overview, showing the user, the web server with the S/SSO agent, the authentication and authorisation logic, the policy store with policy data, the user data store and the external authentication logic]
When a user tries to access the protected resource on a web server, he actually
communicates with the SiteMinder web agent. The web agent takes care of all
authentication and authorisation procedures on behalf of the web server and
application servers. If the resource requires authentication, the web agent
prompts the user with the desired authentication dialogs and carries out the
authentication process with the policy server. If an application server or the
web server requires knowledge of the user, this information can be forwarded in
HTTP headers that the web agent includes in the requests it forwards to the
protected service.
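As a rough illustration of this header-forwarding pattern, the sketch below shows how a back-end application might read agent-injected identity headers. The header names and data shapes here are assumptions made for illustration, not SiteMinder’s documented interface.

```python
# Minimal sketch of a back-end reading identity headers injected by a
# front-end web agent. The header names (SM_USER, SM_USERDN) are
# assumptions for illustration, not a documented product contract.

def current_user(request_headers):
    """Return the forwarded identity, or None for anonymous requests.

    The application never sees the user's credentials; it trusts the
    agent-injected headers, so the agent-to-server link must be secured.
    """
    user = request_headers.get("SM_USER")
    if user is None:
        return None  # public resource, or the agent did not authenticate
    return {"uid": user, "dn": request_headers.get("SM_USERDN")}

headers = {"SM_USER": "jsmith", "SM_USERDN": "cn=Jack Smith,o=Elisa,c=FI"}
print(current_user(headers)["uid"])  # jsmith
```

Because the application trusts these headers blindly, the network path between agent and protected server must reject any request that bypasses the agent.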
systems until the primary recovers. SiteMinder supports load balancing and fail-
over between the following:
- Web Agents and Policy Servers
- Policy Servers and LDAP user directories
- Policy Servers and ODBC user databases (fail-over only)
You can select the load-balancing operation mode to distribute user requests
directed from the Web Agents to multiple Policy Servers and from the Policy
Server to replicated LDAP user directories. [43, p.63]
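The fail-over behaviour between these component pairs can be sketched as follows. The server names and the simple ordered-list strategy are illustrative assumptions, not the product’s actual algorithm.

```python
# Hypothetical sketch of fail-over: a client component (a Web Agent, or
# the Policy Server toward replicated directories) tries the primary
# first and falls back to mirrors until one answers. Names are invented.

def query_with_failover(servers, send):
    """servers: ordered list of server names.
    send: callable that raises ConnectionError when a server is down."""
    last_error = None
    for server in servers:
        try:
            return server, send(server)   # first responsive server wins
        except ConnectionError as exc:
            last_error = exc              # fail over to the next mirror
    raise RuntimeError("all policy servers unavailable") from last_error

def fake_send(server):                    # simulate a dead primary
    if server == "ps-primary":
        raise ConnectionError("down")
    return "granted"

print(query_with_failover(["ps-primary", "ps-mirror"], fake_send))
# ('ps-mirror', 'granted')
```

In load-balancing mode, the same list would instead be rotated between requests to spread the load across mirrors.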
Resource protection
First, the user attempts to access a protected resource by specifying the
application’s URL. The Web Agent will intercept this request at the server, and
determine from its locally cached copy of the policy database whether this
resource is being protected by SiteMinder [44]. If not, then it will exit and the
Web Server will process the request. Otherwise, it will proceed with the
authentication request process described below.
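The agent-side decision can be sketched as a prefix lookup against the locally cached policy data; the cache layout here (a set of protected URL prefixes) is an assumption made for illustration.

```python
# Sketch of the web agent's first check: consult a locally cached copy
# of the policy data to decide whether a requested path is protected at
# all. The cache layout (a set of URL prefixes) is an assumption.

def is_protected(path, cached_realms):
    """Return True if any cached realm prefix covers the requested path."""
    return any(path.startswith(prefix) for prefix in cached_realms)

realms = {"/protected/", "/admin/"}
print(is_protected("/protected/report.html", realms))  # True
print(is_protected("/index.html", realms))             # False
```

If the lookup finds no match, the agent steps aside and the web server serves the request without authentication, as described above.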
Authentication
If SiteMinder is protecting the resource, the Policy Server will determine which
form of authentication is required based on its policy database and the associated
security levels of the requested URI. SiteMinder supports a wide range of
authentication methods including passwords, certificates and tokens, but also a
combination of these methods. [44]
The Web Agent will send a request to the user's browser, and it will return the
user's credentials. Typically, this is a username/password, but it could also be a
certificate and a token card PIN. The Policy Server then passes this information to
the directory for authentication. [44] With certificates, this involves checking
the certificate’s integrity and trustworthiness and then extracting the
specified mapping field from the certificate and comparing it with the
corresponding directory entry.
If the user fails to authenticate, custom actions can be taken, such as
presenting a personalised error page. If the user successfully authenticates to
the Policy Server, a strongly encrypted cookie is created and stored in the
user’s browser. This cookie does not contain any sensitive information such as a
password.
Instead, it contains the user's full directory name, and a number of timestamps and
other information. Once a user has successfully authenticated to SiteMinder, this
cookie can be used later to allow single sign-on across all the applications on the
Web site. [44] If the browser does not support cookies, session tracking can be
achieved by following SSL session IDs, but global single sign-on functionality
is lost.
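The shape of such a session cookie can be illustrated with a small sketch. SiteMinder’s real cookie format, encryption and key handling are proprietary, so the HMAC-signed JSON payload below is purely an illustrative stand-in carrying the same kind of non-sensitive data (directory name and timestamps).

```python
import base64
import hashlib
import hmac
import json
import time

# Illustrative sketch only: a tamper-evident SSO cookie holding the
# user's directory name and a timestamp. The shared key and the
# payload-plus-HMAC layout are assumptions, not SiteMinder's format.
SECRET = b"shared-policy-server-key"  # hypothetical shared key

def make_sso_cookie(user_dn):
    payload = json.dumps({"dn": user_dn, "issued": int(time.time())}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode() + "." + sig

def verify_sso_cookie(cookie):
    body, sig = cookie.rsplit(".", 1)
    payload = base64.b64decode(body)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or corrupted cookie
    return json.loads(payload)

cookie = make_sso_cookie("cn=Jack Smith,o=Elisa,c=FI")
print(verify_sso_cookie(cookie)["dn"])  # cn=Jack Smith,o=Elisa,c=FI
```

The essential property, matching the description above, is that the cookie proves a prior authentication without carrying a password, so any agent in the realm can accept it.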
Authorisation
Once the user has been authenticated, SiteMinder must next determine whether he
should be granted access to this specific resource. The Policy Server then looks
up
all the policies that are related to the requested resource. The Policy Server
consults the directory, and determines whether the user is a member of any of the
groups associated with these policies. If the user is not authorised for this
resource, then custom error pages can be created and presented to the user. If the
authorisation is successful, then the user will be granted access to the application.
[44]
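The authorisation step described above amounts to intersecting the user’s group memberships with the groups named by the policies that cover the resource. A minimal sketch, with invented data shapes:

```python
# Sketch of the authorisation step: collect the policies bound to the
# requested resource and grant access if the user belongs to any group
# those policies name. The data shapes are invented for illustration.

def authorise(user_groups, resource, policies):
    """policies: list of dicts with a 'resource' prefix and 'groups'."""
    for policy in policies:
        if resource.startswith(policy["resource"]):
            if user_groups & set(policy["groups"]):
                return True   # user is in at least one permitted group
    return False              # no matching policy grants access

policies = [{"resource": "/extranet/", "groups": ["project-a", "staff"]}]
print(authorise({"project-a"}, "/extranet/docs/", policies))  # True
print(authorise({"guest"}, "/extranet/docs/", policies))      # False
```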
Personalisation
When the application is invoked, SiteMinder passes information concerning this
user directly to the application in the form of header variables. This information
often contains user attributes from the directory or it could be dynamic data from
various data sources. [44]
“Figure 7.3 SiteMinder X.509 client authentication process” explains the
operation of SiteMinder in the client-certificate authentication process.
Figure 7.3 SiteMinder X.509 client authentication process
After verifying the user’s identity and validity, the Policy Server authorises the
user access to the requested resources. SiteMinder also supports certificate
revocation list (CRL) processing provided by most PKI vendors. Certificate
revocation checking ensures that the certificates in use have not been invalidated
by the owner. If a certificate expires, the PKI system does not accept it, which is
critical for secure transactions. [43, p.59]
The general test setup built by the author in Elisa’s Research Centre laboratory
was constructed as illustrated in “Figure 7.4 SiteMinder test setup” below:
Figure 7.4 SiteMinder test setup
As figure 7.4 describes, Elisa Research Centre had one Policy Server running on
Solaris 8, accompanied by the Netscape web server. Logically these two are
separate, and they are therefore drawn as two different entities in the picture.
The Policy Server had its local policy store in a flat-file database, and all
entitlement information was stored in this database.
There were multiple web servers participating in the test and they were:
• sm-iis.rc.elisa.fi running Windows NT 4.0 Server and Internet Information
Server 4.0
• parsec.rc.elisa.fi running Windows 2000 Server and Internet Information
Server 5.0
• pt-iaa.pki.aveiro-digital.org running Windows 2000 Server and Internet
Information Server 5.0 in Portugal
• netra3.rc.elisa.fi as the Netscape/iPlanet HTTP-server running on Solaris 8
on the same Sun as the Policy Server.
The directories used for this test were the governmental Finnish Electronic
Identity (FINEID) public directory, Elisa Research Centre’s (RC) internal LDAP
directory and the Portugal Telecom Research Centre’s (PT) internal LDAP
directory. RC runs the iPlanet Directory on Solaris 8, and PT a Microsoft LDAP
front-end for Active Directory.
After setting the machines up and installing the required software, the testing and
evaluation could commence. In addition to the author, several users participated in
the testing phase of the S/SSO implementation from around Europe, using both
file- and smart card based X.509v3 certificates. The author was able to
authenticate all of the test users with their certificates into the test realm after
some setup related problems were resolved. In addition, single sign-on was
successfully tested between the various servers. For comparison and debugging
purposes, Basic-authentication was also successfully tested.
The sites protected in this test bed were all stand-alone sites with two categories
of pages, protected and public. All protected pages required certificate
authentication, and in most cases, the user certificate resided on a smart card. All
sites had the SiteMinder agent running in native plug-in mode, and only one
agent acted as the cookie provider for the entire authentication realm. This
agent resided on netra3, the policy server host, along with the SSL Credential
Collector. All sites
belonged to the same authentication and single sign-on realm called ‘Test Realm’.
The sites permitted access to the public front page shown in “Figure 7.5 The
public jump page from netra3.rc.elisa.fi”, which provided links to the other
sites and to the protected content on the server. When a client accessed the site in
question he was shown the public front page and prompted for authentication if he
placed a request for a protected resource. Once the client had been authenticated
on any one of the server hosts, he was able to access the protected items on every
host in the Test Realm. It would have been possible to specify multiple levels
of authentication, but smart card login was deemed secure enough for this test.
Figure 7.5 The public jump page from netra3.rc.elisa.fi
“Figure 7.5 The public jump page from netra3.rc.elisa.fi” shows the front page
where the client could select either to access the local protected resource or
to navigate to one of the affiliate sites in the S/SSO realm. Below, in “Figure
7.6 SiteMinder administration console front page”, is the protected resource of
netra3.rc.elisa.fi, the SiteMinder administration console front page:
Figure 7.6 SiteMinder administration console front page
From this page, the SiteMinder administration console is loaded as a Java™
applet application onto any host running a Java™ compliant run-time environment
and possessing the required three-factor authentication information (PIN and
certificate, with basic authentication on top of that).
usage logic, modifying the S/SSO infrastructure’s configuration becomes very
efficient. This is one of the best administration GUIs seen in this test.
“Figure 7.8 The agent configuration menu” illustrates the configured agents:
From this menu, the agent parameters can be adjusted, and the administrator can
revoke agent access when necessary. Next, the directories are introduced to the
S/SSO framework in the following “Figure 7.9 The user directories”:
SiteMinder needs to know certain details about the directory, which are the
directory’s Internet address, the search base and which field contains the
distinguished name attribute in the directory. The details of the directory
configuration are shown below in “Figure 7.10 Directory configuration”:
Figure 7.10 Directory configuration
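The three details listed above (directory address, search base and DN attribute) are enough to express a user lookup. The sketch below composes them into an RFC 2255 style LDAP URL; the host, base and attribute values are hypothetical examples, and character escaping is omitted for readability.

```python
# Sketch of how the directory configuration details combine into an
# LDAP search URL (RFC 2255 style). Host, search base and attribute
# names below are hypothetical examples; escaping is omitted.

def ldap_search_url(host, search_base, dn_attribute, value):
    """Build ldap://host/base?attr?sub?(attr=value) for a user lookup."""
    return "ldap://{0}/{1}?{2}?sub?({2}={3})".format(
        host, search_base, dn_attribute, value)

print(ldap_search_url("ldap.example.fi", "dmdName=FINEID,c=FI",
                      "serialNumber", "12345678"))
# ldap://ldap.example.fi/dmdName=FINEID,c=FI?serialNumber?sub?(serialNumber=12345678)
```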
The realm configuration is then put together in the domain control tab shown in
“Figure 7.11 Realms in the S/SSO policy domain”, where realms are added to
policy domains and rules are added to realms, as shown in “Figure 7.12 Rules
for the ptiaa-realm” below. A realm can be thought of as a single resource,
which an agent protects with different access control rules. There can be
multiple realms per agent, since different parts of a site may have different
access control needs.
Figure 7.12 Rules for the ptiaa-realm
In this test setup, each site had only one realm, which was associated with the
only protected resource residing in that URL-path. There could be multiple rules
in a realm and these rules could point to the same resource, because the rules are
bound to users in the policy and different rules can exist for different users or
groups to the same resource.
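The domain/realm/rule hierarchy described above can be pictured as nested data. The field names and values below are invented purely for illustration:

```python
# Illustrative sketch of the hierarchy: a policy domain contains realms,
# a realm binds one protected resource to several rules, and policies
# later associate rules with users or groups. All values are invented.
policy_domain = {
    "name": "Test Realm domain",
    "realms": [
        {
            "resource": "/protected/",
            "agent": "netra3-agent",
            "rules": [
                {"name": "allow-get", "actions": ["GET"]},
                {"name": "allow-post", "actions": ["POST"]},
            ],
        }
    ],
}

# Several rules may point at the same resource, because policies bind
# different rules to different users or groups.
rules = [r["name"]
         for realm in policy_domain["realms"]
         for r in realm["rules"]]
print(rules)  # ['allow-get', 'allow-post']
```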
Finally, a single test policy, shown below in “Figure 7.13 SiteMinder policy
setup dialog”, defines the acceptable user directories, the associated rules
and other constraints. The users managed by this policy are selected via LDAP
queries and associated with the relevant rules.
The critical part of SiteMinder’s certificate support is the certificate
mapping component, seen in “Figure 7.14 Certificate mapping in SiteMinder”
below, where one specifies which attribute is matched in the given directory.
For example, the user profile in the LDAP directory could have the name ‘Jack
Smith’, with the title field of this record equalling the DN field of his
certificate. If SiteMinder is instructed to match the title field in LDAP
against the DN entry extracted from his certificate and they match, then the
rules and other information under his LDAP entry apply to him.
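A minimal sketch of this mapping step, assuming the directory entries are available as dictionaries and that the title field is the mapping attribute, as in the example above:

```python
# Sketch of certificate mapping: the DN taken from an already-validated
# certificate is string-matched against a chosen attribute of the user's
# directory entry. Attribute names and values are invented examples.

def map_certificate(cert_dn, directory_entries, mapping_attribute="title"):
    """Return the entry whose mapping attribute equals the certificate DN."""
    for entry in directory_entries:
        if entry.get(mapping_attribute) == cert_dn:
            return entry
    return None  # no directory entry claims this certificate

entries = [{"cn": "Jack Smith", "title": "cn=Jack Smith,o=Elisa,c=FI"}]
match = map_certificate("cn=Jack Smith,o=Elisa,c=FI", entries)
print(match["cn"])  # Jack Smith
```

Note that the match is a plain string comparison; its safety rests entirely on the certificate having been cryptographically validated first, which is exactly the point made in the next paragraph.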
For this to be secure, one has to remember that in the first phase of
authentication the X.509 client authentication protocol is used to verify the
validity of the presented certificate. In addition, the certifier’s public-key
certificate must have been introduced to the S/SSO environment beforehand by
the administrator for certificate path validation purposes. The certificate’s
DN field can therefore be trusted without further cryptographic processing, and
a simple string match against the LDAP entry suffices. Obviously, the LDAP
server holding user and policy information must be well protected by other
means to prevent modification of the policy records by rogue persons.
7.3.3.1 Confidentiality and integrity requirements
In this section, the communication security features are evaluated.
The client to agent path can be encrypted with SSL tunnelling using any mutually
supported encryption algorithm between the client and agent. This is regarded as a
secure approach.
The policy server to directory path can also be encrypted with SSL tunnelling
using any mutually supported encryption algorithm between the policy server and
the directory. This is also regarded as a secure approach.
All other encryption keys are negotiated as part of the SSL handshake protocol
and this is regarded as secure.
SiteMinder supports external authentication methods and authorisation logics via
well-defined APIs that enable the implementation of any authentication or
authorisation model. This support is adequate.
7.3.3.2 Availability
In this section, the availability features of SiteMinder are evaluated against the
requirement specification presented previously in chapter 5.
balancing cluster. Additional policy server mirrors can also be added to the
infrastructure on-line. This is satisfactory.
7.3.3.4.3 Remote administration
SiteMinder’s administration is handled with both the graphical Java™ based user
interface and, for policy server management, command line tools in UNIX, so
remote administration is well supported. This is satisfactory.
The above results can be condensed into the following “Table 7.1 Correspondence
of the requirement specification to SiteMinder’s features”, rated on a scale
from 1 (poor) to 5 (good):

Table 7.1 Correspondence of the requirement specification to SiteMinder’s
features
In summary, SiteMinder corresponds well to all of the requirements laid out for
a successful candidate for the access control task. Handling an extranet site
particularly requires granular access control policies, powerful administration
features, strong authentication support and a flexible architecture to protect
any future investment.
The most significant features in this particular case are the security,
integrity and availability features. Since the production site’s transaction
volume and traffic are low, transaction atomicity, load balancing and
centralised management framework integration could be dismissed. With this
additional weighting, SiteMinder fulfils the requirements satisfactorily.
The most displeasing feature of this product is its lack of a clear development
roadmap for any future version and development schedules. In addition, the
suitability for extreme customisation and extensive possibilities for policy
formulation might lead to administration problems if too fine-grained access
policies are implemented or alternatively extremely complex environments are
built.
The most pleasing feature of this product is its good support for standardised
external interfaces to databases such as LDAP. This enables easy migration of
potentially large user populations and guarantees data interchangeability. The
reliable certificate handling also merits acknowledgement.
There were some problems with the product while testing. At first, there were
problems with Elisa’s firewalls that blocked access to the required ports.
Fortunately, SiteMinder has very good documentation so these problems were
resolved quite rapidly.
The next problem was much harder to resolve: certificates issued by FINEID
worked, while Elisa Research Centre’s test certificates were not understood by
SiteMinder. After weeks of analysis with Netegrity technical support, it was
discovered that SiteMinder parses the certificate’s contents using the comma
(‘,’) as an attribute separator. In Elisa Research Centre’s case this was not a
correct assumption, because the DN field contained a comma in the organisation
text string. The certificate was a valid X.509v3 certificate, but because of
the invalid assumption on the part of Netegrity’s development team, this almost
became a critical bug leading to the rejection of the whole product from
further testing.
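The bug is easy to reproduce with a toy parser. Splitting a DN naively on every comma breaks as soon as an attribute value contains an escaped comma, which RFC 2253 style distinguished names permit; the DN below is an invented example of the same shape as the problematic certificate.

```python
# Toy reproduction of the parsing bug described above: a DN whose
# organisation value contains an escaped comma. The DN is invented.

def split_dn_naive(dn):
    """The buggy approach: treat every comma as an attribute separator."""
    return dn.split(",")

def split_dn_escaped(dn):
    """Split on commas, honouring backslash-escaped commas in values."""
    parts, current, escape = [], "", False
    for ch in dn:
        if escape:
            current += ch         # keep the escaped character literally
            escape = False
        elif ch == "\\":
            escape = True         # next character is part of the value
        elif ch == ",":
            parts.append(current) # unescaped comma ends the attribute
            current = ""
        else:
            current += ch
    parts.append(current)
    return parts

dn = r"cn=Jack Smith,o=Elisa Communications\, Research Centre,c=FI"
print(len(split_dn_naive(dn)))    # 4: organisation wrongly split in two
print(len(split_dn_escaped(dn)))  # 3: the escaped comma stays in the value
```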
Having solved the above, the author had to install an Apache web server on Linux
together with the associated SiteMinder Web Agent. Unfortunately, Linux support
for SiteMinder is not yet stable. We were able to get the agent up and running on a
RedHat Linux 6.2 server after a few weeks of trial and error, but for some
unknown reason it would not communicate with the Policy Server. The SM plug-
in for Apache on Linux is still in development and as such has to be excused as
beta software.
8 CONCLUSIONS
In this chapter, the results of this thesis are described. In addition, some future
trends are discussed as the closing remarks of this thesis work.
After evaluating multiple commercial Web S/SSO products, it has become clear to
the author that none of them is mature enough to be trusted with protecting a
critical operative service. Netegrity’s SiteMinder is the most mature product
of those tested. Consequently, I can recommend it for low- to
medium-criticality environments such as intranets, extranets and customer care
sites.
The extensive use of external directories for authorisation and user data is a
two-edged sword in the security sense. It enables very flexible user and policy
management with directory access enabled administration tools, and easy data
migration from one system to another in upgrade cases. On the other hand, it also
introduces a potentially weak link into the security chain of the entire S/SSO
infrastructure. Directory security is very much neglected because administration
staff generally regards directories as little more than normal public telephone
directories. Therefore, the security of the directory must be enforced with all
available technical aids such as firewalls, encrypted directory access protocols and
strong authentication if possible.
Secure single sign-on using X.509 certificates is very compelling, as the user only
has to remember his PIN-code for authentication. Fortunately, this is a reality
today and can be accomplished with both the FINEID-card and file-based X.509
certificates. Of the evaluated products, only SiteMinder was mature enough to
support the certificate profiles used in all tested certificates. All the other products
had problems with accepting the FINEID-profile certificates or understanding
certificates at all. Almost all vendors claim to have support for PKI, but the
quality of the support varies very much from vendor to vendor.
Overall, the tests succeeded in revealing the actual current state of
certificate support in the tested products, and they gave new insights into the
inner workings of web single sign-on systems, certificate support
implementations and their pitfalls.
It has also become clear that there is huge demand for single sign-on
solutions, because companies are struggling to provide more sensitive
information to their customers and business partners over the Internet in a
safe manner. As
web services grow, and the world becomes even further networked, it will be
essential to be able to have fine-grained, centralised access control over the
resources one is providing.
Two interesting projects on the Internet strive to provide ubiquitous identity to the
“citizens of the Internet”. These are the Microsoft Passport [46] system and the
Liberty Alliance [47] from a coalition of companies including Sun Microsystems,
Nokia, RSA Security and 37 others. The stated goal of these projects is to provide
a ubiquitous identity and a secure personal information platform, but the most
exciting feature is the creation of a universal single sign-on standard for global
usage. If successful, this will open many new possibilities for services to do
business on the Internet with heightened levels of security and confidence in a
customer’s identity.
Today, neither secure single sign-on nor smart card support is ready for prime
time, but they are getting closer day by day. The standards defining
certificates and smart cards are stable, but unfortunately the related software
components are evolving very rapidly, and there are no standards available for
single sign-on in the web environment. X.509 seems to be the strongest
contender in the field of certificates, and currently Microsoft’s Windows
environment is the best environment for smart card usage, although it still has
only rudimentary PKI support.
Luckily, the pace of development is increasing and we are about to see many new
fine products on the market that use smart cards in one way or the other.
Good examples of this sort of development range from the FINEID card, the
French medical cards and the major credit card institutions’ EMV standard for
identification and digital signature functions, to the SSH Communications
firewall products that creatively use smart cards to distribute configuration
and authentication data.
As more applications begin to use smart cards, they will become as common as
the magnetic stripe cards that we use today as ATM cards.
Once good security systems have been implemented to protect services and log all
unauthorised activity, the attackers can be stopped and subsequently traced.
BIBLIOGRAPHY
[1] Chinitz, J. Single Sign-On: Is It Really Possible?, Access Control
Systems and Methodology, 2000. p. 32-45.
[2] Schneier, B. Secrets and Lies: Digital Security in a Networked
World, USA: John Wiley & Sons, 2000. ISBN 0-471-25311-1.
[3] The Open Group, Open Group Guide G801: Architecture for Public-
Key Infrastructure, 1998. ISBN 1-85912-221-3.
[4] Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L., Leach,
P., Berners-Lee, T. RFC2616: Hypertext Transfer Protocol --
HTTP/1.1, IETF, June 1999
[5] The Open Group, Open Group Technical Standard C908:
Authorization (AZN) API, 2000. ISBN 1-85912-266-3.
[6] Stallings, W. Cryptography & Network Security: Principles &
Practice 2nd edition, USA: Prentice Hall, 1998. ISBN 0-13-869017-0.
[7] Schneier, B. Applied Cryptography: Protocols, Algorithms, and
Source Code in C 2nd Edition, USA: John Wiley & Sons, 1995.
ISBN 0-471-12845-7.
[8] Krawczyk, H., Bellare, M., Canetti, R. RFC2104: HMAC Keyed-
Hashing for Message Authentication, IETF, February 1997
[9] NIST, FIPS-197: Advanced Encryption Standard,
http://csrc.nist.gov/encryption/aes/ (ref. 1.2.2002), NIST, 2001
[10] Ashley, P., Vandenwauver, M., Practical Intranet Security : Overview
of the State of the Art and Available Technologies, Netherlands:
Kluwer Academic Publishers, 1999. ISBN 0-7923-8354-0.
[11] Eurescom GmbH, Impact of PKI on the European
Telecommunication Business, Eurescom GmbH, 1999. EDIN P944-
GI
[12] Setec Oy, Smart Card Basics,
http://www.setec.fi/english/press/material/smartcardbasics.html (ref.
1.2.2002), Setec Oy, 2001
[13] GemPlus Corp., Welcome to smart cards,
http://www.gemplus.com/basics/what.htm (ref. 1.2.2002), GemPlus
Corp., 2000
[14] Smart Card Forum, What's so smart about Smart Cards?,
http://www.gemplus.com/basics/download/scards.pdf (ref. 1.2.2002),
Smart Card Forum, 2000
[15] Setec Oy, SetCOS Operating System used in more than 10 million
smart cards, http://www.setec.fi/english/press/material/setecos.html
(ref. 1.2.2002), Setec Oy, 2001
[16] Housley, R., Ford, W., Polk, W., Solo, D., RFC2459: Internet X.509
Public Key Infrastructure Certificate and CRL Profile, IETF, January
1999
[17] International Telecommunication Union, ITU-T Recommendation
X.509, ISO/IEC 9594-8: Information Technology Open Systems
Interconnection The Directory: Public-key And Attribute Certificate
Frameworks, http://www.itu.int/rec/dologin.asp?lang=e&id=T-REC-
X.509-200003-I!!PDF-E&parent=T-REC-X.509-200003-I (ref.
5.2.2002), International Telecommunication Union, 2000.
[18] Microsoft Corp., Windows 2000 Certificate Validation Logic,
http://www.microsoft.com/windows2000/techinfo/reskit/en-
us/default.asp (ref. 7.12.2001), Microsoft Corp., 2001
[19] Adams, C., Lloyd, S., Kent, S., Understanding the Public-Key
Infrastructure: Concepts, Standards, and Deployment Considerations,
USA: New Riders Publishing, 1999. ISBN 1-57870-166-X.
[20] Eurescom GmbH, PKI Implementation and Test Suites for Selected
Applications and Services Final Report, Eurescom GmbH, 2001.
EDIN 0170-1001
[21] Kent, S. RFC1422: Privacy Enhancement for Internet Electronic Mail
Part II: Certificate-Based Key Management, IETF, February 1993
[22] Franks, J., Hallam-Baker, P., Hostetler, J., Lawrence, S., Leach, P.,
Luotonen, A., Stewart, L., RFC2617: HTTP Authentication: Basic
and Digest Access Authentication, IETF, June 1999
[23] Haller, N., Metz, C., Nesser, P., Straw, M., RFC2289: A One-Time
Password System, IETF, February 1998
[24] RSA Security Inc., RSA SecurID Authentication: A Better Value for
Better ROI,
http://www.rsasecurity.com/products/securid/whitepapers/BVBROI_
WP_1201.pdf (ref. 6.1.2002), RSA Security Inc. 2001
[25] Smith, R.E., Authentication: From Passwords to Public Keys, USA:
Addison-Wesley Pub Co, 2001. ISBN 0-201-61599-1.
[26] Tung, B., Hur, M., Medvinsky, A., Medvinsky, S., Wray, J., Trostle,
J., Internet Draft: Public Key Cryptography for Initial Authentication
in Kerberos, http://www.ietf.org/internet-drafts/draft-ietf-cat-
kerberos-pk-init-15.txt (ref. 6.1.2002), IETF, 2002
[27] Hur, M., Tung, B., Ryutov, T., Neuman, C., Medvinsky, A., Tsudik,
G., Sommerfeld, B., Internet Draft: Public Key Cryptography for
Cross-Realm Authentication in Kerberos,
http://www.ietf.org/internet-drafts/draft-ietf-cat-kerberos-pk-cross-
08.txt (ref. 6.1.2002), IETF, 2002
[28] Netegrity Inc., SiteMinder 4.6 Planning Guide, Netegrity Inc., 2001.
[29] Netegrity Inc., SiteMinder 4.6 Deployment Guide, Netegrity Inc.,
2001.
[30] Tivoli Inc., Tivoli SecureWay Policy Director Overview White
Paper, Tivoli Inc., 2000.
[31] Carden, P. The New Face of Single Sign-On,
http://www.networkcomputing.com/shared/printArticle?article=nc/10
06/1006f1full.html&pub=nwc (ref. 11.2.2002), Network Computing
Magazine, March 22 1999.
[32] Meller, P. European ministers agree on spam ban, cookie rules,
http://www.computerworld.com/storyba/0,4125,NAV47_STO66411,
00.html (ref. 5.2.2002), IDG News Service, Dec 2001
[33] Computer Associates International, eTrust Single Sign-On,
http://www3.ca.com/Solutions/Product.asp?ID=166 (ref. 16.12.2001),
Computer Associates International, 2001
[34] CyberSafe Inc., TrustBroker,
http://www.cybersafe.com/solutions/trustbroker.html (ref.
16.12.2001), CyberSafe Inc., 2001
[35] DataLynx Inc., Guardian, www.dlxguard.com/gd.html (ref.
16.12.2001), DataLynx Inc., 2001
[36] Entrust Inc., GetAccess, http://www.entrust.com/getaccess/index.htm
(ref. 16.12.2001), Entrust Inc., 2001
[37] Evidian Inc., Access Master SSO,
http://www.evidian.com/accessmaster/about/index.htm (ref.
16.12.2001), Evidian Inc., 2001
[38] Tivoli Inc., SecureWay Policy Director,
http://www.tivoli.com/products/index/secureway_policy_dir/index.ht
ml (ref. 16.12.2001), Tivoli Inc., 2001
[39] Netegrity Inc., SiteMinder,
http://www.netegrity.com/products/index.cfm?leveltwo=SiteMinder
(ref. 16.12.2001), Netegrity Inc., 2001
[40] RSA Security Inc., ClearTrust,
http://www.rsasecurity.com/products/cleartrust/ (ref. 16.12.2001),
RSA Security Inc., 2001
[41] Proginet Inc., Secure Pass Sync,
http://www.proginet.com/products/securpass/securpas.asp (ref.
16.12.2001), Proginet Inc., 2001
[42] Unisys Corp., Single Point Security,
http://www.unisys.com/security/default-02.asp#P48_5006 (ref.
16.12.2001), Unisys Corp., 2000
[43] Netegrity Inc., SiteMinder 4.6 Concepts Guide, Netegrity Inc., 2001.
[44] Netegrity Inc., How SiteMinder Works?,
http://www.netegrity.com/products/index.cfm?leveltwo=SiteMinder
&levelthree=HowItWorks (ref. 4.1.2002), Netegrity Inc., 2001
[45] Netegrity Inc., SiteMinder 4.6 Policy Server Operations Guide,
Netegrity Inc., 2001.
[46] Microsoft Corp., Microsoft .Net Passport,
http://www.passport.com/Consumer/default.asp?lc=1033 (ref.
11.2.2002), Microsoft Corp. 2002
[47] The Liberty Alliance, The Liberty Alliance Project,
http://www.projectliberty.org/ (ref. 1.2.2002), The Liberty Alliance,
2002