
Encryption and Security Tutorial

Peter Gutmann
University of Auckland
http://www.cs.auckland.ac.nz/~pgut001

Security Requirements
Confidentiality
• Protection from disclosure to unauthorised persons
Integrity
• Maintaining data consistency
Authentication
• Assurance of identity of person or originator of data
Non-repudiation
• Originator of communications can’t deny it later
Security Requirements (ctd)
Availability
• Legitimate users have access when they need it
Access control
• Unauthorised users are kept out
These are often combined
• User authentication used for access control purposes
• Non-repudiation combined with authentication

Security Threats
Information disclosure/information leakage
Integrity violation
Masquerading
Denial of service
Illegitimate use
Generic threat: Backdoors, trojan horses, insider attacks
Most Internet security problems are access control or
authentication ones
• Denial of service is also popular, but mostly an annoyance
Attack Types

Passive attack can only observe communications or data


Active attack can actively modify communications or data
• Often difficult to perform, but very powerful
– Mail forgery/modification
– TCP/IP spoofing/session hijacking

Security Services
From the OSI definition:
• Access control: Protects against unauthorised use
• Authentication: Provides assurance of someone's identity
• Confidentiality: Protects against disclosure to unauthorised
identities
• Integrity: Protects from unauthorised data alteration
• Non-repudiation: Protects against originator of
communications later denying it
Security Mechanisms
Three basic building blocks are used:
• Encryption is used to provide confidentiality, can provide
authentication and integrity protection
• Digital signatures are used to provide authentication, integrity
protection, and non-repudiation
• Checksums/hash algorithms are used to provide integrity
protection, can provide authentication
One or more security mechanisms are combined to provide
a security service

Services, Mechanisms, Algorithms


A typical security protocol provides one or more services

• Services are built from mechanisms


• Mechanisms are implemented using algorithms
Conventional Encryption
Uses a shared key

Problem of communicating a large message in secret


reduced to communicating a small key in secret

Public-key Encryption
Uses matched public/private key pairs

Anyone can encrypt with the public key, only one person
can decrypt with the private key
Key Agreement
Allows two parties to agree on a shared key

Provides part of the required secure channel for exchanging


a conventional encryption key
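The Diffie-Hellman algorithm described later is the classic example. As a rough sketch, here is a toy exchange in C; the parameters p = 23, g = 5 and the private values 6 and 15 are made up for illustration, and real systems use primes of at least 1024 bits:

#include <stdio.h>

/* Toy Diffie-Hellman key agreement with tiny, insecure parameters */
static unsigned long modexp( unsigned long base, unsigned long exp,
                             unsigned long mod )
    {
    unsigned long result = 1;

    base %= mod;
    while( exp > 0 )
        {
        if( exp & 1 )
            result = ( result * base ) % mod;
        base = ( base * base ) % mod;
        exp >>= 1;
        }
    return result;
    }

int main( void )
    {
    const unsigned long p = 23, g = 5;          /* Public parameters */
    const unsigned long a = 6, b = 15;          /* Each party's private value */
    const unsigned long A = modexp( g, a, p );  /* Alice sends A = g^a mod p */
    const unsigned long B = modexp( g, b, p );  /* Bob sends B = g^b mod p */

    /* Both ends compute the same shared key without it ever being sent */
    printf( "Alice's key = %lu, Bob's key = %lu\n",
            modexp( B, a, p ), modexp( A, b, p ) );
    return 0;
    }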

Hash Functions
Creates a unique “fingerprint” for a message

Anyone can alter the data and calculate a new hash value
• Hash has to be protected in some way
MAC’s
Message Authentication Code, adds a password/key to a
hash

Only the password holder(s) can generate the MAC

Digital Signatures
Combines a hash with a digital signature algorithm
Digital Signatures (ctd)
Signature checking:

Message/Data Encryption
Combines conventional and public-key encryption
Message/data Encryption (ctd)

Public-key encryption provides a secure channel to


exchange conventional encryption keys

Security Protocol Layers

The further down you go, the more transparent it is


The further up you go, the easier it is to deploy
Encryption and Authentication
Algorithms and Technology

Cryptography is nothing more than a mathematical


framework for discussing the implications of
various paranoid delusions
— Don Alvarez

Historical Ciphers
Non-standard hieroglyphics, 1900BC
Atbash cipher (Old Testament, reversed Hebrew alphabet,
600BC)
Caesar cipher:
letter = letter + 3
‘fish’ → ‘ilvk’
rot13: Add 13/swap alphabet halves
• Usenet convention used to hide possibly offensive jokes
• Applying it twice restores original text
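A minimal rot13 sketch in C; running the same function over the text twice gives back the original:

#include <ctype.h>
#include <stdio.h>

/* rot13: rotate letters 13 places, leave everything else alone */
static void rot13( char *text )
    {
    for( ; *text != '\0'; text++ )
        {
        if( isupper( ( unsigned char ) *text ) )
            *text = 'A' + ( *text - 'A' + 13 ) % 26;
        else if( islower( ( unsigned char ) *text ) )
            *text = 'a' + ( *text - 'a' + 13 ) % 26;
        }
    }

int main( void )
    {
    char msg[] = "Why did the chicken cross the road?";

    rot13( msg );               /* Hide the possibly offensive joke */
    printf( "%s\n", msg );
    rot13( msg );               /* Applying it twice restores the text */
    printf( "%s\n", msg );
    return 0;
    }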
Substitution Ciphers
Simple substitution cipher:
a = p, b = m, c = f, ...
Break via letter frequency analysis
Polyalphabetic substitution cipher
1. a = p, b = m, c = f, ...
2. a = l, b = t, c = a, ...
3. a = f, b = x, c = p, ...
Break by decomposing into individual alphabets, then
solve as simple substitution
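Breaking a simple substitution starts with a letter-frequency count; a minimal sketch in C (the ciphertext string is made up for illustration):

#include <ctype.h>
#include <stdio.h>

/* Count letter frequencies in a ciphertext; the most frequent symbols
   will usually correspond to e, t, a, o, ... in English plaintext */
int main( void )
    {
    const char *ciphertext = "pmf pmf pmf";     /* Made-up ciphertext */
    int counts[ 26 ] = { 0 }, i;

    for( i = 0; ciphertext[ i ] != '\0'; i++ )
        if( isalpha( ( unsigned char ) ciphertext[ i ] ) )
            counts[ tolower( ( unsigned char ) ciphertext[ i ] ) - 'a' ]++;
    for( i = 0; i < 26; i++ )
        if( counts[ i ] > 0 )
            printf( "%c: %d\n", 'a' + i, counts[ i ] );
    return 0;
    }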

One-time Pad (1917)


Message    s    e    c    r    e    t
          18    5    3   17    5   19
OTP      +15    8    1   12   19    5
           7   13    4    3   24   24
           g    m    d    c    x    x

OTP is unbreakable provided


• Pad is never reused (VENONA)
• Unpredictable random numbers are used (physical sources, e.g.
radioactive decay)
One-time Pad (ctd)
Used by
• Russian spies
• The Washington-Moscow “hot line”
• CIA covert operations
Many snake oil algorithms claim unbreakability by
claiming to be a OTP
• Pseudo-OTP’s give pseudo-security
Cipher machines attempted to create approximations to
OTP’s, first mechanically, then electronically

Cipher Machines (~1920)


1. Basic component = wired rotor

• Simple substitution
2. Step the rotor after each letter
• Polyalphabetic substitution, period = 26
Cipher Machines (ctd)
3. Chain multiple rotors

Each steps the next one when a full turn is complete

Cipher Machines (ctd)


Two rotors, period = 26 × 26 = 676
Three rotors, period = 26 × 26 × 26 = 17,576
Rotor sizes are chosen to be relatively prime to give
maximum-length sequence
Key = rotor wiring + rotor start position
Cipher Machines (ctd)
Famous rotor machines
US: Converter M-209
UK: TYPEX
Japan: Red, Purple
Germany: Enigma
Many books on Enigma
Kahn, Seizing the Enigma
Lewin, Ultra Goes to War
Welchman, The Hut Six Story
Winterbotham, The Ultra Secret

“It would have been secure if used properly”


Use of predictable openings:
“Mein Fuehrer! ...”
“Nothing to report”
Use of the same key over an extended period
Encryption of the same message with old (compromised)
and new keys
Device treated as a magic black box, a mistake still made
today
Inventors believed it was infallible, likewise a mistake still made today
Cipher Machines (ctd)
Various kludges made to try to improve security — none
worked
Enigmas were sold to friendly nations after the war
Improved rotor machines were used into the 70’s and 80’s
Further reading:
Kahn, The Codebreakers
Cryptologia, quarterly journal

Stream Ciphers
Binary pad (keystream), use XOR instead of addition
Plaintext = original, unencrypted data
Ciphertext = encrypted data

Plaintext 1 0 0 1 0 1 1
Keystream XOR 0 1 0 1 1 0 1
Ciphertext 1 1 0 0 1 1 0
Keystream XOR 0 1 0 1 1 0 1
Plaintext 1 0 0 1 0 1 1

Two XORs with the same data always cancel out


Stream Ciphers (ctd)
Using the keystream and ciphertext, we can recover the
plaintext
but
Using the plaintext and ciphertext, we can recover the
keystream
Using two ciphertexts from the same keystream, we can
recover the XOR of the plaintexts
• Any two components of an XOR-based encryption will recover
the third
• Never reuse a key with a stream cipher
• Better still, never use a stream cipher
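A minimal sketch of these XOR relationships in C (the keystream byte and plaintext bytes are made up for illustration):

#include <stdio.h>

int main( void )
    {
    const unsigned char keystream = 0x5A;       /* Made-up keystream byte */
    const unsigned char pt1 = 'A', pt2 = 'B';
    const unsigned char ct1 = pt1 ^ keystream;  /* Two messages encrypted */
    const unsigned char ct2 = pt2 ^ keystream;  /*   with the same keystream */

    /* Ciphertext XOR keystream recovers the plaintext */
    printf( "plaintext = %c\n", ct1 ^ keystream );

    /* Plaintext XOR ciphertext recovers the keystream */
    printf( "keystream = %02X\n", ( unsigned ) ( pt1 ^ ct1 ) );

    /* Two ciphertexts from the same keystream give the XOR of the two
       plaintexts, with the keystream cancelled out entirely */
    printf( "pt1 XOR pt2 = %02X (= %02X)\n",
            ( unsigned ) ( ct1 ^ ct2 ), ( unsigned ) ( pt1 ^ pt2 ) );
    return 0;
    }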

Stream Ciphers (ctd)


Vulnerable to bit-flipping attacks
RC4
Stream cipher optimised for fast software implementation
2048-bit key, 8-bit output
Former trade secret of RSADSI, reverse-engineered and
posted to the net in 1994
/* RC4 keystream generation: state[ 256 ] has been set up by the key
   schedule, and the 8-bit indices x, y wrap around automatically */
unsigned char x = 0, y = 0, sx, sy;
while( length-- )
{
x++; sx = state[ x ]; y += sx;
sy = state[ y ]; state[ y ] = sx; state[ x ] = sy;
*data++ ^= state[ ( sx+sy ) & 0xFF ];
}

Takes about a minute to implement from memory

RC4 (ctd)
Extremely fast
Used in SSL (Netscape, MSIE), Lotus Notes, Windows
password encryption, MS Access, Adobe Acrobat, MS
PPTP, Oracle Secure SQL, ...
• Usually used in a manner which allows the keystream to be
recovered (Windows password encryption, Windows server
authentication, Windows NT SYSKEY, early Netscape server
key encryption, some MS server/browser key encryption, MS
PPTP, MS Access, ...)
• Every MS product which is known to use it has got it wrong at
some time
Illustrates the problem of treating a cipher as a magic black
box
Recommendation: Avoid this, it's too easy to get wrong
Block Ciphers
Originated with early 1970’s IBM effort to develop
banking security systems
First result was Lucifer, most common variant has 128-bit
key and block size
• It wasn’t secure in any of its variants

Called a Feistel or product cipher

Block Ciphers (ctd)


f()-function is a simple transformation, doesn’t have to be
reversible
Each step is called a round; the more rounds, the greater the
security (to a point)
Most famous example of this design is DES:
• 16 rounds
• 56 bit key
• 64 bit block size (L,R = 32 bits)
Designed by IBM with, uh, advice from the NSA
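A minimal sketch of the Feistel structure in C, with a made-up f()-function standing in for the real cipher's round function. Decryption runs the same rounds with the subkeys in reverse order, which is why f() itself never needs to be reversible:

#include <stdio.h>

typedef unsigned int half;      /* One half of the block (L or R) */

/* Made-up stand-in for the cipher's f()-function; it doesn't need to
   be reversible, the Feistel structure provides invertibility */
static half f( half r, half subkey )
    {
    return ( r ^ subkey ) * 0x9E3779B9u + 0x7F4A7C15u;
    }

/* One round: new L = old R, new R = old L XOR f( old R, subkey ) */
static void feistel_round( half *l, half *r, half subkey )
    {
    const half t = *r;

    *r = *l ^ f( *r, subkey );
    *l = t;
    }

static void swap_halves( half *l, half *r )
    {
    const half t = *l; *l = *r; *r = t;
    }

int main( void )
    {
    const half subkeys[ 4 ] = { 0x1111, 0x2222, 0x3333, 0x4444 };  /* Made up */
    half l = 0x01234567, r = 0x89ABCDEF;
    int i;

    for( i = 0; i < 4; i++ )                /* Encrypt: run the rounds */
        feistel_round( &l, &r, subkeys[ i ] );
    printf( "ciphertext = %08X %08X\n", l, r );

    swap_halves( &l, &r );                  /* Decrypt: swap halves, run the */
    for( i = 3; i >= 0; i-- )               /*   rounds with the subkeys in  */
        feistel_round( &l, &r, subkeys[ i ] );  /* reverse order, swap back  */
    swap_halves( &l, &r );
    printf( "plaintext  = %08X %08X\n", l, r );
    return 0;
    }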
Attacking Feistel Ciphers
Differential cryptanalysis
• Looks for correlations in f()-function input and output
Linear cryptanalysis
• Looks for correlations between key and cipher input and output
Related-key cryptanalysis
• Looks for correlations between key changes and cipher
input/output
Differential cryptanalysis discovered in 1990; virtually all
block ciphers from before that time are vulnerable...
...except DES. IBM (and the NSA) knew about it 15
years earlier

Strength of DES
Key size = 56 bits
Brute force = 2^55 attempts
Differential cryptanalysis = 2^47 attempts
Linear cryptanalysis = 2^43 attempts
(but the last two are impractical)
> 56 bit keys don’t make it any stronger
> 16 rounds don’t make it any stronger
DES Key Problems
Key size = 56 bits
= 8 × 7-bit ASCII chars
Alphanumeric-only password converted to uppercase
= 8 × ~5-bit chars
= 40 bits
DES uses low bit in each byte for parity
= 32 bits
• Forgetting about the parity bits is so common that the NSA
probably designs its keysearch machines to accommodate this

Breaking DES
DES was designed for efficiency in early-70’s hardware
Makes it easy to build pipelined brute-force breakers in
late-90’s hardware

16 stages, tests 1 key per clock cycle


Breaking DES (ctd)
Can build a DES-breaker using
• Field-programmable gate array (FPGA), software-
programmable hardware
• Application-specific IC (ASIC)
100 MHz ASIC = 100M keys per second per chip
Chips = $10 in 5K+ quantities
$50,000 = 500 billion keys/sec
= 20 hours/key (40-bit DES takes 1 second)
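A quick sanity check of the arithmetic behind these figures (assuming, as above, one key tested per clock cycle per chip and searching half the keyspace on average):

#include <stdio.h>

int main( void )
    {
    const double keysPerChip = 1.0e8;           /* 100 MHz, 1 key per clock */
    const double chips = 50000.0 / 10.0;        /* $50,000 at $10 per chip */
    const double keysPerSec = keysPerChip * chips;  /* ~500 billion keys/sec */

    printf( "56-bit DES: %.0f hours per key\n",
            ( 1ULL << 55 ) / keysPerSec / 3600.0 );
    printf( "40-bit key: %.1f seconds per key\n",
            ( 1ULL << 39 ) / keysPerSec );
    return 0;
    }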

Breaking DES (ctd)


$1M = 1 hour per key (1/20 sec for 40 bits)
$10M = 6 minutes per key (1/200 sec for 40 bits)
(US black budget is ~$25-30 billion)
(distributed.net = ~70 billion keys/sec with 20,000
computers)
EFF (US non-profit organisation) broke DES in 2½ days
Amortised cost over 3 years = 8 cents per key
• If your secret is worth more than 8 cents, don’t encrypt it with
DES
September 1998: German court rules DES “out of date and
unsafe” for financial applications
Brute-force Encryption Breaking
Type of attacker      Budget  Tool  Time and cost per key recovered    Keylen (bits)
                                    40 bits             56 bits        for security
                                                                       1995    2015
Pedestrian hacker     Tiny    PC    1 week              Infeasible       45      59
                      $400    FPGA  5 hours    $0.08    38 years $5,000  50      64
Small business        $10K    FPGA  12 mins    $0.08    556 days $5,000  55      69
Corporate department  $300K   FPGA  24 secs    $0.08    19 days  $5,000  60      74
                              ASIC  0.18 secs  $0.001   3 hours  $38
Big company           $10M    FPGA  0.7 secs   $0.08    13 hours $5,000  70      84
                              ASIC  0.005 s    $0.001   6 mins   $38
Intelligence agency   $300M   ASIC  0.0002 s   $0.001   12 secs  $38     75      89

Other Block Ciphers


Triple DES (3DES)
• Encrypt + decrypt + encrypt with 2 (112 bits) or 3 (168 bits)
DES keys
• By late 1998, banking auditors were requiring the use of 3DES
rather than DES
RC2
• Companion to RC4, 1024 bit key
• RSADSI trade secret, reverse-engineered and posted to the net
in 1996
• RC2 and RC4 have special status for US exportability
Other Block Ciphers (ctd)
IDEA
• Developed as PES (proposed encryption standard), adapted to
resist differential cryptanalysis as IPES, then IDEA
• Gained popularity via PGP, 128 bit key
• Patented
Blowfish
• Optimised for high-speed execution on 32-bit processors
• 448 bit key, relatively slow key setup
CAST-128
• Used in PGP 5.x, 128 bit key

Other Block Ciphers


Skipjack
• Classified algorithm originally designed for Clipper,
declassified in 1998
• Very efficient to implement using minimal resources (e.g.
smart cards)
• 32 rounds, breakable with 31 rounds
• 80 bit key, inadequate for long-term security
GOST
• GOST 28147, Russian answer to DES
• 32 rounds, 256 bit key
• Incompletely specified
Other Block Ciphers
AES
• Advanced Encryption Standard, replacement for DES
• 128 bit block size, 128/192/256 bit key
Many, many others
• No good reason not to use one of the above, proven algorithms

Using Block Ciphers


ECB, Electronic Codebook

Each block encrypted independently


Using Block Ciphers (ctd)
Original text
Deposit $10,000 in acct. number 12-3456- 789012-3

Intercepted encrypted form


H2nx/GHE KgvldSbq GQHbrUt5 tYf6K7ug S4CrMTvH 7eMPZcE2

Second intercepted message


H2nx/GHE KgvldSbq GQHbrUt5 tYf6K7ug Pts21LGb a8oaNWpj

Cut and paste blocks with account information


H2nx/GHE KgvldSbq GQHbrUt5 tYf6K7ug S4CrMTvH a8oaNWpj

Decrypted message will contain the attacker’s account —


without them knowing the encryption key

Using Block Ciphers (ctd)


Need to
• Chain one block to the next to avoid cut & paste attacks
• Randomise the initial block to disguise repeated messages
CBC (cipher block chaining) provides chaining, random
initialisation vector (IV) provides randomisation
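A minimal sketch of CBC chaining in C, with a made-up block_encrypt() standing in for a real block cipher: each plaintext block is XORed with the previous ciphertext block (the IV for the first block) before being encrypted, so repeated plaintext blocks no longer produce repeated ciphertext and spliced blocks decrypt to garbage:

#include <stdio.h>
#include <string.h>

#define BLOCKSIZE   8

/* Made-up stand-in for a real block cipher's encrypt operation */
static void block_encrypt( unsigned char *block, unsigned char key )
    {
    int i;

    for( i = 0; i < BLOCKSIZE; i++ )
        block[ i ] = ( unsigned char ) ( ( block[ i ] ^ key ) + 0x3D );
    }

/* CBC encryption: C[ i ] = E( P[ i ] XOR C[ i-1 ] ), with C[ -1 ] = IV */
static void cbc_encrypt( unsigned char *data, int blocks,
                         const unsigned char *iv, unsigned char key )
    {
    unsigned char chain[ BLOCKSIZE ];
    int i, j;

    memcpy( chain, iv, BLOCKSIZE );
    for( i = 0; i < blocks; i++ )
        {
        unsigned char *block = data + i * BLOCKSIZE;

        for( j = 0; j < BLOCKSIZE; j++ )
            block[ j ] ^= chain[ j ];       /* Chain in previous ciphertext */
        block_encrypt( block, key );
        memcpy( chain, block, BLOCKSIZE );  /* Becomes next chaining value */
        }
    }

int main( void )
    {
    unsigned char data[ 16 ] = "Deposit $10,000";       /* Two 8-byte blocks */
    const unsigned char iv[ BLOCKSIZE ] =               /* Random in practice */
        { 1, 2, 3, 4, 5, 6, 7, 8 };
    int i;

    cbc_encrypt( data, 2, iv, 0x55 );
    for( i = 0; i < 16; i++ )
        printf( "%02X ", data[ i ] );
    printf( "\n" );
    return 0;
    }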
Using Block Ciphers (ctd)
Both ECB and CBC operate on entire blocks
CFB (ciphertext feedback) operates on bytes or bits

This converts a block cipher to a stream cipher (with the


accompanying vulnerabilities)

Relative Performance
Fast
RC4
Blowfish, CAST-128, AES
Skipjack
DES, IDEA, RC2
3DES, GOST
Slow
Typical speeds
• RC4 = Tens of MB/second
• 3DES = MB/second
Recommendations
• For performance, use Blowfish
• For job security, use 3DES
Public Key Encryption
How can you use two different keys?
• One is the inverse of the other:
key1 = 3, key2 = 1/3, message M = 4
Encryption: Ciphertext C = M × key1 = 4 × 3 = 12
Decryption: Plaintext M = C × key2 = 12 × 1/3 = 4
One key is published, one is kept private → public-key cryptography, PKC

Example: RSA
n, e = public key, n = product of two primes p and q
d = private key
Encryption: C = M^e mod n
Decryption: M = C^d mod n
p, q = 5, 7
n = p × q = 35
e = 5
d = e^-1 mod ((p-1)(q-1)) = 5
Example: RSA (ctd)
Message M = 4
Encryption: C = 4^5 mod 35 = 9
Decryption: M = 9^5 mod 35 = 59049 mod 35 = 4
(Use mathematical tricks, otherwise the numbers get dangerously large)
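The main trick is to reduce mod n after every multiplication (square-and-multiply), so the intermediate values never grow large; a minimal sketch in C reproducing the toy example above:

#include <stdio.h>

/* Square-and-multiply modular exponentiation: reduces mod n at every
   step so intermediate values stay small */
static unsigned long modexp( unsigned long base, unsigned long exp,
                             unsigned long mod )
    {
    unsigned long result = 1;

    base %= mod;
    while( exp > 0 )
        {
        if( exp & 1 )
            result = ( result * base ) % mod;
        base = ( base * base ) % mod;
        exp >>= 1;
        }
    return result;
    }

int main( void )
    {
    const unsigned long n = 35, e = 5, d = 5, M = 4;
    const unsigned long C = modexp( M, e, n );

    printf( "ciphertext = %lu\n", C );                  /* 4^5 mod 35 = 9 */
    printf( "plaintext  = %lu\n", modexp( C, d, n ) );  /* 9^5 mod 35 = 4 */
    return 0;
    }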

Public-key Algorithms
RSA (Rivest-Shamir-Adleman), 1977
• Digital signatures and encryption in one algorithm
• Private key = sign and decrypt
• Public key = signature check and encrypt
DH (Diffie-Hellman), 1976
• Key exchange algorithm
Elgamal
• DH variant, one algorithm for encryption, one for signatures
• Attractive as a non-patented alternative to RSA (before the
RSA patent expired)
Public-key Algorithms (ctd)
DSA (Digital Signature Algorithm)
• Elgamal signature variant, designed by the NSA as the US
government digital signature standard
• Intended for signatures only, but can be adapted for encryption
All have roughly the same strength:
512 bit key is marginal
1024 bit key is recommended minimum size
2048 bit key is better for long-term security
Recommendation
• Anything suitable will do
• RSA has universal acceptance, others are less accepted

Elliptic Curve Algorithms


Use mathematical trickery to speed up public-key
operations
Elliptic Curve Algorithms (ctd)
Now we can add, subtract, etc. So what?
• Calling it “addition” is arbitrary, we can just as easily call it
multiplication
• We can now move (some) conventional PKCs over to EC
PKCs (DSA → ECDSA)
Now we have a funny way to do PKCs. So what?
• Breaking PKCs over elliptic curve groups is much harder than
breaking conventional PKCs
• We can use shorter keys which consume less storage space

Advantages/Disadvantages of ECC’s
Advantages
• Sometimes useful in smart cards because of their low storage
requirements
Disadvantages
• New, details are still being resolved
– Many ECC techniques are still too new to trust
• Almost nothing uses or supports them
• No more efficient than standard algorithms like RSA
• ECCs are a minefield of patents, pending patents, and
submarine patents
Recommendation: Don’t use them unless you really need
the small key size
Key Sizes and Algorithms
Conventional vs public-key vs ECC key sizes
Conventional   Public-key   ECC
(40 bits)      —            —
56 bits        (400 bits)   —
64 bits        512 bits     —
80 bits        768 bits     —
90 bits        1024 bits    160 bits
112 bits       1792 bits    195 bits
120 bits       2048 bits    210 bits
128 bits       2304 bits    256 bits
(Your mileage may vary)

Key Sizes and Algorithms (ctd)


However
• Conventional key is used once per message
• Public key is used for hundreds or thousands of messages
A public key compromise is much more serious than a
conventional key compromise
• Compromised logon password, attacker can
– Delete your files
• Compromised private key, attacker can
– Drain credit card
– Clean out bank account
– Sign contracts/documents
– Identity theft
Key Sizes and Algorithms (ctd)
512 bit public key vs 40 bit conventional key is a good
balance for weak security
Recommendations for public keys:
• Use 512-bit keys only for micropayments/smart cards
• Use 1K bit key for short-term use (1 year expiry)
• Use 1.5K bit key for longer-term use
• Use 2K bit key for certification authorities (keys become more
valuable further up the hierarchy), long-term contract signing,
long-term secrets
The same holds for equivalent-level conventional and ECC
keys

Hash Algorithms
Reduce variable-length input to fixed-length (128 or 160
bit) output
Requirements
• Can’t deduce input from output
• Can’t generate a given output (CRC fails this requirement)
• Can’t find two inputs which produce the same output (CRC
also fails this requirement)
Used to
• Produce fixed-length fingerprint of arbitrary-length data
• Produce data checksums to enable detection of modifications
• Distil passwords down to fixed-length encryption keys
Also called message digests or fingerprints
MAC Algorithms
Hash algorithm + key to make hash value dependent on the
key
Most common form is HMAC (hash MAC)
hash( key, hash( key, data ))
• Key affects both start and end of hashing process
Naming: hash + key = HMAC-hash
MD5 → HMAC-MD5
SHA → HMAC-SHA
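A minimal sketch of the nested keyed-hash construction above, with a made-up toy_hash() standing in for a real hash function like MD5 or SHA-1; real HMAC additionally XORs the key with fixed ipad/opad padding constants before the inner and outer hashes:

#include <stdio.h>
#include <string.h>

/* Made-up 32-bit stand-in for a real hash function */
static unsigned long toy_hash( const unsigned char *data, size_t length )
    {
    unsigned long h = 0x811C9DC5UL;
    size_t i;

    for( i = 0; i < length; i++ )
        h = ( ( h ^ data[ i ] ) * 0x01000193UL ) & 0xFFFFFFFFUL;
    return h;
    }

/* Nested keyed hash: hash( key, hash( key, data ) ), so the key affects
   both the start and the end of the hashing process */
static unsigned long toy_mac( const unsigned char *key, size_t keyLen,
                              const unsigned char *data, size_t dataLen )
    {
    unsigned char buffer[ 64 ];
    unsigned long inner;

    memcpy( buffer, key, keyLen );                  /* Inner: key || data */
    memcpy( buffer + keyLen, data, dataLen );
    inner = toy_hash( buffer, keyLen + dataLen );

    memcpy( buffer, key, keyLen );                  /* Outer: key || inner */
    memcpy( buffer + keyLen, &inner, sizeof( inner ) );
    return toy_hash( buffer, keyLen + sizeof( inner ) );
    }

int main( void )
    {
    const unsigned char key[] = "secret key", data[] = "the message";

    printf( "MAC = %08lX\n",
            toy_mac( key, sizeof( key ) - 1, data, sizeof( data ) - 1 ) );
    return 0;
    }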

Algorithms
MD2: 128-bit output, deprecated
MD4: 128-bit output, broken
MD5: 128-bit output, weaknesses
SHA-1: 160-bit output, NSA-designed US government
secure hash algorithm, companion to DSA
RIPEMD-160: 160-bit output
HMAC-MD5: MD5 turned into a MAC
HMAC-SHA: SHA-1 turned into a MAC
Recommendation: Use SHA-1, HMAC-SHA
Key Management and Certificates

By the power vested in me I now declare this text


and this bit string ‘name’ and ‘key’. What RSA
has joined, let no man put asunder
— Bob Blakley

Key Management
Key management is the hardest part of cryptography
Two classes of keys
• Short-term session keys (sometimes called ephemeral keys)
– Generated automatically and invisibly
– Used for one message or session and discarded
• Long-term keys
– Generated explicitly by the user
Long-term keys are used for two purposes
• Authentication (including access control, integrity, and non-
repudiation)
• Confidentiality (encryption)
– Establish session keys
– Protect stored data
Key Management Problems
Key certification
Distributing keys
• Obtaining someone else’s public key
• Distributing your own public key
Establishing a shared key with another party
• Confidentiality: Is it really known only to the other party?
• Authentication: Is it really shared with the intended party?
Key storage
• Secure storage of keys
Revocation
• Revoking published keys
• Determining whether a published key is still valid

Key Lifetimes and Key Compromise


Authentication keys
• Public keys may have an extremely long lifetime (decades)
• Private keys/conventional keys have shorter lifetimes (a year or
two)
Confidentiality keys
• Should have as short a lifetime as possible
If the key is compromised
• Revoke the key
Effects of compromise
• Authentication: Signed documents are rendered invalid unless
timestamped
• Confidentiality: All data encrypted with it is compromised
Key Distribution
Alice retains the private key and sends the public key to
Bob

Mallet intercepts the key and substitutes his own key

Mallet can decrypt all traffic and generate fake signed


message

Key Distribution (ctd)


A certification authority (CA) solves this problem

CA signs Alice’s key to guarantee its authenticity to Bob


• Mallet can’t substitute his key since the CA won’t sign it
Certification Authorities
A certification authority (CA) guarantees the connection
between a key and an end entity
An end entity is
• A person
• A role (“Director of marketing”)
• An organisation
• A pseudonym
• A piece of hardware or software
• An account (bank or credit card)
Some CA’s only allow a subset of these types

Obtaining a Certificate
Obtaining a Certificate (ctd)
1. Alice generates a key pair and signs the public key and
identification information with the private key
• Proves that Alice holds the private key corresponding to the
public key
• Protects the public key and ID information while in transit to
the CA
2. CA verifies Alice’s signature on the key and ID
information
2a. Optional: CA verifies Alice’s ID through out-of-band
means
• email/phone callback
• Business/credit bureau records, in-house records

Obtaining a Certificate (ctd)


3. CA signs the public key and ID with the CA key,
creating a certificate
• CA has certified the binding between the key and ID
4. Alice verifies the key, ID, and CA’s signature
• Ensures the CA didn’t alter the key or ID
• Protects the certificate in transit
5. Alice and/or the CA publish the certificate
Role of a CA
Original intent was to certify that a key really did belong to
a given party
Role was later expanded to certify all sorts of other things
• Are they a bona fide business?
• Can you trust their web server?
• Can you trust the code they write?
• Is their account in good standing?
• Are they over 18?
When you have a certificate-shaped hammer, everything
looks like a nail

Certificate History
To understand the X.509 PKI, it’s necessary to understand
the history behind it
Original 1970s research work saw certificates as a one-time
assertion about public keys
• “This key is valid at this instant for this person”
• Never put into practice
Certificates in practice were applied to protect access to the
X.500 directory
• All-encompassing, global directory run by monopoly telcos
Certificate History (ctd)
Concerns about misuse of the directory
• Companies don’t like making their internal structure public
– Directory for corporate headhunters
• Privacy concerns
– Directory of single women
– Directory of teenage children
X.509 certificates were developed as part of the directory
access control mechanisms
• Acted as an RSA analog to a password
• Strictly a password replacement, no concept of CAs, key
usage, etc

X.500 Naming
X.500 introduced the Distinguished Name (DN), a
guaranteed unique name for everything on earth
X.500 Naming (ctd)
Typical DN components
• Country C
• State or province SP
• Locality L
• Organisation O
• Organisational unit OU
• Common name CN
Typical X.500 DN
C=US/L=Area 51/O=Hanger 18/OU=X.500 Standards
Designers/CN=John Doe
– When the X.500 revolution comes, your name will be lined
up against the wall and shot

Problems with X.500 Names


No-one ever managed to figure out how to make DN’s
work
This is a real diagram
taken from X.521
Problems with X.500 Names (ctd)
No clear plan on how to organise the hierarchy
• Attempts were made to define naming schemes, but nothing
really worked
• People couldn’t even agree on what things like ‘localities’ were
Hierarchical naming model fits the military and
governments, but doesn’t work for businesses or
individuals

Problems with X.500 Names (ctd)


DNs provide the illusion of order while preserving
everyone’s God-given Freedom to Build a Muddle
Simple problem cases
• Communal living (jails, boarding schools)
• Nomadic peoples
• Merchant ships
• Quasi-permanent non-continental structures (oil towers)
• US APO addresses
• LA phone directory contains > 1,000 people called “Smith” in
a nonexistent 90000 area code
– A bogus address is cheaper than an unlisted number
– Same thing will happen on a much larger scale if people are
forced to provide information (cf cypherpunks login)
Problems with X.500 Names (ctd)
For a corporation, is C, SP, L
• Location of company?
• Location of parent company?
• Location of field office?
• Location of incorporation?
For a person, is C, SP, L
• Place of birth?
• Place of residence/domicile?
– Dual citizenship
– Stateless persons
– Nomads
• Place of work?
Solution: Specify it in the CPS, which no-one reads anyway

DNs in Practice
Public CAs typically set
C = CA country
O = CA name
OU = Certificate type/class
CN = User name
email = User email address
• Some European CAs add oddball components required by local
signature laws
• Some CAs modify the DN with a nonce to try and guarantee
uniqueness
DNs in Practice (ctd)
Private CAs (organisations or people signing their own
certs) typically set any DN fields supported by their
software to whatever makes sense for them
• Some software requires that all of { C, O, OU, SP, L, CN } be
set
• Resulting certificates contain strange or meaningless entries as
people try and guess values, or use dummy values
• Windows 2000 has given up on issuer → subject chaining by
names entirely and instead chains by hash of the public key

Solving the DN Problem


Two solutions were informally adopted
1. Users put whatever they felt like into the DN
2. X.509v3 added support for alternative (non-DN) names
– These are largely ignored in favour of the DN though
General layout for a business-use DN
Country + Organisation + Organisational Unit + Common Name
– C=New Zealand
O=Dave’s Wetaburgers
OU=Procurement
CN=Dave Taylor
Solving the DN Problem (ctd)
General layout for a personal-use DN
Country + State or Province + Locality + Common Name
– C=US
SP=California
L=San Francisco
CN=John Doe
There are dozens of other odd things which can be
specified
• teletexTerminalIdentifier
• destinationIndicator
• supportedApplicationContext
Luckily these are almost never used

Non-DN Names
X.509 v3 added support for other name forms
• email addresses
• DNS names
• URL’s
• IP addresses
• EDI and X.400 names
• Anything else (type+value pairs)
For historical reasons, email addresses are often stuffed
into DN’s rather than being specified as actual email
addresses
Problems with Naming/Identity Certificates
“The user looks up John Smith’s certificate in a directory”
• Which directory?
• Which John Smith?
X.509-style PKI turns a key distribution problem into a
name distribution problem
• Cases where multiple people in same O, OU have same first,
middle, and last name
• Solve by adding some distinguishing value to DN (eg part of
SSN)
– Creates unique DNs, but they’re useless for name lookups
– John Smith 8721 vs John Smith 1826 vs John Smith 3504

Qualified Certificates
Certificate designed to identify a person with a high level
of assurance
Precisely defines identification information in order to
identify the cert owner in a standardised manner
• Defines additional parameters such as key usage, jurisdiction
where certificate is valid, biometric information, etc
• Qualified certificates only apply to natural persons
Some jurisdictions don’t allow this type of unique personal
identifier
• Any government that can issue this type of identifier can create
unpersons by refusing to issue it
Qualified Certificates (ctd)
Allows use of a pseudonym
• Pseudonym must be registered, ie can be mapped to a real
name via an external lookup
• Most implementations assume every DN contains a CN, so
some approximation to a CN must be supplied even if a
pseudonym is used
Defines personalData, a new subjectAltName subtype
• Registration authority for personal data information
• Collection of personal data
– Full (real, not DN) name, gender
– Date and place of birth
– Country of residence and/or citizenship
– Postal address

The X.500 Directory


The directory contains multiple objects in object classes
defined by schemas
A schema defines
• Required attributes
• Optional attributes
• The parent class
Attributes are type-and-value pairs
• Type = DN, value = John Doe
• Type may have multiple values associated with it
• Collective attributes are attributes shared across multiple
entries (eg a company-wide fax number)
The X.500 Directory (ctd)
Each instantiation of an object is a directory entry
Entries are identified by DN’s
• The DN is comprised of relative distinguished names (RDN’s)
which define the path through the directory
Directory entries may have aliases which point to the actual
entry
The entry contains one or more attributes which contain the
actual data

The X.500 Directory (ctd)

Data is accessed by DN and attribute type


Searching the Directory
Searching is performed by subtree refinement
• Base specifies where to start in the subtree
• Chop specifies how much of the subtree to search
• Filter specifies the object class to filter on
Example
• Base = C=NZ
• Chop = 1 RDN down from the base
• Filter = organisation
Typical application is to populate a tree control for
directory browsing
• SELECT name WHERE O=*

Directory Implementation
The directory is implemented using directory service
agents (DSA’s)

Users access the directory via a directory user agent (DUA)


• Access requests may be satisfied through referrals or chaining
One or more DSA’s are incorporated into a management
domain
Directory Access
Typical directory accesses:
• Read attribute or attributes from an entry
• Compare supplied value with an attribute of an entry
• List DN’s of subordinate entries
• Search entries using a filter
– Filter contains one or more matching rules to apply to
attributes
– Search returns attribute or attributes which pass the filter
• Add a new leaf entry
• Remove a leaf entry
• Modify an entry by adding or removing attributes
• Move an entry by modifying its DN

LDAP
X.500 Directory Access Protocol (DAP) adapted for
Internet use
• Designed to allow account name + password access on a ‘286
PC running DOS
• Originally Lightweight Directory Access Protocol, now closer
to HDAP
Provides access to LDAP servers (and hence DSAs) over a
TCP/IP connection
• bind and unbind to connect/disconnect
• read to retrieve data
• add, modify, delete to update entries
• search, compare to locate information
LDAP (ctd)
LDAP provides a complex hierarchical directory
containing information categories with sub-categories
containing nested object classes containing entries with
one or more (usually more) attributes containing actual
values
• In one large-scale interop test the use of a directory for cert
storage was found to be the single largest cause of problems
Simplicity made complex
“It will scale up into the billions. We have a pilot with 200 users
running already”
Most practical way to use it is as a simple database
SELECT key WHERE name=‘John Doe’

Key Databases/Directories
Today, keys are stored in
• Flat files (one per key)
• Berkeley DB
• Relational databases
• Proprietary databases (Netscape)
• Windows registry (MSIE)
Pragmatic solution uses a conventional RDBMS
• Already exists in virtually all corporates
• Tied into the existing corporate infrastructure
• Amenable to key storage
– SELECT key WHERE name=‘John Doe’
– SELECT key WHERE expiryDate < today + 1 week
CA Hierarchy in Theory
Portions of the X.500 hierarchy have CAs attached to them

O=University of Auckland (Organisational CA)
  OU=Computer Science (Departmental CA)
    CN=end user
• Each step down the tree adds an RDN; the full path from the root forms
the DN

Top-level CA is called the root CA, aka “the single point of failure”

CA Hierarchy in Practice
Flat or Clayton’s hierarchy

CA certificates are hard-coded into web browsers or email


software
• Later software added the ability to add new CAs to the
hardcoded initial set
Cross-Certification
Original X.500-based scheme envisaged a strict hierarchy
rooted at the directory root
• PEM tried (and failed) to apply this to the Internet
Later work had large numbers of hierarchies
• Many, many flat hierarchies
• Every CA has a set of root certificates used to sign other
certificates in relatively flat trees
What happens when you’re in hierarchy A and your trading
partner is in hierarchy B?
Solution: CAs cross-certify each other
• A signs B’s certificate
• B signs A’s certificate

Cross-Certification (ctd)
Problem: Each certificate now has two issuers
• All of X.509 is based on the fact that there’s a unique issuer
• Toto, I don’t think we’re in X.509 any more
With further cross-certification, re-parenting, subordination
of one CA to another, revocation and re-issuance/
replacement, the hierarchy of trust…
Cross-Certification (ctd)
…becomes the spaghetti of doubt…

…with multiple certificate paths possible

Cross-Certification (ctd)
Different CAs and paths have different validity periods,
constraints, etc etc
• Certificate paths can contain loops
• Certificate semantics can change on different iterations through
the loop
• Are certificate paths Turing-complete?
• No software in existence can handle these situations
Cross-certification is the black hole of PKI
• All existing laws break down
• No-one knows what it’s like on the other side
Cross-Certification (ctd)
The theory: A well-managed PKI will never end up like
this
The practice: If you give them the means, they will build it
• Allow cross-certification and it’s only a matter of time before
the situation will collapse into chaos
• c.f. CA vs EE certificates
– There are at least 5 different ways to differentiate the two
– Only one of these was ever envisaged by X.509

Bridge CAs
Attempt to solve the cross-certification chaos by unifying
disparate PKIs with a super-root
Bridge CA

Still has problems


• PKIn root has different semantics than bridge root
• What if PKI1 = CIA, PKI2 = KGB, PKI3 = Mossad?
– Trust issues are discussed elsewhere
X.509 Certificate Usage Model
Relying party wants to verify a signature:
• Fetch certificate
• Fetch certificate revocation list (CRL)
• Check certificate against CRL
• Check signature using certificate

Certificate Revocation
Revocation is managed with a certificate revocation list
(CRL), a form of anti-certificate which cancels a
certificate
• Equivalent to 1970s-era credit card blacklist booklets
• Relying parties are expected to check CRLs before using a
certificate
– “This certificate is valid unless you hear somewhere that it
isn’t”
CRL Problems
CRLs don’t work
• Violate the cardinal rule of data-driven programming
“Once you have emitted a datum you can’t take it back”
• In transaction processing terms, viewing a certificate as a
PREPARE and a revocation as a COMMIT
– No action can be taken between the two without destroying
the ACID properties of the transaction
– Allowing for other operations between PREPARE and
COMMIT results in nondeterministic behaviour
• Blacklist approach was abandoned by credit card vendors 20
years ago because it didn’t work properly

CRL Problems (ctd)


CRLs mirror credit card blacklist problems
• Not issued frequently enough to be effective against an attacker
• Expensive to distribute
• Vulnerable to simple DOS attacks
– Attacker can prevent revocation by blocking CRL delivery
CRLs add further problems of their own
• Can contain retroactive invalidity dates
• CRL issued right now can indicate that a cert was invalid last
week
– Checking that something was valid at time t isn’t sufficient
to establish validity
– Back-dated CRL can appear at any point in the future
• Destroys the entire concept of nonrepudiation
CRL Problems (ctd)
CA cert revocation is more difficult than end-entity
revocation
• One interop test found that revoking a CA cert would require a
“system rebuild”
– Replace the current PKI software with updated software
• Testing of CA cert revocation was deferred until later

CRL Problems (ctd)


Revoking self-signed certificates is even hairier
• Cert revokes itself
• Applications may
– Accept the CRL as valid and revoke the certificate
– Reject the CRL as invalid since it was signed with a
revoked certificate
– Crash
• Computer version of the Epimenides paradox “All Cretans are
liars”
– Crashing is an appropriate response
CRL Problems (ctd)
CRL Distribution Problems
• CRLs have a fixed validity period
– Valid from issue date to expiry date
• At expiry date, all relying parties connect to the CA to fetch the
new CRL
– Massive peak loads when a CRL expires (DDOS attack)
• Issuing CRLs to provide timely revocation exacerbates the
problem
– 10M clients download a 1MB CRL issued once a minute =
~150GB/s traffic
– Even per-minute CRLs aren’t timely enough for high-value
transactions with interest calculated by the minute

CRL Problems (ctd)


• Clients are allowed to cache CRLs for efficiency purposes
– CA issues a CRL with a 1-hour expiry time
– Urgent revocation arrives, CA issues an (unscheduled)
forced CRL before the expiry time
– Clients which re-fetch the CRL each time will recognise the
cert as expired
– Clients which cache CRLs won’t
– Users must choose between huge bandwidth consumption/
processing delays or missed revocations
CRL Problems (ctd)
Various ad hoc solutions proposed
• Segment CRLs based on urgency of revocation
– “Key compromise” issued once a minute
– “Affiliation changed” issued once a day
– Possible attacks
– Substitute one CRL for another
– Attacker can place key on low-priority CRL before victim can
place it on high-priority CRL
• Delta CRLs
– Short-term CRLs which modify a main CRL
– Discussion on PKI mailing lists indicates that use of delta
CRLs will be an interesting experience

CRL Problems (ctd)


• Stagger CRLs
– Over-issue CRLs so that multiple overlapping CRLs exist at
one time
– Timeliness guarantees vanish
– Plays havoc with CRL semantics
– Cert may or may not appear on any of several CRLs valid at a
given time
Certificate Revocation (ctd)
Many applications require prompt revocation
• CA’s (and X.509) don’t really support this
• CA’s are inherently an offline operation
Requirements for online checks
• Should return a simple boolean value “Certificate is valid/not
valid right now”
• Can return additional information such as “Not valid because
…”
• Historical query support is also useful, “Was valid at the time
the signature was generated”
• Should be lightweight (c.f. CRLs, which can require fetching
and parsing a 10,000 entry CRL to check the status of a single
certificate)

Bypassing CRLs
SET sidesteps CRL problems entirely
• End user certificates are “revoked” by cancelling the credit
card
• Merchant certificates are “revoked” by marking them as invalid
at the acquiring bank
• Payment gateways have short-term certificates which are
quickly replaced
Account Authority Digital Signatures (AADS/X9.59)
• Public key is tied to an existing account
• Revocation is handled by removing the key
• Matches 1970s model of certificates: “This key is valid at this
instant for this account”
Online Status Checking
Online Certificate Status Protocol, OCSP
• Relying party fetches the certificate, then validates it via an OCSP
responder
• Inquires of the issuing CA whether a given certificate is still valid
– Acts as a simple responder for querying CRL’s
– Still requires the use of a CA to check validity

Online Status Checking (ctd)


OCSP acts as a selective CRL protocol
• Standard CRL process: “Send me a CRL for everything you’ve
got”
• OCSP process: “Send me a pseudo-CRL/OCSP response for
only these certs”
– Lightweight pseudo-CRL avoids CRL size problems
• Reply is created on the spot in response to the request
– Ephemeral pseudo-CRL avoids CRL validity period
problems
Online Status Checking (ctd)
• Returned status values are non-orthogonal
– Status = “good”, “revoked”, or “unknown”
– “Not revoked” doesn’t necessarily mean “good”
– “Unknown” could be anything from “Certificate was never
issued” to “It was issued but I can’t find a CRL for it”
• Problems are due in some extent to the CRL-based origins of
OCSP
– CRL can only report a negative result
– “Not revoked” doesn’t mean a cert was ever issued
– Some OCSP implementations will report “I can’t find a
CRL” as “Good”
– Some relying party implementations will assume “revoked” =
“not good”, so any other status = “good”
– Much debate among implementors about OCSP semantics

Online Status Checking (ctd)


Other protocols
• Simple Certificate Validation Protocol (SCVP)
– Relying party submits a full chain of certificates
– Server indicates whether the chain can be verified
– Aimed mostly at thin clients
• Data Validation and Certification Server Protocols (DVCS)
– Provides facilities similar to SCVP disguised as a general
third-party data validation mechanism
• Integrated CA Services Protocol (ICAP)
• Real-time Certificate Status Protocol (RCSP)
• Web-based Certificate Access Protocol (WebCAP)
• Open CRL Distribution Protocol (OpenCDP)
Online Status Checking (ctd)
• Directory Supported Certificate Status Options (DCS)
• Data Certification Server (also DCS)
• Peter’s Active Revocation Protocol (PARP)
• Delegated Path Validation (DPV)
– Offshoot of the SCVP/DVCS debate and an OCSP
alternative OCSP-X
• Many, many more
– Protocol debate has been likened to religious sects arguing
over differences in dogma

Online Status Checking (ctd)


Online protocols place an enormous load on the CA
• CA must carefully protect their signing keys
… but …
• CA must be able to sign x,000 status requests per second
• CRL is inherently a batch operation
– Once an hour, scan a database table and sign the resulting
list
• Online status protocols have a high processing overhead
– For each query, check for a revocation and produce a signed
response
– By their very nature, it’s not possible to pre-generate
responses, since they must be fresh
Cost of Revocation Checking
CAs charge fees to issue a certificate
• Most expensive collection of bits in the world
Revocation checks are expected to be free
• CA can’t tell how often or how many checks will be made
• CRLs require
– Processor time
– Multiple servers (many clients can fetch them)
– Network bandwidth (CRLs can get large)
• Active disincentive for CAs to provide real revocation
checking capabilities

Cost of Revocation Checking (ctd)


Example: ActiveX
• Relatively cheap cert can sign huge numbers of ActiveX
controls
• Controls are deployed across hundreds of millions of Windows
machines
• Any kind of useful revocation checking would be
astronomically expensive
Example: email certificate
• Must be made cheap (or free) or users won’t use them
• Revocation handling isn’t financially feasible
Cost of Revocation Checking (ctd)
Revocation checking in these cases is, quite literally,
worthless
• Leave an infrequently-issued CRL at some semi-documented
location and hope few people find it
Charge for revocation checks
• Allows certain guarantees to be associated with the check
• Identrus charges for every revocation check (i.e. certificate use)
• GSA cost was 40¢…$1.20 each time a certificate was used

Rev./Status Checking in the Real World


CA key compromise: Everyone finds out
• Sun handled revocation of their CA key via posts to mailing
lists and newsgroups
SSL server key compromise: No-one finds out
• Stealing the keys from a typical poorly-secured server isn’t
hard (c.f. web page defacements)
• Revocation isn’t necessary since certificates are included in the
SSL handshake
– Just install a new certificate
email key compromise: Who cares?
• If necessary, send a copy of your new certificate to everyone in
your address book
Rev./Status Checking in the Real World (ctd)
In practice, revocation checking is turned off in user
software
• Serves no real purpose, and slows everything down a lot
Possible alternative revocation techniques
• Self-signed revocation (suicide note)
• Certificate of health/warrant of fitness for certificates (anti-
CRL)
Certificate of health provides better proof than CRLs
• CRL is a negative statement
• Anti-CRL is a positive statement
• Proving a negative is much harder than proving a positive

Rev./Status Checking in the Real World (ctd)


PKI researchers like to tinker with revocation in the same
way that petrol-heads tinker with car engines
Anyone who can figure out how to make revocation work,
please see me afterwards
Revocation as Distributed Trans.Processing
View revocation as a distributed transaction processing
problem
• Allows analysis of requirements and solution using established
TP mechanisms
• Goal is to distribute certificate status information in a reliable,
consistent manner to all parties in the presence of hardware and
software failures
• All users in a closed community are presented with a
guaranteed-consistent view of certificate information
– Meets the online status check requirements given earlier

Revocation as Distributed TP (ctd)


Managing distributed status information

• Initially, all hosts (except one, which is down) maintain a standard
view of a valid cert: Certificate = valid
• Certificate is invalidated, and an atomic update propagates across all
parties: Certificate = invalid
Revocation as Distributed TP (ctd)

• The crashed server is restarted and also updates its state via update
propagation

• In X.509 terms this is equivalent to propagating a CRL to all


relying parties simultaneously using only a single transaction
• Since transaction times are recorded, this system can also
resolve historical queries
– “Was this cert valid at time t?”
– “Was this cert valid at the time it signed this document?”

Certificate Chains
Collection of certificates from a leaf up to a root or trust
anchor
• All previous problems are multiplied by the length of the chain

• Each certificate in the chain (end entity cert, RA cert, CA certs) has
its own CRL which must also be checked
• Complexity of revocation checking is proportional to the square of the
depth of the issuance hierarchy
Certificate Chains (ctd)
Use OCSP with an access concentrator

• Relying party submits the cert chain to an OCSP gateway, which performs
the individual OCSP validation queries

• Gateway does all the work


• Requests can be forwarded to further gateways
• User is billed once at the access concentrator

Closing the Circle


Fetching a cert and then immediately having to perform a
second fetch to determine whether it’s any good is silly
• Fetch a known-good cert from the cert server (no revocation check
necessary)
• Solves previous revocation-checking problems
• Simplify further: Submit hash of certificate on hand
– “It’s good, go ahead and use it”
– “It’s no good, use this one instead”
Closing the Circle (ctd)
All we really care about is the key
• Issuer/subject DN, etc are historical artifacts/baggage
• “Give me the key for John Smith”
• This operation is currently performed locally when the key is
fetched from a certificate store/Windows registry/flat file
• Moving from a local to a remote query allows centralised
administration

Closing the Circle (ctd)


Key-fetch is still an unnecessary step
• Validation server performs the check directly
• Similar to the 1970’s Davies and Price model
– CA provides a dispute resolution mechanism via a one-time
interactive certificate for the transaction
• Fits the banking/online settlement transaction model
Key Backup/Archival
Need to very carefully balance security vs backup
requirements
• Every extra copy of your key is one more failure point
• Communications and signature keys never need to be
recovered — generating a new key only takes a minute or so
• Long-term data storage keys should be backed up
Never give the entire key to someone else
• By extension, never use a key given to you by someone else
(eg generated for you by a third party)

Key Backup/Archival (ctd)


Use a threshold scheme to handle key backup
• Break the key into n shares
• Any m of n shares can recover the original
• Store each share in a safe, different location (locked in the
company safe, with a solicitor, etc)
• Shares can be reconstructed under certain conditions (eg death
of owner)
Defeating this setup requires subverting multiple
shareholders
Never give the entire key to someone else
Never give the key shares to an outside third party
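To illustrate the splitting idea, a minimal n-of-n XOR split in C: all n shares are needed for recovery and any smaller subset reveals nothing about the key. A true m-of-n threshold scheme such as Shamir's secret sharing uses polynomial interpolation instead, but the shares are stored and handled the same way:

#include <stdio.h>
#include <stdlib.h>

#define KEYLEN  16
#define SHARES  3

int main( void )
    {
    unsigned char key[ KEYLEN ], shares[ SHARES ][ KEYLEN ];
    int i, j;

    /* The key to be backed up (made-up value for the example) */
    for( i = 0; i < KEYLEN; i++ )
        key[ i ] = ( unsigned char ) ( i * 17 + 3 );

    /* First n-1 shares are random, last share = key XOR the other shares.
       rand() is a placeholder, use a proper RNG in practice */
    for( j = 0; j < SHARES - 1; j++ )
        for( i = 0; i < KEYLEN; i++ )
            shares[ j ][ i ] = ( unsigned char ) rand();
    for( i = 0; i < KEYLEN; i++ )
        {
        unsigned char x = key[ i ];

        for( j = 0; j < SHARES - 1; j++ )
            x ^= shares[ j ][ i ];
        shares[ SHARES - 1 ][ i ] = x;
        }

    /* Recovery: XOR all the shares back together */
    for( i = 0; i < KEYLEN; i++ )
        {
        unsigned char recovered = 0;

        for( j = 0; j < SHARES; j++ )
            recovered ^= shares[ j ][ i ];
        printf( "%02X", recovered );
        }
    printf( "\n" );
    return 0;
    }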
Key Destruction
Ensure all copies of a private key are destroyed
• Is every copy really gone?
Public keys may need to survive private keys by quite some
time
• Signature on 20-year mortgage
Long-term key ownership can be a thorny issue
• CA goes bankrupt and auctions off keys
– c.f. bankrupt dot-coms selling user lists after they promised
not to
– Only asset the CA had left
– Bidding quickly shot up to rather high values
• Do you want a third-party CA issuing your corporate certs?

What is Trust?
This term appears constantly in relation to certificates
“Alice sees the certificate and trusts Bob”
What is trust anyway?
Types of trust
• Blind trust
– Sometimes the only option, eg emergencies
• Swift trust
– Based on a series of hedges to reduce potential loss
• Deference-based trust
– Disincentive to betray trust
– Contract / auditing / “our systems are infallible, don’t even
think about it”
What is Trust? (ctd)
• Knowledge-based / historical trust
– Based on established history / trading relationship
• Social trust
– Based on emotions rather than rational thought
• Identification-based trust
– Parties have common goals
• Indirect trust
– Sometimes, trust can’t be established directly
– Establish indirect trust using third parties

What is Trust? (ctd)


Type of Trust Mechanism
Blind None necessary
Swift None necessary
Deference-based Bilateral trading agreements
Contracts/legal agreements
Laws
Knowledge-based / None necessary
historical
Social trust ―
Identity-based Identity certificates
What is Trust? (ctd)
Trust can be grouped into one of three classes
• Mechanistic trust
– Based on positive evidence
– “We’ve done it before and it worked”
• Religious trust
– Based on faith
– No evidence, but we hope for a positive outcome
• Psychotic trust
– Based on negative evidence
– “We’ve done it before and it didn’t work”
Much current PKI “trust” is either religious or psychotic

What is Trust? (ctd)


Trust degradation
• Without reinforcement, trust decays over time
• Trust may be deliberately destroyed
– “My credit card has been stolen”
– Prevents parties from making decisions based on invalid
trust data
Certificate Structure
Version (X.509 v3)
Serial number
Issuer name (DN)
Validity (start and end time)
Subject Name (DN)
Subject public key
Extensions (added in v3)
Extra identification information, usage
constraints, policies, etc

Usually either the subject name or issuer and serial number


identify the certificate
Validity field indicates when certificate renewal fee is due

Certificate Structure (ctd)


Typical certificate
• Serial Number = 177545
• Issuer Name = Verisign
• ValidFrom = 12/09/98
• ValidTo = 12/09/99
• Subject Name = John Doe
• Public Key = RSA public key
Certificate Extensions
Extensions consist of a type-and-value pair, with optional
critical flag
Critical flag is used to protect CA’s against assumptions
made by software which doesn’t implement support for a
particular extension
• If flag is set, extension must be processed (if recognised) or the
certificate rejected
• If flag is clear, extension may be ignored
Ideally, implementations should process and act on all
components of all fields of an extension in a manner
which is compliant with the semantic intent of the
extension

Certificate Extensions (ctd)


Actual definitions of critical flag usage are extremely
vague
• X.509: Noncritical extension “is an advisory field and does not
imply that usage of the key is restricted to the purpose
indicated”
• PKIX: “CA’s are required to support constraint extensions”,
but “support” is never defined
• S/MIME: Implementations should “correctly handle” certain
extensions
• MailTrusT: “non-critical extensions are informational only and
may be ignored”
• Verisign: “all persons shall process the extension... or else
ignore the extension”
Certificate Extensions (ctd)
Extensions come in two types
Usage/informational extensions
• Provide extra information on the certificate and its owner
Constraint extensions
• Constrain the user of the certificate
• Act as a Miranda warning (“You have the right to remain
silent, you have the right to an attorney, ...”) to anyone using
the certificate

Certificate Usage Extensions


Key Usage
• Defines the purpose of the key in the certificate
digitalSignature
• Short-term authentication signature (performed automatically
and frequently)
• “This key can sign any kind of document…
… except one that happens to look like an X.509 certificate”
nonRepudiation
• Binding long-term signature (performed consciously)
• Another school of thought holds that nonRepudiation acts as an
additional service on top of digitalSignature
• Certificate profiles are split roughly 50:50 on this
Certificate Usage Extensions (ctd)
keyEncipherment
• Exchange of encrypted session keys (RSA)
keyAgreement
• Key agreement (DH)
keyCertSign/cRLSign
• Signature bits used by CA’s
No-one really knows what the nonRepudiation bit signifies
• Asking 8 different people will produce 10 different responses
• c.f. crimeFree bit
– “This certificate will be used for transactions which are not
a perpetration of fraud or other illegal activities”

Certificate Usage Extensions (ctd)


• Possible definition: “Nonrepudiation is anything which fails to
go away when you stop believing in it”
– If you can convince someone it’s not worth repudiating a
signature, you have nonrepudiation
– Have them sign a legal agreement promising not to do it
– Convince them that the smart card they used is infallible
and it’s not worth going to court over
– Threaten to kill their kids
• The only definitive statement which can be made upon seeing
the NR bit set is “The subscriber asked the issuing CA to set
this bit”
• Suggestion that CAs set this bit at random just to prevent
people from arguing that its presence has a meaning
Certificate Usage Extensions (ctd)
Extended Key Usage
Extended forms of the basic key usage fields
• serverAuthentication
• clientAuthentication
• codeSigning
• emailProtection
• timeStamping

Certificate Usage Extensions (ctd)


Two interpretations of what extended key usage values
mean when set in a CA certificate
• Certificate can be used for the indicated usage
– Interpretation used by PKIX, some vendors
• Certificate can issue certificates with the given usage
– Interpretation used by Netscape, Microsoft, other vendors
Netscape cert-type
• An older Netscape-specific extension which performed the
same role as keyUsage, extKeyUsage, and basicConstraints
Certificate Usage Extensions (ctd)
Private Key Usage Period
Defines start and end time in which the private key for a
certificate is valid
• Signatures may be valid for 10-20 years, but the private key
should only be used for a year or two
Alternative Names
Everything which doesn’t fit in a DN
• rfc822Name
– email address, dave@wetaburgers.com
• dNSName
– DNS name for a machine, ftp.wetaburgers.com

Certificate Usage Extensions (ctd)


• uniformResourceIdentifier
– URL, http://www.wetaburgers.com
• iPAddress
– 202.197.22.1 (encoded as CAC51601)
• x400Address, ediPartyName
– X.400 and EDI information
• directoryName
– Another DN, but containing stuff you wouldn’t expect to
find in the main certificate DN
– Actually the alternative name is a form called the
GeneralName, of which a DN is a little-used subset
• otherName
– Type-and-value pairs (type=MPEG, value=MPEG-of-cat)
Certificate Usage Extensions (ctd)
Certificate Policies
Information on the CA policy under which the certificate
is issued
• Policy identifier
• Policy qualifier(s)
• Explicit text (“This certificate isn’t worth the paper it’s not
printed on”)
Defines/constrains what the CA does, not what the user does
• Passport issuer can’t constrain how a passport is used
• Driver’s licence issuer can’t constrain how a driver’s licence is
used
• SSN issuer can’t even constrain how an SSN is (mis-)used

Certificate Usage Extensions (ctd)


X.509 delegates most issues of certificate semantics or trust
to the CA’s policy
• Many policies serve mainly to protect the CA from liability
– “Verisign disclaims any warranties... Verisign makes no
representation that any CA or user to which it has issued a
digital ID is in fact the person or organisation it claims to
be... Verisign makes no assurances of the accuracy,
authenticity, integrity, or reliability of information”
• Effectively these certificates have null semantics
• If CA’s didn’t do this, their potential liability would be
enormous
Certificate Usage Extensions (ctd)
Policy Mappings
• Maps one CA’s policy to another CA
• Allows verification of certificates issued under other CA
policies
– “For verification purposes we consider our CA policy to be
equivalent to the policy of CA x”
• Mapping of constraints is left hanging

Certificate Constraint Extensions


Basic Constraints
Whether the certificate is a CA certificate or not
• Prevents users from acting as CAs and issuing their own
certificates
• Redundant, since keyUsage specifies the same thing in a more
precise manner
• Much confusion over its use in non-CA certificates
– German ISIS profile mandates its use
– Italian profile forbids its use
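A minimal sketch (illustrative, again using the Python 'cryptography'
package) of the basicConstraints check a relying party might make on an
issuing certificate; how to treat certificates that omit the extension
entirely is exactly the kind of ambiguity described above:

    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID

    def issuer_may_act_as_ca(cert: x509.Certificate) -> bool:
        try:
            bc = cert.extensions.get_extension_for_oid(
                ExtensionOID.BASIC_CONSTRAINTS).value
        except x509.ExtensionNotFound:
            # No basicConstraints at all (X.509v1-style cert); rejecting is
            # the conservative choice, some software accepts it instead
            return False
        return bc.ca        # True only if the cA flag is set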
Certificate Constraint Extensions (ctd)
Name Constraints
Constrain the DN subtree under which a CA can issue
certificates
• Constraint of C=NZ, O=University of Auckland would enable
a CA to issue certificates only for the University of Auckland
• Main use is to balkanize the namespace so a CA can buy or
license the right to issue certificates in a particular area
• Constraints can also be applied to email addresses, DNS
names, and URLs

Certificate Constraint Extensions (ctd)


Policy Constraints
Can be used to disable certificate policy mappings
• Policy = “For verification purposes we consider our CA policy
to be equivalent to the policy of CA x”
• Policy constraint = “No it isn’t”
Certificate Profiles
X.509 is extremely vague and nonspecific in many areas
• To make it usable, standards bodies created certificate profiles
which nailed down many portions of X.509
PKIX
Internet PKI profile
• Requires certain extensions (basicConstraints, keyUsage) to be
critical
– Doesn’t require basicConstraints in end entity certificates,
interpretation of CA status is left to chance
• Uses digitalSignature for general signing, nonRepudiation
specifically for signatures with nonRepudiation
• Defines Internet-related altName forms like email address,
DNS name, URL

Certificate Profiles (ctd)


FPKI
(US) Federal PKI profile
• Requires certain extensions (basicConstraints, keyUsage,
certificatePolicies, nameConstraints) to be critical
• Uses digitalSignature purely for ephemeral authentication,
nonRepudiation for long-term signatures
• Defines (in great detail) valid combinations of key usage bits
and extensions for various certificate types
MISSI
US DoD profile
• Similar to FPKI but with some DoD-specific requirements
(you’ll never run into this one)
Certificate Profiles (ctd)
ISO 15782
Banking — Certificate Management Part 1: Public Key
Certificates
• Uses digitalSignature for entity authentication and
nonRepudiation strictly for nonrepudiation (leaving digital
signatures for data authentication without nonrepudiation
hanging)
• Can’t have more than one flag set
Canada
• digitalSignature or nonRepudiation must be present in all
signature certs
• keyEncipherment or dataEncipherment must be present in
confidentiality certs

Certificate Profiles (ctd)


SEIS
Secured Electronic Information in Society
• Leaves extension criticality up to certificate policies
• Uses digitalSignature for ephemeral authentication and some
other signature types, nonRepudiation specifically for
signatures with nonRepudiation
– nonRepudiation can’t be combined with other flags
– Requires three separate keys for digital signature,
encryption, and nonrepudiation
• Disallows certain fields (policy and name constraints)
Certificate Profiles (ctd)
TeleTrusT/MailTrusT
German MailTrusT profile for TeleTrusT (it really is
capitalised that way)
• Requires keyUsage to be critical in some circumstances
• Uses digitalSignature for general signatures, nonRepudiation
specifically for signatures with nonRepudiation
ISIS
German Industrial Signature Interoperability Spec
• Only allows some combinations of key usage bits
• ISIS extensions should be marked non-critical even if their
semantics would make them critical
• Requires authorityCertIssuer/SerialNumber instead of
authorityKeyIdentifier

Certificate Profiles (ctd)


Australian Profile
Profile for the Australian PKAF
• Requires certain extensions (basicConstraints, keyUsage) to be
critical
• Defines key usage bits (including digitalSignature and
nonRepudiation) in terms of which bits may be set for each
algorithm type
• Defines (in great detail) valid combinations of key usage bits
and extensions for various certificate types
German Profile
Profile to implement the German digital signature law
• Requires that private key be held only by the end user
Certificate Profiles (ctd)
SIRCA Profile
(US) Securities Industry Association
• Requires all extensions to be non-critical
• Requires certificates to be issued under the SIA DN subtree
Microsoft Profile (de facto profile)
• Rejects certificates with critical extensions
• Always seems to set nonRepudiation flag when
digitalSignature flag set
• Ignores keyUsage bit
• Treats all certificate policies as the hardcoded Verisign policy

Certificate Profiles (ctd)


Many, many more
You can't be a real country unless you have a beer and an airline. It
helps if you have some kind of a football team, or some nuclear
weapons, but at the very least you need a beer.
— Frank Zappa
And an X.509 profile.
— Peter Gutmann
Need to
• Ensure CA issues certificates conformant to the profile
• Ensure CA software conforms to the profile
• Ensure relying party software conforms to the profile
• Extensively test both to ensure they really do this (rather than
just having the vendor claim they do this)
Setting up a CA
No-one makes money running a CA
• You make money by selling CA services and products
Typical cost to set up a proper CA from scratch: $1M
Writing the policy/certificate practice statement (CPS)
requires significant effort
Getting the top-level certificate (root certificate) installed
and trusted by users can be challenging
• Root certificate is usually self-signed

Bootstrapping a CA
Get your root certificate signed by a known CA
• Your CA’s certificate is certified by the existing CA
• Generally requires becoming a licensee of the existing CA
• Your CA is automatically accepted by existing software
Get users to install your CA certificate in their applications
• Difficult for users to do
• Specific to applications and OSes
• Not transparent to users
• No trust mechanism for the new certificate
Bootstrapping a CA (ctd)
Publish your CA certificate(s) by traditional means
• Global Trust Register, http://www.cl.cam.ac.uk/Research/Security/Trust-Register/
• Book containing register of fingerprints of the world’s most
important public keys
• Implements a top-level CA using paper and ink
Install custom software containing the certificate on user
PC’s
• Even less transparent than manually installing CA certificates
• No trust mechanism for the new certificate

Business Expectations of a CA
Current work follows the “if you build it, they will (might)
come” model
• Industry (particularly governments) make great testbeds for
PKI experimentation
– They’ll even pay you for it!
Survey of US businesses revealed that they require CA’s to
be insurable
• Must be possible to quantify risk reliably enough to make
meaningful warranties
• c.f. Verisign’s null-semantics certificates
Business Expectations of a CA (ctd)
Two approaches to this problem:
1. Practical solution: CA has only two warranted
responsibilities
1. Ensure each name is unique
2. Protect the CA’s key(s)
– Interpreting the certificate is left to the relying party
2. Legal solution: If you do x, the government will
indemnify you
• x expands to “jump through all the hoops defined in this digital
signature law”
• Type, size, and number of hoops varies from country to
country

CA Business Model
Free email certs
• No-one will pay for them
• Clown suit certs
SSL certs run as a protection racket
• Buy our certs at US$200/kB/year or your customers
will be scared away
• Actual CA advertising:
If you fail to renew your Server ID prior to the expiration date,
operating your Web site will become far riskier than normal […]
your Web site visitors will encounter multiple, intimidating warning
messages when trying to conduct secure transactions with your
site. This will likely impact customer trust and could result in lost
business for your site.

CA consulting services
Finding a Workable Business Model
PKI requires of the user
• Certificate management software to be installed and configured
• Payment for each certificate
• Significant overhead in managing keys and certificates
PKI provides to the user
• “…disclaims any warranties... makes no representation that any
CA or user to which it has issued a digital ID is in fact the
person or organisation it claims to be... makes no assurances of
the accuracy, authenticity, integrity, or reliability of
information”

Finding a Workable Business Model (ctd)


A PKI is not just another IT project
• Requires a combined organisational, procedural, and legal
approach
• Staffing requires a skilled, multidisciplinary team
• Complexity is enormous
– Initial PKI efforts vastly underestimated the amount of
work involved
– Current work is concentrating on small-scale pilots to avoid
this issue
To be accepted, a PKI must provide perceived value
• Failure to do so is what killed SET
• No-one has really figured out a PKI business model yet
CA Policies
Serves two functions
• Provides a CA-specific mini-profile of X.509
• Defines the CA terms and conditions/indemnifies the CA
CA policy may define
• Obligations of the CA
– Checking certificate user validity
– Publishing certificates/revocations
• Obligations of the user
– Provide valid, accurate information
– Protect private key
– Notify CA on private key compromise

CA Policies (ctd)
• List of applications for which issued certificates may be
used/may not be used
• CA liability
– Warranties and disclaimers
• Financial responsibility
– Indemnification of the CA by certificate users
• Certificate publication details
– Access mechanism
– Frequency of updates
– Archiving
• Compliance auditing
– Frequency and type of audit
– Scope of audit
CA Policies (ctd)
• Security auditing
– Which events are logged
– Period for which logs are kept
– How logs are protected
• Confidentiality policy
– What is/isn’t considered confidential
– Who has access
– What will be disclosed to law enforcement/courts

CA Policies (ctd)
• Certificate issuing
– Type of identification/authentication required for issuance
– Type of name(s) issued
– Resolution of name disputes
– Handling of revocation requests
– Circumstances under which a certificate is revoked, who can
request a revocation, type of identification/authentication required
for revocation, how revocation notices are distributed
• Key changeover
– How keys are rolled over when existing ones expire
• Disaster recovery
CA Policies (ctd)
• CA security
– Physical security
– Site location, access control, fire/flood protection, data backup
– Personnel security
– Background checks, training
– Computer security
– OS type used, access control mechanisms, network security
controls
– CA key protection
– Generation, key sizes, protection (hardware or software, which
protection standards are employed, key backup/archival,
access/control over the key handling software/hardware)
• Certificate profiles
– Profile amendment procedures
– Publication

CA’s and Scaling


The standard certification model involves direct user
interaction with a CA

This doesn’t scale well


• CA has to verify details for each user
• Processing many users who come from a similar background (eg a
single organisation) results in unnecessary repeated work
RA’s
Registration authorities offload user processing and
checking from the CA

RA acts as a trusted intermediary


• RA has a trusted relationship with CA
• RA has access to user details

Timestamping
Certifies that a document existed at a certain time
Used for added security on existing signatures
• Timestamped countersignature proves that the original
signature was valid at a given time
• Even if the original signature’s key is later compromised, the
timestamp can be used to verify that the signature was created
before the compromise
Requires a data format which can handle multiple
signatures
• Only PGP keys and S/MIME signed data provide this
capability
Problems with X.509
Most of the required infrastructure doesn’t exist
• Users use an undefined certification request protocol to obtain
a certificate which is published in an unclear location in a
nonexistent directory with no real means to revoke it
• Various workarounds are used to hide the problems
– Details of certificate requests are kludged together via web
pages
– Complete certificate chains are included in messages
wherever they’re needed
– Revocation is either handled in an ad hoc manner or ignored
entirely
Standards groups are working on protocols to fix this
• Progress is extremely slow

Problems with X.509 (ctd)


Certificates are based on owner identities, not keys
• Owner identities don’t work very well as certificate ID’s
– Real people change affiliations, email addresses, even
names
– An owner will typically have multiple certificates, all with
the same ID
• Owner identity is rarely of security interest (authorisation/
capabilities are what count)
– When you check into a hotel, buy goods in a store, you’re
asked for a payment instrument, not a passport
• Revoking a key requires revoking the identity of the owner
• Renewal/replacement of identity certificates is nontrivial
Problems with X.509 (ctd)
Authentication and confidentiality certificates are treated
the same way for certification purposes
• X.509v1 and v2 couldn’t even distinguish between the two
Users should have certified authentication keys and use
these to certify their own confidentiality keys
• No real need to have a CA to certify confidentiality keys
• New confidentiality keys can be created at any time
• Doesn’t require the cooperation of a CA to replace keys

Problems with X.509 (ctd)


Aggregation of attributes shortens the overall certificate
lifetime
• Steve’s Rule of Revocation: Frequency of certificate change is
proportional to the square of the number of attributes
• Inflexibility of certificate conflicts with real-world IDs
– Can get a haircut, switch to contact lenses, get a suntan,
shave off a moustache, go on a diet, without invalidating
your passport
– Changing a single bit in a certificate requires getting a new
one
– Steve’s certificate is for an organisation which no longer
exists
Problems with X.509 (ctd)
Certificates rapidly become a dossier as more attributes are
added
[ASN.1 dump, several slides long in the original, of one standard S/MIME
signed message: the CMS SignedData structure carries the signer’s complete
certificate (full DN for ‘Juerg Spoerndli’ at Swisskey AG, Public CA
Services, Zuerich, including an internal OU serial number, department,
postal code, and email address, plus the RSA public key and the
netscape-cert-type, netscape-comment, keyUsage, and privateKeyUsagePeriod
extensions), the complete ‘Swisskey ID CA 1024’ intermediate CA certificate
and the self-signed ‘Swisskey Root CA’ certificate (each with full DNs, RSA
public keys, basicConstraints, and keyUsage), and finally the SignerInfo
with its content type, signing time, message digest, S/MIME capabilities
(3DES, RC2-128/64/40, DES), and the RSA signature value]
All this from a standard S/MIME signature!

Problems with X.509 (ctd)


Hierarchical certification model doesn’t fit typical business
practices
• Businesses generally rely on bilateral trading arrangements or
existing trust relationships
• Third-party certification is an unnecessary inconvenience when
an existing relationship is present
X.509 PKI model entails building a parallel trust
infrastructure alongside the existing, well-established
one
• In the real world, trust and revocation is handled by closing the
account, not with PKIs, CRLs, certificate status checks, and
other paraphernalia
Problems with X.509 (ctd)
In a closed system (SWIFT, Identrus)
• Members sign up to the rules of the club
• Only members who will play by the rules and can carry the risk
are admitted
• Members are contractually obliged to follow the rules,
including obligations for signatures made with their private key
In an open system
• Parties have no previously established network of contracts
covering private key use on which they can rely
– On what basis do you sue someone when they repudiate a
signature?
– Have they published a legally binding promise to the world
to stand behind that signature?

Problems with X.509 (ctd)


– Do they owe a duty of care, actionable in the case of
negligence?
• Possible ways to proceed
– Claim a duty of care where negligence resulted in financial
loss (generally negligence claims for pure financial loss
won’t support this)
– Claim that publishing the key was a negligent misstatement
(unlikely that this will work)
– Go after the CA (CA won’t suffer any loss if the keyholder
is negligent, so they can’t go after the keyholder)
• On the whiteboard:
“Alice does something magical/mathematical with Bob’s key,
and the judge says ‘Obviously Bob is guilty’”
• In practice: Would you like to be the test case?
Problems with X.509 (ctd)
Certificates don’t model standard authority delegation
practices
• Manager can delegate authority/responsibility to an employee
– “You’re in charge of purchasing”
• CA can issue a certificate to an employee, but can’t delegate
the responsibility which comes with it
Residential certificates are even more problematic
• No-one knows who has the authority to sign these things

Problems with Implementations


Relying parties must, by definition, be able to rely on the
handling of certificates
Currently difficult to do because of
• Implementation bugs
• Different interpretations of standards by implementors
• Implementation of different parts of standards
• Implementation of different standards
Problems with Implementations (ctd)
Examples of common problems
• rfc822Name has ambiguous definition/implementation
(Assorted standards/implementations)
– Should be used as luser@aol.com
– Can often get away with President George W. Bush
<luser@aol.com>
• Name constraints can be avoided through creative name
encoding (Problem in standards)
– Multiple encodings for the same character, zero-width
spaces, floating diacritics, etc
– Can make identical-appearing strings compare as different
strings
– Can also evade name constraints by using altNames
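A small Python sketch (illustrative only) of why identical-appearing names
can compare as different strings:

    import unicodedata

    latin    = "paypal.com"              # ordinary Latin letters
    cyrillic = "p\u0430ypal.com"         # U+0430 CYRILLIC SMALL LETTER A
    print(latin == cyrillic)             # False, though they render the same

    composed   = "caf\u00e9"             # 'é' as a single code point
    decomposed = "cafe\u0301"            # 'e' plus combining acute accent
    print(composed == decomposed)        # False until normalised
    print(unicodedata.normalize("NFC", composed) ==
          unicodedata.normalize("NFC", decomposed))    # True after NFC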

Problems with Implementations (ctd)


• Software crashes when it encounters a Unicode or UTF-8
string (Netscape)
– Some other software uses Unicode for any non-ASCII
characters, guaranteeing a crash
– At least one digital signature law requires the (unnecessary)
use of Unicode for a mandatory certificate field
– Standards committee must have had MS stockholders on it
• Software produces negative numeric values because the
implementors forgot about the sign bit (Microsoft and a few
others; see the sketch below)
– Everyone changed their code to be bug-compatible with MS
• Software hardcodes the certificate policy so that any policy is
treated as if it were the Verisign one (Microsoft)
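The sign-bit bug above comes from DER’s signed (two’s-complement) INTEGER
encoding; a hypothetical Python illustration:

    # A serial number whose top bit is set needs a leading zero byte in DER,
    # otherwise it decodes as a negative number
    serial = bytes.fromhex("DEADBEEF")                      # top bit of 0xDE is set

    wrong = int.from_bytes(serial, "big", signed=True)      # what the buggy code gets
    right = int.from_bytes(b"\x00" + serial, "big", signed=True)

    print(wrong)      # -559038737  (negative "serial number")
    print(right)      #  3735928559 (the intended positive value)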
Problems with Implementations (ctd)
• Known extensions marked critical are rejected; unknown
extensions marked critical are accepted (Microsoft)
– Due to a reversed flag in the MS certificate handling
software
– Other vendors and CAs broke their certificates in order to
be bug-compatible with MS
– Later certs were broken in order to be bug-compatible with
the earlier ones
– Spot check: If you have a cert from a public CA, check
whether the important extensions are marked critical or not

Problems with Implementations (ctd)


• Software ignores the key usage flags and uses the first cert it
finds for the purpose it needs (Microsoft)
– If users have separate encryption and signing certs, the
software will grab the first one it finds and use it for both
purposes
– CryptoAPI seems to mostly ignore usage constraints on
keys
– AT_KEYEXCHANGE keys (with corresponding certificates) can
be used for signing and signature verification without any trouble
Problems with Implementations (ctd)
• Cert chaining by name is ignored (Microsoft)
– Certificate issued by “Verisign Class 1 Public Primary
Certification Authority” could actually be issued by
“Honest Joe’s Used Cars and Certificates”
– “No standard or clause in a standard has a divine right of
existence” – MS PKI architect
– Given the complete chaos in DNs, this isn’t quite the
blatantly wrong decision which it seems

Problems with Implementations (ctd)


• Obviously bogus certificates are accepted as valid (Microsoft)
-----BEGIN CERTIFICATE-----
MIIQojCCCIoCAQAwDQYJKoZIhvcNAQEEBQAwGDEWMBQGA1UEAxMNS29tcGxleCBM
YWJzLjAeFw01MTAxMDEwMDAwMDBaFw01MDEyMzEyMzU5NTlaMBgxFjAUBgNVBAMT
DUtvbXBsZXggTGFicy4wggggMA0GCSqGSIb3DQEBAQUAA4IIDQAwgggIAoIIAQCA
A+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+//////////////////////////////////////////////////////////////+
+//////////////////////////////////////////////////////////////+
+///++++HELLO+THERE++++////////////////////////////////////////+
+//////////////////////////////////////////////////////////////+
+///And/welcome/to/the/base64/coded/x509/pem/certificate/of////+
+//////////////////////////////////////////////////////////////+
+///KOMPLEX/MEDIA/LABS/////////////////////////////////////////+
+///www/dot/komplex/dot/org////////////////////////////////////+
+//////////////////////////////////////////////////////////////+
+///created/by/Markku+Juhani/Saarinen//////////////////////////+
+///22/June/2000///dw3z/at/komplex/dot/org/////////////////////+
+//////////////////////////////////////////////////////////////+
+///You/are/currently/reading/the/public/RSA/modulus///////////+
+///of/our/root/certification/authority/certificate////////////+
+//////////////////////////////////////////////////////////////+
+///Which/happens/to/be/16386/bits/long////////////////////////+
+//////////////////////////////////////////////////////////////+
+///And/fully/working/and/shit/////////////////////////////////+
+//////////////////////////////////////////////////////////////+
+///And/totally/insecure///////////////////////////////////////+
+//////////////////////////////////////////////////////////////+
+///You/can/save/this/text/to/a/file/called/foo/dot/crt////////+
+///Then/click/on/it/with/your/explorer/and/you/can/see////////+
+///that/your/system/doesn+t/quite/trust/the/komplex/root//////+
+///CA/yet+////////////////////////////////////////////////////+
+//////////////////////////////////////////////////////////////+
+///But/that+s/all/right///////////////////////////////////////+
+//////////////////////////////////////////////////////////////+
+///Just/install/it////////////////////////////////////////////+
+//////////////////////////////////////////////////////////////+
+///And/you+re/happily/part/of/our/16386/bit/public/key////////+
+///infrastructure/////////////////////////////////////////////+
+//////////////////////////////////////////////////////////////+
+///One/more/thing/////////////////////////////////////////////+
+//////////////////////////////////////////////////////////////+
+///Don+t/try/read/this/with/other/PKI/or/S/MIME/software//////+
Problems with Implementations (ctd)
– Validity period is actually December 1951 to December
2050
– At one point MS software was issuing certificates in the 17th
century
– This was deliberate
– Software reports it as December 1950 to December 1950,
but accepts it anyway
– Exponent is 1 (bogus key) but cert is accepted as valid

Problems with Implementations (ctd)


• End entity certificates are encoded without the basicConstraints
extension to indicate that the certificate is a non-CA cert
(PKIX)
– Some apps treat these certificates as CA certificates for
X.509v1 compatibility
– May be useful as a cryptographically strong RNG
– Issue 128 certificates without basicConstraints
– Use the other app’s CA/non-CA interpretation as one bit of a key
– Produces close to 128 bits of pure entropy
• CRL checking is broken (Microsoft)
– Older versions of MSIE would grope around blindly for a
minute or so, then time out and continue anyway
– Some newer versions forget to perform certificate validity
checks (eg expiry times, CA certs) if CRL checking enabled
Problems with Implementations (ctd)
• Applications enforce arbitrary limits on data elements
(GCHQ/CESG interop testing)
– Size of serial number
– Supposedly an integer, but traditionally filled with a binary hash
value
– Number/size of DN elements
– Size of encoded DN
– Certificate path/chain length
– Path length constraints
– Oops, we need to insert one more level of CA into the path due to a
company reorg/merger
– Ordering/non-ordering of DN elements
– Allow only one attribute type (eg OU) per DN
– Assume CN is always encoded last

Problems with Implementations (ctd)


• The lunatic fringe: Certs from vendors like Deutsche
Telekom/Telesec are so broken they would create a
matter/antimatter reaction if placed in the same room as an
X.509 spec
– “Interoperability considerations merely create uncertainty
and don't serve any useful purpose. The market for digital
signatures is at hand and it's possible to sell products
without any interoperability” – Telesec project leader
(translated)
– “People will buy anything as long as you tell them it’s
X.509” (shorter translation)
Problems with an X.509-style PKI
PKI will solve all your problems
• PKI will make your network secure
• PKI will allow single sign-on
• PKI solves privacy problems
• PKI will allow <insert requirement which customer will pay
money for>
• PKI makes the sun shine and the grass grow and the birds sing

Problems with an X.509-style PKI (ctd)


Reality vs hype
• Very little interoperability/compatibility
• Lack of expertise in deploying/using a PKI
• No manageability
• Huge up-front infrastructure requirements
– Few organisations realise just how much time, money and
resources will be required
• “PKI will get rid of passwords”
– Current implementations = password + private key
– Passwords with a vengeance
• Certificate revocation doesn’t really work
– Locating the certificate in the first place works even less
How Effective are Certificates Really?
Sample high-value transaction: Purchase $1,500 airline
ticket from United Airlines
• Site is http://www.united.com aka
http://www.ual.com
• Browser shows the SSL padlock
– Certificate is verified (transparent to the user)
– It’s safe to submit the $1,500 payment request

How Effective are Certificates Really? (ctd)


But
• Actual site it’s being sent to is itn.net
• Company is located in Palo Alto, California
– Who are these people?
– Site contains links to the Amex web site
– Anyone can add links to Amex site to their home page though
• Just for comparison
– Singapore Airlines, British Airways, and Lufthansa have
appropriate certificates
– Air New Zealand also uses itn.net
– American Airlines don’t seem to use any security at all
– Qantas don’t even have a web site
How Effective are Certificates Really? (ctd)
This is exactly the type of situation which SSL certificates
are intended to prevent
• Browsers don’t even warn about this problem because so many
sites would break
– Outsourcing of merchant services results in many sites
handling SSL transactions via a completely unrelated site
• Effectively reduces the security to unauthenticated Diffie-
Hellman
Most current certificate usage is best understood by
replacing all occurrences of the term “trusts” with “relies
upon” or “depends upon”, generally with an implied “has
no choice but to …” at the start

PGP Certificates
Certificates are key-based, not identity-based
• Keys can have one or more free-form names attached
• Key and name(s) are bound through (independent) signatures
Certification model can be hierarchical or based on existing
trust relationships
• Parties with existing relationships can use self-signed
certificates
– Self-signed end entity certificates are a logical paradox in
X.509v3
Authentication keys are used to certify confidentiality keys
• Confidentiality keys can be changed at any time, even on a per-
message basis
Alternative Trust Hierarchies
PGP web of trust

Bob knows B and D who know A and C who know Alice


→ Bob knows the key came from Alice
Web of trust more closely reflects real-life trust models

SPKI
Simple Public Key Infrastructure
Identity certificates bind a key to a name, but require a
parallel infrastructure to make use of the result

SPKI certificates bind a key to an authorisation or


capability
SPKI (ctd)
Certificates may be distributed by direct communications
or via a directory
Each certificate contains the minimum information for the
job (cf X.509 dossier certificates)
If names are used, they only have to be locally unique
• Global uniqueness is guaranteed by the use of the key as an
identifier
• Certificates may be anonymous (eg for balloting)
Authorisation may require m of n consensus among signers
(eg any 2 of 3 company directors may sign)

SPKI Certificate Uses


Typical SPKI uses
• Signing/purchasing authority
• Letter of introduction
• Security clearance
• Software licensing
• Voter registration
• Drug prescription
• Phone/fare card
• Baggage claim check
• Reputation certificate (eg Better Business Bureau rating)
• Access control (eg grant of administrator privileges under
certain conditions)
Certificate Structure
SPKI certificates use collections of assertions expressed as
LISP-like S-expressions of the form ( type value(s) )
( name fred ) → Owner name = fred
( name CA_root CA1 CA2 ... CAn leaf_cert ) → X.500 DN
( name ( hash sha1 |TLCgPLFlGTzyUbcaYLW8kGTEnUk=| ) fred )
→ Globally unique name with key ID and locally unique name
( ftp ( host ftp.warez.org ) ) → Keyholder is allowed FTP access
to an entire site
( ftp ( host ftp.warez.org ) ( dir /pub/warez ) ) → Keyholder is
allowed FTP access to only one directory on the site

Certificate Structure (ctd)


( cert
( issuer ( hash sha1 |TLCgPLFlGTzyUbcaYLW8kGTEnUk=|
))
( subject ( hash sha1 |Ve1L/7MqiJcj+LSa/l10fl3tuTQ=l| ) )
...
( not-before “1998-03-01_12:42:17” )
( not-after “2012-01-01_00:00:00” )
) → X.509 certificate
Internally, SPKI certificates are represented as 5-tuples
<Issuer, Subject, Delegation, Authority, Validity>
• Delegation = Subject has permission to delegate authority
• Authority = Authority granted to certificate subject
• Validity = Validity period and/or online validation test
information
Trust Evaluation
5-tuples can be automatically processed using a general-
purpose tuple reduction mechanism
<I1, S1, D1, A1, V1> + <I2, S2, D2, A2, V2>
→ <I1, S2, D2, intersection( A1, A2 ), intersection( V1, V2 )>
if S1 = I2 and D1 = true
Eventually some chains of authorisation statements will
reduce to <Trusted Issuer, x, D, A, V>
• All others are discarded
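A minimal sketch (not from the original tutorial) of the reduction rule,
modelling authority as a Python set and validity as a (start, end) pair:

    from collections import namedtuple

    Tuple5 = namedtuple("Tuple5", "issuer subject delegation authority validity")

    def reduce_tuples(t1, t2):
        """<I1,S1,D1,A1,V1> + <I2,S2,D2,A2,V2> when S1 = I2 and D1 = true."""
        if t1.subject != t2.issuer or not t1.delegation:
            return None                                     # chain doesn't link up
        authority = t1.authority & t2.authority             # intersection of rights
        validity  = (max(t1.validity[0], t2.validity[0]),   # overlap of validity
                     min(t1.validity[1], t2.validity[1]))
        return Tuple5(t1.issuer, t2.subject, t2.delegation, authority, validity)

    # Service provider -> A (may delegate), then A -> B
    t1 = Tuple5("ServiceProvider", "A", True,  {"ftp /pub/warez"}, (2020, 2030))
    t2 = Tuple5("A",               "B", False, {"ftp /pub/warez"}, (2022, 2025))
    print(reduce_tuples(t1, t2))
    # Tuple5(issuer='ServiceProvider', subject='B', delegation=False,
    #        authority={'ftp /pub/warez'}, validity=(2022, 2025))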

Trust Evaluation (ctd)


Example authorisation chain
• A may access resource X. Signed: Service Provider
• B may access resource X. Signed: A
• Service provider, please allow me to access X. Signed: B
Verification
• Service provider checks signatures from B → A → own key
• Authorisation loop requires no CA, trusted third party, or
external intervention
• Trust management decisions can be justified/explained/verified
– “How was this decision reached?”
– “What happens if I change this bit?”
X.509 has nothing even remotely like this
PKI Design Guidelines
Identity
• Use a locally meaningful identifier
– User name
– email address
– Account number
• Don't try and do anything meaningful with DNs

PKI Design Guidelines (ctd)


Revocation
• If possible, design your PKI so that revocation isn't required
– SET
– AADS/X9.59
– ssh
– SSL
• If that isn't possible, use a mechanism which provides
freshness guarantees
• If that isn't possible, use an online status query mechanism
– Valid/not valid responder
– OCSP
• If the revocation is of no value, use CRLs
PKI Design Guidelines (ctd)
Application-specific PKIs
• PKIs designed to solve a particular problem are easier to work
with than a one-size-(mis)fits all approach
• SPKI
– Binds a key to an authorisation
– X.509 binds a key to an (often irrelevant) identity which
must then somehow be mapped to an authorisation
• PGP
– Designed to secure email
– Laissez-faire key management tied to email address solves
"Which directory" and "Which John Doe" problems

PKI Design Guidelines (ctd)


In many situations no PKI of any kind is needed
• Example: Authority-to-individual communications (eg tax
filing)
– Obvious solution: S/MIME or PGP
– Practical solution: SSL web server with access control
– Revocation = disable user access
– Instantaneous
– Consistently applied
– Administered by the organisation involved, not some third party
PKI Design Guidelines (ctd)
• Example: AADS/X9.59
– Ties keys to existing accounts
– Handled via standard business mechanisms
– Revocation = remove key/close account
• Example: Business transactions
– Ask Citibank about certificate validity
vs
– ask Citibank to authorise the transaction directly
→ Use an online authorisation
– (US) Business Records Exception allows standard business
records to be treated as evidence in court
– Following standard legal precedent is easier than becoming a test
case for PKI

PKI Design Guidelines (ctd)


There's nothing which says you have to use X.509 as
anything more than a complex bit-bagging scheme
• If you have a cert management scheme which works, use it
Be careful about holding your business processes hostage
to your PKI (or lack thereof)
Digital Signature Legislation
A signature establishes validity and authentication of a
document to allow the reader to act on it as a statement
of the signer’s intent
Signatures represent a physical manifestation of consent
A digital signature must provide a similar degree of
security

Digital Signature Legislation (ctd)


Typical signature functions are
• Identification
• Prove involvement in the act of signing
• Associate the signer with a document
• Provide proof of the signer’s involvement with the content of
the signed document
• Provide endorsement of authorship
• Provide endorsement of the contents of a document authored
by someone else
• Prove a person was at a given place at a given time
• Meet a statutory requirement that a document be signed to
make it valid
Real-world vs Electronic Signatures
Real-world pen-and-ink signatures use
• A standard pen
• A standard hand/wrist
• Standard handwriting
But…
• The user is aware of the importance of their action
Proposed digital signatures use
• Different keys
• Different certificate policies
• Different CAs
Do you use a different hand/wrist to write a letter and sign
it?

Real-world vs Electronic Signatures (ctd)


The difference between plain handwriting and a signature
is informed consent
• Digital signatures artificially split key functionality because the
standards are mostly written by technologists who can’t define
law or social policy
• When do you use a digital-signature vs general-signature key?
– Signing a challenge-response authentication token?
– Signing a letter of introduction?
– Signing an inter-office memo?
– Signing a purchase order?
– Signing a receipt
• If a user has a handful of signing keys, which one do they use
on which occasion?
Real-world vs Electronic Signatures (ctd)
The credit-card approach
• You may use your VISA with approved VISA merchants
• You may use the XYZ signature key with approved XYZ
business partners
– Identrus adopt this approach
Other approaches are still awaiting legal test cases…

General Requirements for Digital Signatures


The signing key must be controlled entirely by the signer
for non-repudiation to function
The act of signing must be conscious
• The “Grandma clicks the wrong button and loses her house”
problem
• “You are about to enter into a legally binding agreement which
stipulates that ...”
Non-repudiation can best be achieved through laws
guaranteeing repudiation
• That’s “guaranteeing repudiation”, not “guaranteeing
nonrepudiation”
• c.f. Reg.E/Reg.Z for credit cards/ATM cards
General Requirements for Digital Sigs (ctd)
May require a traditional written document to back up the
use of electronic signatures
• “With the key identified by ... I agree to ... under the terms ...”
• Written German HBCI (Home Banking Computer Interface)
agreement (Ini-Brief) has
– Key owner identification information
– Date/time
– Key and hash of key
– “I certify that this key is used for my electronic signature”
Cross-jurisdictional signatures are a problem

Utah Digital Signature Act


First digital signature act, passed in 1995
The Law of X.509
• Requires public-key encryption based signatures, licensed
CA’s, CRL’s, etc etc.
Duly authorised digital signatures may be used to meet
statutory requirements for written signatures
Liability of CA’s is limited, signers and relying parties
assume the risk
Signature carries evidentiary weight of notarised document
• If your key is compromised, you’re in serious trouble
• If you hand over your key to a third party, you’re in serious
trouble
California Digital Signature Law
Very broad, allows any agreed-upon mark to be used as a
digital signature
• Western culture has no real analog for this
• Asia has chop-marks, a general-purpose mark used to
authenticate and authorise
One-sentence digital signature law:
“You can’t refuse a signature just because it’s digital”
• Many later laws followed this model

Massachusetts Electronic Records and Signatures Bill
“A signature may not be denied legal effect, validity, or
enforceability because it is in the form of an electronic signature.
If a rule of law requires a signature [...] an electronic signature
satisfies that rule of law”
“A contract between business entities shall not be unenforceable, nor
inadmissible in evidence, on the sole ground that the contract is
evidenced by an electronic record or that it has been signed by an
electronic signature”
The Massachusetts law doesn’t legislate forms of
signatures or the use of CA’s, or allocate liability
• “Attorneys Full Employment Act of 1997”
US E-Sign Act
Electronic Signatures in Global and National Commerce
Act
Massachusetts signature law taken to extremes
• Signatures can be a “sound, symbol, or process”
– “Press 9 to sign a binding contract, or 1 to hear this message
again”
– “Click here to enter into a legally binding agreement”
• Online comparison shopping may cause problems because not
buying is a “withdrawal of consent”
– Enforceability will probably take a court case to decide
• Vendors may charge extra for physical items (disk media,
manuals, but also printed invoices)

US E-Sign Act (ctd)


Law is about electronic (rather than digital) signatures
• Journalist who contacted the House discovered that the people
involved in creating the Bill weren’t aware there was a
difference
• Bill was prepared with input from Dell, Gateway, Hewlett-
Packard, Microsoft, and other vendors
– No consumer advocacy groups were consulted
• The finished Act appears to be a means of imposing UETA
(Uniform Electronic Transactions Act, sibling of UCITA,
opposed by the attorneys general of most states) by stealth
German Digital Signature Law
Like the Utah act, based on public-key technology
Requirements
• Licensed CA’s which meet certain requirements
– CA’s must provide a phone hotline for revocation
• Identification is based on the German ID card
– This type of identification isn’t possible in most countries
– Allows pseudonyms in certificates
• Key and storage media must be controlled only by the key
owner
– Key may be generated for user by the CA if strict controls
are followed to ensure no copies are retained
• Provisions for timestamping and countersigning

German Digital Signature Law (ctd)


Signatures from other EU countries are recognised
provided an equivalent level of security is employed
Multilevel law
• Signaturgesetz (SigG) provides general framework
– Defines digital signature
– Defines role of a CA
– Defines certificates and outlines how they’re handled
• Signaturverordnung (SigV)
– Sets out operational details and responsibilities of a CA
• Signatur-Interoperabilitätspezifikation (SigI)
– Technical specification to implement the SigG and SigV
– Specifies data formats, algorithms, timestamping and
directory service mechanisms, etc etc
German Digital Signature Law (ctd)
Example
• SigG: Private key must be protected
• SigV: Private key must be protected in the following
circumstances using certain technical measures
• SigI: Here are the technical measures
Details are set out in the implementation guidelines
• Extremely detailed (over 300 pages)
• Specifies things like
– Hash and signature algorithms
– Random number generation for keys
– Personnel security
– Directory and timestamping services
• Criticised as being too detailed and complex to follow

German Digital Signature Law (ctd)


Case study: Telesec CA
• SigG/SigV-compliant CA
• $12M to set up
• 25 full-time staff
• 250 certificates issued (~$50,000 per certificate)
Italian Digital Signature Law
Similar to the German law, but all requirements are listed
in one place
• Minimum key size is 1024 bits
Everything has to be certified to various ITSEC levels
• Key generation devices must be certified to ITSEC E3 with a
HIGH level of robustness
– In practice, this forces everyone to use smart cards for key
management
• The OS must be ITSEC F-C2/E2 or TCSEC C2
• Access to the system must be controlled, users identified, usage
logged
• CAs must be ISO 9000 certified
• This severely limits the technology which can be used

Italian Digital Signature Law (ctd)


Signature mechanism must present the data to be signed in
a clear and unambiguous manner, and ask for
confirmation of signature generation
• Allows for automated signature generation provided that this is
“clearly connected to the will of the subscriber”
Certificates must contain the user’s name, date of birth, and
company name
• Allows pseudonyms, but this must be indicated in the cert and
CA must record real identity
Includes some bizarre requirements which are at odds with
the way the rest of the world does things
Italian Digital Signature Law (ctd)
CA must
• Verify that the key hasn’t been certified by another CA
• Verify that the user possesses the private key
• Publish certificates in LDAP directories
• Publish details on themselves (company name, address, contact
details, terms and conditions, substitute CA)
Except for the fixation with (very expensive and
complicated) security certification and some strange
requirements for information in certificates, this is a
rather nice law which addresses most digital signature
issues

Swedish Electronic ID card (SEIS)


Smart-card contains three keys
• Authentication (= X.509 “digital signature”)
– Card supports a challenge-response protocol for
authentication
– Card signs a random challenge from the remote system
• Digital signature (=X.509 “nonrepudiation”)
• Encryption
Card doubles as standard ID card (photo, signature, etc)
Cards are issued by
• Government agencies
• Financial institutions
• Companies to their employees
SEIS (ctd)
Usage governed by the SEIS Certification Policy
• Backdoor digital signature law
• Covers certificate issuing process, security auditing, physical
and procedural security, key management and protection
• Key may be generated by CA for user provided strict controls
are followed
– Two-person security
– No copy of key is retained by CA
– PIN-protected device is physically handed to user by CA
– User signs document acknowledging receipt
– Activation PIN is delivered over separate channel
– User is told to immediately change the PIN
• Complex physical and procedural security procedures for cards

Singapore Electronic Transactions Act


Follows the one-sentence signature law model
• Where the law requires a paper signature, an electronic one
will do
Offer of acceptance of contracts may be expressed
electronically
Signature apparatus must be under sole control of signer
Certificate requirements
• Cannot publish a certificate known to be false
• Certificates must specify a reliance limit
Compliant CAs are not liable for certificate problems
ETSI Digital Signature Draft
ETSI TR 101 specifies technical requirements for
signatures
• Role of signer (eg Financial director) is more important than
the name
• Signature must be dated to allow later dispute resolution
References various standards efforts (eg PKIX) for further
study
Privilege attribute certificates (PACs)
• Defined by ECMA, special short-lived (1 day max) certificates
• Vouch for a certain property of the user

UNCITRAL Model Law on Electronic Commerce
UN Commission on International Trade Law (UNCITRAL) model
e-commerce law
• Many acts and laws legislate a particular technology to provide
reliance for digital signatures
• The model law provides a general framework for electronic
signatures without defining their exact form
Later revisions may nail down precise forms for electronic
signatures
UN Draft Articles on Electronic Signatures
Follows the one-sentence signature law model
• Includes a rationale for each point
Defines two levels of signature
• “Electronic signature” = data attached to a message to indicate
a signer’s approval of the message
• “Enhanced electronic signature” = electronic signature with
extra constraints
– Unique to the signature holder
– Verifiable through a standard procedure
– Under the sole control of the signer
Extremely broad and technology-independent
Specifies (rather vague) reliance and obligation details

EU Directive on Electronic Signatures


Defines an electronic signature as linking signer and data,
created by a means solely controlled by the signer (not
necessarily a cryptographic signature)
Precedes the directive itself with the intended aims of the
directive
Makes accreditation and licensing voluntary and non-
discriminatory
• No-one can be prevented from being a CA
• Intent is to encourage best practices while letting the market
decide
EU Directive on Electronic Signatures (ctd)
Electronic signature products must be made freely
available within the EU
Electronic signatures can’t be denied recognition just
because they’re electronic
Absolves CA’s of certain types of liability
• Provides for reliance limits in certificates
Recognises certificates from non-EU states issued under
equivalent terms
Allows for pseudonyms in certificates

EU Directive on Electronic Signatures (ctd)


Recognises that a regulatory framework isn’t needed for
signatures used in closed systems
• Trust is handled via existing commercial relationships
• Parties may agree among themselves on terms and conditions
for electronic signatures
• Keys may be identified by a key fingerprint on a business card
or in a letterhead
Session-Level Security

PGP, ssh, S/WAN, satan & crack: Securing the internet
by any means necessary
— Don Kitchen

Session-level Security Overview


Most session security protocols use some variation of
1. Decide on security parameters
2. Establish shared secret to protect further
communications
3. Authenticate the previous exchange
IPSEC
IP security — security built into the IP layer
Provides host-to-host (or firewall-to-firewall) encryption
and authentication
Required for IPv6, optional for IPv4
Comprised of two parts:
• IPSEC proper (authentication and encryption)
• IPSEC key management
Domain of interpretation (DOI) nails down the precise
details for an application of IPSEC

IPSEC Architecture
Key management establishes a security association (SA)
for a session
• SA used to provide authentication/confidentiality for that
session
• SA is referenced via a security parameter index (SPI) in each
IP datagram header
AH
Authentication header — integrity protection only
Inserted into IP datagram:
Integrity check value (ICV) is 96-bit HMAC
AH (ctd)
Authenticates entire datagram:
Mutable fields (time-to-live, IP checksums) are zeroed before AH is added
Sequence numbers provide replay protection
• Receiver tracks packets within a 64-entry sliding window
ESP
Encapsulating security protocol — authentication
(optional) and confidentiality
Inserted into IP datagram:
Contains sequence numbers and optional ICV as for AH
ESP (ctd)
Secures data payload in datagram:
Encryption protects payload
• Authentication protects header and encryption
SA bundling is possible
• ESP without authentication inside AH
• Authentication covers more fields this way than just ESP with
authentication
IPSEC Algorithms
DES in CBC mode for encryption
HMAC/MD5 and HMAC/SHA (truncated to 96 bits) for
authentication
Later versions added optional, DOI-dependent algorithms
• 3DES
• Blowfish
• CAST-128
• IDEA
• RC5
• Triple IDEA (!!!)

Processing
Use SPI to look up security association (SA)
Perform authentication check using SA
Perform decryption of authenticated data using SA
Operates in two modes
• Transport mode (secure IP), protects payload
• Tunneling mode (secure IP inside standard IP), protects entire
packet
– Popular in routers
– Communicating hosts don’t have to implement IPSEC
themselves
– Nested tunneling possible
IPSEC Key Management
ISAKMP
• Internet Security Association and Key Management Protocol
Oakley
• DH-based key management protocol
Photuris
• DH-based key management protocol
SKIP
• Sun’s DH-based key management protocol
Protocols changed considerably over time, most borrowed
ideas from each other

Photuris
Latin for “firefly”; Firefly is the NSA’s key exchange
protocol for STU-III secure phones
Three-stage protocol
1. Exchange cookies
2. Use DH to establish a shared secret
Agree on security parameters
3. Identify other party
Authenticate data exchanged in steps 1 and 2
n. Change session keys or update security parameters
Photuris (ctd)
Cookie based on IP address and port, stops flooding attacks
• Attacker requests many key exchanges and bogs down host
(clogging attack)
Cookie depends on
• IP address and port
• Secret known only to host
• Cookie = hash( source and dest IP and port + local secret )
Host can recognise a returned cookie
• Attacker can’t generate fake cookies
Later adopted by other IPSEC key management protocols
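A minimal sketch of the cookie construction described above (illustrative only; the real Photuris/ISAKMP cookie formats differ in detail):

# Anti-clogging cookie: stateless, cheap to check, unforgeable without
# the host's local secret.
import hashlib, os

LOCAL_SECRET = os.urandom(16)   # known only to this host, rotated periodically

def make_cookie(src_ip, src_port, dst_ip, dst_port):
    # Cookie = hash( source and dest IP/port + local secret )
    data = f"{src_ip}:{src_port}|{dst_ip}:{dst_port}".encode()
    return hashlib.sha1(data + LOCAL_SECRET).digest()

def check_cookie(cookie, src_ip, src_port, dst_ip, dst_port):
    # Recompute and compare; a spoofed source address gives a different cookie
    return cookie == make_cookie(src_ip, src_port, dst_ip, dst_port)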

Photuris (ctd)
Client                                  Server
Client cookie             -->
                          <--           Server cookie
                                        Offered schemes
Chosen scheme             -->
DH keygen                <-->           DH keygen
Client identity           -->
Authentication for
previous data
                          <--           Server identity
                                        Authentication for
                                        previous data
SKIP
Each machine has a public DH value authenticated via
• X.509 certificates
• PGP certificates
• Secure DNS
Public DH value is used as an implicit shared key
calculation parameter
• Shared key is used once to exchange encrypted session key
• Session key is used for further encryption/authentication
Clean-room non-US version developed by Sun partner in
Moscow
• US government forced Sun to halt further work with non-US
version

Oakley
Exchange messages containing any of
• Client/server cookies
• DH information
• Offered/chosen security parameters
• Client/server ID’s
until both sides are satisfied
Oakley is extremely open-ended, with many variations
possible
• Exact details of the message exchange depend on the exchange
requirements
– Speed vs thoroughness
– Identification vs anonymity
– New session establishment vs rekey
– DH exchange vs shared secrets vs PKC-based exchange
ISAKMP
NSA-designed protocol to exchange security parameters
(but not establish keys)
• Protocol to establish, modify, and delete IPSEC security
associations
• Provides a general framework for exchanging cookies, security
parameters, and key management and identification
information
• Exact details left to other protocols
Two phases
1. Establish secure, authenticated channel (“SA”)
2. Negotiate security parameters (“KMP”)

ISAKMP/Oakley
ISAKMP merged with Oakley
• ISAKMP provides the protocol framework
• Oakley provides the security mechanisms
Combined version clarifies both protocols, resolves
ambiguities
ISAKMP/Oakley (ctd)
Phase 1 example
Client                                  Server
Client cookie             -->
Client ID
Key exchange information
                          <--           Server cookie
                                        Server ID
                                        Key exchange information
                                        Server signature
Client signature          -->
Other variants possible (data spread over more messages,
authentication via shared secrets)
• Above example is aggressive exchange which minimises the
number of messages

ISAKMP/Oakley (ctd)
Phase 2 example
Client                                  Server
Encrypted, MAC’d          -->
Client nonce
Security parameters
offered
                          <--           Encrypted, MAC’d
                                        Server nonce
                                        Security parameters
                                        accepted
Encrypted, MAC’d          -->
Client nonce
Server nonce
SSL
Secure sockets layer — TCP/IP socket encryption
Usually authenticates server using digital signature
Can authenticate client, but this is never used
Confidentiality protection via encryption
Integrity protection via MAC’s
Provides end-to-end protection of communications sessions

History
SSLv1 designed by Netscape, broken by members of the
audience while it was being presented
SSLv2 shipped with Navigator 1.0
Microsoft proposed PCT: PCT != SSL
SSLv3 was peer-reviewed, proposed for IETF
standardisation
• Never finalised, still exists only as a draft
SSL Handshake
1. Negotiate the cipher suite
2. Establish a shared session key
3. Authenticate the server (optional)
4. Authenticate the client (optional)
5. Authenticate previously exchanged data
SSL Handshake (ctd)
Client                                  Server
Hello                     -->
                          <--           Hello +
                                        optional certificate
Client key exchange       -->
Change cipher +           -->
MAC of prev. fields
                          <--           Change cipher +
                                        MAC of prev. fields
Secure session           <-->           Secure session
SSL Handshake (ctd)
Client hello:
• Client nonce
• Available cipher suites (eg RSA + RC4/40 + MD5)
Server hello:
• Server nonce
• Selected cipher suite
Server adapts to client capabilities
Optional certificate exchange to authenticate server/client
• In practice only server authentication is used
SSL Handshake (ctd)
Client key exchange:
• RSA-encrypt( premaster secret )
Both sides:
• 48-byte master secret = hash( premaster + client-nonce +
server-nonce )
Client/server change cipher spec:
• Switch to selected cipher suite and key
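A simplified sketch of the master-secret idea above (not the exact SSLv3 construction, which interleaves MD5 and SHA-1): both sides mix the premaster secret with the two hello nonces to get 48 bytes of shared keying material.

import hashlib, os

premaster    = os.urandom(48)    # chosen by client, sent RSA-encrypted to server
client_nonce = os.urandom(32)    # from the client Hello
server_nonce = os.urandom(32)    # from the server Hello

def derive_master(premaster, client_nonce, server_nonce):
    out, counter = b"", 0
    while len(out) < 48:          # expand the hash output to 48 bytes
        counter += 1
        out += hashlib.sha256(bytes([counter]) + premaster +
                              client_nonce + server_nonce).digest()
    return out[:48]

master_secret = derive_master(premaster, client_nonce, server_nonce)
assert len(master_secret) == 48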
SSL Handshake (ctd)
Client/server finished
• MAC of previously exchanged parameters (authenticates data
from Hello and other exchanges)
– Uses an early version of HMAC
Can reuse previous session data via session ID’s in Hello
Can bootstrap weak crypto from strong crypto:
• Server has > 512 bit certificate
• Generates 512-bit temporary key
• Signs temporary key with > 512 bit certificate
• Uses temporary key for security
Maintains separate send and receive states
SSL Data Transfer
SSL Characteristics
Protects the session only
Designed for multiple protocols (HTTP, SMTP, NNTP,
POP3, FTP) but only really used with HTTP
Compute-intensive:
• 3 CPU seconds on Sparc 10 with 1Kbit RSA key
• 200 MHz NT box allows about a dozen concurrent SSL
handshakes
– Use multiple servers
– Use hardware SSL accelerators
Crippled crypto predominates
• Strong servers freely available (Apache), but most browsers
US-sourced and crippled
Strong SSL Encryption
Most implementations based on SSLeay,
http://www.ssleay.org/
Server
• Some variation of Apache + SSLeay
Browser
• Hacked US browser
• Non-US browser
SSL Proxy
• Strong encryption tunnel using SSL
Strong SSL Browsers
Fortify, http://www.fortify.net/
Patches Netscape (any version) to do strong encryption
Original:
POLICY-BEGINS-HERE: Export policy
Software-Version: Mozilla/4.0
MAX-GEN-KEY-BITS: 512
PKCS12-DES-EDE3: false
PKCS12-RC2-128: false
PKCS12-RC4-128: false
PKCS12-DES-56: false
PKCS12-RC2-40: true
PKCS12-RC4-40: true
...
SSL3-RSA-WITH-RC4-128-MD5: conditional
SSL3-RSA-WITH-3DES-EDE-CBC-SHA: conditional
...
Strong SSL Browsers (ctd)
Patched version
POLICY-BEGINS-HERE: Cypherpunk policy
Software-Version: Mozilla/4.0
MAX-GEN-KEY-BITS: 1024
PKCS12-DES-EDE3: true
PKCS12-RC2-128: true
PKCS12-RC4-128: true
PKCS12-DES-56: true
PKCS12-RC2-40: true
PKCS12-RC4-40: true
...
SSL3-RSA-WITH-RC4-128-MD5: true
SSL3-RSA-WITH-3DES-EDE-CBC-SHA: true
...
Strong SSL Browsers (ctd)
Opera, http://www.operasoftware.com/
• Norwegian browser, uses SSLeay
Cryptozilla, http://www.cryptozilla.org/
• Based on open-source Netscape
• Strong crypto added within one day of release from the US
Exported US-only versions,
ftp://ftp.replay.com/pub/replay/pub/
• Contains copies of most non-exportable software
Strong SSL Servers
Based on SSLeay + some variant of Apache
Mostly Unix-only, some NT ports in progress
SSL portion is somewhat painful to configure
Howtos available on the net
Strong SSL Proxies
Tunnel weak or no SSL over strong SSL

SGC
Server Gated Cryptography
Allows strong encryption on a per-server basis
Originally available only to “qualified financial
institutions”, later extended slightly (hospitals, some
government departments)
Requires special SGC server certificate from Verisign
Enables strong encryption for one server (www.bank.com)
SGC (ctd)
Exportable SSL
Client                                  Server
Hello                     -->
                          <--           Hello + certificate
Weak encryption key       -->
Weak encryption          <-->           Weak encryption

SSL with SGC
Client                                  Server
Hello                     -->
                          <--           Hello + SGC certificate
Strong encryption key     -->
Strong encryption        <-->           Strong encryption

TLS
Transport layer security
IETF-standardised evolution of SSLv3
• Non-patented technology
• Non-crippled crypto
• Updated for newer algorithms
Substantially similar to SSL
• TLS identifies itself as SSL 3.1
Not finalised yet, little implementation support
TLS standards work,
http://www.consensus.com/ietf-tls/
S-HTTP
Designed by Terisa in response to CommerceNet RFP,
http://www.terisa.com/shttp/intro.html
Predates SSL and S/MIME
Security extension for HTTP (and only HTTP)
Document-based:
• (Pre-)signed documents
• Encrypted documents
Large range of algorithms and formats supported
Not supported by browsers (or much else)

SSH
Originally developed in 1995 as a secure replacement for
rsh, rlogin, et al (ssh = secure shell),
http://www.cs.hut.fi/ssh/
Also allows port forwarding (tunneling over SSH)
Built-in support for proxies/firewalls
Includes Zip-style compression
Originally implemented in Finland, available worldwide
SSH v2 submitted to IETF for standardisation
Can be up and running in minutes
SSH Protocol
Server uses two keys:
• Long-term server identification key
• Short-term encryption key, changed every hour
Client                                  Server
                          <--           Long-term + short-term keys
Double-encrypted session  -->
key
                          <--           Encrypted confirmation
Encrypted data           <-->           Encrypted data

Long-term server key binds the connection to the server
Short-term encryption key makes later recovery impossible
• Short-term keys regenerated as a background task

SSH Authentication
Multiple authentication mechanisms
• Straight passwords (protected by SSH encryption)
• RSA-based authentication (client decrypts challenge from
server, returns hash to server)
• Plug-in authentication mechanisms, eg SecurID
Developed outside US, crippled crypto not even
considered:
• 1024 bit RSA long-term key
• 768 bit RSA short-term key (has to fit inside long-term key for
double encryption)
• Triple DES session encryption (other ciphers available)
DNSSEC
DNS name space is divided into zones, each zone has
resource records (RR’s)
Owner_name Type Class TTL Rdlength Rdata
• Owner = name of node
• Type = RR type
– A = Host address
– NS = authoritative name server
– CNAME = canonical name for alias
– SOA = start of zone authority
– PTR = domain name pointer
– MX = mail exchange
• Class = IN (Internet)
• TTL = time for which RR may be cached

DNSSEC (ctd)
Name servers hold zone information
• Each zone has primary and secondary servers
• Secondaries perform zone transfers to obtain new data from
primaries
Resolvers extract information from name servers
• Cached entry is returned directly
• Iterative query returns a referral to the appropriate server
• Recursive query queries other server and returns result
All of these points present security vulnerabilities
DNSSEC (ctd)
DNSSEC splits the service into name server and zone
manager
• Zone manager signs zone data
• Name server publishes signed data
– Compromise of name server doesn’t compromise DNSSEC
Resolvers need to store at least one top-level zone key

DNSSEC (ctd)
RR’s are extended with new types
• KEY, server public key
• SIG, signature on RR
• NXT, chains from one name in a zone to the next
– Allows authenticated denial of the existence of a name
• These RR’s have signature start and end times, require
coordinated clocks on hosts
DNSSEC (ctd)
Transaction signature guarantees the response came from a
given server
• Signature covers query and response
Also used for
• Secure zone transfer
• Secure dynamic update (replaces editing the zone’s master file)
• Offline update
– Uses authorising dynamic update key for update
– Zone data is signed later with the zone key

SNMP Security
General SNMP security model: Block it at the router
Authentication: hash( secret value + data )
Confidentiality: encrypt( data + hash )
Many devices are too limited to handle the security
themselves
• Handled for them by an element manager
• Device talks to element manager via a single shared key
Users generally use a centralised enterprise manager to talk
to element managers
• Enterprise manager is to users what element manager is to
devices
Email Security
“Why do we have to hide from the police, Daddy?”
“Because we use PGP, son. They use S/MIME”
Email Security
Problems with using email for secure communications
include
• Doesn’t handle binary data
• Messages may be modified by the mail transport mechanism
– Trailing spaces deleted
– Tabs converted to spaces
– Character set conversion
– Lines wrapped/truncated
• Message headers mutate considerably in transit
Data formats have to be carefully designed to avoid
problems
Email Security Requirements
Main requirements
• Confidentiality
• Authentication
• Integrity
Other requirements
• Non-repudiation
• Proof of submission
• Proof of delivery
• Anonymity
• Revocability
• Resistance to traffic analysis
Many of these are difficult or impossible to achieve

Security Mechanisms
Detached signature:
• Leaves original message untouched
• Signature can be transmitted/stored separately
• Message can still be used without the security software
Signed message
• Signature is always included with the data
Security Mechanisms (ctd)
Encrypted message
Usually implemented using public-key encryption
Mailing lists use one public-key encrypted header per recipient
• Any of the corresponding private keys can decrypt the session key and therefore the message
Security Mechanisms (ctd)
Countersigned data
Encrypted and signed data
• Always sign first, then encrypt
S( E( “Pay the signer $1000” ))
vs
E( S( “Pay the signer $1000” ))
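A minimal sketch of the sign-first-then-encrypt ordering E( S( message ) ), assuming the "cryptography" package is available (not any particular mail format): the signature covers the plaintext the signer actually saw, and travels inside the encryption.

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key    = Ed25519PrivateKey.generate()
encryption_key = Fernet.generate_key()

message   = b"Pay the signer $1000"
signature = signing_key.sign(message)                               # S( message )
envelope  = Fernet(encryption_key).encrypt(signature + message)     # E( S( message ) )

# Receiver: decrypt first, then verify the signature over the plaintext
plaintext = Fernet(encryption_key).decrypt(envelope)
sig, body = plaintext[:64], plaintext[64:]          # Ed25519 signatures are 64 bytes
signing_key.public_key().verify(sig, body)          # raises if tampered with
print(body)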
PEM
Privacy Enhanced Mail, 1987
Attempt to add security to SMTP (MIME didn’t exist yet)
• Without MIME to help, this wasn’t easy
Attempt to build a CA hierarchy along X.500 lines
• Without X.500 available, this wasn’t easy
Solved the data formatting problem with base64 encoding
• Encode 3 binary bytes as 4 ASCII characters
• The same encoding was later used in PGP 2.x, MIME, ...
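A quick demonstration of the 3-bytes-to-4-characters expansion, using Python's standard base64 module:

import base64

raw     = bytes([0x01, 0x02, 0x03])          # 3 binary bytes
encoded = base64.b64encode(raw)              # 4 printable ASCII characters
print(encoded)                               # b'AQID'
assert base64.b64decode(encoded) == raw      # survives 7-bit mail transports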
PEM Protection Types
Unsecured data
Integrity-protected (MIC-CLEAR)
• MIC = message integrity check = digital signature
Integrity-protected encoded (MIC-ONLY)
Encrypted integrity-protected (ENCRYPTED)
General format:
-----BEGIN PRIVACY-ENHANCED MESSAGE-----
Type: Value Encapsulated header
Type: Value
Type: Value
Blank line
Data Encapsulated content
-----END PRIVACY-ENHANCED MESSAGE-----
PEM Protection Types (ctd)
MIC-ONLY
-----BEGIN PRIVACY-ENHANCED MESSAGE-----
Proc-Type: 4,MIC-ONLY
Content-Domain: RFC822
Originator-Certificate:
MIIBlTCCAScCAWUwDQYJKoZIhvcNAQECBQAwUTELMAkGA1UEBhMCVVMxIDAeBgNV
BAoTF1JTQSDRiNKcOCaCoLAyaXR5LCBJbmMuMQ8wDQYDVQQLEwZCFNOrDDExDzAN
...
iWlFPuN5jJ79Khfg7ASFxskYkEMjRNZV/HZDZQEhtVaU7Jxfzs2wfX5byMp2X3U/
5XUXGx7qusDgHQGs7Jk9W8CW1fuSWUgN4w==
Issuer-Certificate:
MIIB3FNoRDgCAQowDQYJKoZIhvcNAQECBQAwTzEWiGEbLUMenKraFTMxIDAeBgNV
BAoTF1JTQSBEYXRhIFNlY3VyaXR5LCBJbmMuMQ8wDQYDVQQLEwZCZXRhIDExDTAL
...
dD2jMZ/3HsyWKWgSF0eH/AJB3qr9zosG47pyMnTf3aSy2nBO7CMxpUWRBcXUpE+x
EREZd9++32ofGBIXaialnOgVUn0OzSYgugiQReSIsTKEYeSCrOWizEs5wUJ35a5h
MIC-Info: RSA-MD5,RSA,
jV2OfH+nnXFNorDL8kPAad/mSQlTDZlbVuxvZAOVRZ5q5+Ejl5bQvqNeqOUNQjr6
EtE7K2QDeVMCyXsdJlA8fA==

LSBBIG1lc3NhZ2UgZm9yIHVzZSBpbiB0ZXN0aW5nLg0KLSBGb2xsb3dpbmcgaXMg
YSBibGFuayBsaW5lOg0KDQpUaGlzIGlzIHRoZSBlbmQuDQo=
-----END PRIVACY-ENHANCED MESSAGE-----
PEM Protection Types (ctd)
ENCRYPTED
-----BEGIN PRIVACY-ENHANCED MESSAGE-----
Proc-Type: 4,ENCRYPTED
Content-Domain: RFC822
DEK-Info: DES-CBC,BFF968AA74691AC1
Originator-Certificate:
MIIBlTCCAScCAWUwDQYJKoZIhvcNAQECBQAwUTELMAkGA1UEBhMCVVMxIDAeBgNV
...
5XUXGx7qusDgHQGs7Jk9W8CW1fuSWUgN4w==
Issuer-Certificate:
MIIB3DCCAUgCAQowDQYJKoZIhvcNAQECBQAwTzELMAkGA1UEBhMCVVMxIDAeBgNV
...
EREZd9++32ofGBIXaialnOgVUn0OzSYgugiQ077nJLDUj0hQehCizEs5wUJ35a5h
MIC-Info: RSA-MD5,RSA,
UdFJR8u/TIGhfH65ieewe2lOW4tooa3vZCvVNGBZirf/7nrgzWDABz8w9NsXSexv
AjRFbHoNPzBuxwmOAFeA0HJszL4yBvhG

Continues
PEM Protection Types (ctd)
Continued
Recipient-ID-Asymmetric:
MFExCzAJBgNVBAYTAlVTMSAwHgYDVQQKExdSU0EgRGF0YSBTZWN1cml0eSwgSW5j
LjEPMA0GA1UECxMGQmV0YSAxMQ8wDQYDVQQLEwZOT1RBUlk=,66
Key-Info: RSA,
O6BS1ww9CTyHPtS3bMLD+L0hejdvX6Qv1HK2ds2sQPEaXhX8EhvVphHYTjwekdWv
7x0Z3Jx2vTAhOYHMcqqCjA==

qeWlj/YJ2Uf5ng9yznPbtD0mYloSwIuV9FRYx+gzY+8iXd/NQrXHfi6/MhPfPF3d
jIqCJAxvld2xgqQimUzoS1a4r7kQQ5c/Iua4LqKeq3ciFzEv/MbZhA==
-----END PRIVACY-ENHANCED MESSAGE-----

PEM CA Hierarchy

Hierarchy allows only a single path from the root to the end
entity (no cross-certificates)
Although PEM itself failed, the PEM CA terminology still
crops up in various products
PEM CA Hierarchy (ctd)
Policy CA’s guarantee certain things such as uniqueness of
names
• High-assurance policies (secure hardware, drug tests for users,
etc)
– Can’t issue certificates to anything other than other high-
assurance CA’s
• Standard CA’s
• No-assurance CA’s (persona CA’s)
– Certificate vending machines
– Clown suit certificates
Why PEM Failed
Why the CA’s failed
• The Internet uses email addresses, not X.500 names
– Actually, no-one uses X.500 names
• CA’s for commercial organisations and universities can’t meet
the same requirements as government defence contractors for
high-assurance CA’s
– Later versions of PEM added lower-assurance CA
hierarchies to fix this
• CA hardware was always just a few months away
– When it arrived, it was hideously expensive
• CA’s job was made so onerous that no-one wanted it
– Later versions made it easier
Why PEM Failed (ctd)
• Hierarchy enshrined the RSADSI monopoly
– CA hardware acted as a billing mechanism for RSA
signatures
– People were reluctant to trust RSADSI (or any one party)
with the security of the entire system
Why the message format failed
• The PEM format was ugly and intrusive
– PEM’s successors bundled everything into a single blob and
tried to hide it somewhere out of the way
• The required X.500 support infrastructure never materialised
• RSA patent problems
Pieces of PEM live on in a few European initiatives
• MailTrusT, SecuDE, modified for MIME-like content types

PGP
Pretty Good Privacy
• Hastily released in June 1991 by Phil Zimmermann (PRZ) in
response to S.266
• MD4 + RSA signatures and key exchange
• Bass-O-Matic encryption
• LZH data compression
• uuencoding ASCII armour
• Data format based on a 1986 paper by PRZ
PGP was immediately distributed worldwide via a Usenet
post
PGP (ctd)
PGP 1.0 led to an international effort to develop 2.0
• Bass-O-Matic was weak, replaced by the recently-developed
IDEA
• MD4 was weak, replaced by MD5
• LZH replaced by the newly-developed InfoZip (now zlib)
• uuencoding replaced with the then-new base64 encoding
• Ports for Unix, Amiga, Atari, VMS added
• Internationalisation support added

Legal Problems
PGP has been the centre of an ongoing legal dispute with
RSADSI over patents
• RSADSI released the free RSAREF implementation for (non-
commercial) PEM use
• PGP 2.6 was altered to use RSAREF in the US
• Commercial versions were sold by Viacrypt, who have an RSA
license
Later versions deprecated RSA in favour of the non-
patented Elgamal
• Elgamal referred to in documentation as Diffie-Hellman for no
known reason
Government Problems
In early 1993, someone apparently told US Customs that
PRZ was exporting misappropriated crypto code
US Customs investigation escalated into a Federal Grand
Jury (US Attorney) in September 1993
US government was pretty serious, eg:
26 February 1995: San Francisco Examiner and SF Chronicle
publish an article criticising the government’s stand on
encryption and the PGP investigation
27 February 1995: Author of article subpoena’d to appear before
the Grand Jury
Investigation dropped in January 1996 with no charges laid
PGP Message Formats
Unsecured
Compressed
Signed/clearsigned
Encrypted
+ optional encoding
General format
-----BEGIN PGP message type-----
data
-----END PGP message type-----
PGP Message Formats (ctd)
Clearsigned message:
-----BEGIN PGP SIGNED MESSAGE-----

We've got into Peters presentation. Yours is next. Resistance is
useless.

-----BEGIN PGP SIGNATURE-----
Version: 2.3

iQCVAgUBK9IAl2v14aSAK9PNAQEvxgQAoXrviAggvpVRDLWzCHbNQo6yHuNuj8my
cvPx2zVkhHjzkfs5lUW6z63rRwejvHxegV79EX4xzsssWVUzbLvyQUkGS08SZ2Eq
bLSuij9aFXalv5gJ4jB/hU40qvU6I7gKKrVgtLxEYpkvXFd+tFC4n9HovumvNRUc
ve5ZY8988pY=
=NOcG
-----END PGP SIGNATURE-----
PGP Message Formats (ctd)
Anything else
-----BEGIN PGP MESSAGE-----
Version: 2.3a

hQEMAlkhsM216BqRAQf/f938A6hglX51/hwa42oCdrQDRGw6HJd+5OqQX/58JB8Y
UAlrYBHYZ5md46ety62phvbwfsNuF9igSx2943CHrnuIVtkSXZRpKogtSE1oMfab
5ivD4I+h3Xk0Jpkn5SXYAzC6/cjAZAZSJjoqy28LBIwzlfNNqrzIuEW8lbLPWAt1
eqdS18ukiOUvnQAI1QfJipGUG+Db1KnpqJP7wHUl/4RG1Qi50p3BCDIspC8jzQ/y
GsKFlckA132dMx6b80vsUZga/tmJOwrgBjSbnOJ8UzLrNe+GjFRyBS+qGuKgLd9M
ymYgMyNOqo/LXALSlLIcz3inDSC5NJj04RbRZ00w4KYAAFrxX9a1BQq1nb40/OSB
CgrPqi61jBks2NW2EPoIC7nV5xLjflZwlRjY/V5sZS6XDycJ9YOf6fOclNwCoBsB
HRshmNtMHH2tq2//OozKZ8/GHGNysN8QQWNQYElgRCgH3ou1E+CJoyoPwrMqjSYC
oGp4fezQpiI83Ve/QMMV276KntTFLRpQ2H+lLDvX9Wfjg1+xTw==
=ZuOF
-----END PGP MESSAGE-----
PGP Key Formats
Unlike PEM, PGP also defined public/private key formats

• Key trust = how much the key is trusted to sign things (set by
user)
• userID trust = how much the userID is trusted to belong to this
key
• Signing trust = copy of the signing keys trust
PGP calculates userID trust = sum of signing trusts

PGP Trust
UserID trust = trust of binding between userID and key
Key trust = trust of key owner
Example: UserID = Politician
• UserID trust = High
• Key trust = Low
Trust levels
• Unknown
• None
• Casual
• Heavy-duty
PGP Trust (ctd)
Trust levels are automatically computed by PGP
User can define the required trust levels (eg 3 casuals = 1 high)
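A toy illustration of the rule above: each signature on a userID contributes its signing key's trust weight, and the userID is considered valid once the sum reaches a user-settable threshold. The weights and threshold here are made up for the example; they are not PGP's internal values.

TRUST_WEIGHT = {"unknown": 0, "none": 0, "casual": 1, "heavy-duty": 3}
THRESHOLD = 3    # eg 3 casual signers, or 1 heavy-duty signer

def userid_valid(signer_trusts):
    # userID trust = sum of signing trusts, compared against the threshold
    return sum(TRUST_WEIGHT[t] for t in signer_trusts) >= THRESHOLD

print(userid_valid(["casual", "casual"]))              # False
print(userid_valid(["casual", "casual", "casual"]))    # True
print(userid_valid(["heavy-duty"]))                    # True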

PGP Trust (ctd)
In practice, the web of trust doesn’t really deliver
• It can also be used hierarchically, like X.509
Each key can contain multiple userID’s with their own trust
levels
• userID = Peter Gutmann, trust = high
• userID = University Vice-Chancellor, trust = none
Keys are revoked with a signed revocation which PGP adds
to the key
PGP Keyrings
One or more keys stored together constitute a keyring
Keys are looked up by
• userID (free-form name)
• keyID (64-bit value derived from the public key)
The owner’s key is ultimately trusted and can convey this to other keys

Key Distribution
Key distribution doesn’t rely on an existing infrastructure
• Email
• Personal contact
– Keysigning services
• Mailed floppies
Verification by various out-of-band means (personal
contact, phone, mail)
• PGP key fingerprint designed for this purpose
First-generation keyservers
• email/HTTP interface to PGP keyring
Second-generation keyservers
• LDAP kludged to handle PGP ID’s
PGP Key Problems
KeyID is 64 least significant bits of public key
• Can construct keys with arbitrary ID’s
• Allows signature spoofing
Key fingerprints can also be spoofed
Advantages of PGP over PEM
You can pick your own name(s)
You don’t have to register with an authority
PGP requires no support infrastructure
The trust mechanism more closely matches real life
Certificate distribution can be manual or automatic (just
include it with the message)
PGP is unique among email security protocols in having no
crippled encryption
PGP’s compression speeds up encryption and signing,
reduces message overhead
MIME-based Security
Multipurpose Internet Mail Extensions
Provides a convenient mechanism for transferring
composite data
Security-related information sent as sections of a multipart
message
• multipart/signed
• multipart/encrypted
Binary data handled via base64 encoding
MIME-aware mailers can automatically process the
security information (or at least hide it from the user)
MIME-based Security (ctd)
General format:
Content-Type: multipart/type; boundary="Boundary"
Content-Transfer-Encoding: base64

--Boundary
encryption info

--Boundary
message

--Boundary
signature
--Boundary--

Both PEM and PGP were adapted to fit into the MIME
framework
MOSS
MIME Object Security Services
• PEM shoehorned into MIME
• MOSS support added to MIME types via application/moss-
signature and application/moss-keys

MOSS (ctd)
MOSS Signed
Content-Type: multipart/signed; protocol="application/moss-
signature"; micalg="rsa-md5"; boundary="Signed Message"

--Signed Message
Content-Type: text/plain

Support PGP: Show MOSS to your friends.

--Signed Message
Content-Type: application/moss-signature

Version: 5
Originator-ID:
jV2OfH+nnXHU8bnL8kPAad/mSQlTDZlbVuxvZAOVRZ5q5+Ejl5bQvqNeqOUNQjr6
EtE7K2QDeVMCyXsdJlA8fA==
MIC-Info: RSA-MD5,RSA,
UdFJR8u/TIGhfH65ieewe2lOW4tooa3vZCvVNGBZirf/7nrgzWDABz8w9NsXSexv
AjRFbHoNPzBuxwmOAFeA0HJszL4yBvhG

--Signed Message--
MOSS (ctd)
MOSS Encrypted
Content-Type: multipart/encrypted; protocol="application/moss-keys";
boundary="Encrypted Message"

--Encrypted Message
Content-Type: application/moss-keys

Version: 5
DEK-Info: DES-CBC,BFF968AA74691AC1
Recipient-ID:
MFExCzAJBgNVBAYTAlVTMSAwHgYDVQQKExdSU0EgRGF0YSBTZWN1cml0eSwgSW5j
LjEPMA0GA1UECxMGQmV0YSAxMQ8wDQYDVQQLEwZOT1RBUlk=,66
Key-Info: RSA,
O6BS1ww9CTyHPtS3bMLD+L0hejdvX6Qv1HK2ds2sQPEaXhX8EhvVphHYTjwekdWv
7x0Z3Jx2vTAhOYHMcqqCjA==

--Encrypted Message
Content-Type: application/octet-stream

qeWlj/YJ2Uf5ng9yznPbtD0mYloSwIuV9FRYx+gzY+8iXd/NQrXHfi6/MhPfPF3d
jIqCJAxvld2xgqQimUzoS1a4r7kQQ5c/Iua4LqKeq3ciFzEv/MbZhA==

--Encrypted Message--

PGP/MIME
PGP shoehorned into MIME
• PGP support added to MIME types via application/pgp-
signature and application/pgp-encrypted
PGP already uses ‘--’ so PGP/MIME escapes this with
‘- ’
-----BEGIN PGP MESSAGE-----
becomes
- -----BEGIN PGP MESSAGE-----
PGP/MIME (ctd)
PGP/MIME Signed:
Content-Type: multipart/signed; protocol="application/pgp-signature";
micalg=pgp-md5; boundary=Signed

--Signed
Content-Type: text/plain

Our message format is uglier than your message format!

--Signed
Content-Type: application/pgp-signature

- -----BEGIN PGP MESSAGE-----


Version: 2.6.2

iQCVAwUBMJrRF2N9oWBghPDJAQE9UQQAtl7LuRVndBjrk4EqYBIb3h5QXIX/LC//
jJV5bNvkZIGPIcEmI5iFd9boEgvpirHtIREEqLQRkYNoBActFBZmh9GC3C041WGq
uMbrbxc+nIs1TIKlA08rVi9ig/2Yh7LFrK5Ein57U/W72vgSxLhe/zhdfolT9Brn
HOxEa44b+EI=
=ndaj
- -----END PGP MESSAGE-----
--Signed--

PGP/MIME (ctd)
PGP/MIME Encrypted
Content-Type: multipart/encrypted; protocol="application/pgp-
encrypted"; boundary=Encrypted

--Encrypted
Content-Type: application/pgp-encrypted

Version: 1

--Encrypted
Content-Type: application/octet-stream

-----BEGIN PGP MESSAGE-----


Version: 2.6.2

hIwDY32hYGCE8MkBA/wOu7d45aUxF4Q0RKJprD3v5Z9K1YcRJ2fve87lMlDlx4Oj
g9VGQxFeGqzykzmykU6A26MSMexR4ApeeON6xzZWfo+0yOqAq6lb46wsvldZ96YA
AABH78hyX7YX4uT1tNCWEIIBoqqvCeIMpp7UQ2IzBrXg6GtukS8NxbukLeamqVW3
1yt21DYOjuLzcMNe/JNsD9vDVCvOOG3OCi8=
=zzaA
-----END PGP MESSAGE-----

--Encrypted--
MOSS and PGP/MIME
MOSS never took off
PGP/MIME never took off either

S/MIME
Originally based on proprietary RSADSI standards and
MIME
• PKCS, Public Key Cryptography Standards
– RC2, RC4 for data encryption
– PKCS #1, RSA encryption, for key exchange
– PKCS #7, cryptographic message syntax, for message
formatting
Newer versions added non-proprietary and non-patented
ciphers
CMS
Cryptographic Message Syntax
• Type-and-value format
Data content types
• Data
• Signed data
• Encrypted data (conventional encryption)
• Enveloped data (PKC-encrypted)
• Digested (hashed) data
• Authenticated (MAC’d) data

CMS (ctd)
Other content types possible
• Private keys
• Key management messages
Content can be arbitrarily nested
Signed Data Format
Digest (hash) algorithm(s)
Encapsulated data
Signer certificate chain(s)
Signature(s)
Presence of hash algorithm information before the data and certificates before the signatures allows one-pass processing

Signature Format
Signing certificate identifier
Authenticated attributes
Signature
Unauthenticated attributes
Authenticated attributes are signed along with the
encapsulated content
• Signing time
• Signature type
– “I agree completely”
– “I agree in principle”
– “I disagree but can’t be bothered going into the details”
– “A flunky handed me this to sign”
Signature Format (ctd)
• Receipt request
• Security label
• Mailing list information
Unauthenticated attributes provide a means of adding
further information without breaking the original
signature
• Countersignature
– Countersigns an existing signature
– Signs signature on content rather than content itself, so
other content doesn’t have to be present
– Countersignatures can contain further countersignatures
Enveloped Data Format
Per-recipient information
Key management certificate
identifier
Encrypted session key
Newer versions add support for key agreement algorithms and previously distributed shared conventional keys
CMS → S/MIME
Wrap each individual CMS layer in MIME
base64 encode + wrap content
Encode as CMS data
base64 encode + wrap content
Encode as CMS signed data
base64 encode + wrap content
Encode as CMS enveloped data
base64 encode + wrap content
Result is 2:1 message expansion

S/MIME Problems
Earlier versions used mostly crippled crypto
• Only way to interoperate was 40-bit RC2
– RC2/40 is still the lowest-common-denominator default
– User is given no warning of the use of crippled crypto
– Message forwarding may result in security downgrade
• S/MIME-cracking screen saver released in 1997,
http://www.counterpane.com/smime.html
– Performs optimised attack using RC2 key setup cycles
– Looks for MIME header in decrypted data
Original S/MIME based on patented RSA and proprietary
RC2, rejected by IETF as a standard
IETF developed S/MIME v3 using strong crypto and non-
patented, non-proprietary technology
MSP
Message Security Protocol, used in Defence Messaging
System (DMS)
• X.400 message contains envelope + content
• MSP encapsulates X.400 content and adds security header
X.400 security required using (and trusting) the X.400 MTA; MSP requires only trusted endpoints
• MSP later used with MIME

MSP Services
Services provided
• Authentication
• Integrity
• Confidentiality
• Non-repudiation of origin (via message signature)
• Non-repudiation of delivery (via signed receipts)
MSP also provides rule-based access control (RBAC)
based on message sensitivity and classification levels of
sender, receiver, and workstation
• Receiving MUA checks that the receiver and workstation are
cleared for the message’s security classification
• MSP rule-based access control (RBAC) != role-based access
control (also RBAC)
MSP Certificates
MSP defines three X.509 certificate types
• Signature-only
• Encryption (key management) only
• Signature and encryption (two keys in one certificate)
Certificate also includes RBAC authorisations
MSP Protection Types
MSP Signature
• MUA/MLA signs with signature-only certificate
Non-repudiation
• User signs with signature or dual-key certificate
Confidentiality, integrity, RBAC
• Encrypted with key management or dual-key certificate
Non-repudiation + confidentiality, integrity, RBAC
• Sign + encrypt using either signature and key management
certificates or dual-key certificate
Any of the above can be combined with MSP signatures
MSP Protection Types (ctd)
MSP signature covers MSP header and encapsulated
content
• Mandatory for mailing lists
User signature covers encapsulated content and receipt
request information
MSP Message Format
Originator security data
Originator key management cert chain
Encrypted RBAC information (additional to per-recipient
RBAC info)
Signature
Receipt request information
Signature on encapsulated data and receipt info
Signature cert chain
Recipient security data
Per-recipient
Key management certificate identifier (KMID)
Encrypted security classification(s) (RBAC) + secret key
Mailing list control information
MUA or MLA Signature
Encapsulated content
• RBAC is encrypted to protect it if no signatures are used
MSP in Practice
MSP is heavily tied into Fortezza hardware
• DSA signatures
• KEA key management
• Skipjack encryption
MSP later kludged to work with MIME a la MOSS and
PGP/MIME
Authentication
“What was your username again?” clickety clickety
— The BOFH
User Authentication
Basic system uses passwords
• Can be easily intercepted
Encrypt/hash the password
• Can still intercept the encrypted/hashed form
Modify the encryption/hashing so the encrypted/hashed
value changes each time (challenge/response
mechanism)
User Authentication (ctd)
Vulnerable to offline password guessing (attacker knows challenge and encrypted challenge, can try to guess the password used to process it)

User Authentication (ctd)
There are many variations of this mechanism but it’s very
hard to get right
• Impersonation attacks (pretend to be client or server)
• Reflection attacks (bounce the authentication messages
elsewhere)
• Steal client/server authentication database
• Modify messages between client and server
• Chess grandmaster attack
Simple Client/Server Authentication
Client and server share a key K
• Server sends challenge encrypted with K
– Challenge should generally include extra information like
the server ID and timestamp
• Client decrypts challenge, transforms it (eg adds one, flips the
bits), re-encrypts it with K, and sends it to the server
• Server does the same and compares the two
Properties
• Both sides are authenticated
• Observer can’t see the (unencrypted) challenge, can’t perform
a password-guessing attack
• Requires reversible encryption
Something similar is used by protocols like Kerberos
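A sketch of the shared-key exchange described above, using AES-GCM from the assumed-available "cryptography" package as the reversible encryption; real protocols also bind in server IDs, timestamps, and so on.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

K = AESGCM.generate_key(bit_length=128)     # key shared by client and server

# Server: send an encrypted random challenge
challenge = os.urandom(16)
n1        = os.urandom(12)
to_client = n1 + AESGCM(K).encrypt(n1, challenge, None)

# Client: decrypt, transform (here: flip the bits), re-encrypt, send back
recovered   = AESGCM(K).decrypt(to_client[:12], to_client[12:], None)
transformed = bytes(b ^ 0xFF for b in recovered)
n2        = os.urandom(12)
to_server = n2 + AESGCM(K).encrypt(n2, transformed, None)

# Server: apply the same transform to its challenge and compare
expected = bytes(b ^ 0xFF for b in challenge)
assert AESGCM(K).decrypt(to_server[:12], to_server[12:], None) == expected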
Unix Password Encryption
Designed to resist mid-70’s level attacks
Uses 25 iterations of modified DES
Salt prevents identical passwords from producing the same output
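A conceptual sketch of salted, iterated password hashing in the style described above; the real crypt(3) uses 25 rounds of a modified DES and a 12-bit salt, whereas this illustration just uses SHA-256.

import hashlib, os

def hash_password(password, salt=None, rounds=25):
    salt = salt if salt is not None else os.urandom(2)
    h = salt + password.encode()
    for _ in range(rounds):                  # iteration slows down guessing
        h = hashlib.sha256(salt + h).digest()
    return salt, h

def check_password(password, salt, stored):
    return hash_password(password, salt)[1] == stored

salt, stored = hash_password("hunter2")
print(check_password("hunter2", salt, stored))   # True
print(check_password("letmein", salt, stored))   # False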
crypt16
Handles 16 characters instead of 8
• 20 DES crypts for the first 8
• 5 DES crypts for the second 8
Result was weaker than the original crypt
• Search for passwords by suffix
• Suffix search requires only 5 DES crypts

LMHASH
From MS LAN Manager
Like crypt16 but without the salt
• If password < 7 chars, second half is 0xAAD3B435B51404EE
• Result is zero-padded(!) and used as 3 independent DES keys
• 8-byte challenge from server is encrypted once with each key
• 3 × 8-byte values returned to server
Newer versions of NT added NTHASH (MD4 of data), but
LMHASH data is still sent alongside NTHASH data
Subject to rollback attacks (“I can’t handle this, give me
LMHASH instead”)
LMHASH (ctd)
l0phtcrack, http://www.l0pht.com/l0phtcrack/
• Collects hashed passwords via network sniffing, from the
registry, or from SAM file on disk
• Two attack types
– Dictionary search: 100,000 words in a few minutes on a
PPro 200
– Brute force: All alphabetic characters in 6 hours, all
alphanumerics in 62 hours on quad PPro 200
• Parallel attacks on multiple users
• Runs as a background process

NT Domain Authentication
Joining
• Client and server exchange challenges CC and SC
• Session key = CC + SC encrypted with the machine password
(NTHASH of LMHASH of machine name)
• Result is used as RC4 key
Anyone on the network can intercept this and recover the
initial key
NT Domain Authentication (ctd)
User logon
• Client sends RC4-encrypted LMHASH and NTHASH of user
password
• Server decrypts and verifies LMHASH and NTHASH of
password
RC4 key is reused, can be recovered in the standard manner
for a stream cipher
Auxiliary data (logon script, profile, SID) aren’t
authenticated
This is why NT 5 will use Kerberos
Attacking Domain Authentication over the Net
Create a web page with an embedded (auto-loading) image
• Image is held on an SMB Lanman server instead of an HTTP
server
• NT connects to server
• Server sends fixed all-zero challenge
• NT responds with the username and hashed password
All-zero challenge allows the use of a precomputed
dictionary
Attacking Domain Authentication over the
Net (ctd)
Reflection attack
• NT connects to the server as before
• Server connects back to NT
• NT sends challenge to server, server bounces it back to NT
• NT responds with the username and hash, server bounces it
back to NT
Server has now connected to the NT machine without
knowing the password

Netware Authentication
Netware 3 used challenge-response method
• Server sends challenge
• Client responds with MD4( MD4( serverID/salt, password ),
challenge )
• Server stores (server-dependent) inner hash
– Not vulnerable to server compromise because of
serverID/salt
Netware 4 added public-key encryption managed via the
Netware Directory Services (NDS)
Netware Authentication (ctd)
Users public/private keys are stored by NDS, accessed with
a modification of the V3 protocol
• Server sends challenge
• Client responds with server-public-key encrypted V3 hash and
session key
• Server returns users RSA key encrypted with the session key
Once the user has their RSA key pair on the workstation,
they can use it for further authentication
• RSA key is converted to short-term Guillou-Quisquater (GQ)
key
• RSA key is deleted
• GQ key is used for authentication
Netware Authentication (ctd)
Compromise of GQ key doesn’t compromise the RSA key
GQ is much faster than RSA for both key generation and
authentication
GQ is used to authenticate the user
• Sign request with short-term GQ key
• Authenticate GQ key with long-term RSA key (from NDS)
All valuable information is deleted as quickly as possible
• Only the short-term GQ key remains active
Kerberos
Designed at MIT based on late-70’s work by Needham and
Schroeder
Relies on key distribution centre (KDC) to perform
mediated authentication
KDC shares a key with each client and server

Kerberos (ctd)
When a client connects to a server
• KDC sends to client
– Session key encrypted with client’s key
– Session key + client ID encrypted with server’s key
• User forwards the latter (a ticket) to the server
• User decrypts session key, server decrypts ticket to recover
client ID and session key
– Only the client can recover the client-encrypted session key
– Only the server can recover the server-encrypted session
key
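A toy sketch of the ticket mechanism described above, using Fernet from the assumed-available "cryptography" package for the symmetric encryption; real Kerberos messages carry much more (names, addresses, lifetimes).

from cryptography.fernet import Fernet

client_key  = Fernet.generate_key()    # shared between KDC and client
server_key  = Fernet.generate_key()    # shared between KDC and server
session_key = Fernet.generate_key()    # fresh key for this client/server pair

# KDC -> client: session key under the client's key, plus a ticket
# (client ID + session key) under the server's key
for_client = Fernet(client_key).encrypt(session_key)
ticket     = Fernet(server_key).encrypt(b"client-id|" + session_key)

# Client recovers the session key; server recovers it from the ticket
assert Fernet(client_key).decrypt(for_client) == session_key
assert Fernet(server_key).decrypt(ticket).endswith(session_key)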
Kerberos (ctd)
Ticket identifies the client and clients network address (to
stop it being used elsewhere)
To avoid long-term password storage, the users password is
converted to a short-term client key via the KDC
• KDC sends a short-term client key encrypted with the users
password to the client
• User decrypts the short-term client key, then deletes the password
• Future KDC <-> client communications use the short-term
client key

Kerberos (ctd)
KDC also sends out a ticket-granting ticket (TGT)
• TGT contains the client short-term key encrypted with the
KDC key
• Based on a theoretical Kerberos model which separates the
authentication server and ticket-granting server
– KDC/AS issues the TGT to talk to the KDC/TGS
Mutual Authentication
Parties exchange session-key encrypted timestamps
• Only holders of the shared session key can get the encryption
right
• Replays detected by checking the timestamp on each exchange
• Messages older than a certain time are automatically rejected
KDC’s can be replicated to increase availability
• Kerberos is designed so that most database accesses are read-
only, making replication easier

Kerberos Realms
Problems with a single KDC database
• Compromising a KDC can compromise many users
• Big KDC databases are difficult to manage
• Big databases lead to name clashes
Solution is to use multiple realms
Interrealm authentication is just an extension of the
standard Kerberos authentication mechanism
Kerberos Realms (ctd)
When a client connects to a server in another realm
• KDC authenticates client to other realm’s KDC
• Other realm’s KDC authenticates client to other realm’s server
Multi-KDC chaining is disallowed for security reasons (a rogue KDC in the chain could allow anyone in)

Kerberos V5
Extended V4 in various ways
• Extended ticket lifetimes (V4 max = 21 hours)
• Allowed delegation of rights
• Allowed hierarchical realms
• Added algorithms other than DES
• V4 used ad hoc encoding, V5 used ASN.1
Ticket Lifetimes
V4 allowed maximum 21 hour lifetime, V5 allows
specification of
• Start time
• End time
• Renewal time (long-term tickets must be renewed periodically)
This added flexibility by providing features like postdated
tickets

Delegation
Request a TGT for different machine(s) and/or times
TGT can be set up to allow forwarding (converted for use
by a third party) and proxying (use on a machine other
than the one in the TGT)
Delegation is a security hole, Kerberos makes it optional
TGT’s are marked as forwarded or proxied to allow them
to be rejected
Realms
V4 required a KDC to be registered in every other realm
V5 (tries to) fix the problem with rogue KDC chaining by
including in the ticket all transited realms
(client ) foo.com  bar.com ( server)
vs
(client ) foo.com  hacker.com  bar.com ( server)

Realms (ctd)
Requires trusting KDC’s
• Trusted chain only goes back one level
• Current KDC can alter ID of previous KDC’s
Kerberos abdicates responsibility for trust in chaining to
application developers (who will probably get it wrong)
When chaining, use the shortest path possible to limit the
number of KDC’s in the path which must be trusted
Other Changes in V5
V4 used DES, V5 added MD4 and MD5
V4 allowed offline password-guessing attacks on TGT’s
• Request a TGT for the victim
• Try passwords on the TGT
V5 adds pre-authentication data to the TGT request

Kerberos-like Systems
KryptoKnight (IBM)
• Closer to V4 than V5
• Can use random challenges instead of synchronised clocks
• Either party can contact the KDC (in Kerberos, only the
initiator can do this)
• Encryption is CDMF (40-bit DES)
Kerberos-like Systems (ctd)
SESAME (EU)
• European Kerberos clone mostly done by ICL, Bull, and
Siemens
– Uses XOR instead of DES
– XOR in CBC mode cancels out 50% of the “encryption”
– Keys are generated from the current system time
– Only the first 8 bytes of data are authenticated
• Apparently users were expected to find all the holes and plug
in their own secure code
• Later versions added public-key encryption support
• Vendor-specific versions provided enhanced security services
Kerberos-like Systems (ctd)
DCE (OSF)
• Distributed Computing Environment uses Kerberos V5 as a
security component
• DCE adds privilege and registration servers to the Kerberos
KDC
– Privilege server provides a universal unique user ID and
group ID (Kerberos uses system-specific names and ID’s)
– Registration server provides the database for the KDC and
privilege server
• DCE security is based on ACL’s (access control lists) for users
and groups
• Data exchanges are protected via native DCE RPC security
Authentication Tokens
Physical device used to authenticate owner to server
Two main types
• Challenge-response calculators
• One-way authentication data generators
– Non-challenge-response nature fits the “enter name and
password” authentication model
Authentication Tokens (ctd)
SecurID
• Uses clock synchronised with server
• Token encrypts the time, sent to server in place of password
– 64-bit key (seed), 64-bit time, produces 6-8 digit output
(cardcode)
– Card can be protected by 4-8 digit PIN which is added to
the cardcode
• Server does the same and compares the result
• Timestamp provides automatic replay protection, but needs to
be compensated for clock drift
• Proprietary ACE server protocol will be replaced by RSA-
based SecurSight in early ‘99
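A rough sketch of the time-based token idea above, conceptually similar to a modern TOTP; the real SecurID algorithm is proprietary and differs, so the seed handling and hash here are purely illustrative.

import hashlib, hmac, time

def cardcode(seed, t=None, step=60, digits=6):
    # Mix the shared seed with the current time slot to get a short code
    t = int(time.time() if t is None else t) // step
    mac = hmac.new(seed, t.to_bytes(8, "big"), hashlib.sha1).digest()
    return int.from_bytes(mac[:4], "big") % 10 ** digits

def server_check(seed, code, drift=1):
    # Accept a small window of time slots to compensate for clock drift
    now = int(time.time())
    return any(cardcode(seed, now + i * 60) == code
               for i in range(-drift, drift + 1))

seed = b"per-card secret seed"
print(server_check(seed, cardcode(seed)))   # True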
Authentication Tokens (ctd)
Challenge-response calculators
• Encrypt a challenge from the server and return result to server
• Server does the same and compares the result
• Encryption is usually DES
• Encryption key is random (rather than a fixed password) which
makes offline password guessing much harder
Authentication Tokens (ctd)
Possible attacks
• Wait for all but the last character to be entered, then seize the
link
• Hijack the session after the user is authenticated
However, any of these are still vastly more secure than a
straight password
• It’s very difficult to choose an easy-to-guess password when
it’s generated by crypto hardware
• It’s very difficult to leak a password when it’s sealed inside a
hardware token
S/Key
One of a class of software tokens/one-time-password
(OTP) systems
Freely available for many OS’s,
http://www.yak.net/skey/
Uses a one-way hash function to create one-time passwords
• pass3 = hash( hash( hash( password )))
• pass2 = hash( hash( password ))
• pass1 = hash( password )
– Actual hash includes a server-specific salt to tie it to a
server
Each hash value is used only once
• Server stores value n, verifies that hash( n-1 ) = n
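A sketch of the hash chain described above: the server holds pass(n), the user submits pass(n-1), and the server checks that hashing it gives pass(n), then stores pass(n-1). Real S/Key uses MD4/MD5 folded to 64 bits with a server-specific seed; SHA-256 here is just for illustration.

import hashlib

def H(x):
    return hashlib.sha256(x).digest()

def chain(password, seed, n):
    h = (seed + password).encode()
    for _ in range(n):
        h = H(h)
    return h

password, seed = "correct horse", "server-seed"
server_stored = chain(password, seed, 100)   # pass(100), held by the server

otp = chain(password, seed, 99)              # user submits pass(99)
assert H(otp) == server_stored               # server verifies ...
server_stored = otp                          # ... and stores pass(99) for next time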

S/Key (ctd)
Knowing hash( hash( password )) doesn't reveal hash(
password )
Values are transmitted as 16 hex digits or 6-word phrases
Later refinements added new algorithms, more rigorous
definitions of the protocol
OPIE
One-time Passwords in Everything, developed by (US)
Naval Research Laboratory
Freely available for many OS’s,
ftp://ftp.nrl.navy.mil/pub/security/opie/
Enhancement of S/Key with name change to avoid
trademark problems

PPP PAP/CHAP
Simplest PPP authentication is PAP, password
authentication protocol
• Plaintext user name + password
Challenge handshake protocol was created to fix PAP
• Standard challenge/response protocol using a hash of challenge
and shared secret
Other PAP Variants
SPAP
• Shiva {Proprietary|Password} Authentication Protocol
• PAP with a few added bells and whistles
ARAP
• Appletalk Remote Access Protocol
• Bidirectional challenge/response using DES
– Authenticates client to server, server to client
MSCHAP
• Microsoft CHAP
• DES-encrypts 8-byte challenge using LMHASH/NTHASH
• Server stores the hash rather than the plaintext password
• Subject to the usual LMHASH attacks

RADIUS
Remote authentication for dial-in user service
Provides an authentication server for one or more clients
(dial-in hosts)
Client communicates with RADIUS server via encrypted
communications using a shared secret key
RADIUS (ctd)
RADIUS protocol:
• Client forwards user access request to RADIUS server
• Server replies with
– Reject access
– Allow access (based on password)
– Challenge (for challenge-response protocol, eg CHAP)
• If challenge-response is used, client forwards challenge to user,
user sends response to client, which forwards it to server
One RADIUS server may consult another (acting as a
client)

TACACS/XTACACS/TACACS+
Based on obscure ARPANET access control system for
terminal servers, later documented and extended by
Cisco
• Forwards username and password to TACACS server, returns
authorisation response
XTACACS, Extended TACACS
• Adds support for multiple TACACS servers, logging, extended
authorisation
• Can independently authorise access via PPP, SLIP, telnet
TACACS/XTACACS/TACACS+ (ctd)
TACACS+
• Separation of authentication, authorisation, and accounting
functions with extended functionality
• Password information is encrypted using RADIUS-style
encryption
• Password forwarding allows use of one password for multiple
protocols (PAP, CHAP, telnet)
• Extensive accounting support (connect time, location, duration,
protocol, bytes sent and received, connect status updates, etc)
• Control over user attributes (assigned IP address(es),
connection timeout, etc)
Sorting out the xxxxxS’s
RADIUS, TACACS = Combined authentication and
authorisation process
XTACACS = Authentication, authorisation, and
accounting separated
TACACS+ = XTACACS with extra attribute control and
accounting
Common shortcomings
• No integrity protection
• No replay protection
• Packets leak data due to fixed/known formatting
ANSI X9.26
Sign-on authentication standard (“Financial institution
sign-on authentication for wholesale financial
transmission”)
DES-based challenge-response protocol
• Server sends challenge (TVP = time variant parameter)
• Client responds with encrypted authentication information
(PAI = personal authentication information) XOR’d with TVP
Offline attacks prevented using secret PAI
Variants include
• Two-way authentication of client and server
• Authentication using per-user or per-node keys but no PAI

Public-key-based Authentication
Simple PKC-based challenge/response protocol
• Server sends challenge
• Client signs challenge and returns it
• Server verifies clients signature on the challenge
Vulnerable to chosen-protocol attacks
• Server can have client sign anything
• Algorithm-specific attacks (eg RSA signature/encryption
duality)
FIPS 196
Entity authentication using public key cryptography
Extends and clarifies ISO 9798 entity authentication
standard
Signed challenge/response protocol:
• Server sends server nonce SN
• Client generates client nonce CN
• Client signs SN and CN and returns to server
• Server verifies signature on the data
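A sketch of the nonce handling above, using Ed25519 from the assumed-available "cryptography" package; FIPS 196 itself is written around ISO 9798 token formats and RSA/DSA, so this only shows the challenge/response structure.

import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

client_key = Ed25519PrivateKey.generate()

SN = os.urandom(16)                  # server nonce (the challenge)
CN = os.urandom(16)                  # client nonce, limits chosen-protocol games
signature = client_key.sign(SN + CN) # client signs both nonces

# Server verifies the signature over both nonces (raises if invalid)
client_key.public_key().verify(signature, SN + CN)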
FIPS 196 (ctd)
Mutual authentication uses a three-pass protocol
• Server sends client signed SC as final step
Inclusion of CN prevents the previous chosen-protocol
attacks
• Vulnerable to other attacks unless special precautions are taken
Biometrics
Capture data via cameras or microphones
Verification is relatively easy, identification is very hard
Fingerprints
• Small and inexpensive
• ~10% of people are difficult or impossible to scan
• Associated with criminal identification
Voice authentication
• Upset by background noise, illness, stress, intoxication
• Can be used over phone lines
Eye scans
• Intrusive (scan blood vessels in retina/patterns in iris)

Biometrics (ctd)
Advantages
• Everyone carries their ID on them
• Very hard to forge
• Easy to use
Disadvantages
• You can’t change your password
• Expensive
• No real standards (half a dozen conflicting ones as well as
vendor-specific formats)
• User acceptance problems
PAM
Pluggable Authentication Modules
OSF-designed interface for authentication/identification
plugins
Administrator can configure authentication for each
application
Service Module
login pam_unix.so
ftp pam_skey.so
telnet pam_smartcard.so
su pam_securid.so

PAM (ctd)
Modules are accessed using a standardised interface
pam_start( "login", user, &conv, &handle );
pam_authenticate( handle, 0 );
pam_end( handle, PAM_SUCCESS );
Modules can be stacked to provide single sign-on
• User is authenticated by multiple modules at sign-on
– Avoids the need to manually invoke each sign-on service
(kinit, dce_login, dtlogin, etc)
• Password mapping allows a single master password to encrypt
per-module passwords
PAM in Practice
A typical implementation is Linux-PAM
• Extended standard login (checks time, source of login, etc)
• .rhosts, /etc/shells
• cracklib (for password checking)
• DES challenge/response
• Kerberos
• S/Key, OPIE
• RADIUS
• SecurID
Electronic Commerce
SET is the answer, but you have to phrase the question very carefully
Electronic Payments
An electronic payment system needs to be
• Widely recognised
• Hard to fake
• Hold its value
• Convenient to use
• Anonymous/not anonymous
Convenience is the most important point
Cheques
Merchant doesn’t know whether the cheque is valid until it’s cleared
Cheques (ctd)
Consumer can’t detect fraud until the statement arrives
Cost of processing errors vastly outweighs the cost of
normal actions
Credit Cards

Authentication is online
Settlement is usually offline (batch processed at end of
day)

Credit Cards (ctd)
Consumer can’t detect fraud until the statement arrives
Cost of processing errors vastly outweighs the cost of
normal actions
Merchant carries the risk of fraud in card not present
transactions
Consumer liability is limited to $50
Far more merchant fraud than consumer fraud
Credit card companies assume liability for their merchants;
banks with cheques don’t
Transactions on the Internet
Transactions are fairly conventional card not present
transactions and follow the precedent set by phone
orders
Online nature provides instant verification
Biggest problems are authentication and confidentiality
General Model of Internet Transactions
Virtually all net payment systems consist of some variant of this
Everyone wants to be the middleman
Retail vs Business-to-business Commerce
Retail commerce
• Small dollar amounts
• Stranger-to-stranger transactions
Business-to-business commerce
• Large dollar amounts
• Based on trust relationships
• Banks play a direct role — they guarantee the transaction
– You can’t disintermediate the banks
Business-to-business commerce is where the money is
• For retail transactions, you can’t beat a credit card over SSL
Business customers will buy to reduce current costs

Payment Systems
Book entry systems
• Credit cards over SSL
• Encrypted credit cards (Cybercash)
• Virtual credit cards (First Virtual)
• e-cheques (Netcash)
• Mondex/SET
• Many, many others
Bearer certificate systems
• Scrip (Millicent)
• True digital cash (Digicash)
Netcash
e-cheques, http://www.teleport.com/~netcash

Cybercash
Encrypted credit cards, http://www.cybercash.com
Book Entry System Variations
Some systems (eg GlobeID) have the consumer (instead of
the merchant) do the messaging
Credit cards don’t handle small transactions very well.
Some options are
• Don’t handle micropayments at all
• Middleman has to act as a bank
• Use a betting protocol: 10 cent transaction = 1% chance of a
$10 transaction
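A minimal sketch of the betting-protocol option above: the expected charge equals the micropayment, but only a small fraction of transactions ever reach the (expensive) payment system. The quantum size is just an example value.

import random

def probabilistic_charge(amount_cents, quantum_cents=1000):
    if amount_cents >= quantum_cents:
        return amount_cents                      # large enough: charge normally
    if random.random() < amount_cents / quantum_cents:
        return quantum_cents                     # eg 1% chance of a $10 charge
    return 0                                     # usually: charge nothing

# A 10-cent purchase becomes a 1% chance of a $10 charge
print(probabilistic_charge(10))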

Digicash
Digicash issuing protocol
User                                    Bank (mint)
blind( note )             -->
                          <--           sign( blind( note ))
unblind( sign( blind( note )))
= sign( note )

User ends up with a note signed by the bank
• Note is not tied to user
• Implemented as an electronic purse which holds arbitrary
denominations
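A toy RSA blind-signature sketch of the issuing protocol above, with tiny insecure numbers and no padding or hashing, purely to show the blind/sign/unblind steps (requires Python 3.8+ for modular inverses via pow).

from math import gcd

n, e, d = 3233, 17, 2753           # toy RSA modulus and key pair (p=61, q=53)
note = 42                          # the "note" the user wants signed

r = 7                              # blinding factor, must be coprime to n
assert gcd(r, n) == 1
blinded   = (note * pow(r, e, n)) % n          # blind( note )
blind_sig = pow(blinded, d, n)                 # bank signs without seeing the note
sig       = (blind_sig * pow(r, -1, n)) % n    # unblind( sign( blind( note )))

assert sig == pow(note, d, n)                  # same as the bank signing note directly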
Digicash (ctd)
Using e-cash
• Send note to merchant
• Merchant redeems note at bank
• Double spending is avoided by having the user ID revealed if
the note is banked twice (ZKP)
– The fielded system just keeps a record of already spent
notes, which is easier

Digicash (ctd)
Problems
• Banks don’t like it (anyone can be a bank)
• Governments don’t like it
• Not used much (awkward/fluctuating licensing requirements)
– Licensed as if it were an RSA-style monopoly patent
By the time they figure it out, the patent will expire (2007)
• Digicash principals are great cryptographers, not so good
business managers
• Patents are currently in limbo after Digicash Inc. collapsed
Making e-cash work
Best e-cash business model is to earn seignorage by selling
it
• Bank earns interest on real cash corresponding to digital bits
held by consumer
• US Federal Reserve earns $20B/year in interest on outstanding
dollar bills
• Phone cards and gift vouchers are a small-scale example of this
Consumers may demand interest on e-cash
e-cash is useful for small transactions (micropayments)
which other systems can’t handle
• But what do you buy over the net for 10 cents?

echecks
Background for a US audience
• Non-US automated payment processing is relatively
sophisticated
• Automatic payments (rent, utilities, wages) are handled via
direct funds transfer
• Funds are moved electronically from one account to another on
the same day
– Checks are used rarely
– Electronic check proposals are met with bafflement
echecks (ctd)
Background for a non-US audience
• US cheque and payment processing is very primitive
• “Automatic payment” frequently means the payers bank writes
a cheque and sends it to the payee
• Payments are batched and held until a sufficient number have
accumulated
– The fact that funds leave the payers account on a given day
doesn’t guarantee timely arrival in the payees account
• Cheques are used extensively
• Electronic cheques would be a significant advance on the
current situation
Electronic Cheque Design Requirements
Cheques can involve
• One or more signers
• One or more endorsers
• Invoice(s) to be paid
• Deposit to account or cash
Electronic version must be flexible enough to be able to
handle all of these
e-cheque Design
e-cheques are defined using FSML (Financial Services
Markup Language)
• FSML allows addition and deletion of document blocks,
signing, co-signing, endorsing, etc.
Signatures are accompanied by bank-issued certificates
• Tie the signers key to a bank account
• Different account is used for e-cheques to protect standard
cheque account against fraud

e-cheque Design (ctd)


Private key is held in smart card (electronic cheque book)
• Card numbers each signature/cheque
– Attempts to re-use cheques will be detected
• Card keeps record of cheques signed
– Provides some degree of protection against trojan horse
software
• Card provides some degree of non-repudiation
• Use of software implementations rejected because of security
concerns
– “If hackers acquire signing keys and perpetuate fraud,
payees confidence in the system would be destroyed”
• Use of PDA’s as e-chequebooks was also considered
e-cheque Processing

Settlement is handled via existing standards


• ANSI X9.46 with FSML representation instead of cheque
image
• ANSI X9.37 cash letter contained in X9.46 encapsulation

e-cheque Processing (ctd)


Cheque signature may also bind and invoice to avoid an
attacker substituting a different invoice
Mechanisms can be extended to provide certified cheques
• Payers bank
– Verifies details of cheque
– Places hold on payers funds
– Countersigns cheque
e-cheque design is a good example of carefully designing a
protocol to meet certain security requirements
• Work around shortcomings in existing laws
• Work around shortcomings in existing security technology
e-cheque Format
Tag Field
<check> Start tag of cheque block
<checkdata> Start tag of elements logged in electronic
chequebook
<checknum> Cheque number
<dateissued> Date cheque was issued
<datevalid> Date cheque is payable
<amount> Amount of cheque (+optional currency)
<payto> Payee (+optional bank, account, etc)
</checkdata> End of elements logged
<checkbook> ID of electronic chequebook
<restrictions> Optional “duration”, “deposit only”, etc
<legalnotice> “Subject to standard cheque law”
</check> End of cheque block

e-cheque Format (ctd)


Tag Field
<signature> Start tag of signature block
<blkname> Name of this block
<sigdata> Start tag of signed data
<blockref> Name of next block
<hash alg=xxx> Hash of next block
<nonce> Random string to make blocks
unpredictable
<certissuer> Optional identity of issuing certificate
<algorithm> Hashing and signing algorithm used
</sigdata> End of signed data
<sig> Signature computed by electronic
chequebook
</signature> End of signature block
SET
Secure Electronic Transactions
Based on two earlier protocols, STT (VISA/Microsoft) and
SEPP (MasterCard/IBM)
STT
• One component of a larger architecture
• Provision for strong encryption
• Completely new system
• More carefully thought out from a security standpoint

SET (ctd)
SEPP
• General architectural design rather than a precise specification
• Lowest-common-denominator crypto
• Fits in with existing infrastructure
• More politically and commercially astute
SET (ctd)

Acquirer gateway is an Internet interface to the established


credit card authorisation system and cardholder/merchant
banks

SET Features
Card details are never disclosed to merchant
• Encrypted purchase instruction (PI) can only be decrypted by
the acquirer
– In practice the acquirer usually reveals the card details to
the merchant after approval, for purchase tracking purposes
• PI is cryptographically tied to the order instruction (OI)
processed by the merchant
• Clients digital signature protects the merchant from client
repudiation
Authorisation request includes the consumer PI and
merchant equivalent of the PI
• Acquirer can confirm that the cardholder and merchant agree
on the purchase details
SET Features (ctd)
Capture can take place later (eg when the goods are
shipped)
• User can perform an inquiry transaction to check the status
The whole SET protocol is vastly more complex than this

SET Certification

SET root CA and brand CA’s are rarely utilised and have
very high security
SET Certification (ctd)
SET includes a complete PKI using customised X.509
• Online certificate requests
• Certificate distribution
• Certificate revocation
SET certificates are implemented as an X.509 profile with
SET-specific extensions

SET Certification (ctd)


Card-based infrastructure makes certificate management
(relatively) easy
• Users are identified by their cards
• Certificates are revoked by cancelling the card
• Because everything is done online, “certificate management” is
easy
• Acquirer gateways have long-term signature keys and short-
term encryption keys
– Encryption keys can be revoked by letting them expire
SET in Practice: Advantages
SET will enable e-commerce, eliminate world hunger, and
close the ozone hole
• SET prevents fraud in card not present transactions
SET eliminates the need for a middleman (the banks love
this)
SET leverages the existing infrastructure

SET in Practice: Problems


SET is the most complex (published) crypto protocol ever
designed
• > 3000 lines of ASN.1 specification
• 28-stage (!) transaction process
– “The SET reference implementation will be available by
mid 1996”
– “SET 1.0 " " " mid 1997”
– “SET 2.0 " " " mid 1998”
• Interoperability across different implementations is a problem
SET is awfully slow (6 RSA operations per transaction)
• Great for crypto hardware accelerator manufacturers
• For comparison, VISA interchange gateway currently has to
handle 2000 pure DES-based transactions/second
SET in Practice: Problems (ctd)
Although SET was specifically designed for exportability,
you still can’t export the reference implementation
SET requires
• Custom wallet software on the cardholders PC
• Custom merchant software
• Special transaction processing software (and hardware) at the
acquirer gateway.
Electronic Commerce

SET is the answer, but you have to phrase the question very carefully

Electronic Payments
An electronic payment system needs to be
• Widely recognised
• Hard to fake
• Hold its value
• Convenient to use
• Anonymous/not anonymous
Convenience is the most important point
Cheques

Merchant doesn’t know whether the cheque is valid until it’s cleared

Cheques (ctd)
Consumer can’t detect fraud until the statement arrives
Cost of processing errors vastly outweighs the cost of
normal actions
Credit Cards

Authentication is online
Settlement is usually offline (batch processed at end of
day)

Credit Cards (ctd)


Consumer can’t detect fraud until the statement arrives
Cost of processing errors vastly outweighs the cost of
normal actions
Merchant carries the risk of fraud in card not present
transactions
Consumer liability is limited to $50
Far more merchant fraud than consumer fraud
Credit card companies assume liability for their merchants;
banks with cheques don’t
Transactions on the Internet
Transactions are fairly conventional card not present
transactions and follow the precedent set by phone
orders
Online nature provides instant verification
Biggest problems are authentication and confidentiality

General Model of Internet Transactions

Virtually all net payment systems consist of some variant of this
Everyone wants to be the middleman
Retail vs Business-to-business Commerce
Retail commerce
• Small dollar amounts
• Stranger-to-stranger transactions
Business-to-business commerce
• Large dollar amounts
• Based on trust relationships
• Banks play a direct role — they guarantee the transaction
– You can’t disintermediate the banks
Business-to-business commerce is where the money is
• For retail transactions, you can’t beat a credit card over SSL
Business customers will buy to reduce current costs

Payment Systems
Book entry systems
• Credit cards over SSL
• Encrypted credit cards (Cybercash)
• Virtual credit cards (First Virtual)
• e-cheques (Netcash)
• Mondex/SET
• Many, many others
Bearer certificate systems
• Scrip (Millicent)
• True digital cash (Digicash)
Netcash
e-cheques, http://www.teleport.com/~netcash

Cybercash
Encrypted credit cards, http://www.cybercash.com
Book Entry System Variations
Some systems (eg GlobeID) have the consumer (instead of
the merchant) do the messaging
Credit cards don’t handle small transactions very well.
Some options are
• Don’t handle micropayments at all
• Middleman has to act as a bank
• Use a betting protocol: 10 cent transaction = 1% chance of a
$10 transaction
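A toy sketch of the last option (the 10-cent/1%/$10 numbers follow the example above; rand() is only a placeholder, since a real scheme needs a fair coin flip that both parties can verify):

/* Toy betting-protocol micropayment: a 10-cent purchase is settled as a
   1-in-100 chance of a $10.00 charge, so the expected cost is still 10
   cents but only ~1% of purchases generate a real card transaction */
#include <stdlib.h>

/* Returns the amount actually charged (in cents) for one 10-cent purchase */
int micropayment_charge( void )
    {
    return ( rand() % 100 == 0 ) ? 1000 : 0;
    }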

Digicash
Digicash issuing protocol
User                                    Bank (mint)
blind( note )                →
                             ←          sign( blind( note ))
unblind( sign( blind( note ))) = sign( note )

User ends up with a note signed by the bank


• Note is not tied to user
• Implemented as an electronic purse which holds arbitrary
denominations
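As a rough illustration of the issuing protocol above (assuming RSA-style blinding with bank key ( e, d, n ); the real note format and padding are omitted):

blind( note )         = note · r^e mod n        (r is a random blinding factor known only to the user)
sign( blind( note ) ) = ( note · r^e )^d = note^d · r mod n
unblind( note^d · r ) = note^d · r · r^-1 = note^d mod n = sign( note )

The bank therefore produces a valid signature on the note without ever seeing it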
Digicash (ctd)
Using e-cash
• Send note to merchant
• Merchant redeems note at bank
• Double spending is avoided by having the user ID revealed if
the note is banked twice (ZKP)
– The fielded system just keeps a record of already spent
notes, which is easier

Digicash (ctd)
Problems
• Banks don’t like it (anyone can be a bank)
• Governments don’t like it
• Not used much (awkward/fluctuating licensing requirements)
– Licensed as if it were an RSA-style monopoly patent
By the time they figure it out, the patent will expire (2007)
• Digicash principals are great cryptographers, not so good
business managers
• Patents are currently in limbo after Digicash Inc. collapsed
Making e-cash work
Best e-cash business model is to earn seignorage by selling
it
• Bank earns interest on real cash corresponding to digital bits
held by consumer
• US Federal Reserve earns $20B/year in interest on outstanding
dollar bills
• Phone cards and gift vouchers are a small-scale example of this
Consumers may demand interest on e-cash
e-cash is useful for small transactions (micropayments)
which other systems can’t handle
• But what do you buy over the net for 10 cents?

echecks
Background for a US audience
• Non-US automated payment processing is relatively
sophisticated
• Automatic payments (rent, utilities, wages) are handled via
direct funds transfer
• Funds are moved electronically from one account to another on
the same day
– Checks are used rarely
– Electronic check proposals are met with bafflement
echecks (ctd)
Background for a non-US audience
• US cheque and payment processing is very primitive
• “Automatic payment” frequently means the payer’s bank writes
a cheque and sends it to the payee
• Payments are batched and held until a sufficient number have
accumulated
– The fact that funds leave the payer’s account on a given day
doesn’t guarantee timely arrival in the payee’s account
• Cheques are used extensively
• Electronic cheques would be a significant advance on the
current situation

Electronic Cheque Design Requirements


Cheques can involve
• One or more signers
• One or more endorsers
• Invoice(s) to be paid
• Deposit to account or cash
Electronic version must be flexible enough to be able to
handle all of these
e-cheque Design
e-cheques are defined using FSML (Financial Services
Markup Language)
• FSML allows addition and deletion of document blocks,
signing, co-signing, endorsing, etc.
Signatures are accompanied by bank-issued certificates
• Tie the signer’s key to a bank account
• Different account is used for e-cheques to protect standard
cheque account against fraud

e-cheque Design (ctd)


Private key is held in smart card (electronic cheque book)
• Card numbers each signature/cheque
– Attempts to re-use cheques will be detected
• Card keeps record of cheques signed
– Provides some degree of protection against trojan horse
software
• Card provides some degree of non-repudiation
• Use of software implementations rejected because of security
concerns
– “If hackers acquire signing keys and perpetuate fraud,
payees confidence in the system would be destroyed”
• Use of PDA’s as e-chequebooks was also considered
e-cheque Processing

Settlement is handled via existing standards


• ANSI X9.46 with FSML representation instead of cheque
image
• ANSI X9.37 cash letter contained in X9.46 encapsulation

e-cheque Processing (ctd)


Cheque signature may also bind an invoice to avoid an
attacker substituting a different invoice
Mechanisms can be extended to provide certified cheques
• Payer’s bank
– Verifies details of cheque
– Places hold on payer’s funds
– Countersigns cheque
e-cheque design is a good example of carefully designing a
protocol to meet certain security requirements
• Work around shortcomings in existing laws
• Work around shortcomings in existing security technology
e-cheque Format
Tag Field
<check> Start tag of cheque block
<checkdata> Start tag of elements logged in electronic
chequebook
<checknum> Cheque number
<dateissued> Date cheque was issued
<datevalid> Date cheque is payable
<amount> Amount of cheque (+optional currency)
<payto> Payee (+optional bank, account, etc)
</checkdata> End of elements logged
<checkbook> ID of electronic chequebook
<restrictions> Optional “duration”, “deposit only”, etc
<legalnotice> “Subject to standard cheque law”
</check> End of cheque block

e-cheque Format (ctd)


Tag Field
<signature> Start tag of signature block
<blkname> Name of this block
<sigdata> Start tag of signed data
<blockref> Name of next block
<hash alg=xxx> Hash of next block
<nonce> Random string to make blocks
unpredictable
<certissuer> Optional identity of issuing certificate
<algorithm> Hashing and signing algorithm used
</sigdata> End of signed data
<sig> Signature computed by electronic
chequebook
</signature> End of signature block
SET
Secure Electronic Transactions
Based on two earlier protocols, STT (VISA/Microsoft) and
SEPP (MasterCard/IBM)
STT
• One component of a larger architecture
• Provision for strong encryption
• Completely new system
• More carefully thought out from a security standpoint

SET (ctd)
SEPP
• General architectural design rather than a precise specification
• Lowest-common-denominator crypto
• Fits in with existing infrastructure
• More politically and commercially astute
SET (ctd)

Acquirer gateway is an Internet interface to the established credit card authorisation system and cardholder/merchant banks

SET Features
Card details are never disclosed to merchant
• Encrypted purchase instruction (PI) can only be decrypted by
the acquirer
– In practice the acquirer usually reveals the card details to
the merchant after approval, for purchase tracking purposes
• PI is cryptographically tied to the order instruction (OI)
processed by the merchant
• Client’s digital signature protects the merchant from client
repudiation
Authorisation request includes the consumer PI and
merchant equivalent of the PI
• Acquirer can confirm that the cardholder and merchant agree
on the purchase details
SET Features (ctd)
Capture can take place later (eg when the goods are
shipped)
• User can perform an inquiry transaction to check the status
The whole SET protocol is vastly more complex than this

SET Certification

SET root CA and brand CA’s are rarely utilised and have
very high security
SET Certification (ctd)
SET includes a complete PKI using customised X.509
• Online certificate requests
• Certificate distribution
• Certificate revocation
SET certificates are implemented as an X.509 profile with
SET-specific extensions

SET Certification (ctd)


Card-based infrastructure makes certificate management
(relatively) easy
• Users are identified by their cards
• Certificates are revoked by cancelling the card
• Because everything is done online, “certificate management” is
easy
• Acquirer gateways have long-term signature keys and short-
term encryption keys
– Encryption keys can be revoked by letting them expire
SET in Practice: Advantages
SET will enable e-commerce, eliminate world hunger, and
close the ozone hole
• SET prevents fraud in card not present transactions
SET eliminates the need for a middleman (the banks love
this)
SET leverages the existing infrastructure

SET in Practice: Problems


SET is the most complex (published) crypto protocol ever
designed
• > 3000 lines of ASN.1 specification
• 28-stage (!) transaction process
– “The SET reference implementation will be available by
mid 1996”
– “SET 1.0 will be available by mid 1997”
– “SET 2.0 will be available by mid 1998”
• Interoperability across different implementations is a problem
SET is awfully slow (6 RSA operations per transaction)
• Great for crypto hardware accelerator manufacturers
• For comparison, VISA interchange gateway currently has to
handle 2000 pure DES-based transactions/second
SET in Practice: Problems (ctd)
Although SET was specifically designed for exportability,
you still can’t export the reference implementation
SET requires
• Custom wallet software on the cardholder’s PC
• Custom merchant software
• Special transaction processing software (and hardware) at the
acquirer gateway.
Practical Issues

Of course my password is the same as my pet’s name


My macaw’s name was Q47pY!3 and I change it every 90
days
— Nick Simicich

Practical Issues
Strong, effectively unbreakable crypto is universally
available (despite US government efforts)
• Don’t attack the crypto, attack the infrastructure in which it’s
used
• " " " " implementation
• " " " " users
Many infrastructure/implementation details are treated as
black boxes by developers
• Storage protection/sanitisation
• Long-term secret storage
• Key generation
Why Security is Harder than it Looks
All software has bugs
Under normal usage conditions, a 99.99% bug-free
program will rarely cause problems

A 99.99% security-bug-free program can be exploited by ensuring the 0.01% instance is always encountered
This converts the 0.01% failure to 100% failure

Why Security is Harder than it Looks (ctd)


Customers have come to expect buggy software
• Correctness is not a selling point
• Expensive and time-consuming software validation and
verification is hard to justify
Solution: Confine security functionality into a small subset
of functions, the trusted computing base (TCB)
• In theory the TCB is small and relatively easy to analyse
• In practice vendors end up stuffing everything into the TCB,
making it a UTCB
• Consumers buy the product anyway (see above)
Buffer Overflows
In the last year or two these have appeared in
splitvt, syslog, mount/umount, sendmail, lpr, bind, gethostbyname(), modstat, cron, login,
sendmail again, the query CGI script, newgrp, AutoSofts RTS inventory control system, host,
talkd, getopt(), sendmail yet again, FreeBSD’s crt0.c, WebSite 1.1, rlogin, term, ffbconfig,
libX11, passwd/yppasswd/nispasswd, imapd, ipop3d, SuperProbe, lpd, xterm, eject, lpd again,
host, mount, the NLS library, xlock, libXt and further X11R6 libraries, talkd, fdformat, eject,
elm, cxterm, ps, fbconfig, metamail, dtterm, df, an entire range of SGI programs, ps again,
chkey, libX11, suidperl, libXt again, lquerylv, getopt() again, dtaction, at, libDtSvc, eeprom,
lpr yet again, smbmount, xlock yet again, MH-6.83, NIS+, ordist, xlock again, ps again, bash,
rdist, login/scheme, libX11 again, sendmail for Windows NT, wm, wwwcount, tgetent(), xdat,
termcap, portmir, writesrv, rcp, opengroup, telnetd, rlogin, MSIE, eject, df, statd, at again,
rlogin again, rsh, ping, traceroute, Cisco 7xx routers, xscreensaver, passwd, deliver, cidentd,
Xserver, the Yapp conferencing server, multiple problems in the Windows95/NT NTFTP
client, the Windows War and Serv-U FTP daemon, the Linux dynamic linker, filter (part of
elm-2.4), the IMail POP3 server for NT, pset, rpc.nisd, Samba server, ufsrestore, DCE secd,
pine, dslip, Real Player, SLMail, socks5, CSM Proxy, imapd (again), Outlook Express,
Netscape Mail, mutt, MSIE, Lotus Notes, MSIE again, libauth, login, iwsh, permissions,
unfsd, Minicom, nslookup, zpop, dig, WebCam32, smbclient, compress, elvis, lha, bash,
jidentd, Tooltalk, ttdbserver, dbadmin, zgv, mountd, pcnfs, Novell Groupwise, mscreen,
xterm, Xaw library, Cisco IOS, mutt again, ospf_monitor, sdtcm_convert, Netscape (all
versions), mpg123, Xprt, klogd, catdoc, junkbuster, SerialPOP, and rdist

Buffer Overflows (ctd)


Typical case: Long URLs
• Data at the end of the URL overwrites the program counter/return address
• When the subroutine returns, it jumps to the attacker’s code
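A minimal sketch of how this happens in C (a made-up request handler, not taken from any of the programs listed above):

/* Hypothetical CGI-style handler showing the classic stack overflow:
   strcpy() copies the attacker-controlled URL into a 64-byte stack buffer
   with no length check, so a long URL runs past the end of the buffer and
   overwrites the saved return address */
#include <string.h>

void handle_request( const char *url )
    {
    char buf[ 64 ];

    strcpy( buf, url );     /* No bounds check: overflows if the URL is >= 64 bytes */
    /* ... parse buf ... */
    }

/* A bounded copy avoids the overflow (though not necessarily other problems) */
void handle_request_fixed( const char *url )
    {
    char buf[ 64 ];

    strncpy( buf, url, sizeof( buf ) - 1 );
    buf[ sizeof( buf ) - 1 ] = '\0';
    }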
Fixing Overflow Problems
More careful programming
• Isolate security functionality into carefully-checked code
Make the stack non-executable
Compiler-based solutions
• Build bounds checking into the code (very slow)
• Build stack checking into the code (slight slowdown)
• Rearrange stack variables (no slowdown)

Storage Protection
Sensitive data is routinely stored in RAM, but
• RAM can be swapped to disk at any moment
– Users of one commercial product found multiple copies of
their encryption password in the Windows swap file
– “Suspend to disk” feature in laptops is particularly
troublesome
• Other processes may be able to read it from memory
• Data can be recovered from RAM after power is removed
Protecting Memory
Locking sensitive data into memory isn’t easy
• Unix: mlock() usable by superuser only
• Win16: No security
• Win95/98: VirtualLock() does nothing
• WinNT: VirtualLock() doesn’t work as advertised (data is
still swapped)
• Macintosh: HoldMemory()
Scan memory for data:
VirtualQueryEx()
VirtualUnprotectEx()
ReadProcessMemory()
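A minimal Unix sketch of the locking approach (as noted above, mlock() typically requires superuser or similar privilege, and locking is no help for data that has already been swapped out):

/* Keep a key from being paged to disk while it's in use, then wipe it.
   Error handling is minimal; mlock() may simply fail without privilege */
#include <string.h>
#include <sys/mman.h>

int with_locked_key( unsigned char *key, size_t keyLen )
    {
    if( mlock( key, keyLen ) != 0 )
        return -1;              /* Couldn't lock, don't proceed */

    /* ... use the key ... */

    memset( key, 0, keyLen );   /* Wipe before unlocking/freeing */
    return munlock( key, keyLen );
    }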

Protecting Memory (ctd)


Create DIY swapfile using memory-mapped files
• Memory is swapped to a known file rather than system
swapfile
• File is wiped after use
Problems:
• Truly erasing disk data is impossible
• Data isn’t wiped on system crash/power loss
Protecting Memory (ctd)
Force memory to remain in use at all times
• Background thread touches memory periodically
Allocate non-pageable memory
• Requires a kernel driver
• Mapping memory from kernel to user address space is difficult

Storage Sanitisation
Problems in erasing disk data
• Defect management systems move/remap data, making it
inaccessible through normal means
• Journaling filesystems retain older data over long periods of
time
• Online compression schemes compress fixed overwrite
patterns to nothing, leaving the target data intact
• Disk cacheing will discard overwrites if the file is unlinked
immediately afterwards (Win95/98, WinNT)
– Many Windows file-wipers are caught by this
Recovering Data
One or two passes can be easily recovered by “error
cancelling”
• Read actual (digital) data
• Read raw analog signal
• Subtract expected signal due to data from actual analog signal
• Result is previous (overwritten) data
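Loosely (assuming simple linear superposition, which is not how real read channels behave in detail), the analog read-back is r ≈ s( new data ) + ε·s( old data ), so subtracting the ideal signal for the known current contents leaves a small scaled copy of the overwritten data which can be amplified and decoded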
US government standard (DoD 5200.28) with fixed
patterns (all 0’s, all 1’s, alternating 0’s and 1’s) is
particularly bad
Design overwrite patterns to match HD encoding methods

Advanced Data Recovery


Ferrofluid + optical microscopes
• Defeated by modern high-density storage systems
Scanning probe microscopes overcame this problem
• Oscillating probe is scanned across the surface of the object
• Change in probe movement measured by laser interferometer
Can be built for a few thousand dollars
Commercial ones specifically set up for disk platter
analysis are available
Advanced Data Recovery (ctd)
Magnetic force microscope (MFM)

1. Read disk topography


2. Read magnetic force (adjusted for topography)

Advanced Data Recovery (ctd)


MFM’s can be used as expensive read channels, but can do
far more
• Erase bands (partially-overwritten data at the edges) retain
previous track images
• Overwriting one set of data with another causes track width
modulation
• Erased/degaussed drives can often still be read with an MFM
– Modern high-density media can’t be effectively degaussed
with commercial tools
Advanced Data Recovery (ctd)
Recommendations
• Use the smallest, highest-density drives possible
• If data is sensitive, destroy the media
– Where does your returned-under-warranty drive end up?
– For file servers, business data, always destroy the media
(there’s always something sensitive on there)

Recovering Memory Data


Electrical stress causes ion migration in DRAM cells
Data can be recovered using special (undocumented) test
modes which measure changes in cell thresholds
• At room temperature, decay can take minutes or hours
• At cryogenic temperatures, decay can take weeks? months?
A quick overwrite doesn’t help much
Solution is to only store data for short periods
• Relocate data periodically
• Toggle bits in memory
Random Number Generation
Key generation requires large quantities of unpredictable
random numbers
• Very difficult to produce on a PC
• Most behaviour is predictable
• User input can be unpredictable, but isn’t available on a
standalone server
Many implementations leave it to application developers
(who invariably get it wrong)

Bad RNG’s
Netscape
a = mixbits( time.tv_usec );
b = mixbits( getpid() + time.tv_sec + ( getppid() << 12 ) );
seed = MD5( a, b );
nonce = MD5( seed++ );
key = MD5( seed++ );

Kerberos V4
srandom( time.tv_usec ^ time.tv_sec ^ getpid() ^ gethostid() ^ counter++ );
key = random();
Bad RNG’s (ctd)
MIT_MAGIC_COOKIE
key = rand() % 256;

SESAME
key = rand();

Types of Generator
Generator consists of two parts
• Polling mechanism to gather random data
• Pseudo-random number generator (PRNG) to “stretch” the
output
Physical source: Various hardware generators, Hotbits (radioactive decay), Lavarand
Physical source + postprocessing: SG100
Multi-source polling: SKIP, cryptlib
Single-source polling: PGP 2.x, PGP 5.x, /dev/random
Secret nonce + PRNG: Applied Cryptography, BSAFE
Secret fixed value + PRNG: ANSI X9.17
Known value + PRNG: Netscape, Kerberos V4, Sesame, and many more
Example: Unix /dev/random

Example: ANSI X9.17

Relies on strength of triple DES encryption and supplied encryption key
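The generator step itself is compact; a sketch (des3_encrypt() and get_datetime_block() are assumed helpers, with the per-instance key K and the secret seed V held by the generator):

/* One step of the ANSI X9.17 generator: T = E_K( datetime ),
   output R = E_K( T ^ V ), new seed V = E_K( T ^ R ) */
typedef unsigned char Block[ 8 ];

extern void des3_encrypt( const unsigned char *key, const Block in, Block out );   /* assumed */
extern void get_datetime_block( Block out );                                       /* assumed */

void x917_next( const unsigned char *key, Block V, Block R )
    {
    Block T, tmp;
    int i;

    get_datetime_block( tmp );
    des3_encrypt( key, tmp, T );                        /* T = E_K( datetime ) */

    for( i = 0; i < 8; i++ ) tmp[ i ] = T[ i ] ^ V[ i ];
    des3_encrypt( key, tmp, R );                        /* R = E_K( T ^ V ), the output */

    for( i = 0; i < 8; i++ ) tmp[ i ] = T[ i ] ^ R[ i ];
    des3_encrypt( key, tmp, V );                        /* V = E_K( T ^ R ), the new seed */
    }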
Randomness Sources
• Process and thread information
• Mouse and keyboard activity
• Memory and disk usage statistics
• System timers
• Network statistics
• GUI-related information
Run periodic background polls of sources
Try and estimate the randomness available, if insufficient
• Perform further polling
• Inform the user
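A minimal sketch of what one background poll might look like on Unix (hash_update() is an assumed helper that feeds data into the generator’s mixing function; a real generator polls far more sources and tries to estimate how much entropy it actually collected):

/* Mix some cheap, partly unpredictable system state into the pool */
#include <stddef.h>
#include <sys/resource.h>
#include <sys/time.h>
#include <unistd.h>

extern void hash_update( const void *data, size_t length );    /* assumed */

void poll_randomness_sources( void )
    {
    struct timeval tv;
    struct rusage ru;
    pid_t pid = getpid();

    gettimeofday( &tv, NULL );          /* High-resolution system timer */
    getrusage( RUSAGE_SELF, &ru );      /* Memory/CPU usage statistics */

    hash_update( &tv, sizeof( tv ) );
    hash_update( &ru, sizeof( ru ) );
    hash_update( &pid, sizeof( pid ) );
    }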

Effectiveness of the Randomness Source


Effects of configuration
• Minimal PC hardware (one HD, one CD) produces half the
randomness of maximum PC hardware (multiple HD’s, CD,
network card, SCSI HD and CD)
Effects of system load and usage
• Statistics change little over time on an unloaded machine
• A reboot drastically affects the system state
– Reboot the machine after generating a high-value key
TEMPEST
Sometimes claimed to stand for Transient Electromagnetic
Pulse Emission Standard
Known since the 1950’s, but first publicised by van Eck in
1985
• Provided details on remote viewing of computer monitors
• Required about $15 worth of parts (for sync recovery)
• The spooks were not happy

TEMPEST Principles
Fast-rise pulses lead to harmonics radiated from
semiconductor junctions
• Used to detect bugs
– Flood the room with microwaves
– Watch for radiated responses
Anything which carries a current acts as an antenna
TEMPEST monitoring gear receives and interprets this
information
TEMPEST Sources
Computer monitor/laptop screen
• Generally radiates huge amounts of signal (range of hundreds
of metres)
• Most signal is radiated to the sides, little to the front and back
• Requires external horizontal/vertical sync insertion, since sync
frequencies are too low to be radiated
• Individual monitors can be picked out even when other similar
monitors are in use
• Jamming is often ineffective for protection
– Eavesdroppers can still zero in on a particular monitor

TEMPEST Sources (ctd)


Keyboard
• Some keyboards produce distinct RF signatures for each key
pressed
• Active monitoring
– Beam RF energy at the keyboard cable
– Reflected signal is modulated by absence/presence of
electrical current
Ethernet
• UTP can be intercepted over some distance
TEMPEST Sources (ctd)
Printer and serial cables
Leakage into power lines
Coupling into power lines, phone lines, metal pipes
• Further radiation from there
Surface waves on coax lines

TEMPEST Protection
Extremely difficult to protect against
Stopping it entirely
• Extreme amounts of shielding on all equipment
• Run the equipment inside a Faraday cage
Stopping it partially
• FCC Class B computers and equipment
• RF filters on power lines, phone lines
• Shielded cables
• Ferrite toroids around cables to attenuate surface waves
• Radio hams have information on safely operating computers
near sensitive comms gear
Use a portable radio as a simple radiation tester
Snake Oil Cryptography
Named after magic cure-all elixirs sold by travelling
medicine salesmen
Many crypto products are sold using similar techniques
• The crypto has similar effectiveness
• This is so common that there’s a special term, “snake oil
crypto”, to describe it

Snake Oil Warning Signs


Security through obscurity
• “Trust me, I know what I'm doing”
– They usually don’t
• Most security through obscurity schemes are eventually broken
– Once someone finds out what your secret security system is,
it’s no longer a secret and no longer secure
– It’s very hard to keep a secret on the net
Proprietary algorithms and revolutionary breakthroughs
• “I know more about algorithm design than the entire world’s
cryptographers”
• Common snake oil warning signs are use of cellular automata,
neural nets, genetic algorithms, and chaos theory
• See “security through obscurity”
Snake Oil Warning Signs (ctd)
Unbreakability
• Usually claimed by equating the product to a one-time-pad
• Product isn’t a one-time-pad, and therefore not unbreakable
“Military-grade crypto”
• Completely meaningless term (cf “military-grade spreadsheet”)
– Military tends to use hardware, civilians use software
– Prefer shift-register based stream ciphers, everyone else
uses block ciphers
– Keys are generally symmetric and centrally managed,
everyone else uses distributed PKC keys
• Products should therefore be advertised as “nothing like
military-grade crypto”

Snake Oil Warning Signs (ctd)


Technobabble
• Use of terms unknown to anyone else in the industry
Used by xyz
• Every product, no matter how bad, will gain at least one big-
name reference customer
Exportable from the US
• Except for special-purpose cases (eg SGC), the US government
will not allow the export of anything which provides real
security
• If it’s freely exportable, it’s broken
Snake Oil Warning Signs (ctd)
Security challenges
• Generally set up to make it impossible to succeed

• These things always get the media’s attention, especially if the reward is huge (chance of press coverage = 20% per zero after the first digit)

Snake Oil Warning Signs (ctd)


Would you buy this product?
• “Our unbreakable military-grade bi-gaussian cryptography,
using a proprietary one-time-pad algorithm, has recently been
adopted by a Fortune 500 customer and is available for use
inside and outside the US”
Badly marketed good crypto is indistinguishable from
snake oil
• If you’re selling a crypto product, be careful how your
marketing people handle it
– If left to their own devices, they’ll probably sell it as snake
oil
Snake Oil in the Media
Magazine reviews are a poor gauge of software security
WinXFiles (trivially broken file encryption)
• PC Answers: Listed in “10 proven security programs”
• Windows News: Listed in “75 best Windows utilities”
• FileMine: Rated as a Featured Jewel
• Shareware Junkies: 5 stars, “a must have for anyone sharing a
computer with files they want to keep private”
• PC Format: “Unbeatable and excellent file encryption”
• TUCOWS: Rated 4 cows
• Ziff-Davis Interactive: 5 stars, “keeps files and data on your PC
as safe as if they were under lock and key”
more

Snake Oil in the Media (ctd)


continued
• The Windows95 Application List: “an excellent application for
protecting your personal files”
• RocketDownload: Four smilies
• “Simply the Best” site award
One major publication once rated a collection of
encryption programs by how good the user interface
looked
Snake Oil Case Study
Meganet Virtual Matrix Encryption
• “A new kind of encryption and a new algorithm, different from
any existing method”
• “By copying the data to a random built-in Virtual Matrix, a
system of pointers is being created. ...”
• “The worlds first and only unbreakable crypto”
• “We challenged the top 250 companies in the US to break our
products. None succeeded”
– They don’t even know Meganet exists
• “55,000 people tried to break our product”
– 55,000 visited their web page
• “Working on standardising VME with the different standards
committees”

Snake Oil Case Study (ctd)


Challenged large companies to break their unbreakable
crypto
• Enumerate each company in the PR to ensure that their name is
associated with large, publicly held stocks
Used accounts at organisations like BusinessWire and
PRNewswire to inject bogus press releases into
newswires
• Run anything at $500 for 400 words
• Claimed IBM was so impressed with their product that they
were recommending it for the AES
– IBM had never heard of them
Snake Oil (ctd)
Big-name companies sell snake oil too
Tools exist to recover passwords for
• Adobe Acrobat/PDF
• ACE archives
• ACIUS 4th Dimension
• Arj archives
• Clarion
• Claris Filemaker Pro
• CompuServe WinCim
• dBASE
• Diet compressed files
• Eudora
• ICQ
• Lotus 1-2-3
Continues

Snake Oil (ctd)


Continued
• Lotus Ami-Pro
• Lotus Organiser
• Lotus Symphony
• Lotus WordPro
• LZEXE compressed files
• MS Access
• MS Backup
• MS Excel
• MS Mail
• MS Money
• MS Outlook
• MS Project
• MS Scheduler
Continues
Snake Oil (ctd)
Continued
• MS Word
• MYOB
• Norton Secret Stuff
• Paradox
• Pegasus Mail
• Pklite compressed files
• Pkzip archives
• Q&A Database
• Quattro Pro
• QuickBooks
• Quicken
• Stacker
• Symantec Act
Continues

Snake Oil (ctd)


Continued
• Trumpet Winsock
• VBA projects
• WinCrypt
• Windows 3.1/95/98 passwords
• Windows Dial-up Networking (DUN)
• Windows NT/2000 passwords
• WinXFiles
• WordPerfect
• WS FTP
... and many, many more
Selling Security
Security doesn’t sell well to management
Many security systems are designed to show due diligence
or to shift blame
• Crypto/security evidence from these systems is very easy to
challenge in court
You get no credit if it works, and all the blame if it doesn’t
To ensure good security, insurance firms should tie
premiums to security measures
• Unfortunately, there’s no way to financially measure the
effectiveness of a security system

Selling Security to Management


Regulatory issues
• Liability for negligence (poor security/weak crypto)
• Shareholders could sue the company if share price drops due to
security breach
• US companies spend more on security due to litigation threats
Privacy/data protection requirements
Media stories of hacker/criminal attacks on systems
The best security customers
• Have just been publicly embarrassed
• Are facing an audit
Miscellaneous Topics

Buy a rifle, encrypt your data, and wait for the revolution

Smart Cards
Invented in the early 1970’s
Technology became viable in early 1980’s
Major use is prepaid telephone cards (hundreds of millions)
• Use a one-way (down) counter to store card balance
Other uses
• Student ID/library cards
• Patient data
• Micropayments (bus fares, photocopying, snack food)
Memory Cards

Usually based on I2C (serial memory) bus


Typical capacity: 256 bytes
EEPROM capabilities
• Nonvolatile storage
• 10,000 write/erase cycles
• 10ms to write a cell or group of cells
Cost: $5

Microprocessor Cards

ROM/RAM contains card operating system and working storage
EEPROM used for data storage
Microprocessor Cards (ctd)
Typical specifications
• 8-bit CPU
• 16K ROM
• 256 bytes RAM
• 4K EEPROM
Size ratio of memory cells:
RAM = 4 × EEPROM size = 16 × ROM size
Cost: $5-50 (with crypto accelerator)

Smart Card Technology


Based on ISO 7816 standard, which defines
• Card size, contact layout, electrical characteristics
• I/O protocols
– Byte-based
– Block-based
• File structures
Terminology alert: Vendor literature often misuses
standard terms
• “Digital signature” = simple checksum or MAC
• “Certificate” = data + “digital signature”
File Structures

Files addressed by 16-bit file ID (FID)


• FID is often broken into DF:EF parts (MF is always 0x3F00)
Files are generally fixed-length and fixed-format

File Types
Transparent
• Binary blob
Linear fixed
• n × fixed-length records
Linear variable
• n records of fixed (but different) lengths
Cyclic
• Linear fixed, oldest record gets overwritten
Execute
• Special case of transparent file
File Attributes
EEPROM has special requirements (slow write, limited
number of write cycles) which are supported by card
attributes
• WORM, only written once
• Multiple write, uses redundant cells to recover when some cells
die
• Error detection/correction capabilities for high-value data
• Error recovery, ensures atomic file writes
– Power can be removed at any point
– Requires complex buffering and state handling

Card Commands
Typical commands are
• CREATE/SELECT/DELETE FILE
• READ/WRITE/UPDATE BINARY
– Write can only change bits from 1 to 0, update is a genuine
write
• ERASE BINARY
• READ/WRITE/UPDATE RECORD
• APPEND RECORD
• INCREASE/DECREASE
– Changes cyclic file position
Card Commands (ctd)
Access control
• Based on PIN of chip holder verification (CHV)
• VERIFY CHV
• CHANGE CHV
• UNBLOCK CHV
• ENABLE/DISABLE CHV
Authentication
• Simple challenge/response authentication protocol
• INTERNAL AUTHENTICATE
– Authenticate card to terminal
• EXTERNAL AUTHENTICATE
– Authenticate terminal to card
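For example, a VERIFY CHV command is a short ISO 7816-4 style APDU along these lines (the class byte and PIN padding shown here are illustrative only and vary between card types):

/* CLA INS P1 P2 Lc, followed by 8 bytes of PIN data */
unsigned char verifyChv[] = {
    0x00,                   /* CLA: class byte (card-specific, eg 0xA0 on GSM SIMs) */
    0x20,                   /* INS: VERIFY */
    0x00, 0x01,             /* P1, P2: verify CHV number 1 */
    0x08,                   /* Lc: 8 bytes of data follow */
    '1', '2', '3', '4',     /* PIN ... */
    0xFF, 0xFF, 0xFF, 0xFF  /* ... padded out to 8 bytes */
};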

Card Commands (ctd)


Encryption: Various functions, typically
• ENCRYPT/DECRYPT
• SIGN DATA/VERIFY SIGNATURE
Electronic purse instructions
• INITIALISE/CREDIT/DEBIT
Application-specific instructions
• RUN GSM ALGORITHM
prEN 1546
Inter-sector electronic purse (IEP) standard, 1995

Both customer and merchant use smart-card based electronic purses to handle payment transactions

prEN 1546 (ctd)


Defines the overall framework in some detail, but leaves
algorithms, payment types and parameters, and other
details to implementors
• Specifies the file layout and data elements for the IEP
• Defines commands INITIALISE IEP, CREDIT IEP, DEBIT
IEP, CONVERT IEP CURRENCY, and UPDATE IEP
PARAMETER
• Specifies exact payment routines in a BASIC-like
pseudolanguage
• All messages are “signed” (typically with a 4-byte DES MAC)
• Handles everything but purse-to-purse transactions
Includes many variants including a cut-down version for
phonecards and extra acknowledgements for transactions
Credit IEP Transaction
IEP                                    Bank
INITIALISE bank for load        →      Verify currency and balance
(amount and currency)
Verify details                  ←      INITIALISE IEP for load
Sign( DEBIT account )           →      Verify details
                                       Debit account
Verify details                  ←      Sign( CREDIT IEP )
Update card state
Sign( Load acknowledgement )    →      Verify acknowledgement

Credit Merchant Transaction


IEP                                    Merchant
                                ←      INITIALISE IEP for purchase
Sign( INITIALISE merchant       →      Verify details
  for purchase )
Verify details                  ←      Sign( DEBIT IEP )
Update card state
Sign( CREDIT Merchant )         →      Verify details
                                       Record transaction for
                                       transmission to bank
                                ←      Sign( Purchase acknowledgement )
TeleQuick
Austrian CEN 1546 Quick electronic purse adapted for
online use
• Merchant ↔ customer = Internet
• Merchant ↔ bank = X.25
All communications uses strong SSL encryption and server
certificates
Conceived as a standard Quick transaction with terminals a
long way apart
• Transaction rollback in case of communications faults
• Virtual ATM must handle multiple simultaneous transactions
– Handled via host security modules (HSM’s)
• Windows PC is an insecure platform
– Move functionality into the reader (LCD, keypad, crypto module)

Working with Cards


ISO 7816 provides only a standardised command set,
implementation details are left to vendors
• Everyone does it differently
Standardised API’s are slow to appear
PKCS #11 (crypto token interface) is the most common
API
• Functionality is constantly changing to handle different
card/vendor features
• Vendors typically only implement the portions which
correspond to their products
• For any nontrivial application, custom handling is required for
each card type
Working with Cards (ctd)
The Smart Card Problem
• No cards
• No readers
• No software
Installation of readers and cards is too problematic
• Keyboard and mouse (or all of Windows) may stop working
• Installing more than one reader, or reinstalling/updating
drivers, is a recipe for disaster
– Drivers need to be installed in exactly the right order
– PC operations may be affected (eg other peripherals stop
working, system functions are disabled)
– Drivers/readers may cease to function entirely
• USB readers seem to be the safest bet

Working with Cards (ctd)


Even finding basic DES encryption which works is tricky
• Schlumberger Cryptoflex: Doesn’t make DES user-accessible
• Schlumberger Multiflex: Returns only 6 of 8 encrypted bytes
• IBM MFC: Encrypts a random number
• Maosco MULTOS: Uses a fixed, known key “for security
reasons”
• General Information Systems OSCAR: XOR’s the DES key
with a random number “for security reasons”
• Gemplus GPK: Restricts keys to 40 bits
PKCS #11
Object-oriented interface to any type of crypto token
• Smart card
• Crypto hardware accelerator
• Fortezza card
• USB-based token
• Handheld PC (eg PalmPilot)
• Software implementation
Programming interface is (in theory) completely
independent of the underlying token type

PKCS #11 (ctd)


Token provides various services to the caller
• Store public/private keys, certificates, secret keys,
authentication values, generic data
• Encrypt/decrypt
• Sign/signature check
• Wrap/unwrap key
• Generate key, generate random data
• Find object in token
PKCS #11 (ctd)
Services can be restricted until the user has logged on using
a PIN/password

PKCS #11 Token Objects


Token objects are structured in a hierarchical manner
Object
  Key
    Public Key
      RSA Public Key
      DSA Public Key
      DH Public Key
      KEA Public Key
    Private Key
      RSA Private Key
      DSA Private Key
      DH Private Key
      KEA Private Key
    Secret Key
      DES Key
      3DES Key
      RC2/RC4/RC5 Key
      Skipjack Key
  Certificate
    X.509 Certificate
  Data
PKCS #11 Token Objects (ctd)
Each object has a collection of attributes, eg RSA private
key has:
• Object attributes
CKA_CLASS = CKO_PRIVATE_KEY
CKA_TOKEN = TRUE (persistent object)
CKA_PRIVATE = TRUE (needs login to use)
CKA_MODIFIABLE = FALSE (can’t be altered)
CKA_LABEL = “My private key” (object ID for humans)

• Key attributes
CKA_KEY_TYPE = CKK_RSA
CKA_ID = 2A170D462582F309 (object ID for computers)
CKA_LOCAL = TRUE (key generated on token)

PKCS #11 Token Objects (ctd)


• Private Key attributes
CKA_SENSITIVE = TRUE (attributes can’t be revealed outside token)
CKA_EXTRACTABLE = FALSE (can’t be exported from token)
CKA_DECRYPT = TRUE (can be used to decrypt data)
CKA_SIGN = TRUE (can be used to sign data)
CKA_UNWRAP = TRUE (can be used to unwrap encryption keys)

• RSA Private Key attributes


CKA_MODULUS = …
CKA_PUBLIC_EXPONENT = …
CKA_PRIVATE_EXPONENT = …
CKA_PRIME_1 = …
CKA_PRIME_2 = …
CKA_EXPONENT_1 = …
CKA_EXPONENT_2 = …
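A minimal sketch of how a caller gets at an object like this through the standard Cryptoki calls (session setup, error checking, and the C_Login() needed for a CKA_PRIVATE object are omitted):

/* Find the private key with the human-readable label used above */
#include "pkcs11.h"

CK_OBJECT_HANDLE findPrivateKey( CK_SESSION_HANDLE hSession )
    {
    CK_OBJECT_CLASS keyClass = CKO_PRIVATE_KEY;
    CK_UTF8CHAR label[] = "My private key";
    CK_ATTRIBUTE tmpl[] = {
        { CKA_CLASS, &keyClass, sizeof( keyClass ) },
        { CKA_LABEL, label, sizeof( label ) - 1 }
        };
    CK_OBJECT_HANDLE hKey = CK_INVALID_HANDLE;
    CK_ULONG count = 0;

    C_FindObjectsInit( hSession, tmpl, 2 );
    C_FindObjects( hSession, &hKey, 1, &count );
    C_FindObjectsFinal( hSession );

    return( count ? hKey : CK_INVALID_HANDLE );
    }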

Like a rubber screwdriver or styrofoam broadsword, PKCS #11 trades some utility in exchange for flexibility
JavaCard
Standard smart card with an interpreter for a Java-like
language in ROM
• Card runs Java with most features (multiple data types,
memory management, most class libraries, and all security (via
the bytecode verifier)) stripped out
– Can run up to 200 times slower than card native code
Provides the ability to mention both “Java” and “smart
cards” in the same sales literature

JavaCard (ctd)
Card contains multiple applets
• External client sends select command to card
• Card selects applet and invokes its select method
• Further commands sent by the client are forwarded to the
applets process method
• Applet is shut down via deselect method when a new select
command is received
Applet can access packages and services from other applets
• How to do this securely is still under debate
OCF
Open Card Framework, object-oriented framework for
smart card developers
• Class contains a blueprint for an object
• Object is an instance of a class

OCF (ctd)
class SmartCard
CardID
– Information identifying the card
CardServiceFactory
CardService: PurseCardService
CardService: FileSystemCardService
CardService: …
CardServiceRegistry
– Looks up requested CardService in CardServiceFactory
– Instantiates a new CardService object for the caller
CardServiceScheduler
– Communicates with the card terminal
– Coordinates access to card services
OCF (ctd)
class CardFile
Attributes
– TRANSPARENT, LINEAR FIXED, …
CardFilePath, CardFileInputStream, CardFileOutputStream

class Terminal
Slot
– Information on reader slot + optional display, keyboard
CardTerminalFactory
CardTerminal
CardTerminalRegistry
– As CardServiceRegistry

PC/SC
Interoperability Specification for ICC’s and Personal
Computer Systems
• Microsoft’s attempt to kill PKCS #11 (c.f. PCT vs SSL)
• Goes a long way towards solving the Smart Card Problem
PC/SC spec defines
• Physical and electrical characteristics as ISO 7816
• Interface device (IFD) handler
– Common software interface for card readers
– Sets out minimal IFD requirements (command handling,
card insertion check)
• Integrated circuit card (ICC) resource manager
– Controls all IFD’s attached to the system
PC/SC (ctd)
PC/SC spec (ctd)
• ICC service provider (ICCSP)
– Maintains context of a card session
• Crypto service provider (CSP)
– Optional manager for crypto functionality
– Separated out for export control purposes
Getting it all to work properly was a remarkable
achievement

PC/SC (ctd)

Provided as part of newer Windows releases


• Simon Bill says smartcards
PKCS #11 vs OCF vs PC/SC
              Language   OS        Abstraction level
PKCS #11      Any        Any       High
OCF           Java       JVM       Low
PC/SC         Any        Windows   Low (ICCSP), High (CSP)

PKCS #11: Powerful but complex to implement → slow to appear
OCF: By Java programmers for Java programmers
PC/SC: Any platform you want as long as it’s Windows
(limited availability under Linux)

Smart Card Limitations


Typical cards have the following limits
• 9600bps only (~1K/sec communication rate)
– A single public-key operation can take seconds just to
communicate the data
– Many card CPU’s are also used for I/O, card can’t do
anything else while communicating
• 3.5 MHz clock (slow 8-bit CPU)
• No on-board battery (power analysis attacks)
• Limited chip size (5mm2) and thickness due to packaging
constraints
Other crypto token form factors (USB token, PCMCIA
card, iButton, Datakey) avoid these problems
Dallas iButton
Avoids most smart card problems by changing the
packaging
Device is contained in 16 × 5mm microcan
• Stainless steel case is much stronger than smart card
• Case contains built-in battery and clock
• I/O doesn’t tie up a serial port
– $10 iButton interface is cheaper than $50 card reader
Capabilities range from simple serial-number ID, real-time
clock, and data storage to crypto iButton
• 8051 processor, 32K ROM, 6K NVRAM
• 1024-bit crypto accelerator
• Real-time clock

iButton Security
iButton package allows for much better security measures
than smart cards
• Various triggers erase memory if tampering is detected

• Active face of chip is metallurgically bonded to base of can

Energy reservoir capacitor is used to zeroise memory


Timing crystal drives on-board clock
iButton Security (ctd)
Zeroisation can be triggered by
• Opening the case
• Disconnecting the battery
• Temperatures below -20º C or above 70º C
• Excessive voltage levels
• Attempts to penetrate the case to get to the chip
– Chip contains screen to prevent microprobing

iButton Programming
The device recognises two roles
• Crypto officer initialises the device
– Create transaction group(s)
– Set up information (keys, monetary value, etc)
– Set initial user PIN
– Lock transaction group(s)
• User utilises it after initialisation by crypto officer
Device contains one default group (Dallas Primary Feature
Set) initialised at manufacture
• Allows crypto officer to initialise the device
• Allows user to verify that crypto officer hasn’t altered certain
initial options
iButton Programming (ctd)
Dallas Primary contains default private key generated by
device at manufacture
• Corresponding public key is certified by the manufacturer
• Guarantees to a third party that a given initial key belongs to a
given iButton
• Users can generate further keys as required

iButton Special Features


Device provides enhanced signature capabilities using on-
board resources
• Signing time
• Transaction counter (incremented for each signature, used to
detect trojan signing software)
• Device serial number
Signing process
• User hashes data with MD5, SHA-1, RIPEMD-160, …
• iButton hashes user-supplied hash with device serial number,
transaction counter, and timestamp
• iButton signs hash using private key
• User retrieves serial number, transaction counter, timestamp,
and signature from iButton
Contactless Cards
Several levels of contactless cards
• Contact, ISO 7816
• Close-coupled, 0-2mm, ISO 10536
– Abandoned in favour of proximity cards
• Proximity, 0-10cm, ISO 14443
– Typical use: MIFARE, transport applications
• Vicinity, ~1m, ISO 15693
– Typical use: RFID
Terminology and specs mirror ISO 7816
• Card = Proximity Integrated Circuit Card, PICC
• Reader = Proximity Coupling Device, PCD

Contactless Cards (ctd)


Contactless card issues
• Power and communications link is unstable
• Background noise problems
• Low power levels available
– Boosting power increases RFI caused by carrier sidebands
– Maximum range determined by level at which RFI still
complies with emission laws
• Transaction must be rapid (100-200ms)
– Move as many people through as few turnstiles as possible
Contactless Card Communications
Power and data transmitted at 13.56 MHz
• Card coil requires only a few turns
• Coil can be printed circuit or a standard wire coil
• Card and reader communicate at 106 Kbps (13.56MHz/128)
Card communicates with reader using load modulation

• Switches a load resistor in and out of circuit


• Reader detects changes in load

Contactless Card Communications (ctd)


Load modulation types
• Type A, simple and efficient

• Type B, complex and inefficient — included for political reasons
Contactless Card Communications (ctd)
Reader communicates with card using ASK
• 100% amplitude shift keying = turn carrier on and off
• CMOS circuits in card consume no power when not switched
Encoding uses a modified Miller code
• Carrier pauses at different positions
• To decode, card measures distance between pauses

• Detecting errors like dropped bits is very simple

Initialisation and Anticollision Handling


Card initialisation
• Reader activates all cards using wakeup frame
– Wakeup frame has special format to distinguish it from
normal frames
• Extraneous cards weeded out using collision detection
Vicinity Cards
Extend proximity card ideas
• PCD → VCD (Vicinity card device)
• PICC → VICC (Vicinity integrated circuit card)
Vicinity card requirements
• Low-cost, high volume, long range, simple cards
More commonly use type B modulation
• Less RFI allows operation over longer ranges
• Use PPM (pulse position modulation) for VCD → VICC, FSK for VICC → VCD
– Communication rate 6.6 Kbps
– Variations on modulation, coding, and baud rate for
different applications (speed vs distance vs noise immunity
vs emission levels)

Attacks on Smart Cards


Use doctored terminal/card reader
• Reuse and/or replay authentication to card
• Display $x transaction but debit $y
• Debit account multiple times
Protocol attacks
• Card security protocols are often simple and not terribly secure
Fool CPU into reading from external instead of internal
ROM
Manipulating supply voltages can affect security
mechanisms
• Picbuster
• Clock/power glitches can affect execution of instructions
Attacks on Smart Cards (ctd)
Erasing an EEPROM cell requires a high voltage (12 vs
5V) charge
• Don’t provide the power to erase cells
• Most cards now generate the voltage internally
– Destroy the (usually large) on-chip voltage generator to
ensure the memory is never erased

Physical Attacks
Erase onboard EPROM with UV spot beam
Remove chip from encapsulation with nitric acid
• Use microprobing to access internal circuit sections
• Use electron-beam tester to read signals from the operational
circuit
Example: PIN recovery with an e-beam tester
Physical Attacks (ctd)
Modify the circuit using a focused ion beam (FIB)
workstation
• Disable/bypass security circuitry (Mondex)
• Disconnect all but EEPROM and CPU read circuitry

Attacking the Random Number Generator


Generating good random data (for encryption keys) on a
card is exceedingly difficult
• Self-contained, sealed environment contains very little
unpredictable state
Possible attacks
• Cycle the RNG until the EEPROM locks up
• Drop the operating voltage to upset analogue-circuit RNG’s
• French government attack: Force manufacturers to disable key
generation
– This was probably a blessing in disguise, since externally
generated keys may be much safer to use
Timing/Power Analysis
Crypto operations in cards
• Take variable amounts of time depending on key and data bits
• Use variable amounts of power depending on key and data bits
– Transistors are voltage-controlled switches which consume
power and produce electromagnetic radiation
– Power analysis can provide a picture of DES or RSA
en/decrypt operations
– Recovers 512-bit RSA key at ~3 bits/min on a PPro 200
Differential power analysis is even more powerful
• Many card challenge/response protocols are DES-based → apply many challenge/response operations and observe power signature

Voice Encryption
Built from three components

Hardware-based
• DSP with GSM or CELP speech compression
• DSP modem
Software-based
• GSM or CELP in software
• External modem or TCP/IP network connection
Mostly built from off-the-shelf parts (GSM DSP, modem
DSP, software building blocks)
TCSEC/Orange Book
Trusted Computer Security Evaluation Criteria
• Based on 10-15 years of security research
• Usage model: multiuser mainframes, terminals/users cleared at
a single level
• “Make it simple enough that even a general can understand it”
• Attempts to apply it to other areas (eg networks) via
increasingly tortuous “interpretations”

Applying the Orange Book


Maximum sensitivity (Rmax):
  Unclassified (U) = 0
  Unclassified but sensitive (N) = 1
  Confidential (C) = 2
  Secret (S) = 3
  Top Secret (TS) = 5

Minimum user clearance (Rmin):
  Uncleared (U) = 0
  Uncleared, allowed access to sensitive information (N) = 1
  Confidential (C) = 2
  Secret (S) = 3
  Top Secret (TS)/Background Investigation = 4
  Top Secret (TS)/Special Background Investigation = 5

Risk Index = Rmax – Rmin
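For example, Secret data (Rmax = 3) on a system whose users are completely uncleared (Rmin = 0) gives a risk index of 3, which the table below maps to a B3-class system; if all users are cleared to the level of the data the index is 0 and system-high (C2) operation suffices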


Applying the Orange Book (ctd)
Risk index   Operating mode                                           Orange Book class
0            Dedicated                                                None
0            System high                                              C2
1            Limited access, controlled, compartmented, multilevel    B1
2            Limited access, controlled, compartmented, multilevel    B2
3            Controlled, multilevel                                   B3
4            Multilevel                                               A1
5            Multilevel                                               ?

Applying the Orange Book (ctd)


Operating modes
Dedicated       System exclusively used for one classification
System high     Entire system operated at, and all users cleared at, the highest sensitivity level of information
Limited access  All users not fully cleared or authorised access to all data
Controlled      Limited multilevel
Compartmented   At least one compartment requiring special access to which not all users have been cleared, but all users cleared to highest level
Multilevel      Two or more classification levels, not all users cleared for all levels
Typical Voice Encryption System
Speech compression
• GSM compression (high-bandwidth)
• CELP compression (low-bandwidth)
Security
• DH key exchange
• DES (larger manufacturers)
• 3DES, IDEA, Blowfish (smaller manufacturers, software)
• Password/PIN authentication

Typical Voice Encryption System (ctd)


Communications
• Built-in modem (hardware)
• Internet communications (software)
Speak Freely, http://www.fourmilab.ch/netfone/windows/speak_freely.html
• Typical software implementation
• Uses standard software components
• Portable across several operating systems
Problems
Latency issues (dropped packets)
Authentication/MITM attacks
No standardisation

GSM
GSM subscriber identity module (SIM) contains
• International Mobile Subscriber Identity (IMSI)
• Subscriber identification key Ki
Used for authentication and encryption via simple
challenge/response protocol
• A3 and A8 algorithms provide authentication (usually
combined as COMP128)
• A5 provides encryption
GSM (ctd)

Authentication is simple challenge/response using A3 and IMSI/Ki

GSM Security
A3 used to generate response
A8 used to generate A5 key
GSM Security (ctd)
1. Base station transmits 128-bit challenge RAND
2. Mobile unit returns 32-bit signed response SRES via A3
3. RAND and Ki are combined via A8 to give a 64-bit A5
key
4. 114-bit frames are encrypted using the key and frame
number as input to A5
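A sketch of steps 1–4 with hypothetical a3()/a8()/a5() helpers (in a real phone A3/A8, usually COMP128, run inside the SIM and A5 runs in the handset; the sizes match the protocol):

#include <stdint.h>

extern void a3( const uint8_t ki[ 16 ], const uint8_t randChallenge[ 16 ], uint8_t sres[ 4 ] );  /* assumed */
extern void a8( const uint8_t ki[ 16 ], const uint8_t randChallenge[ 16 ], uint8_t kc[ 8 ] );    /* assumed */
extern void a5( const uint8_t kc[ 8 ], uint32_t frameNo,
                const uint8_t in[ 15 ], uint8_t out[ 15 ] );                                     /* assumed, 114-bit frames */

void gsmAuthAndEncrypt( const uint8_t ki[ 16 ], const uint8_t randChallenge[ 16 ],
                        uint32_t frameNo, const uint8_t frameIn[ 15 ], uint8_t frameOut[ 15 ] )
    {
    uint8_t sres[ 4 ], kc[ 8 ];

    a3( ki, randChallenge, sres );              /* Step 2: 32-bit SRES returned to the base station */
    a8( ki, randChallenge, kc );                /* Step 3: 64-bit A5 key (10 bits zeroed in practice) */
    a5( kc, frameNo, frameIn, frameOut );       /* Step 4: encrypt a 114-bit frame */
    }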

GSM Security (ctd)


GSM security was broken in April 1998
• COMP128 is weak, allows IMSI and Ki to be extracted
– Direct access to SIM (cellphone cloning)
– Over-the-air queries to phone
• Some cards were later modified to limit the number of
COMP128 queries
• A5 was deliberately weakened by zeroing 10 key bits
– Even where providers don’t use COMP128, all shorten the
key
• Claimed GSM fraud detection system doesn’t seem to exist
• Affects 80 million GSM phones
GSM Security (ctd)
Key weakening was confirmed by logs from GSM base
stations
BSSMAP GSM 08.08 Rev 3.9.2 (BSSM) HaNDover REQuest (HOREQ)
-------0 Discrimination bit D BSSMAP
0000000- Filler
00101011 Message Length 43
00010000 Message Type 0x10
Channel Type
00001011 IE Name Channel type
00000011 IE Length 3
00000001 Speech/Data Indicator Speech
00001000 Channel Rate/Type Full rate TCH channel Bm
00000001 Speech encoding algorithm GSM speech algorithm
Encryption Information
00001010 IE Name Encryption information
00001001 IE Length 9
00000010 Algorithm ID GSM user data encryption V.1
******** Encryption Key C9 7F 45 7E 29 8E 08 00
Classmark Information Type 2

GSM Security (ctd)


Many countries were sold a weakened A5 called A5/2
• A5 security: Breakable in real time with 2^40 precomputations
• A5/2 security: None (5 clock cycles to break)
• Another attack is to bypass GSM entirely and attack the base
station or land lines/microwave links
GSM security was compromised at every level
• Deliberately weakened key generation
• Broken authentication
– GSM MoU knew of this nearly a decade ago but didn’t
inform its members
• A5/1 was known to be weak, A5/2 was deliberately designed to
be weak
GSM represents a well-designed, multiply-redundant compromise
GSM Security (ctd)
Most other cellphone security systems have been broken
too
• Secret design process with no public scrutiny or external
review
• Government interference to ensure poor security

Traffic Analysis
Monitors presence of communications and
source/destination
• Most common is analysis of web server logs
• Search engines reveal information on popularity of pages
• The mere presence of communications can reveal information
Simple Anonymiser Proxy

HTTP version at http://www.anonymizer.com


Fairly easy to defeat

Mixes
Encrypted messages sent over user-selected route through a
network
• Packet = A( B( C( D( E( data )))))
• Each server peels off a layer and forwards the data
Servers can only see one hop

Sender and receiver can’t be (easily) linked
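A minimal sketch of the layered A(B(C(D(E(data))))) packet format, using symmetric Fernet keys from the Python cryptography package (pip install cryptography) as a stand-in for each mix's public key; a real mix network would add hybrid public-key encryption, fixed-size padding and reordering on top of this:

# Sketch of mix-style layered encryption: Packet = A(B(C(D(E(data)))))
import json
from cryptography.fernet import Fernet

route = ["A", "B", "C", "D", "E"]                       # user-selected route
keys = {name: Fernet(Fernet.generate_key()) for name in route}

def wrap(data: bytes, route):
    """Encrypt innermost-first so the first mix sees only the outermost layer."""
    packet = data
    for i, name in reversed(list(enumerate(route))):
        nxt = route[i + 1] if i + 1 < len(route) else None
        packet = keys[name].encrypt(
            json.dumps({"next": nxt, "payload": packet.decode()}).encode())
    return packet

def peel(name, packet):
    """Each mix can remove exactly one layer: it learns only the next hop."""
    layer = json.loads(keys[name].decrypt(packet))
    return layer["next"], layer["payload"].encode()

packet = wrap(b"hello receiver", route)
hop = route[0]
while hop is not None:
    hop, packet = peel(hop, packet)
print(packet)                                           # original data at the exit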


Attacks on Mixes
Incoming messages result in outgoing messages
• Reorder messages
• Delay messages
Message sizes change in a predictable manner
Replay message (spam attack)
• Many identical messages will emerge at some point

Onion Routing
Message routing using mixes, http://www.itd.nrl.navy.mil/ITD/5540/projects/onion-routing
Routers have permanent socket connections
Data is sent over short-term connections tunnelled over
permanent connections
• 5-layer onions
• 48-byte datagrams
• CREATE/DESTROY for connection control
• DATA/PADDING to move datagrams
• Limited form of datagram reordering
• Onions are padded to compensate for removed layers
Mixmaster
Uses message ID’s to stop replay attacks
Message sizes never change
• ‘Used’ headers are moved to the end, remaining headers are
moved up one
• Payload is padded to a fixed size
• Large payloads are broken up into multiple messages
• All parts of the message are encrypted
Encryption is 1024 bit RSA with triple DES
Message has 20 headers of 512 bytes and a 10K body

Crowds
Mixes have two main problems
• Routers are a vulnerable attack point
• Requires static routing
Router vulnerability solved via jondo (anonymous persona)
Messages are forwarded to a random jondo
• Can’t tell whether a message originates at a given jondo
• Message and reply follow the same path
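A minimal simulation of the forwarding idea: each jondo either forwards the request to another randomly chosen jondo or submits it to the destination. The forwarding probability of 0.75 is illustrative, not a figure from these slides:

# Minimal simulation of Crowds-style probabilistic forwarding
import random

JONDOS = [f"jondo{i}" for i in range(10)]
P_FORWARD = 0.75

def send(originator: str) -> list[str]:
    """Return the path a request takes; the destination can't tell whether the
    last jondo on the path originated the request or merely relayed it."""
    path = [originator]
    current = random.choice(JONDOS)          # first hop is always random
    path.append(current)
    while random.random() < P_FORWARD:       # forward again with probability P_FORWARD
        current = random.choice(JONDOS)
        path.append(current)
    return path                              # the last jondo submits to the web server

print(send("jondo0"))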
LPWA
Lucent Personalised Web Assistant
• Provides access to web sites via LPWA proxy
• Automatically generates per-site pseudonymous personas
– User name
– Password
– Email address
• Filters sensitive HTTP headers

LPWA (ctd)
Protects users from profile aggregation, spamming
• User connects to LPWA using email address and password
• When web site asks for identification information, user types
\u (user name), \p (password), \@ (email address)
• Proxy translates these to per-site pseudonymous personas
Email forwarder forwards mail to the user's real email address
• Spam sources can be blocked on a per-persona basis
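A sketch of per-site persona derivation of this kind. LPWA's actual derivation function isn't described here, so HMAC-SHA256 over (master secret, site, field) is used as a stand-in, and the site and forwarder domain are hypothetical:

# Derive per-site pseudonymous personas for the \u, \p and \@ substitutions
import hmac, hashlib, base64

def persona(master_secret: bytes, site: str, field: str) -> str:
    tag = hmac.new(master_secret, f"{site}:{field}".encode(), hashlib.sha256).digest()
    return base64.b32encode(tag[:10]).decode().lower().rstrip("=")

secret = b"user master passphrase"
site = "example.com"                                  # hypothetical site
print("\\u ->", persona(secret, site, "username"))
print("\\p ->", persona(secret, site, "password"))
print("\\@ ->", persona(secret, site, "email") + "@lpwa.example")   # forwarder address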
Steganography
From the Greek for “hidden writing”, secures data by
hiding rather than encryption
• Encryption is usually used as a first step before steganography
Encrypted data looks like white noise
Steganography hides this noise in other data
• By replacing existing noise
• By using it as a model to generate innocuous-looking data

Hiding Information in Noise


All data from analogue sources contains noise
• Background noise
• Sampling/quantisation error
• Equipment/switching noise
Extract the natural noise and replace it with synthetic noise
• Replace least significant bit(s)
• Spread-spectrum coding
• Various other modulation techniques
Examples of channels
• Digital images (PhotoCD, GIF, BMP, PNG)
• Sound (WAV files)
• ISDN voice data
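A minimal sketch of least-significant-bit replacement, with a byte string standing in for image pixels or audio samples (the message should normally be encrypted before hiding):

# LSB steganography: replace the least significant bit of each cover byte
def hide(cover: bytes, message: bytes) -> bytes:
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover too small")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit        # clear the LSB, set it to the message bit
    return bytes(out)

def extract(stego: bytes, nbytes: int) -> bytes:
    bits = [b & 1 for b in stego[:nbytes * 8]]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8)) for i in range(nbytes))

cover = bytes(range(256)) * 4                 # stand-in for noisy sampled data
stego = hide(cover, b"secret")
assert extract(stego, 6) == b"secret"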
Generating Synthetic Data
Usually only has to fool automated scanners
• Needs to be good enough to get past their detection threshold
Two variants
• Use a statistical model of the target language to generate
plausible-looking data
– “Wants to apply more or right is better than this mechanism.
Our only way is surrounded by radio station. When
leaving. This mechanism is later years”.
– Works like a text compressor in reverse
– Can be made arbitrarily close to real text

Generating Synthetic Data (ctd)


• Use a grammatical model of actual text to build plausible-
sounding data
– “{Steganography|Stego} provides a {means|mechanism}
for {hiding|encoding} {hidden|secret} {data|information} in
{plain|open} {view|sight}”.
– More work than the statistical model method, but can
provide a virtually undetectable channel
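A sketch of the template idea above, where each {option|option} choice carries one bit of the hidden data. The template is the example from the slide; the one-bit-per-choice convention is illustrative, not a standard format:

# Encode/decode hidden bits via {a|b} template choices
import re

TEMPLATE = ("{Steganography|Stego} provides a {means|mechanism} for "
            "{hiding|encoding} {hidden|secret} {data|information} in "
            "{plain|open} {view|sight}")

def encode(bits: list[int], template: str = TEMPLATE) -> str:
    it = iter(bits)
    return re.sub(r"\{([^}|]*)\|([^}]*)\}",
                  lambda m: m.group(1 + next(it)), template)

def decode(text: str, template: str = TEMPLATE) -> list[int]:
    bits, pos = [], 0
    for m in re.finditer(r"\{([^}|]*)\|([^}]*)\}", template):
        literal = template[pos:m.start()]     # fixed text between choices must match
        assert text.startswith(literal)
        text = text[len(literal):]
        if text.startswith(m.group(1)):       # the chosen word yields one bit
            bits.append(0)
            text = text[len(m.group(1)):]
        else:
            bits.append(1)
            text = text[len(m.group(2)):]
        pos = m.end()
    return bits

msg = encode([1, 0, 1, 1, 0, 0, 1])
print(msg)
assert decode(msg) == [1, 0, 1, 1, 0, 0, 1]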
Problems with steganography
• The better the steganography, the lower the bandwidth
Main use is as an argument against crypto restrictions
Watermarking
Uses redundancy in image/sound to encode information
Requirements
• Invisibility
• Little effect on compressibility
• Robustness
• High detection reliability
• Security
• Inexpensive

Watermarking (ctd)
Watermark insertion
Watermarking (ctd)
Watermark detection/checking
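A minimal sketch of keyed (private) watermark insertion and detection of the kind shown above: a key-derived pseudorandom pattern is added to the signal at low amplitude, and ownership is later demonstrated by correlating a suspect signal against the same pattern. All parameters are illustrative only:

# Additive keyed watermark with correlation-based detection
import random

def pattern(key: int, n: int) -> list[float]:
    rng = random.Random(key)                     # key determines the pattern
    return [rng.gauss(0, 1) for _ in range(n)]

def embed(signal: list[float], key: int, strength: float = 2.0) -> list[float]:
    return [s + strength * w for s, w in zip(signal, pattern(key, len(signal)))]

def detect(signal: list[float], key: int) -> float:
    """Normalised correlation: clearly above zero only if this key's mark is present."""
    w = pattern(key, len(signal))
    dot = sum(s * wi for s, wi in zip(signal, w))
    norm = sum(wi * wi for wi in w) ** 0.5 * sum(s * s for s in signal) ** 0.5
    return dot / norm

original = [random.gauss(0, 10) for _ in range(10_000)]   # stand-in image/audio data
marked = embed(original, key=1234)
print(detect(marked, 1234), detect(marked, 9999))         # high vs. near zero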

Watermarking (ctd)
Public watermarking
• Anyone can detect/view the watermark (and try to remove it)
Private watermarking
• Creator can demonstrate ownership using a secret key
Copy Protection Technical Working Group (CPTWG) looking at
standardisation, http://www.dvcc.com/dhsg
Defeating Watermarking
Lossy compression (JPEG)
Resizing
Noise insertion (print+scan)
Cropping
Interpretation attacks (neutralise ownership evidence)
Automated anti-watermarking software available (eg
UnZign)

Defeating Watermarking (ctd)


Presentation attacks (segmented images)

Watermarking is still in its infancy


• No watermarking standards
• No indication of security/benchmarks
• No legal recognition
Other Crypto Applications
Hashcash
• Requires finding a collision for n bits of a hash function
– “Find a message for which the last 16 bits of the SHA-1
hash are 1F23”
• Forces a program to expend a (configurable) amount of effort
before access is granted to a system or service
• Useful for stopping denial-of-service attacks
– n varies as the system load goes up or down
– Can be used as a spam-blocker
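A sketch of the puzzle in the example above: find an input whose SHA-1 hash ends in a given 16-bit value. Solving costs roughly 2^16 hash operations on average; verification costs one:

# Hashcash-style partial-hash puzzle
import hashlib
from itertools import count

def solve(message: bytes, target: int = 0x1F23, bits: int = 16) -> bytes:
    mask = (1 << bits) - 1
    for n in count():                               # try successive suffixes
        candidate = message + str(n).encode()
        digest = hashlib.sha1(candidate).digest()
        if int.from_bytes(digest, "big") & mask == target:
            return candidate

def verify(candidate: bytes, target: int = 0x1F23, bits: int = 16) -> bool:
    mask = (1 << bits) - 1
    return int.from_bytes(hashlib.sha1(candidate).digest(), "big") & mask == target

stamp = solve(b"request for service:")
print(stamp, verify(stamp))     # True, after ~65,000 hashes on average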

Other Crypto Applications (ctd)


PGP Moose
• Signs all postings to moderated newsgroups
– Signature is added to the message as an X-Auth header
• Unsigned messages (spam, forgeries) are automatically
cancelled
• Has so far proven 100% effective in stopping newsgroup
spam/forgeries
Crypto Politics and Export Controls

In God we trust. All others we monitor


— NSA motto

Crypto Politics
It's almost impossible to avoid this
Some larger companies have special legal divisions set up
just for this
Any real policy information is obtained through (US)
freedom of information act (FOIA) lawsuits rather than
official press releases
• Claimed policy and actual policy are often complete opposites
Data Storage vs Session Encryption Key
Recovery
Legitimate need for stored data recovery in case of
accident/lost keys/termination of employment
• Use secret sharing scheme for emergency access
No legitimate need (or commercial incentive) for
communications session recovery
• If there’s a problem, re-transmit the data
Strong push by governments to convince companies that
data storage recovery = communications recovery
• Key recovery has been given so many names (key escrow, law
enforcement access, key recovery, data recovery, trusted third
parties, etc etc) that it’s now known by the general term GAK
(Government Access to Keys)
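A minimal sketch of the secret-sharing approach to emergency access mentioned above, as a 2-of-3 Shamir split over a prime field (the threshold and field size are illustrative):

# 2-of-3 Shamir secret sharing for emergency access to a storage key
import secrets

P = 2**127 - 1          # prime field large enough for a 16-byte key

def split(secret: int, k: int = 2, n: int = 3):
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares) -> int:
    """Lagrange interpolation at x = 0 recovers the secret from any k shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

storage_key = secrets.randbelow(P)
shares = split(storage_key)
assert recover(shares[:2]) == storage_key            # any two shares suffice
assert recover([shares[0], shares[2]]) == storage_key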

Early History
1977 NSA tried to block NSF funding of crypto research
Attempt to intimidate IEEE over security conference
1978 NSA uses Invention Secrecy Act to classify crypto patents
1979 Bobby Ray Inman’s “The sky is falling” speech: NSA should
control crypto research
1982 NSA blocked NBS request for public-key equivalent of DES
1984 NSDD-145 moves control of computer security from NBS to
NSA (NSA memo calls NSDD-145 “NSA-engineered”)
1986 NSDD-145 extended to allow NSA jurisdiction over private
databases (Dialog, Compuserve)
NSA tries to decertify DES
CCEP (Commercial COMSEC Endorsement Program) using
NSA-designed tamperproof hardware (eg Blacker)
Early History (ctd)
1987 Computer Security Act moved control of crypto back to NBS
1988 NSA tries to block publication of Khufu block cipher
1989 NSA/NIST memorandum of understanding moves control of
crypto back to the NSA
1990 NSA designs signature-only PKC for NIST, begins work on
Clipper
1991 NIST announces DSS and NSA-designed SHS
Industry reaction was almost universally negative

Digital Telephony
Law Enforcement Requirements for the Surveillance of
Electronic Communications, 1992
• Real-time, full-time monitoring capability
• Intercepts undetectable to all parties (including service
providers)
• Multiple simultaneous intercepts possible
• Decoding or decryption of all communications
• Supplementary information provided is:
– Directory number, associated directory number, line
equipment number, call type/bearer capability, service
profile identifier, PBX directory number, PBX station
identifier, electronic serial number (ESN), mobile
identification number (MIN), terminal equipment identifier,
and service site information (for cellphone tracking)
Digital Telephony
FBI spent two years promoting it (Operation Root Canal)
Digital Telephony & Privacy Improvement Act passed as
Communications Assistance for Law Enforcement Act
(CALEA), October 1994.
• FBI cost estimate (1992-1994): $150M, then $300M, then $500M
• Industry cost estimate (1994): $3B
• More recent study (1998): $8B per year
– $12M per court-approved wiretap
CALEA still isn’t reality
• Cost
• Technical difficulty

Clipper
1992 AT&T announces the AT&T 3600 Telephone
Security Device (TSD), a commercial DES-
encrypted phone add-on
The NSA goes ballistic
• NSA convinces AT&T to use Clipper in the TSD in exchange
for guaranteed government purchases
• US government buys entire DES-based TSD production run of
9000, pays AT&T to retrofit them with Clipper
15 April 1993, White House announces Clipper
• CCEP laundered for public acceptability
• Third-party access guaranteed through Law Enforcement
Access Field (LEAF)
– Originally LEEF, then LEAF, now DRF
Clipper (ctd)
128-bit LEAF contains session key encrypted with Clipper
family key and per-chip key

Clipper (ctd)
Clipper in operation
• Other party and third party decrypt LEAF with family key
• Both use checksum to detect bogus LEAF
• Third party looks up chip key in database, decrypts session key
• To increase public acceptability the key database is stored by
two different agencies
• Communicate secure in the knowledge that only the world's largest spy agency is listening
– Capstone documents are marked “Top Secret Umbra” =
coded for signals intelligence, not for security/privacy
Clipper Weaknesses
80-bit key is too weak
Skipjack algorithm used in Clipper had no public scrutiny
16-bit checksum can be defeated
Cipher operation mode (OFB) allows message forgery
Chip ID served to neatly tag and identify every
communication
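A sketch of the published LEAF layout (32-bit unit ID, the 80-bit session key encrypted under the per-chip unit key, and a 16-bit checksum, all encrypted under the family key), and of why a 16-bit checksum can be brute-forced. The Skipjack-based encryption and the classified checksum function are replaced by HMAC/XOR stand-ins; only the structure and the attack economics are illustrated:

# Structural sketch of the 128-bit LEAF and the rogue-LEAF (checksum brute-force) attack
import hmac, hashlib, os

FAMILY_KEY = os.urandom(10)

def checksum16(session_key: bytes, iv: bytes) -> bytes:
    return hmac.new(b"leaf-checksum", session_key + iv, hashlib.sha1).digest()[:2]

def stream(key: bytes, label: bytes, n: int) -> bytes:
    return hmac.new(key, label, hashlib.sha1).digest()[:n]

def make_leaf(unit_id: bytes, unit_key: bytes, session_key: bytes, iv: bytes) -> bytes:
    enc_skey = bytes(a ^ b for a, b in zip(session_key, stream(unit_key, iv, 10)))
    body = unit_id + enc_skey + checksum16(session_key, iv)      # 4+10+2 = 16 bytes
    return bytes(a ^ b for a, b in zip(body, stream(FAMILY_KEY, iv, 16)))

def leaf_checksum_ok(leaf: bytes, session_key: bytes, iv: bytes) -> bool:
    body = bytes(a ^ b for a, b in zip(leaf, stream(FAMILY_KEY, iv, 16)))
    return body[14:] == checksum16(session_key, iv)

unit_id, unit_key = os.urandom(4), os.urandom(10)
session_key, iv = os.urandom(10), os.urandom(8)
assert leaf_checksum_ok(make_leaf(unit_id, unit_key, session_key, iv), session_key, iv)

# Rogue-LEAF attack: a sender wanting to defeat escrow tries random LEAFs until
# the receiving chip's 16-bit checksum happens to verify (~2^16 tries on average)
tries = 0
while not leaf_checksum_ok(os.urandom(16), session_key, iv):
    tries += 1
print("bogus LEAF accepted after", tries, "tries")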

Reaction to Clipper
80% of Americans opposed it
Of over 300 submissions, only 2 were supportive
Clipper adopted as Escrowed Encryption Standard (EES),
FIPS 185, in February 1994
• The legal machinations required to get this adopted fill a 200-
page law journal article
No-one bought Clipper
• AT&T shut down its product line
• FOIA’d documents obtained later showed that the government
had a secret key escrow policy which was the exact opposite of
the publicly claimed Clipper policy
Fortezza
Based on Capstone (Clipper + DH + DSA + SHA)
• Key exchange uses KEA, modified DH/DSA
• Data encryption uses Skipjack, NSA-designed block cipher
Specifics were a moving target
1991 = Pre Message Security Protocol (PMSP), device = smart
card
1993 = MOSAIC, device = Tessera card
1994 = Multi-Level Information Systems Security Initiative
(MISSI)
Later = Fortezza

Fortezza (ctd)
Used to implement MSP in the Defence Message System
(DMS), the DoD’s Internet
• DMS provided financial encouragement for Fortezza
(Netscape, Oracle received $5M encouragement each)
• Fortezza cards are expensive
– $70 each for the government
– ~$250 each for everyone else
• Cards require PCMCIA readers and software
– Have to shut down workstation to insert/remove card
– Fortezza drivers conflict with other PCMCIA drivers
• Attempts were made to sell Fortezza to foreign governments
Later Fortezza versions removed GAK and added more
useful (but classified) ciphers (eg Baton)
Skipjack
Skipjack and KEA were declassified in early 1998
• Expensive and scarce hardware necessitated software
implementations, which would have been reverse-engineered
• Release was a denial-of-service attack on the world's cryptographers
• 32-round, fairly conventional block cipher
– Breakable if limited to 31 rounds
• Implementations were available worldwide within hours
• KEA specification contains errors, can’t be implemented as per
the specification

Post-Clipper Crypto Restrictions


Commercial key escrow, June 1995
• Anti-Electronic Racketeering Act
– Outlaw distribution of encryption software
Clipper II, November 1995
• Software key escrow
• Up to 64-bit exportable with backdoors
Lotus Notes, January 1996
• 64-bit key with 24 bits held by the NSA
• Swedish government didn’t discover this until 1998
Post-Clipper Crypto Restrictions (ctd)
Policy laundering, 1996
• Persuade the OECD to adopt US-style restrictions
• Special OECD ambassador appointed to lobby OECD nations
• OECD rejected US position
Clipper 3, May 1996
• More escrow based on X.509
NRC report, May 1996
• Don’t restrict crypto
• Allow DES export
• Crypto debate can be carried out in public

Post-Clipper Crypto Restrictions (ctd)


Clipper 3.1, July 1996
• Even more software escrow
• Allow export now if you build in backdoors later
– Even if backdoors were available now, there’d be no way to
manage them
After 1996, an endless series of trivial revisions to Clipper
3.x
• “Dance of the seven (hundred) veils”
Boiling the Frog
Intent is to buy off the loudest opponents until only the
ones who can be safely ignored remain
• Banks and financial institutions pacified with SGC
• Fortune 500 pacified with special export dispensations
• Subsidiaries of US companies pacified with case-by-case
export of strong crypto
• Hospitals, some governments pacified with occasional special-
case exports when they complain loudly enough
Only software companies losing foreign sales and civil
liberties groups now remain

US to Relax Export Controls


This exact same announcement has been made (on average)
every three months since April 1994
• The pace has accelerated in the last year or two
The same DES export announcement has been recycled
more than half a dozen times
• “encryption products using keys of up to 56 bits will be
allowed for export”
• “a relaxation of controls for non-recovery encryption products
up to 56-bit key length DES”
• “allowed export of encryption whose keys are as long as 56
bits”
(one is from 1996, one from 1997, one from 1998)
US to Relax Export Controls (ctd)
In 1994 you couldn’t buy general-purpose strong crypto
from the US
... dozens of export control press releases later...
In 1999 you still can’t buy general-purpose strong crypto
from the US

Export Controls
The four rules of US export controls
1. They don’t make any sense
2. They change constantly
3. If you get it wrong, you go to jail
4. The enforcers have no sense of humour
Corollary
• If there’s a rule you don’t like, wait. It’ll change
• If there’s a rule you like, wait. It’ll change
US Export Controls
Based on International Traffic in Arms Regulations
(ITAR), 1943 law designed to stop Nazi Germany and
Imperial Japan from obtaining US technology
Redone as EAR (Export Administration Regulations) in
1996
• ITAR was handled by the State Department who allowed
almost nothing out
• Intent of EAR was to transfer controls to the more business-
friendly Commerce Department
– Unfortunately the State Department baggage came with
them
– Government showed software companies a piece of prime
real estate, then moved the boundary markers into a swamp
once they’d signed the cheque

US Export Controls (ctd)


Export controls don’t exist as a conventional law
• In the third week of August of each year, the US president
declares a national emergency under the International
Emergency Economic Powers Act, 1933 (based on the Trading
with the Enemy Act, 1917), with the duration of the emergency
being one year
– The emergency being used is the Great Depression
• Using the powers given to him by the act, he issues a
presidential decree which extends the export controls for
another year
• The following year at the same time, the charade is repeated
• The constitutionality of this has been called into question
Effects of Export Controls
Export controls are completely ineffective in stopping
anyone from acquiring any type of encryption
• Anyone who wants it can get strong encryption anywhere
within minutes
• Average public key sizes used when users have a choice
– 1024 bits in 1996
– 2048 bits in 1998
• (corresponding to the default “strong” encryption key size in
PGP 2.x and 5.x)
but...

Effects of Export Controls (ctd)


Export controls are highly effective in ensuring that the
masses have no real security
• The majority of all crypto in use worldwide is crippled or
broken
– 77% of Thawte users are using weak encryption
– 60% of them are in the US
– For most of its existence, Verisign issued weak (512-bit)
keys to users outside and inside the US
Effects of Export Controls (ctd)
In practice, US companies have strong encryption,
everyone else has weak encryption
Practical example of export control effects is demonstrated
by CIA hacking into European parliament computers in
1996 (Sunday Times):
“includes details of the private medical and financial records of
many MEPs and officials, and discussion documents on
confidential issues, including trade, tariff and quota
agreements. The breach came to light when officials believed
that American negotiators had been given advance warning of
confidential European Union positions in last year’s trade
negotiations”
“They were able to exploit the fact that parts of the system were
manufactured by two American firms”

Economic Effects of Controls


The worldwide crypto market is in the low billions, but
sales which require crypto are in the hundreds of billions
• 500 non-US firms sell 700 crypto products
• A web search for encryption produced over 50,000 hits
• RSADSI claims 300 million RSA products are used worldwide
Controls lead to lost sales and slowed growth in
encryption-dependent industries
Cost savings due to intranets and extranets can’t be realised
Economic Strategy Institute report estimates the US crypto
policy will cost US industry $50B in 2003, $65B in 2004
• Imposing GAK would cost $140B as everyone switched to
foreign products
Legal Challenges to US Controls
Three main challenges, intent is to get a Supreme Court
ruling on export controls
Karn Case, 1995
• “Applied Cryptography” can be exported as a book but not as a
floppy disk
Bernstein Case, 1995
• ITAR/EAR is unconstitutional since it violates the First
Amendment
Junger Case, 1996
• Export controls prevent the teaching of crypto to foreign
students (still being decided)

French and Russian Crypto Controls


French controls are based on the “decret de 18 avril 1939”
• On a scale of 1 to 8, encryption is rated 2
– Netscape is the second most dangerous weapon type
recognised by the French government
• Modified constantly over the years, “decret 86-250 du 18 fev
1986” explicitly mentions encryption software, “loi 90-1170 du
29 decembre 1990” requires approval for encryption use from
the Prime Minister
– “If you don’t tell us you’re using PGP, noone will bother
you. If you ask us for permission to use it, we will refuse”
— J. Vincent-Carrefour, head of the SCSSI
French and Russian Crypto Controls (ctd)
• Companies big enough to afford it would go to some lengths to
sidestep the controls
– Daily couriers carried information from Paris to London
– Information was encrypted and sent to corporate HQ
– Replies were decrypted and carried back to Paris
• 40-bit encryption was allowed in 1996 after a French
researcher demonstrated how easy it was to break
French controls were removed in 1999 after publication of
a European Parliament report detailing massive US
communications interception and surveillance initiatives
in Europe (Echelon)
• Sole effect of controls was to make US industrial espionage
easier

French and Russian Crypto Controls (ctd)


Russian controls created by presidential decree (ukaz),
April 1995
• Places encryption under the control of Federal Agency for
Governmental Communications and Information (FAPSI), a
department of the (former) KGB
• Requires that all commercial banks dealing with the Central
Bank of Russia, and by extension all businesses dealing with
that, use only FAPSI-approved encryption
• Provides a nice guaranteed money-earner for the (ex-)KGB.
“The severity of Russian law is compensated for by its
non-mandatoryness”. Individuals and companies openly
use and sell encryption with no repercussions
Non-US controls
Based on Cold War COCOM controls
• Predated PC’s, fax machines, the Internet, etc. Regarded as
archaic and unrealistic
• Run from the US embassy in Paris, seen as merely an
extension of the US State Department
– COCOM-era “New Zealand’s Export Controls” are actually
US documents with US wording and spelling
• Growing resentment in Europe where controls were seen as
US-imposed trade barriers

Wassenaar and Software Export


COCOM was disbanded in March 1994, reformed as the
Wassenaar Arrangement in November 1996
• Wassenaar = COCOM with the section numbers changed (and
localised spelling)
Wassenaar has four main purposes, of which one is to “not
impede bona fide civil transactions”
In recognition of COCOM’s unrealistic nature, Wassenaar
created blanket exceptions for public domain and mass-
market software
• Software exceptions are implemented via the General Software
Note (GSN)
Wassenaar and Software Export (ctd)
General Software Note (GSN)

(This note overrides any control within section D of Categories 0 to 9)

Categories 0 to 9 of this list do not control 'software' which is either:

a. Generally available to the public by being:
   1. Sold from stock at retail selling points, without restriction, by means of:
      a. Over-the-counter transactions;
      b. Mail order transactions; or
      c. Telephone order transactions; and
   2. Designed for installation by the user without further substantial support by the supplier; or
b. 'In the public domain'

Wassenaar and Software Export (ctd)


‘In the public domain’ is defined as:
'Technology' or 'software' which has been made
available without restrictions upon its further
dissemination (copyright restrictions do not remove
'technology' or 'software' from being 'in the public
domain')

(‘technology’ and ‘software’ are further defined)


This allows almost unrestricted crypto software export
Doctoring Wassenaar
After Wassenaar was finalised, Australia and New Zealand
altered it as follows:
With the exception of Category 5, Part 2 (Information
Security), Categories 0 to 9 of this list do not
control 'software' which is either:

The altered form of the Wassenaar text reverses the original intent and directly contravenes the requirement that the controls not impede bona fide exports
• No-one has ever been able to explain why this alteration was made, or by whom

Enforcing the Controls


“My life as an international arms courier”, 1995
• Attempt to export US-exportable crypto device
• No-one knew how to handle this “routine” export
“Anyone trying to follow the regulations is forced to jump
through pointless hoops so obscure that even the people
charged with enforcing them don’t know what to make of
them”
“My life as a Kiwi arms courier”, 1998
• No-one in NZ knows what to do either
Enforcing the Controls (ctd)
Typical effects of controls on US companies
• “Order placed by large foreign company”
• “Advised that approval would be unlikely”
• “Contract went to foreign competitor”
There are hundreds of these cases, totalling hundreds of
millions of dollars

Menwith Hill
World’s largest regional sigint interception centre (RSOC)
Located in UK, staffed by 1,200 US personnel
Intercepts communications from all over Europe for
transmission back to the US
• 28 radomes
• Outgoing comms capacity for 100,000 simultaneous phonecalls
• Taps into UK microwave trunk at Hunters Stones, this carried
almost all UK long-distance calls in the 1970’s and 1980’s
• NSA director Studeman statement on state of interception in
1992
– 2 million intercepted messages/hour
– 17.5 billion intercepted messages/year
Menwith Hill (ctd)
SILKWORTH
• 56 satellites which intercept long-distance microwave links
• Mercury, Mentor, Trumpet, satellites controlled via the
RUNWAY radomes
MOONPENNY
• Unauthorised interception of standard satellite
communications

Echelon
US interception stations similar to Menwith Hill are
scattered worldwide
Intelsat
• Morwenstow, UK (Atlantic satellites)
• Rossman, North Carolina (Atlantic satellites)
• Sugar Grove, West Virginia (Atlantic satellites)
• Yakima, Washington (Pacific satellites)
• Geraldton, West Australia (Indian ocean satellites)
• Waihopai, NZ (Pacific satellites)
Echelon (ctd)
Other satellites
• Bad Aibling, Germany
• Bude, Cornwall
• Kojarena, Australia
• Leitrim, outside Ottawa
• Menwith Hill again
• Misawa, Japan
• Sabana Seca, Puerto Rico (US)
• Shoal Bay, outside Darwin

Echelon (ctd)
Radio
• Many sites in the US and UK
• Bamaga, Australia
• Diego Garcia, Indian Ocean
• Tangimoana, New Zealand
Currently around 120 satellite collection systems are in
operation
• 40 targeting western commercial communications satellites
• 30 controlling space-based interception satellites
• 50 targeting Soviet communications satellites (some have since
been reassigned to commercial western satellites)
Echelon (ctd)
US satellites monitor terrestrial radio, microwave, and
cellphone communications
• Operated by CIA and NSA, launched by NRO
– Ferret in 1960’s
– Canyon, Rhyolite, Aquacade in 1970’s
– Chalet, Vortex, Magnum, Orion, Jumpseat in 1980’s
– Mercury, Mentor, Trumpet in 1990’s
– Cost ~$1B each
• Orion, Vortex intercept telecoms, Trumpet intercepts
cellphones
• Britain (Project Zircon) and France (Project Zenon) have
attempted similar schemes

Echelon (ctd)
• Ground stations located in
– Buckley ANGB, Denver, Colorado
– Menwith Hill, UK
– Bad Aibling, Germany
– Pine Gap (Merino), Australia
Information is collected and processed at Regional Sigint
Operations Centres (RSOC)
• European RSOC, Bad Aibling, Germany
• Central RSOC, Fort Gordon, Georgia
• Pacific RSOC, Kunia, Hawaii
• Atlantic RSOC, Menwith Hill, UK
• Southern RSOC, Lackland AFB, Texas
Echelon (ctd)
Stations intercept private phonecalls, faxes, telexes, emails,
and other communications and forward them to the NSA
All communications are automatically scanned for
keywords (PATHFINDER at Menwith Hill) and/or voice
patterns (VOICECAST at Menwith Hill)
• Economic information is forwarded to US companies by the
Office of Intelligence Liaison
• Preferential beneficiaries are the defence contractors
(Lockheed, Boeing, Loral, TRW, Raytheon) who built Echelon
Echelon is covered in the European Parliament reports
“Assessing the Technologies of Political Control” and
“Interception Capabilities 2000”

Blind Signal Demodulation


Signal demodulation without the cooperation of the
sender/receiver
Avoids the need for adaptive equalisation or other
initialisation and training
• Automatically adapts to modulation techniques such as QAM
• Can adapt to unknown baud rates (V.34 can employ any of six
symbol rates)
• A decade ago this was regarded as impossible to do
Implemented in standard DSP hardware (modems) or
ASICs (digital video, digital microwave)
Modem signals can be demodulated in software using a
Pentium MMX/MicroSparc
Blind Signal Demodulation (ctd)
Typical commercial blind demodulation equipment
• Voice Channel Demodulator
– Input = E1 or E3
– Output = all leased-line and dialup modem, fax, voice, and
digital data signals with all data and protocols (eg V.42bis
compression, PPP, and Internet protocols like POP for a
modem link) decoded

Blind Signal Demodulation (ctd)


• Signals Analysis Workstation
– Input = Any type of link signal (FDM basebands, IF
signals, PCM bitstreams, DS1 bitstreams, Ethernet)
– Output = modem, fax, pager, cellphone, voice data decoded
and ready for use
– “VGC content identification, signalling recognition, train-
on-data capability. Easy to use GUI with extensive online
help”
• Ex-NSA satellite interception gear is occasionally sold as
surplus
Data Analysis
Custom hardware used to speed analysis
• Paracel Fast Data Finder (FDF) contains 6,000 to 12,000
custom processors
• “The fastest, most accurate adaptive information filtering
system in the world”
• Typical application compares 1GB of data against 50,000
match profiles every day
• Standard US test benchmark involves locating information
about “Airbus subsidies”
NSA-developed N-gram analysis, a general method to
retrieve data according to topic
• “Find every document covering the same topic as this one”
• System works on very large data sets and in presence of errors
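An illustrative sketch of retrieval by character n-gram profiles. This generic cosine-similarity comparison shows only the general idea of topic matching, not the NSA system; the query echoes the "Airbus subsidies" benchmark mentioned above:

# Rank documents by character n-gram similarity to a query
from collections import Counter
from math import sqrt

def ngrams(text: str, n: int = 3) -> Counter:
    text = " ".join(text.lower().split())
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def similarity(a: str, b: str) -> float:
    pa, pb = ngrams(a), ngrams(b)
    dot = sum(pa[g] * pb[g] for g in pa)
    return dot / (sqrt(sum(v * v for v in pa.values())) *
                  sqrt(sum(v * v for v in pb.values())))

query = "government subsidies for the Airbus consortium"
docs = ["EU subsidies to Airbus draw US complaints",
        "recipe for a light sponge cake"]
print(sorted(docs, key=lambda d: similarity(query, d), reverse=True)[0])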

Undersea Cable Tapping


Operation IVY BELLS tapped Soviet cables in the Sea of
Okhotsk
• Tapping occurred from 1972 to 1982, when an NSA employee
sold the details to the Soviet Union
• From 1979 to 1992 a cable in the Barents Sea was similarly
tapped
• Submarine crews who placed the taps earned presidential
citations
• Every year from 1994 to 1997, crews have received similar
commendations (no-one knows what for)
Echelon in Action
German company Enercon GmbH develops a new type of
wind energy generator
Shortly afterwards, US company Kennetech filed a patent for the identical technology in the US
• Obtained a court order preventing Enercon from operating in
the US
Loss to Enercon: 100 million DM, 300 jobs
• Enercon now uses secure communications methods
Enercon data was probably intercepted via the NSA RSOC
in Bad Aibling, Germany
• GCHQ ordered UK patent office to use 256-bit public-key
encryption to communicate with European patent office in
Munich, Enercon may have been using similar “security”

Other Typical Echelon Uses


• Aiding transfer of $200M Indonesian deal from NEC to AT&T
(Der Spiegel)
• Forwarding details of Thomson-CSF deal in Brazil to
Raytheon (Baltimore Sun)
• Obtaining Japanese research on advanced automobiles for
Ford, GM, and Chrysler (Mainichi)
• Providing information to US negotiators facing Japanese car
companies in trade dispute (New York Times)
• Providing information on APEC deals to Democratic Party
campaign contributors (Insight Magazine)
• Intercepting Mexican trade representatives during NAFTA
negotiations (Financial Post (Canada))
• Intercepting Canadian negotiations for sale of 3 reactors to
South Korea (Financial Post (Canada))
• Monitoring activities of Robert Maxwell (Financial Mail (UK))
Other Typical Echelon Uses (ctd)
NSA also targets private individuals
• NSA maintained 1,056 pages of files on Princess Diana
(Washington Post)
• NSA produced 39 internal publications on Diana
• Information was collected over a period of years
• “NSA systematically intercepts international communications,
both voice and cable”
— NSA Director Lt.General Lew Allen testifying before
Congress

Other Typical Echelon Uses (ctd)


“Within Europe, all email, telephone, and fax communications are
routinely intercepted by the United States National Security
Agency”
— European Parliament report “Assessing the Technologies of
Political Control”
• This report prompted the French government to remove its
crypto restrictions
“The end of the Cold War has not brought to an end the Echelon
eavesdropping system. This system has become a weapon of
economic warfare”
— Rossiyskaya Gazeta (Russian state-funded daily paper)
Echelon is “this incredible communications vacuum cleaner”
— Il Mondo
Other Typical Echelon Uses (ctd)
“Former intelligence officials say tips based on spying […] regularly
flow from the Commerce Department to US companies to help
them win contracts overseas”
— The Baltimore Sun

Interception Capabilities 2000


European parliament report on Echelon
• Annual communications interception budget = $15-20 billion
• “Comprehensive systems exist to access, intercept and process
every important modern form of communications, with few
exceptions”
• Intent of US diplomatic initiatives over crypto controls was
motivated by intelligence collection requirements, “a long-term
program which undermines the communications privacy of
European governments, companies, and citizens”
• “Documents obtained under the Freedom of Information Act
indicate that [crypto] policymaking was led exclusively by NSA
officials, sometimes to the complete exclusion of police or judicial
officials”
Echelon and Export Controls
With the end of the cold war, intelligence agency concerns
switched from INFOSEC and COMSEC to JOBSEC
Increasingly, economic and industrial data rather than
military data was targeted
• Some countries had been doing this for decades
• Intelligence agencies feed information obtained from foreign
companies back to favoured local companies
– Whole books have been written about these things
• If crypto were freely available, this goldmine of economic,
industrial, and trade information would dry up
– Despite the rhetoric about terrorists and pornographers and
other bogeymen, it’s really about money

Echelon and Export Controls (ctd)


Export controls are utterly ineffective on an individual
basis, but extremely effective for blanket surveillance
and espionage
• Export controls help criminals and terrorists by leaving
information systems vulnerable to attack
If crypto becomes widespread, the spooks will lose a $x00-
million-dollar investment in surveillance technology
• The export controls will never go away if the spooks can help it
“The real aim of current policy is to ensure the continued
effectiveness of US information warfare assets against
individuals, businesses and governments in Europe and
elsewhere” — Ross Anderson
Cloud Cover
Confidentiality Key Infrastructure (CKI)
Designed by CESG, the trading name of GCHQ
Design goals
• PKI provides trust infrastructure for keys
• CKI provides backdoor access infrastructure for keys

Cloud Cover (ctd)


CA’s are replaced by certificate management authorities
(CMA’s)
• CMA’s provide shared key generation capability
• Can recover the confidentiality keys used by both parties
• Can recover signature keys distributed via confidentiality keys
• Can revoke the ability of parties to communicate in private
CMA’s were referred to as “trusted third parties”, yet
another new synonym for GAK
Problems with Cloud Cover
Provides no benefit over PKI, and many liabilities
Very complex protocol
• Generation of a simple shared key is a laborious, multi-step
process
• Nothing works without the CMA’s cooperation
Assumes the only threat is from outsiders
• UK security incident statistics show ~95% of attacks are by
insiders
• Cloud Cover facilitates these attacks immensely

Problems with Cloud Cover (ctd)


Attempt made to sell Cloud Cover to the National Health
Service (NHS)
• Rejected by the British Medical Association and NHS
Flaws found in the protocol
• Still being pushed by CESG in hospital “pilot projects”
DTI Proposals
Only GAK CA’s (and signatures) will be recognised by
law
Government is allowed secret access to GAK’d keys
• Access is granted by request, not by court-ordered warrant
• GAK accesses/usage must be kept secret
Non-UK (non-GAK) signatures will not be recognised
• Under the EU digital signature reciprocity rules, UK signatures
will not be recognised anywhere else
• “We need to make sure all our laws and rules are e-commerce
friendly”

DTI Proposals (ctd)


UK companies/individuals are given a choice:
• Submit to warrantless secret surveillance of private
communications
or
• Opt out of e-commerce
“This is not mandatory key escrow”
DTI were awarded the 1998 Big Brother Award (national
government category) for their efforts
GAK Problems
Critically dependent on the honesty of criminals in
complying with GAK requirements
Trivially defeated by
• Use of non-GAK software
• Double encryption with strong crypto hidden by GAK crypto
• Use of doctored GAK software (Clipper protocol failure)
• “If the sender and receiver collaborate to defeat KR [key
recovery], there is no technical method from preventing this”
— NSA study on key recovery
Many keys can’t be GAK’d
• Session keys are set up and discarded on the fly
• Securely transporting this continuous flood of keys to a GAK
centre is practically impossible

GAK Problems (ctd)


Building the infrastructure is well beyond the state of the
art
• Law enforcement requires 24/7 access to keys, usually in real
time
• After 10 years of work on X.509 we can’t even move public
keys around yet

Handing over keys demonstrates a lack of presumption of privacy → no warrant necessary
• ECPA ruled that cordless phones, radio communications have
no expectation of privacy
NSA Study on Key Recovery
“Threat and Vulnerability Model for Key Recovery”,
February 1998
• Rogue users will bypass any KR mechanism
• Rogue KR agents/LE agents pose “the most formidable threat”
Summary of report: GAK won’t have any effect on the bad
guys, and greatly jeopardises the good guys
• Governments will try to implement it anyway
German government:
“A US-style ‘key recovery’ system cannot be reconciled with
national security interests”

GAK in Practice
Example of centrally-managed key centre: Bank with
25,000 employees
• Used centrally-managed mainframe passwords
• 30 full-time employees barely coped
GAK schemes are vastly more complex than this simple
password example
Conservative US estimate is 90M keys escrowed per year
• Using the banking key centre figures (30 staff per 25,000 keys, roughly 800 keys per person), 90M keys per year would require around 100,000 people to manage
• A secret shared by 100,000 people isn’t terribly secret
GAK in Practice (ctd)
The fact that GAK is so far beyond the state of the art is
probably the biggest protection against it being
implemented any time soon
