Remote Access VPN: Also called a virtual private dial-up network (VPDN), this is used in scenarios
where remote access to a network is essential. A remote access VPN carries data between a
company's private network and remote users through a third-party service provider (an enterprise
service provider). E.g. a sales team is usually spread across the globe; using a remote access VPN,
sales updates can be made from anywhere.
Site to Site VPN – Intranet based: This type of VPN is used when multiple remote locations need to
be joined into a single network. Machines at these remote locations work as if they were on one
network.
Site to Site VPN – Extranet based: This type of VPN is used when several different companies need to
work in a shared environment, e.g. distributors and service companies. This network is more manageable
and reliable.
IKE peer authentication methods: pre-shared keys, RSA digital signatures (the most popular), and RSA encrypted nonces.
5. Can you explain the basics of encryption in VPN?
A VPN can optionally use encryption. Traditionally it uses IPsec with an encryption method such as AES or 3DES. Encryption
takes a plaintext and a key and applies an algorithm to produce a ciphertext. The keys can be static or negotiated.
Symmetric cryptography uses the same secret (private) key to encrypt and decrypt data, whereas
asymmetric cryptography uses both a public and a private key. Symmetric encryption requires that the
secret key be known by both the party encrypting the data and the party decrypting it. Asymmetric
encryption allows you to distribute your public key to anyone, who can use it to encrypt the data they
want to send securely; that data can then only be decoded by the person holding the private key. This
eliminates the need to give someone the secret key (as with symmetric encryption) and risk having it
compromised. The issue with asymmetric encryption is that it is about 1,000 times slower than symmetric
encryption, which makes it impractical for encrypting large amounts of data. Also, to achieve the same
security strength, asymmetric encryption must use a much longer key than symmetric encryption.
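The symmetric case can be illustrated with a toy stream cipher: a keystream is derived from the shared key and XORed with the data, so the very same key both encrypts and decrypts. This is a sketch for illustration only; the keystream construction here is made up and is not a vetted cipher.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a keystream by hashing the key with a counter
    (a made-up toy construction, not a vetted cipher)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Symmetric: XOR is its own inverse, so one key does both directions."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"shared-secret"
plaintext = b"sales update: Q3 figures attached"
ciphertext = xor_cipher(key, plaintext)   # encrypt...
recovered = xor_cipher(key, ciphertext)   # ...and decrypt with the same key
```

Both parties must already hold `key`, which is exactly the key-distribution problem that asymmetric cryptography solves.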
http://www.encryptionanddecryption.com/algorithms/symmetric_algorithms.html
http://www.encryptionanddecryption.com/algorithms/asymmetric_algorithms.html
10. Can you explain the different components in PKI? PKI client, certification authority (CA), registration authority (RA),
certificates, and certificate distribution systems.
PKI Client: The PKI client is software that enables operation of USB eToken devices and implementation of
PKI-based eToken solutions. Certificate-based strong two-factor authentication, encryption and digital signing are
included in the eToken solution, which the PKI client makes secure and portable.
Certificate Authority: A CA is an entity that issues digital certificates for use by other parties. CAs are
characteristic of many public key infrastructure schemes. The certificate makes the subject's public key available
publicly; the matching private key is kept secret by the end user who generated the key pair.
Registration Authority: A registration authority verifies user requests for digital certificates and
tells the certificate authority to issue them. The RA is a part of the PKI.
Certificates: Certificates are used for authenticating network access, as they provide strong authentication
security for users and computers, allowing less secure password-based authentication methods to be
eliminated.
An attachment to an electronic message used for security purposes. The most common use of a digital
certificate is to verify that a user sending a message is who he or she claims to be, and to provide the
receiver with the means to encode a reply. The most widely used standard for digital certificates is X.509.
Additional details;
An individual wishing to send an encrypted message applies for a digital certificate from a Certificate Authority (CA).
The CA issues an encrypted digital certificate containing the applicant's public key and a variety of other identification
information. The CA makes its own public key readily available through print publicity or perhaps on the Internet.
The recipient of an encrypted message uses the CA's public key to decode the digital certificate attached to the
message, verifies it as issued by the CA and then obtains the sender's public key and identification information held
within the certificate. With this information, the recipient can send an encrypted reply.
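The verify-with-the-CA's-public-key step can be sketched with textbook RSA and deliberately tiny primes. All numbers and certificate fields here are illustrative; real CAs use 2048-bit (or longer) keys and padded signature schemes, not raw RSA.

```python
import hashlib

# Toy RSA key pair for the CA (tiny primes for illustration only).
p, q, e = 61, 53, 17
n = p * q                              # CA's public modulus
d = pow(e, -1, (p - 1) * (q - 1))      # CA's private exponent

def sign(cert_contents: bytes) -> int:
    """CA signs the hash of the certificate contents with its private key."""
    h = int.from_bytes(hashlib.sha256(cert_contents).digest(), "big") % n
    return pow(h, d, n)

def verify(cert_contents: bytes, signature: int) -> bool:
    """Anyone holding the CA's public key (n, e) can check the signature."""
    h = int.from_bytes(hashlib.sha256(cert_contents).digest(), "big") % n
    return pow(signature, e, n) == h

# The "certificate" binds an identity to a public key (fields invented here).
cert = b"subject=alice;public-key=...;issuer=ToyCA"
sig = sign(cert)
ok = verify(cert, sig)   # the recipient's check, using only public values
```

Tampering with `cert` after signing makes `verify` fail (with overwhelming probability at real key sizes), which is what lets a recipient trust the sender's public key held inside the certificate.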
http://www.spitzner.net/digcerts.html
http://www.au-kbc.org/research_areas/crypto/pkiFAQ.html
12. Can you explain tunneling?
Tunneling is a mechanism for transferring data securely between two networks. The data is split into smaller
packets and passed through the tunnel, encapsulated (and typically encrypted) along the way. Tunneling can be
implemented with the Point-to-Point Tunneling Protocol, among others.
Virtual private network technology is based on the idea of tunneling. VPN tunneling involves establishing and
maintaining a logical network connection (that may contain intermediate hops). On this connection, packets
constructed in a specific VPN protocol format are encapsulated within some other base or carrier protocol, then
transmitted between VPN client and server, and finally de-encapsulated on the receiving side.
For Internet-based VPNs, packets in one of several VPN protocols are encapsulated within Internet Protocol
(IP) packets. VPN protocols also support authentication and encryption to keep the tunnels secure.
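Encapsulation itself is just nesting one packet inside another's payload. A minimal sketch follows; the 4-byte header format and protocol id are invented for illustration and do not match any real tunneling protocol.

```python
import struct

def encapsulate(passenger: bytes, proto_id: int) -> bytes:
    """Carrier frame: an invented header (protocol id, length) followed
    by the original packet carried as opaque payload."""
    return struct.pack("!HH", proto_id, len(passenger)) + passenger

def decapsulate(frame: bytes):
    """Receiving side strips the outer header to recover the passenger."""
    proto_id, length = struct.unpack("!HH", frame[:4])
    return proto_id, frame[4:4 + length]

# A non-IP "passenger" packet rides inside a routable carrier frame.
inner = b"\x01\x02 NetBEUI-style payload"
frame = encapsulate(inner, proto_id=0x00F0)   # 0x00F0: arbitrary example id
proto, recovered = decapsulate(frame)
```

The network only ever routes on the outer frame; the inner packet arrives at the far end of the tunnel untouched.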
Voluntary Tunneling
The user's computer is an endpoint of the tunnel and acts as the tunnel client. Here the client (user) issues a
request to configure and create a voluntary tunnel. This requires a dial-up or LAN connection; an example of a
dial-up connection is home internet access, where a call is made to the ISP to obtain the connection.
Compulsory tunneling
In compulsory tunneling, a VPN remote access server, instead of the user, configures and creates the tunnel.
Hence the endpoint is the remote server, not the user.
Tunnels created manually are static tunnels; tunnels that are auto-discovered are dynamic tunnels. In
dynamic tunneling, TCP connections can be checked dynamically: if no connections exist that are routed through the
tunnel, a check for a more suitable gateway can be done. Static tunneling may at times require dedicated equipment.
17. Can you explain encapsulating, carrier and passenger protocol?
Encapsulating protocol: the protocol used to build the new packet around the original data packet. The carrier
protocol is the protocol of the network the information travels over, and the passenger protocol is the original
data being carried.
Through tunneling techniques, you can pass non-IP packets, or privately addressed IP packets, through a public IP
network. You can even route NetBEUI, the famous non-routable protocol, once it has been encapsulated for
tunneling through a VPN. The new data frame, or packet, is in fact a legal packet with proper addressing to travel
through the network; hidden safely within the payload portion of this new frame is the original (passenger) packet.
L2F, L2TP, and PPTP are all Layer 2 tunneling protocols that support access VPN solutions by tunneling PPP.
VPNs were usually built at Layer 2. As for where IPsec sits in the OSI model: it operates at Layer 3, the network layer.
http://vpnblog.info/pptp-vs-l2tp.html
http://www.sans.org/security-resources/malwarefaq/pptp-vpn.php
The Point-to-Point Protocol enables communication between two computers over a serial cable, phone line or fiber optic
line, e.g. the connection between an Internet Service Provider and a host. PPP also provides authentication. PPP
operates by sending request packets and waiting for acknowledge packets that accept, reject or try to change the
request. The protocol is also used to negotiate network addresses or compression options between the nodes.
The Point-to-Point Tunneling Protocol (PPTP), developed by Microsoft in conjunction with other technology
companies, is the most widely supported VPN method among Windows clients. PPTP is an extension of the Internet
standard Point-to-Point protocol (PPP), the link layer protocol used to transmit IP packets over serial links. PPTP
uses the same types of authentication as PPP (PAP, SPAP, CHAP, MS-CHAP v.1/v.2 and EAP).
PPTP establishes the tunnel but does not itself provide encryption; PPTP traffic is encrypted with the Microsoft
Point-to-Point Encryption (MPPE) protocol to create a secure VPN. PPTP has relatively low overhead, making it
faster than some other VPN methods.
Most old vulnerabilities in PPTP are fixed these days and you can combine it with EAP to enhance it to require
certificates as well. One advantage of using PPTP is that there is no requirement for a certificate infrastructure.
However EAP does use digital certificates for mutual authentication (both client and server) and higher security.
How it works: a PPTP tunnel is instantiated by communicating with the peer on TCP port 1723. This TCP connection is
then used to initiate and manage a second, GRE (Generic Routing Encapsulation) tunnel to the same peer.
Generic Routing Encapsulation is a tunneling protocol that can carry the Point-to-Point Protocol among many
other payloads: GRE encapsulates a wide variety of network layer protocol packet types inside IP tunnels. This is
done by creating a virtual point-to-point link between routers across an IP internetwork. It is a completely
stateless protocol. Soon after it is configured, the GRE tunnel interface comes up and stays up as long as a valid
tunnel source address or interface is up.
26. Can you explain CHAP? What is CHAP (Challenge-Handshake Authentication Protocol)?
CHAP authenticates a user to a network access server with a three-way handshake: the server sends a random
challenge, the peer replies with a hash (typically MD5) computed over the challenge and the shared secret, and
the server verifies the result. The secret itself is never sent over the link, and the server can repeat the
challenge periodically during the session.
By contrast, the Password Authentication Protocol (PAP) is one of the simplest authentication protocols used to
authenticate a user to a network access server; it is used by Internet service providers, and the Point-to-Point
Protocol supports it, as do the remote servers of network operating systems. PAP transmits unencrypted ASCII
passwords over the network and is therefore treated as insecure; it is used for authentication only when a
stronger authentication protocol, like CHAP, is not supported.
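The CHAP exchange (RFC 1994) is easy to sketch: the authenticator sends a random challenge, and the peer proves knowledge of the shared secret by returning MD5 over the identifier, secret and challenge, so the secret never crosses the wire. The secret value below is of course illustrative.

```python
import hashlib
import os

SHARED_SECRET = b"correct-horse"     # configured on both peers, never sent

def chap_response(ident: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response per RFC 1994: MD5 over identifier + secret + challenge."""
    return hashlib.md5(bytes([ident]) + secret + challenge).digest()

# 1. Authenticator sends a random challenge (and an identifier).
ident, challenge = 1, os.urandom(16)
# 2. Peer proves it knows the secret without transmitting it.
response = chap_response(ident, SHARED_SECRET, challenge)
# 3. Authenticator recomputes the hash and compares.
authenticated = response == chap_response(ident, SHARED_SECRET, challenge)
```

Because the challenge is fresh every time, a captured response cannot simply be replayed later, unlike a PAP password.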
30. Can you explain the broader steps of how L2F establishes the tunnel?
L2TP is an IETF standard and one of the key building blocks for VPNs in the dial access space.
L2TP combines the best features of Cisco's Layer 2 Forwarding (L2F) and Microsoft's Point-to-Point
Tunneling Protocol (PPTP), enabling mobile workforces to connect to their corporate intranets or extranets
wherever and whenever they require.
L2TP is a standard way to build access VPNs that simulate private networks using a shared infrastructure,
such as the Internet. These access VPNs offer access for mobile users, telecommuters, and small offices through
dial, ISDN, xDSL, and cable. Benefits of L2TP include per-user authentication and dynamic address allocation.
The L2TP message is encrypted with either Data Encryption Standard (DES) or Triple DES (3DES) using
encryption keys generated from the Internet Key Exchange (IKE) negotiation process.
SSTP
Secure Socket Tunneling Protocol (SSTP) is a new tunneling protocol that uses the HTTPS protocol over TCP port
443 to pass traffic through firewalls and Web proxies that might block PPTP and L2TP/IPsec traffic. SSTP provides a
mechanism to encapsulate PPP traffic over the Secure Sockets Layer (SSL) channel of the HTTPS protocol. The use
of PPP allows support for strong authentication methods, such as EAP-TLS. SSL provides transport-level security
with enhanced key negotiation, encryption, and integrity checking.
When a client tries to establish an SSTP-based VPN connection, SSTP first establishes a bidirectional HTTPS layer
with the SSTP server. Over this HTTPS layer, the protocol packets flow as the data payload.
37. Can you explain ISAKMP?
ISAKMP defines the procedures for authenticating a communicating peer, creation and management of Security
Associations, key generation techniques, and threat mitigation (e.g. denial of service and replay attacks). As a
framework,[1] ISAKMP is typically utilized by IKE for key exchange, although other methods have been implemented
such as Kerberized Internet Negotiation of Keys. A Preliminary SA is formed using this protocol; later a fresh keying
is done.
ISAKMP defines procedures and packet formats to establish, negotiate, modify and delete Security Associations. SAs
contain all the information required for execution of various network security services, such as the IP layer services
(such as header authentication and payload encapsulation), transport or application layer services, or self-protection
of negotiation traffic. ISAKMP defines payloads for exchanging key generation and authentication data. These
formats provide a consistent framework for transferring key and authentication data which is independent of the key
generation technique, encryption algorithm and authentication mechanism.
ISAKMP is distinct from key exchange protocols in order to cleanly separate the details of security association
management (and key management) from the details of key exchange. There may be many different key exchange
protocols, each with different security properties. However, a common framework is required for agreeing to the
format of SA attributes, and for negotiating, modifying, and deleting SAs. ISAKMP serves as this common framework.
ISAKMP can be implemented over any transport protocol. All implementations must include send and receive
capability for ISAKMP using UDP on port 500.
http://en.wikipedia.org/wiki/ISAKMP
Related keywords: ISAKMP, IKE, Oakley, phase 1, phase 2, SAs, key regeneration, 3DES, DES, MD5, SHA-1.
Data origin authentication, data integrity and replay protection are provided by the Authentication Header (AH)
protocol; data confidentiality is not. AH ensures data integrity with a checksum that is a message
authentication code, and it uses a shared secret key to ensure data origin authentication. For replay
protection, AH uses a sequence number field within the AH header.
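The integrity and origin-authentication service AH provides can be illustrated with an HMAC over the packet, keyed with the SA's shared secret. This is a sketch of the idea only, not the actual AH ICV computation, which also covers selected IP header fields; the key and packet bytes are invented.

```python
import hashlib
import hmac

key = b"sa-shared-secret"                         # negotiated for the SA
packet = b"\x45\x00" + b"...rest of header and payload..."

# Sender computes an integrity check value (ICV) over the packet.
icv = hmac.new(key, packet, hashlib.sha256).digest()

# Receiver recomputes with the same shared key and compares in constant time.
valid = hmac.compare_digest(
    icv, hmac.new(key, packet, hashlib.sha256).digest())
# Any modification in transit changes the recomputed MAC.
tampered_ok = hmac.compare_digest(
    icv, hmac.new(key, packet + b"!", hashlib.sha256).digest())
```

Only a holder of the shared key can produce a matching MAC, which is what ties integrity to data origin authentication.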
Encapsulating Security Payload (ESP) is a key protocol of the Internet security architecture, designed to
provide a mix of security services in IPv4 and IPv6. ESP provides confidentiality and integrity by encrypting
the protected data and placing it in the data portion of the IP ESP packet. Depending on the user's security
requirements, this mechanism can be used to encrypt either a transport-layer segment or an entire IP
datagram. Encapsulating the protected data provides the necessary confidentiality for the entire original
datagram.
The Internet Key Exchange protocol, IKE, is used as a method of distributing these "session keys", as well as
providing a way for the VPN endpoints to agree on how the data should be protected.
45. Can you explain IKE phases? Can you explain IKE modes?
IKE phase 1 is used to authenticate the two VPN gateways or VPN clients to each other, for example by confirming
that the remote gateway has a matching pre-shared key.
However since we do not want to publish too much of the negotiation in plaintext, we first agree upon a way of
protecting the rest of the IKE negotiation. This is done, as described in the previous section, by the initiator sending
a proposal-list to the responder. When this has been done, and the responder accepted one of the proposals, we
try to authenticate the other end of the VPN to make sure it is who we think it is, as well as proving to the remote
gateway that we are who we are.
Authentication can be accomplished through pre-shared keys, certificates or public key encryption. Pre-shared
keys are the most common authentication method today. PSK and certificates are supported by the Amaranten
Firewall VPN module.
During IKE phase 2, new keying material is extracted from the Diffie-Hellman key exchange performed in phase 1,
to provide session keys for protecting the VPN data flow.
If PFS, Perfect Forward Secrecy, is used, a new Diffie-Hellman exchange is performed for each phase 2
negotiation. While this is slower, it ensures that no keys are dependent on any previously used keys: no
keys are extracted from the same initial keying material, so in the unlikely event that some key is
compromised, no subsequent keys can be derived from it.
Once the phase-2 negotiation is finished, the VPN connection is established and ready for use.
http://www.amaranten.com/support/user%20guide/VPN/IPSec_Basics/Overview.htm
46. Can you explain transport and tunnel mode in detail with datagram packets?
In transport mode, only the IP payload (e.g. a TCP segment) is protected and the original IP header is retained,
so the datagram looks like [orig IP header][AH/ESP header][payload]; it is typically used host-to-host. In tunnel
mode, the entire original IP datagram is protected and wrapped in a new IP header:
[new IP header][AH/ESP header][orig IP header][payload]; it is typically used gateway-to-gateway, as in
site-to-site VPNs.
48.Can you explain the difference between trusted and untrusted networks?
A VPN is an encrypted connection over a public network between terminating points of two or more private
networks.
Internet Protocol security (IPSec) is a framework of open standards for helping to ensure private, secure
communications over Internet Protocol (IP) networks through the use of cryptographic security services. IPSec
supports network-level data integrity, data confidentiality, data origin authentication, and replay protection. Because
IPSec is integrated at the Internet layer (layer 3), it provides security for almost all protocols in the TCP/IP suite, and
because IPSec is applied transparently to applications, there is no need to configure separate security for each
application that uses TCP/IP.
IPSec helps defend against:
Network-based attacks from untrusted computers, attacks that can result in denial of service of
applications, services, or the network
Data corruption
Data theft
User-credential theft
AES/Rijndael encryption
Rijndael is a block cipher designed by Joan Daemen and Vincent Rijmen as a candidate algorithm for the AES. AES
stands for Advanced Encryption Standard, a symmetric key encryption technique that replaces the formerly
common Data Encryption Standard (DES). The Advanced Encryption Standard algorithm, approved by NIST in
December 2001, uses 128-bit blocks.
The cipher currently supports key lengths of 128, 192, and 256 bits. Each encryption key size causes the algorithm to
behave slightly differently, so the increasing key sizes not only offer a larger number of bits with which you can
scramble the data, but also increase the complexity of the cipher algorithm.
Blowfish
Blowfish was designed in 1993 by Bruce Schneier as a fast, free alternative to existing encryption algorithms. Since
then it has been analyzed considerably, and it is slowly gaining acceptance as a strong encryption algorithm. Blowfish
is unpatented and license-free, and is available free for all uses.
The only known attacks against Blowfish are based on its weak key classes.
CAST
CAST stands for Carlisle Adams and Stafford Tavares, the inventors of CAST. CAST is a popular 64-bit block cipher
which belongs to the class of encryption algorithms known as Feistel ciphers.
CAST-128 is a DES-like Substitution-Permutation Network (SPN) cryptosystem. It has the Feistel structure and
utilizes eight fixed S-boxes. CAST-128 supports variable key lengths between 40 and 128 bits.
CAST-128 is resistant to both linear and differential cryptanalysis. Currently, there is no known way of breaking CAST
short of brute force. CAST is now the default cipher in PGP.
Data Encryption Standard (DES) is a symmetric block cipher with a 64-bit block size that uses a 56-bit key.
In 1977 the Data Encryption Standard (DES), a symmetric encryption algorithm, was adopted in the United States as
a federal standard.
DES encrypts and decrypts data in 64-bit blocks using a 56-bit key: it takes a 64-bit block of plaintext as input
and outputs a 64-bit block of ciphertext. It always operates on blocks of equal size, and it uses both
permutations and substitutions in the algorithm. DES has 16 rounds, meaning the main algorithm is repeated 16
times to produce the ciphertext. The work required to attack the cipher grows with the number of rounds, so
increasing the number of rounds increases the security of the algorithm.
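The round structure of DES is a Feistel network: each round swaps the halves and XORs one half with a keyed function of the other, which makes the cipher invertible even though the round function itself need not be. A toy sketch follows; the round function here is a stand-in (real DES uses expansion, S-boxes and permutations), and the subkeys are arbitrary example values.

```python
import hashlib

def round_fn(half: bytes, subkey: bytes) -> bytes:
    """Stand-in keyed round function (real DES uses S-boxes, permutations)."""
    return hashlib.sha256(subkey + half).digest()[:len(half)]

def feistel_encrypt(block: bytes, subkeys: list) -> bytes:
    """Each round: new left = old right; new right = old left XOR F(right)."""
    mid = len(block) // 2
    left, right = block[:mid], block[mid:]
    for k in subkeys:
        left, right = right, bytes(
            a ^ b for a, b in zip(left, round_fn(right, k)))
    return left + right

def feistel_decrypt(block: bytes, subkeys: list) -> bytes:
    """Running the rounds backwards undoes the cipher, whatever round_fn is."""
    mid = len(block) // 2
    left, right = block[:mid], block[mid:]
    for k in reversed(subkeys):
        right, left = left, bytes(
            a ^ b for a, b in zip(right, round_fn(left, k)))
    return left + right

subkeys = [b"k1", b"k2", b"k3", b"k4"]   # one subkey per round (DES has 16)
block = b"8bytes!!"                       # a 64-bit block, as in DES
ciphertext = feistel_encrypt(block, subkeys)
recovered = feistel_decrypt(ciphertext, subkeys)
```

Note that decryption never inverts `round_fn`; it only re-derives each round's XOR mask, which is the structural trick DES relies on.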
For many years, DES-enciphered data were safe because few organizations possessed the computing power to
crack it. But in July 1998 a team of cryptographers cracked a DES-enciphered message in 3 days, and in 1999 a
network of 10,000 desktop PCs cracked a DES-enciphered message in less than a day. DES was clearly no longer
invulnerable and since then Triple DES (3DES) has emerged as a stronger method.
Triple DES encrypts data three times and uses a different key for at least one of the three passes giving it a
cumulative key size of 112-168 bits. That should produce an expected strength of something like 112 bits, which is
more than enough to defeat brute force attacks. Triple DES is much stronger than (single) DES, however, it is rather
slow compared to some new block ciphers. However, cryptographers have determined that triple DES is
unsatisfactory as a long-term solution, and in 1997, the National Institute of Standards and Technology (NIST)
solicited proposals for a cipher to replace DES entirely, the Advanced Encryption Standard (AES).
IDEA
IDEA stands for International Data Encryption Algorithm. IDEA is a symmetric encryption algorithm developed
by Dr. X. Lai and Prof. J. Massey to replace the DES standard. Unlike DES, it uses a 128-bit key, a length that
makes it infeasible to break by simply trying every key. It has been one of the best publicly known
algorithms for some time: it has been around for several years, and no practical attacks on it have been
published despite numerous attempts to analyze it.
IDEA is resistant to both linear and differential analysis.
RC2
RC2 is a variable-key-length cipher. It was invented by Ron Rivest for RSA Data Security, Inc. Its details have not
been published.
RC4
RC4 was developed by Ron Rivest in 1987. It is a variable-key-size stream cipher, accepting keys of arbitrary
length up to 2048 bits (256 bytes). The algorithm is very fast; its exact security margin is debated, but
breaking it is not trivial. Because of its speed, it has been used in many applications. RC4 is
essentially a pseudo-random number generator whose output is exclusive-ORed with the data
stream. For this reason, it is very important that the same RC4 key never be used to encrypt two different data
streams.
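RC4 is short enough to write out in full: a key-scheduling pass permutes a 256-byte state, then the generator emits one keystream byte per data byte, which is XORed with the stream. Encryption and decryption are the same operation, so applying the function twice with the same key round-trips. Shown for illustration only; RC4 is considered broken for new designs.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): permute the state using the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA), XORed with the data.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

key = b"Key"
ciphertext = rc4(key, b"Plaintext")
plaintext = rc4(key, ciphertext)   # the same function decrypts
```

The round-trip property is also why keystream reuse is fatal: XORing two ciphertexts made with the same key cancels the keystream entirely.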
RC6
RC6 is a symmetric key block cipher derived from RC5. It was designed by Ron Rivest, Matt Robshaw, Ray Sidney,
and Yiqun Lisa Yin to meet the requirements of the Advanced Encryption Standard (AES) competition. RC6
encryption algorithm was selected among the other finalists to become the new federal Advanced Encryption
Standard (AES).
SEED
SEED is a block cipher developed by the Korea Information Security Agency since 1998. Both the block and key size
of SEED are 128 bits and it has a Feistel Network structure which is iterated 16 times. It has been designed to resist
differential and linear cryptanalysis as well as related key attacks. SEED uses two 8x8 S-boxes and mixes the XOR
operation with modular addition. SEED has been adopted as an ISO/IEC standard (ISO/IEC 18033-3), an IETF RFC,
RFC 4269 as well as an industrial association standard of Korea (TTAS.KO-12.0004/0025).
Serpent
Serpent is a very fast and reasonably secure block cipher developed by Ross Anderson, Eli Biham and Lars
Knudsen. Serpent can work with different key lengths. Serpent was one of the five finalists in the competition
for the new federal Advanced Encryption Standard (AES), although Rijndael was ultimately selected.
TEA
The Tiny Encryption Algorithm is a very fast and moderately secure cipher produced by David Wheeler and Roger
Needham of the Cambridge Computer Laboratory. There is a known weakness in its key schedule, so it is not
recommended where utmost security is required. TEA is provided in 16- and 32-round versions; the more rounds
(iterations), the more secure, but the slower.
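TEA is compact enough to show in full: 32 cycles of add/shift/XOR mixing on a 64-bit block (two 32-bit halves) with a 128-bit key (four 32-bit words), using the constant delta = 0x9E3779B9. The masking emulates C's 32-bit wraparound; the key below is an arbitrary example.

```python
MASK, DELTA = 0xFFFFFFFF, 0x9E3779B9

def tea_encrypt(v, key):
    """Encrypt one 64-bit block given as two 32-bit halves; 32 cycles."""
    v0, v1 = v
    k0, k1, k2, k3 = key
    total = 0
    for _ in range(32):
        total = (total + DELTA) & MASK
        v0 = (v0 + (((v1 << 4) + k0) ^ (v1 + total) ^ ((v1 >> 5) + k1))) & MASK
        v1 = (v1 + (((v0 << 4) + k2) ^ (v0 + total) ^ ((v0 >> 5) + k3))) & MASK
    return v0, v1

def tea_decrypt(v, key):
    """Run the 32 cycles in reverse, subtracting in the opposite order."""
    v0, v1 = v
    k0, k1, k2, k3 = key
    total = (DELTA * 32) & MASK
    for _ in range(32):
        v1 = (v1 - (((v0 << 4) + k2) ^ (v0 + total) ^ ((v0 >> 5) + k3))) & MASK
        v0 = (v0 - (((v1 << 4) + k0) ^ (v1 + total) ^ ((v1 >> 5) + k1))) & MASK
        total = (total - DELTA) & MASK
    return v0, v1

key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)
block = (0xDEADBEEF, 0x0BADF00D)
ct = tea_encrypt(block, key)
pt = tea_decrypt(ct, key)
```

The whole cipher fits in a handful of lines, which is exactly the design goal its authors were after.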
Triple DES
Triple DES is a variation of the Data Encryption Standard (DES). Each DES key is 64 bits long, consisting of 56
effective key bits and 8 parity bits, and the block size is 8 bytes: Triple DES encrypts data in 8-byte chunks. The
idea behind Triple DES is to improve the security of DES by applying DES encryption three times using three
different keys. The Triple DES algorithm is very secure (major banks use it to protect valuable transactions),
but it is also very slow.
Twofish
Twofish is a symmetric block cipher. Twofish has a block size of 128 bits and accepts keys of any length up to 256
bits. Like Blowfish, Twofish has key-dependent S-boxes.
Twofish encryption algorithm was designed by Bruce Schneier, John Kelsey, Chris Hall, Niels Ferguson, David
Wagner and Doug Whiting. The National Institute of Standards and Technology (NIST) investigated Twofish as one
of the candidates for the replacement of the DES encryption algorithm.
Symmetric algorithms encrypt and decrypt with the same key. The main advantages of symmetric algorithms are their
security and high speed. Asymmetric algorithms encrypt and decrypt with different keys. Data is encrypted with a
public key, and decrypted with a private key. Asymmetric algorithms (also known as public-key algorithms) need at
least a 3,000-bit key to achieve the same level of security as a 128-bit symmetric algorithm. Asymmetric algorithms
are incredibly slow and it is impractical to use them to encrypt large amounts of data. Generally, symmetric algorithms
are much faster to execute on a computer than asymmetric ones. In practice they are often used together, so that a
public-key algorithm is used to encrypt a randomly generated encryption key, and the random key is used to encrypt
the actual message using a symmetric algorithm. This is sometimes called hybrid encryption.
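Hybrid encryption can be sketched end-to-end with toy numbers: a random session key is encrypted with the recipient's (tiny, illustrative) RSA public key, and the bulk message with a fast symmetric keystream derived from that session key. The RSA parameters and keystream construction are stand-ins; real systems use large keys, padding, and a vetted cipher such as AES.

```python
import hashlib
import os

# Toy RSA key pair of the recipient (tiny primes for illustration only).
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))      # recipient's private exponent

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Fast symmetric step: XOR with a hash-derived keystream (toy only)."""
    ks, counter = b"", 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, ks))

# Sender: random session key, RSA-encrypt it, stream-encrypt the message.
message = b"the actual bulk message, arbitrarily long"
session = int.from_bytes(os.urandom(8), "big") % n   # must be < n (toy RSA)
enc_session = pow(session, e, n)       # slow public-key step on a tiny input
ciphertext = xor_stream(str(session).encode(), message)

# Recipient: RSA-decrypt the session key, then stream-decrypt the message.
session2 = pow(enc_session, d, n)
plaintext = xor_stream(str(session2).encode(), ciphertext)
```

The expensive asymmetric operation touches only the short session key, while the cheap symmetric cipher handles the bulk data, which is why the hybrid split is the norm in practice.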
Symmetric algorithms (Symmetric-key algorithm) use the same key for Encryption and Decryption. Symmetric
algorithms require that both the sender and the receiver agree on a key before they can exchange messages
securely. Symmetric-key algorithms can be divided into stream algorithms (Stream ciphers) and Block algorithms
(Block ciphers). Asymmetric algorithms use a different key for encryption and decryption, and the decryption key
cannot be derived from the encryption key.
Symmetric-key algorithms are generally much less computationally intensive than asymmetric key algorithms. In
practice, this means that a quality asymmetric key algorithm is hundreds or thousands of times slower than a quality
symmetric key algorithm.
1. The problem with secret keys is exchanging them over the Internet or a large network while preventing them from
falling into the wrong hands. Symmetric-key algorithms require sharing the secret key: both the sender and the
receiver need the same key to encrypt or decrypt data, and anyone who knows the secret key can decrypt the
message. The weakness of symmetric-key algorithms is that if the secret key is discovered, all messages can be
decrypted. So secret keys need to be changed often and kept secure during distribution and use. In addition:
The receiver cannot verify that a message has not been altered.
The receiver cannot make sure that the message was sent by the claimed sender.
Data integrity and repudiation problems are solved with digital signatures, while the key distribution problem is
solved using RSA encryption or the DH key agreement algorithm.
Symmetric-key algorithms alone can't be used for authentication or non-repudiation purposes. Instead, hash
functions (e.g. MD5) are commonly used in combination with them.
2. There are two methods of breaking conventional/symmetric encryption - brute force and cryptanalysis. Brute force
is just as it sounds; using a method (computer) to find all possible combinations and eventually determine the
plaintext message. Cryptanalysis is a form of attack that attacks the characteristics of the algorithm to deduce a
specific plaintext or the key used. One would then be able to figure out the plaintext for all past and future messages
that continue to use this compromised setup.
Diffie-Hellman
Diffie-Hellman, invented in 1976, is the first asymmetric encryption algorithm, based on discrete logarithms in a
finite field. It allows two users to establish a shared secret key over an insecure medium without any prior secrets.
Diffie-Hellman (DH) is a widely used key exchange algorithm. In many cryptographical protocols, two parties wish to
begin communicating. However, let's assume they do not initially possess any common secret and thus cannot use
secret key cryptosystems. The key exchange by Diffie-Hellman protocol remedies this situation by allowing the
construction of a common secret key over an insecure communication channel. It is based on a problem related to
discrete logarithms, namely the Diffie-Hellman problem. This problem is considered hard, and it is in some instances
as hard as the discrete logarithm problem.
The Diffie-Hellman protocol is generally considered to be secure when an appropriate mathematical group is used. In
particular, the generator element used in the exponentiations should have a large period (i.e. order). Usually, Diffie-
Hellman is not implemented on hardware.
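The exchange can be shown in a few lines: with public parameters (p, g), each side publishes g^x mod p and combines the other side's public value with its own private exponent, so both arrive at g^(ab) mod p. The prime here is tiny for readability; real deployments use standardized groups of 2048 bits or more.

```python
import secrets

# Public parameters agreed in advance (illustrative; see RFC 3526 groups).
p = 2_147_483_647      # a prime modulus (2^31 - 1)
g = 5                  # generator element

# Each side picks a private exponent and publishes only g^x mod p.
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
A = pow(g, a, p)       # Alice -> Bob, over the insecure channel
B = pow(g, b, p)       # Bob -> Alice

# Each side combines its own secret with the other's public value:
# (g^b)^a = (g^a)^b = g^(ab) mod p, so both derive the same key.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
```

An eavesdropper sees only p, g, A and B; recovering the shared value from those is the Diffie-Hellman problem the protocol's security rests on.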
The Digital Signature Algorithm (DSA) is a United States Federal Government standard (FIPS) for digital signatures. It
was proposed by the National Institute of Standards and Technology (NIST) in August 1991 for use in the Digital
Signature Standard (DSS), specified in FIPS 186 [1] and adopted in 1993. A minor revision was issued in 1996 as FIPS
186-1 [2], and the standard was expanded further in 2000 as FIPS 186-2 [3]. DSA is similar to the ElGamal signature
algorithm. It is fairly efficient, though not as efficient as RSA for signature verification. The standard defines
DSS to use the SHA-1 hash function exclusively to compute message digests.
The main problem with DSA is the fixed subgroup size (the order of the generator element), which limits the security
to around only 80 bits. Hardware attacks can be menacing to some implementations of DSS. However, it is widely
used and accepted as a good algorithm.
ElGamal
ElGamal is a public key cipher: an asymmetric key encryption algorithm for public-key cryptography which is
based on the Diffie-Hellman key agreement. ElGamal is the predecessor of DSA.
ECDSA
Elliptic Curve DSA (ECDSA) is a variant of the Digital Signature Algorithm (DSA) which operates on elliptic curve
groups. As with Elliptic Curve Cryptography in general, the bit size of the public key believed to be needed for
ECDSA is about twice the size of the security level, in bits.
XTR
XTR is an algorithm for asymmetric encryption (public-key encryption). XTR is a novel method that makes use of
traces to represent and calculate powers of elements of a subgroup of a finite field. It is based on the primitive
underlying the very first public key cryptosystem, the Diffie-Hellman key agreement protocol.
From a security point of view, XTR security relies on the difficulty of solving discrete logarithm related problems in the
multiplicative group of a finite field. Some advantages of XTR are its fast key generation (much faster than RSA),
small key sizes (much smaller than RSA, comparable with ECC for current security settings), and speed (overall
comparable with ECC for current security settings).