
Wireless Security and Privacy: Best Practices and Design Techniques

By Tara M. Swaminatha, Charles R. Elden

Publisher : Addison Wesley


Pub Date : September 13, 2002
ISBN : 0-201-76034-7
Pages : 304

Foreword
Wireless security is becoming increasingly important as wireless applications and systems are widely adopted.
Numerous organizations have already installed or are busy installing Wireless Local Area Networks (WLANs).
These networks, based on the IEEE 802.11b standard, are very easy to deploy and inexpensive. Other important
trends in wireless adoption include the introduction of wireless e-mail with devices such as the BlackBerry and
the Palm VII, rampant digital cell phone use (including the use of Short Message Service [SMS]), and the advent
of Bluetooth devices. Wireless is clearly here to stay.

But all is not well in the wireless universe. The risks associated with the adoption of wireless networking are
only now coming to light. A number of impressive attacks are possible and have been heavily publicized,
especially in the IEEE 802.11b arena. Since October 2000, at least ten major wireless security stories have
played out (see Table F.1). These stories were covered by the New York Times, the Wall Street Journal, CNN,
and NBC Nightly News, among others. Apparently, the world finds wireless security both interesting and
important.

Table F.1. A Chronology of Wireless Security Topics, Issues, and Stories (Incomplete)

When          | Who                                                                                | What                                                              | Web
October 2000  | Jesse Walker of the University of Maryland                                         | Several problems in WEP                                           | http://www.cs.umd.edu/~waa/wireless.html
January 2001  | U.C. Berkeley researchers Nikita Borisov, Ian Goldberg, and David Wagner           | Seminal work on WEP insecurity                                    | http://www.isaac.cs.berkeley.edu/isaac/wep-faq.html
March 2001    | University of Maryland researchers Bill Arbaugh, Narendar Shankar, and Justin Wan  | Several access control and authentication problems in 802.11b    | http://www.cs.umd.edu/~waa/wireless.pdf
June 2001     | Tim Newsham from @stake                                                            | A key generation algorithm problem leading to dictionary attacks | http://www.lava.net/~newsham/wlan/
August 2001   | Scott Fluhrer, Itsik Mantin, and Adi Shamir                                        | A cryptographic flaw in the RC4 key setup algorithm used by WEP  |
August 2001   | Avi Rubin from AT&T Research and Adam Stubblefield of Rice University              | Implementation of the WEP crack                                   | http://www.nytimes.com/2001/08/19/technology/19WIRE.html
October 2001  | Bob Fleck from Cigital's Software Security Group                                   | ARP cache poisoning attacks that work against 802.11 networks     | http://www.cigital.com/news/wireless-sec.html
February 2002 | Arunesh Mishra and Bill Arbaugh from the University of Maryland                    | Several flaws in 802.1X (still in committee)                      | http://www.cs.umd.edu/~waa/lx.pdf
May 2002      | Avi Rubin of AT&T Research                                                         | X10 Wireless camera vulnerabilities                               | http://www.nytimes.com/2002/04/14/technology/14SPY.html

The most interesting thing about wireless security is the opportunity presented by the very recent adoption of
wireless technology. New users of wireless technology have a chance to build things properly and securely as
they adopt wireless networks and create applications to run on them. That's not to imply that this will be easy,
because it will not be. This book presents an important and necessary introduction to critical issues in wireless
security, something that will be extremely useful to those adopting wireless technology. Armed with a solid
understanding of reality, readers of this book are unlikely to fall prey to hype.

As far as base technology is concerned, wireless security appears to be following the usual "penetrate and patch"
route. This is unfortunate, but perhaps unavoidable. Early wireless security is focused almost exclusively on
cryptography and secure transmission—with unfortunate results thus far. WEP security, the cryptography built
into 802.11b, for example, is completely broken and offers very little real security. In fact, one might argue that
using WEP is worse than using no cryptography at all, because it can lull users into a completely unfounded
sense of security. Given that our wired networks are in such bad shape, perhaps the notion of attaining "wired
equivalent privacy" is ironically accurate after all!

An overreliance on cryptography springs from a misunderstanding: cryptography is a tool with which to
approach security, not security itself. This misunderstanding is deeply entrenched in many other
subfields of security, especially software security, where "magic crypto fairy dust" is sprinkled liberally over
designs in hope of attaining an easy security solution. Alas, software security is not that easily accomplished.
Neither is wireless security.

The Gates memo of January 2002 highlights the importance of building secure software to the future of
Microsoft. But software security reaches far beyond shrink-wrapped software of the sort that Microsoft produces.
Software has worked its way into the very heart of business and government and has become essential in the new
millennium. Software applications will clearly play a crucial role in the successful evolution of wireless systems.
This is a critical fact that, to their credit, the authors understand and highlight in this book.

Mature software security practices and sound systems security engineering should be used when designing and
building wireless systems. Security measures must be implemented throughout the wireless software
development lifecycle, or wireless applications risk running afoul of the same security pitfalls that currently
afflict wired applications. The difficulty in constructing a secure wireless system lies in the medium's limitations:
Devices are smaller, communications speeds are slower, and consumers are more demanding. These limitations
force a trade-off between security and functionality. The trick to sound security is to begin early, know your
threats (including language-based flaws and pitfalls), design for security, and subject your design to thorough
objective risk analyses and testing.

Preface
When you're on a journey and the end keeps getting farther and farther away, you realize that
the real end is the journey.

—Karlfried Graf Dürckheim

This book provides wireless and security professionals with a foundation on which to design secure wireless
systems. Most security problems are handled reactively rather than proactively; this does not have to be the case
for wireless security. In the past decade, advances in software development have outpaced advances in software
security. Wireless technology—still in its infancy—affords the opportunity for proactive security that keeps pace
with development.

Wireless Security and Privacy is intended for three types of readers:

1. Security experts interested in wireless issues

2. Wireless experts interested in security issues

3. Business professionals and consumers generally interested in wireless security

We focus on the practices and methodology required to establish comprehensive wireless security. Wireless
application developers, wireless device users, service providers, and security professionals are among those who
will benefit from the information and analysis presented.

The message presented in this book differs greatly from that offered by most other security texts, which are
typically dedicated to dissecting attacks and retroactively presenting lessons learned. Their message is, "Security
should have been a priority from the beginning." Our message is, "It's not too late."

In the wired Internet world, applications are released at breakneck speed while security measures lag far behind.
Security is considered an isolated step, taken only when time permits. Wireless or wired, applications are pieces
of software. Wireless developers can apply certain lessons the wired development community has learned about
software security. Secure software practices are an important first step toward building secure systems. If
security is taken into consideration before wireless applications become widely available, the myriad problems
that have occurred with wired applications can be avoided. Provisions for security must be developed throughout
the lifecycles of wireless applications and systems.

Software applications, e-business opportunities, revenues, and reputations have suffered because development
teams and businesses have not focused sufficiently on security. It is no accident that phrases such as Internet time
have become common. The pace at which new technologies are developed is increasing at an exponential rate.
Hardware and software capabilities, communications speeds, and pervasiveness within society have changed the
face of IT. Developers, architects, and industry analysts could not have predicted with any degree of certainty the
extent to which the wired industry would develop.

If wireless trends mirror current software trends, wireless applications and services will likely become as
commonplace as desktop Internet applications. While the world waits for wireless devices and infrastructure to
develop and deliver the capabilities of desktop hardware and wired networks, security professionals and wireless
architects have a unique opportunity to coordinate their efforts and direct trends in the wireless world.
Developers have the responsibility to design secure wireless applications. This can be accomplished only if
efforts commence immediately. Software security best practices can help guide the development of effective
wireless applications.

It is almost impossible to overestimate the amount of time and money that will be saved if wireless security is set
forth as a guiding tenet of wireless architecture. Security will become a best practice that cannot be ignored and a
critical element of all application development, with or without wires. Confining security to a single module and
considering it only after a product reaches the market (or not considering it at all) should be unthinkable. Security is a process. As
such, it must begin in the first stages of design and continue throughout the development cycle. Security must
also be constantly reevaluated, even after an application's release.

When the wired Internet first emerged, its primary uses were research and development. Applications emerging
on the market were intensely popular and mushroomed in scope and number. Application security, unfortunately,
did not have an opportunity to keep pace. Wireless Internet on PDAs will not begin in the same fashion. Rather,
it will be used in its early stages for delivering service-oriented, timesaving applications. Most existing wireless
applications fall into that very category. The most popular versions of applications accessed through desktop
browsers will be available in lightweight versions. Research will not be the primary focus, as consumers demand
robust, convenient applications on wireless devices.

The message of this book bears repeating: "It's not too late." However, this message has a second part: "The time
to start is now."

The wireless industry has been afforded a luxury that was unavailable to the wired industry: precedent turned
into foresight. The catch? Consumers now share this same foresight. Consumers are increasingly aware of the
risks they assume in using wired and wireless applications. They have been burned in the wired world and will
not be cavalier in their use of wireless applications. Wireless developers must be able to sell their products based
on the merits of usability, security, privacy, and reliability. Building verifiable security measures into a product
will give it a competitive differentiator. Applications that cannot sufficiently prove their security will quickly
become obsolete. Today's wireless application developers must understand that security will soon become a
consumer mandate.

Investigation into security practices cannot stop at applications, however. Wireless devices, networks, and
applications warrant close examination so that problems can be predicted and prevented.

This book is divided into four sections: Establish a Foundation, Know Your System, Protect Your System, and I-
ADD (meaning Identify, Analyze, Define, and Design). The first introduces basic security principles, wireless
technologies, and their applications. The last three explain the three phases involved in designing a robust
security solution.

Part I: Establish a Foundation

Establish a Foundation is as important to security development as it is to life in general. Beginning an endeavor


by learning about all its components prevents many headaches down the road. Furthermore, being mindful of
security throughout an entire development process is crucial. Several standard—but often ignored—security
principles that apply to the wired Internet world hold important implications for the wireless world.

Chapter 1—Wireless Technologies

Chapter 1 introduces the general principles governing wireless issues today. Wireless experts may find that they
do not need this review. If you choose to skim or skip this chapter, however, you should read the case studies at
the very end because they are referred to throughout the entire text. The chapter presents a high-level overview of
wireless issues and technologies, with the intent of familiarizing you with topics essential in understanding the
rest of the book.

Chapter 2—Security Principles

Chapter 2 introduces general security practices and common industry concepts. Security experts can skim this if
they feel comfortable with its content. These key principles are important for understanding more complex
processes introduced later in the text. In this chapter we introduce a method for developing a security analysis
process called I-ADD. This process is based on industry practices but standardizes and organizes the approach. I-
ADD is fleshed out beginning in Chapter 9, "Identify Targets and Roles."

Part II: Know Your System

Know Your System presents the first essential step in developing appropriate wireless security practices. This
section puts its message into action by demonstrating the results of research efforts paramount to investigating
system components when developing a secure system. Technologies, devices, and languages are discussed in
great detail so that they can be woven into a security framework.

Chapter 3—Technologies

Chapter 3 takes you through the first phase of our process by presenting detailed information on wireless
technologies such as 802.11b, Bluetooth, and Wireless Application Protocol (WAP). Each technology falls in a
different place on the wireless technology spectrum and has its own security implications. In the initial phases of
developing a comprehensive security solution, knowing the ins and outs of all components is extremely helpful.
This chapter shows you what type of information is valuable to know about wireless technology. You have to
conduct an exhaustive search of all the system's components before determining which affect security.

Chapter 4—Devices

Much in the same fashion as Chapter 3, Chapter 4 delves into physical and logical aspects of wireless devices.
PDAs, cell phones, and laptops with wireless network cards are discussed. As part of the Know Your System
section, this chapter teaches you the device intricacies that affect security solutions. Specific devices are
investigated, and general recommendations are made. Security implementations must investigate the specific
devices and client software on the devices that could affect security in any way. This chapter introduces some of
these, but pursuant to its goal of teaching a process, not just a static solution, it educates you about the device
issues that have to be considered when developing a comprehensive security package.

Chapter 5—Languages

Chapter 5 is more technical than its two predecessors. Project managers using this book to guide a security
implementation may want to refer a developer or development team leader to this chapter. The chapter will not
make you an expert wireless developer but shows you those components of wireless development languages that
affect security implementations. Designating a team member as the language expert is essential in any wireless
project. The language expert should know the security implications of the language backwards and forwards.
This chapter helps get the language expert on her way. The languages discussed are presented in light of their
potential security downfalls. Mitigations are suggested, and implementations are not complete without consulting
this chapter.

Part III: Protect Your System

Protect Your System presents the intermediary step in the security process: developing a risk model. This enables
a person with knowledge of a system to decide how best to protect it. By outlining the roles associated with a
system, its threats, vulnerabilities, and attacks, you can develop a robust plan. The threat model you develop will
help integrate security throughout a system's development lifecycle.

Protect Your System discusses technologies or procedures that affect wireless systems. Although these
technologies or procedures may not be directly applicable to any particular architecture or system, the
information provided indicates the issues and add-ons to be considered in mitigating security risks.

Chapter 6—Cryptography

In many cases, cryptography is confused with total security. If cryptography is not understood properly, it can be
assumed to accomplish far too much or far too little. This chapter serves as an introduction to applied
cryptography. Its purpose is to inform you of basic cryptographic principles that should be understood in
developing a wireless security solution. This chapter is more technical than others but provides an introductory
view for the layperson. It is important to be able to use cryptography as a component of a security solution
without making the mistake of thinking that simply encrypting wireless network traffic will solve all security
problems.

Chapter 7—COTS

When looking for security, we sometimes fall into another trap—commercial off-the-shelf products (COTS).
COTS products offer a false sense of security in some cases. They should be used when necessary and can offer a
partial security solution, but they should be understood first and used with great care. This chapter investigates
some popular, wireless industry COTS products and examines their role in protecting a wireless application or
system.

Chapter 8—Privacy

No discussion of security is complete without considering privacy. Although distinct entities, the two are
intertwined in many ways. This chapter teaches the wireless and security professional about the privacy policy
and legal issues surrounding wireless technology security at the present time. Understanding the policies under
which you are developing a security solution is essential. Furthermore, it is good solid business practice to
understand the privacy concerns of consumers and be able to accommodate the changing needs of a wireless user
population.

Part IV: I-ADD

The concepts governing wireless security issues are neither new nor distinct from those governing wired issues.
In both cases, several steps are involved: Threats must be assessed, risk must be determined, vulnerabilities must
be analyzed, and a plan for designing accordingly, based on the first three steps, should be developed.

Chapter 9—Identify Targets and Roles

Using systems set forth in our case studies, as well as generic wireless systems, Chapter 9 conducts an exhaustive
search for potential targets. In this "whiteboard" phase of the analysis, you learn how to dissect components to
determine what might be compromised. When this list is completed, you proceed to identify the roles or
individuals associated with any of the case study systems that may attempt to compromise or take control of the
identified targets. This information gives you a starting block from which to launch the rest of your analysis.

Chapter 10—Analyze Attacks and Vulnerabilities

When targets and roles have been identified, known attacks, vulnerabilities, and theoretical attacks are analyzed.
This analysis examines how these threats affect the resources we want to protect. From this analysis, potential
mitigation techniques and protection mechanisms are determined.

Chapter 11—Analyze Mitigations and Protections

Chapter 11 is where the security plan develops. It is also the culmination of our investigation. Mitigations are
implemented against risks, and a robust system ensues. Although it is the most daunting part of the overall picture,
developing the security model falls into place when you understand the framework, the threats against it, and
how to protect it. We systematically proceed through the threat model already developed and discuss how to
build security into the places where we have found holes.

Chapter 12—Define and Design

Inevitably, there are difficult trade-offs and decisions you must make. This chapter revisits the case studies,
applies a security model to each, and discusses which components of a security system are necessary, based on
what needs to be protected in each case. We apply all the concepts taught in the book and come up with solutions
for our cases.

The Advantages of Reading This Book


After reading this book, you should have a solid understanding of the technical basics of security and wireless
issues. In addition, you should know how to develop reliable security mitigations in wireless systems, based on
the results of a process that includes learning a system, assessing its risks, and developing an appropriate security
framework. Situations will arise in which security and functionality trade-offs are necessary. Those decision
makers armed with a full understanding of the risks involved will have a distinct advantage. Should business
requirements dictate that certain vulnerabilities remain unmitigated, appropriate contingency plans can be
developed. In the event of a system compromise, business can continue as usual because security was an integral
part of the system's development. Uninformed counterparts, however, will likely be busy fighting fires and
attempting to force security measures into their existing infrastructures.
For more information on this book or for other books useful in designing and developing systems, please consult
the Addison-Wesley Web site at http://www.awprofessional.com.

Part I: Establish a Foundation


Chapter 1. Wireless Technologies
A journey of a thousand miles begins with one step.

—Chinese proverb

Writing a chapter on wireless security is as easy as writing one on computers, the Internet, first aid, or applying
to college. Each could be the subject of a dissertation, a semester-long course, or an entire book. To study a
technology without learning its security risks and adaptations is to be remiss. The first component of our
investigation is wireless architectures. We describe a typical system and examine its parts: devices, technologies,
and network arrangements. We then lay out case studies of conceptual wireless systems so that we can discuss in
real terms, throughout the book, security issues and best practices for building a secure system. Our secondary
goal is to present information about wireless systems and about security principles. Our primary goal is to teach
the process of securing a system from the ground up.

You should be able to read this text and come away with

• The experience of walking, step by step, through a top-to-bottom security risk assessment and
mitigation plan

• Knowledge of wireless issues that are important to understand to protect wireless systems

An Introduction to Wireless Architecture


The basic components of a wireless architecture remain the same throughout different systems. Each
implementation and technology, of course, sees variations and different options, but by way of introduction, they
resemble one another. The architecture we use to represent a generic framework depicts wireless devices
retrieving information from a server on the wired Internet by way of its own communication across a wireless
network (see Figure 1.1). Other implementations and purposes for wireless communication are discussed
throughout the book.

Figure 1.1. A generic system architecture

The first component on our diagram is the device. Devices in a wireless system can be cell phones, Personal
Digital Assistants (PDAs), laptops with wireless network cards, or any device that communicates without wires.
These devices operate over wireless networks and communicate with towers called bearers. These bearers have
popped up alongside highways across the world. Unlike the telephone or radio towers of old, bearers are not
connected by aboveground wires visible from the road. Bearers are in charge of passing information sent
wirelessly to a wired network. They receive data and transmit it to a component that forwards it via wires to the
wired Internet. This component is sometimes a wireless gateway and other times another specialized server
designed to be the pivot point between wireless and wired communication. A gateway performs translations
among protocols, sessions, encryption, and all else necessary to prepare wireless data for transmission over the
wired Internet to its destination.

In our architecture diagram, we will examine one typical scenario—the wireless device requesting information
from a Web page on a Web server. The gateway, in this situation, translates the request into one that is readable
by the Internet and sends it to the appropriate server. The server processes the request and returns the information
to the gateway via the wired Internet. At this point, the wireless gateway performs the necessary transformations
again and transmits the data to a bearer, which, in turn, forwards it to the device. The device renders the
information on its display screen, and an iteration of this communication cycle is complete. Wireless
technologies are generally the final link between an existing wired network, its resources, and new-generation
wireless devices.
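To make the round trip concrete, the following is a small, hypothetical sketch of the communication cycle just described. The function names and the "translation" details are illustrative assumptions, not an implementation of any particular gateway or bearer.

    # Hypothetical sketch of the generic request cycle in Figure 1.1
    # (names and translation details are assumptions for illustration).

    def device_request(url: str) -> dict:
        # The wireless device builds a compact, wireless-protocol request.
        return {"proto": "wireless", "action": "GET", "url": url}

    def bearer_relay(msg: dict) -> dict:
        # The bearer (tower) simply passes traffic between air and wire.
        return msg

    def gateway_to_wired(msg: dict) -> str:
        # The gateway translates the wireless request into an ordinary
        # HTTP request the wired Internet understands.
        return f"GET {msg['url']} HTTP/1.1"

    def web_server(http_request: str) -> str:
        # The Web server processes the request and returns a page.
        return "<html>requested page</html>"

    def gateway_to_wireless(http_response: str) -> dict:
        # The gateway transforms the response for the small device
        # (re-encoding, trimming markup, and so on).
        return {"proto": "wireless", "body": http_response}

    # One iteration of the communication cycle:
    req = bearer_relay(device_request("http://example.com/page"))
    resp = gateway_to_wireless(web_server(gateway_to_wired(req)))
    print(bearer_relay(resp)["body"])   # the device renders this on its screen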

Usage Models
The architecture we just described is generic; myriad usage models would fit into this architecture. The
following usage models focus on the first part of the communication cycle, the device to the network or devices
to other devices, but the models also serve as important illustrators of the capabilities in wireless devices.

Internet Bridge
The Internet bridge architecture best fits a usage model wherein a wireless network serves as a bridge between
the wireless device and the wired Internet. In this model, the mobile device is used to wirelessly connect to a
wired system (refer to Figure 1.1).

Conference
Users at a conference often want to share information. In a keynote session, for example, a presenter may want to
share her slides with the audience members who come up afterwards to pose questions (see Figure 1.2). If the
audience members and presenter have wireless devices, they can exchange documents and business cards
immediately.

Figure 1.2. The conference usage model

Multipurpose Phone
At home, a cell phone connects to a fixed-line base station and is used in place of a landline handset. When its
owner is on the go, the same phone is used as a mobile device for initiating and receiving calls. When the phone
is close to another, similar phone, the two can participate in direct two-way communication and avoid a phone
service charge, provided that they obtain service from the same provider.
Synchronizer
Mobile devices can automatically synchronize among one another to provide an easy way to stay organized. A
user's desktop, PDA, wireless-enabled laptop, and cell phone can be synchronized as soon as information is
entered into any of them. A laptop can be automatically updated when information is received in a cell phone,
just as a PDA can propagate its new business card contact information to the other devices.
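A bare-bones, hypothetical sketch of this usage model follows: each device registers with a small synchronization hub, and an update entered on any one of them is propagated to the rest. The hub, its methods, and the sample contact record are assumptions for illustration, not any vendor's synchronization protocol.

    # Hypothetical synchronizer sketch: an update entered on any device
    # is propagated to every other registered device.
    class Device:
        def __init__(self, name):
            self.name = name
            self.records = {}

        def receive(self, key, value):
            self.records[key] = value

    class SyncHub:
        def __init__(self):
            self.devices = []

        def register(self, device):
            self.devices.append(device)

        def update(self, source, key, value):
            source.records[key] = value
            for device in self.devices:
                if device is not source:
                    device.receive(key, value)

    hub = SyncHub()
    pda, laptop, phone = Device("PDA"), Device("laptop"), Device("cell phone")
    for d in (pda, laptop, phone):
        hub.register(d)

    # A new business-card contact entered on the PDA reaches the other devices.
    hub.update(pda, "contact:Reggie", "reggie@example.com")
    print(laptop.records, phone.records)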

Now we will examine in more detail the important components of a wireless architecture: devices and
technologies. The security implications of each will be discussed still further throughout the text, as well as how
to avoid the risks associated with each component.

Devices
Wireless devices as they exist today will be obsolete by the time wireless communication becomes the status
quo. Recall the desktop PCs with hard drives of only a few megabytes and monitors with black screens and green
lettering. This is how antiquated today's devices will seem in the next generations of devices and technology—
which presents quite a dilemma. Developers and designers have to make a decision: Code to existing devices to
maximize their functionality, or code more generally so that as devices evolve, they will be compatible with
legacy code. Unfortunately, the answer is both. In all fairness, it is virtually impossible to predict what the
devices of the future will look like, how they will function, and how they will be used. However, developers
must bear in mind that applications have to be scalable. Without a doubt, they will need to be scaled in the very
near future.

The wired world sees computers and their components and accessories coming down in price rapidly. Their
wireless counterparts are in such a state of flux that some pieces are dropping in price while others are
temporarily rising. Consumers who buy the most recent devices on the market pay the early-adopter tax of
expensive equipment and communications costs, as well as the tax of having devices that will not necessarily
become the standard.

Cell Phones and Personal Digital Assistants (PDAs)


It is best to think of wireless devices in terms of the generation to which they belong. First-generation cell
phones, for example, were popular in the last five years of the twentieth century (see Figure 1.3). These phones
were analog, had poor reception, were typically large and cumbersome, and weighed more than the commonest
second-generation phones of the early twenty-first century. Access plans were expensive per minute, coverage
was strictly limited, and concepts such as browsing the World Wide Web on your phone were foreign and
futuristic.

Figure 1.3. PDAs, cell phones, and wireless laptops

Towards the end of the 1990s, wireless network providers began to forge digital networks across the globe.
These networks supported much sleeker devices, from digital phones with configurable ringing tones and
primitive Web-browsing capabilities to PDAs in the form of Palm Pilots, BlackBerries, Pocket PCs, or Visors.
These PDAs serve as combination organizers and limited computers. They afford early adopters the ability to
browse the Web, albeit slowly, synchronize information between a PC and a PDA, and send text messages or e-
mails via wireless networks. Hybrid devices are also available, combination mobile phones and PDAs such as the
Microsoft Smart Phone 2002 and the Handspring Treo.

Several debates have ensued since the advancement of cellular phones and PDAs into a third generation of
devices. Which will become the treasured device? Cell phones are decreasing in size but need larger display
screens. PDAs are being produced with larger display screens but are not getting smaller. Will one become
obsolete? Will they be interconnected with short-range wireless technologies so that the cell phone calls an ISP
for a PDA to connect to the Internet? These answers may be known even by the time this text is published.

Third-generation devices, often referred to as 3G, are still in conceptual phases. While the world waits for better
network operations and faster communication speeds, devices are being developed and tested. The initial wave of
3G devices is gaining market share.

Wireless Laptops
Another wireless device that bears mentioning is a wireless-enabled laptop (refer to Figure 1.3). Laptops are
certainly heavier and much more cumbersome than phones or PDAs, but they perform with significantly higher
capabilities and potential. They can serve as both connected and nonconnected devices, depending on where they
are in relationship to a wireless network. Wireless LAN (WLAN) access via a laptop or desktop is becoming an
attractive alternative to the cluttered, cable-ridden offices that are the norm. Each of these technologies will need
to be compatible with wireless systems, which introduces the first security problem into the mix.

The numerous devices, software available on devices, networks, service providers, vendors, device
manufacturers, implementations, and technologies expected to interoperate in the wireless world present a
serious risk. For all systems to be compatible with one another, holes will inevitably be left open. Several
characteristic features of wireless devices will continue to evolve.

Consumer Issues
When examining device characteristics, there are consumer issues and technical issues. Some consumer issues
cross over into technical ones, and vice versa. For the sake of discussion, though, it is easiest to distinguish
consumer issues as those usability and external functionality issues associated with a wireless device. Technical
issues are those issues concerning operation of the device, the hardware it comprises, and the software that runs
on it. Consumer issues are important to consider in making purchasing decisions for consumers and businesses
alike.

Display Screen and Input Devices

The most obvious characteristic, frequently given only cursory attention, is the size of the display screen on a
wireless device. Anyone who has attempted to run an application or use functionality on a device can attest that
the screen size is a limiting factor if it is not large enough. Factors that are not always considered but have
implications for development are

• Resolution

• Colors

• Behavior under certain conditions, such as heat (left on a car dash in the summer) or cold (left in a trunk
during the winter)

• Backlighting

• Contrast

• Behavior with different amounts of light

Another related issue is how a user operates the device. Some devices work with numeric keypads, some with
touch screens and a stylus, some with detachable keyboards. All these seemingly insignificant factors contribute
to a device's functionality and should be considered when choosing devices for specific scenarios.
Peripherals and Expansion

Peripherals and expansion are important consumer issues to consider. If a device has the capability of having
components attached to it, it provides extra functionality but also potential vulnerabilities. Peripherals, for
example, can turn cell phones into PDAs or give PDAs modem capabilities. By extension, cell phones could
become scanners, credit card readers, or cameras, given the right equipment. Not all devices have this capability,
and depending on system and user needs, the ability to add external interfaces or functionality may be adding
value or creating danger. Some technologies, such as Bluetooth, a short-range wireless technology, open the
possibility for devices to become peripherals of one another, enabling users to customize on-the-fly
configurations to suit their needs on a changing basis.

Transport

A definite consumer issue is that of transportability. Can I fit the device in my hand? My pocket? My
briefcase? Each option comes with its own constraints and benefits. If a device can be held in one hand, it can be
transported easily. It can also, however, be easily stolen, lost, or perhaps broken. Devices that fit in a briefcase
are less likely to be lost or misplaced and offer greater potential for processing and storing information. These
devices are more attractive to a user who is more concerned with performance than portability. Features inherent
to each device make its users more or less able to perform desired tasks. If a user does not save time and energy
with a device, she will discard it quickly in favor of one that is either more portable or more capable.

Battery Life

The single biggest limiting factor in wireless devices is physics—the battery. If the Energizer bunny were
wireless, devices with wireless capabilities would take on a new life of their own. As for the "It keeps going and
going" part, batteries in wireless devices do nothing of the sort. Unbeknownst to the novice user, the types of
applications performed on a device directly affect its battery life. A cell phone in use takes far more battery
power than a cell phone lying idle. A PDA can outlast a cell phone in terms of power and lifetime, but its
applications have to be developed with careful consideration to battery usage. Security capabilities can be
significant drains on battery power and can be discarded as wasteful in favor of faster processing without
functions like robust encryption. (See Chapter 6, "Cryptography," for a more detailed discussion.)

Communication

Some wireless devices include a property known as peer-to-peer capability. This capability enables device users
to form fast, easy connections with each other and to "beam" information to each other. The peer-to-peer feature
makes use of an infrared (IR) port on a device, allowing users, in effect, to create their own personal networks.
IR communication requires that users be within a short distance of each other but offers significant, attractive
possibilities. One limitation of this type of communication is that systems built to make use of IR are often
closed systems. Palm and Microsoft systems are closed and therefore cannot talk to each other. Most of their
applications use the same standards for transport but not for application exchange. This prevents communication
across platforms. If one user has a Palm Pilot, she can likely talk to other Palm users but not necessarily to
someone with a Pocket PC. If each device ran an application that enabled the two to exchange information, it
would be possible, but the operating systems themselves could not interact. This technology has not been fully
developed and may not come to fruition before other forms of wireless networking do. The idea is popular,
however, and should be considered when developing wireless applications.

Technical Issues
All operating systems (OSs), development tools, applications, and browsers present complex issues to be
investigated. Until one device or operating system manufacturer begins to assert itself as an authority or leader,
all of them will be limited in their capability to operate seamlessly with others. Palm Pilots operate with Palm
OS, RIM BlackBerry devices operate with their own proprietary operating system, and Pocket PCs operate with
the Microsoft platform Windows CE. Some vendors produce devices and license these operating systems for
their own use, but these are not highly configurable and are best used as they are delivered. Each device has its
own niche in the current market. Palm is easy to use, RIM allows wireless e-mail access, and CE is highly
compatible with Microsoft products.
Software Development Kits (SDKs) are another important factor to investigate when examining a device. If
application designers find more comprehensive SDKs for one device than for another, they may choose to
streamline their development process and gear applications to the device with the better SDK.

When considering all the ramifications of analyzing a device, it is important to note the three major players
involved: the hardware manufacturer for the device, the operating system or software vendor, and the wireless
network service provider. The hardware manufacturer has extensive control over what can go on the device. In
most cases, this company provides a limited set of options from which service providers can choose. Often, too, a
partnership exists between the operating system vendor and hardware manufacturer (or the two may be from the
same company). The joint OS and hardware production efforts present options to service providers, who choose
limited customization options.

The browser, for instance, on a mobile device such as a cell phone or PDA is preconfigured when it is purchased,
and the ability to update it is still evolving. Surely, in the near future, this will be a must in devices. The ability to
tweak the software installed on a device is a necessity for properly protecting oneself from discovered security
vulnerabilities. Just as in the wired world, there will be patches and updates, and there will need to be easy ways
for them to be applied by average users. Currently, some devices allow this capability with hardware that can be
connected to a PC, but the demand for faster, wireless methods of doing so is increasing.

In the cell phone realm, for instance, one wireless service provider's phones come with a software application
that provides a type of list to the device, including identification information about those bearers with which the
phone can communicate. As new bearers are installed, the software must be updated for the phone to be able to
communicate with the new towers. Currently, this function can be performed only at a retail store or an
authorized reseller. Updating this software is a cumbersome and undesirable process. If the software could be
updated transparently each time a person initiated a call, say, on the first of each month, the device would be
more attractive. Also, the device would incur a new set of security risks.

The service provider often dictates which features will be available on a device in order for it to agree to provide
service for device owners. This can be something simple, such as the preset bookmarks in a browser, or
something more complex, such as limiting the gateways the device can access to the provider's own.

The involvement by each of these entities in configuring devices is often at the expense of a user's ability to
customize a device or protect herself against compromise. In most cases, this cannot be prevented, but it is
important to remember when assessing the risks involved in a certain system.

Network Arrangements and Technologies


Wireless technology has been around since the days before radios became popular. The astronauts in a space
shuttle communicate with mission control without wires, just as a cordless phone allows people anywhere on the
globe to communicate without the burden of wires tying them to a fixed location. Wireless technologies as
defined in this text, however, are newer and more sophisticated, operating at higher speeds and conforming to
different standards. In this text we introduce three types of wireless technologies:

• 802.11b

• The Wireless Application Protocol (WAP)

• Bluetooth

Some wireless technologies are standards-based wireless transport protocols, some are application protocols, and
some are both, but all are technologies. They represent a broad range of technologies, and continually evolving
ones at that. It is essential to realize that the only appropriate way to secure a system is to investigate it
thoroughly. This book does so with the technologies presented, but its goal is to teach the method for doing so,
not to provide answers that will be pervasive in their details throughout the next thirty years. The methods taught
here for analyzing a system are invaluable and are tailored specifically to wireless systems. They should be
further customized as standards, protocols, and technologies change in the future. Each technology is discussed
in much greater detail in Chapter 3, "Technologies."
To place the technologies in appropriate contexts, we will simultaneously examine different network
arrangements typically used in wireless systems:

• Personal area networks (PANs)

• Local area networks (LANs)

• Wide area networks (WANs)

The technologies discussed here cross all three network arrangements, but some are more closely associated with
one. The discussion of wireless LANs comprises the bulk of this section because it is the most commonly seen
implementation of a wireless network for most home and office environments. In the discussion of PANs, it is
best to focus on Bluetooth technology. In the discussions of WANs and LANs, it is best to discuss a combination
of 802.11b and WAP. These technologies cover different layers of the architecture of each type of network.

802.11b
The most comprehensive technology examined here, 802.11b is a standard developed by the Institute of
Electrical and Electronics Engineers (IEEE). 802.11b is the IEEE standard for wireless communications in its
revised form, which includes higher communication rates. At the time of the writing of this book, industry
analysts indicate that it may be the technology that outlasts the rest and prospers long into the future. (Our
opinion is that this may well be true, but parts of the other technologies examined may likely be incorporated into
whichever one surfaces as the front-runner for the long term. Furthermore, 802.11b will need to be accompanied
by a strict security design paradigm to constitute a viable "total" solution.)

802.11b is backwards compatible with its predecessor, 802.11, and lays out a standard for wireless
communication that offers technical specifications about architecture and services, as well as design
implementation guidelines. The specifications define components and network configurations and describe the
relevant layers of the International Organization for Standardization's (ISO) Open Systems Interconnection (OSI) reference
model, as well as security implications. Although it may be touted as the most versatile and robust wireless
standard that is emerging, 802.11 has been found to have significant problems.

Two reports released in the early part of 2001 unveiled some holes in 802.11 networks. The reports, from the
University of California, Berkeley, and the University of Maryland, examined the components of the 802.11
system that attempt to mirror a wired system in terms of security. The examinations led to discoveries of distinct
problems and, intentionally or unintentionally, were followed by a steady stream of articles pointing to other
insecure measures in the system. Two hundred dollars' worth of equipment from Radio Shack was all it took for
two wily individuals to compromise countless wireless networks in the San Francisco Bay Area. These discoveries do
not necessarily point to the demise of the technology but rather the need for application security on top of easily
implemented networks to bolster weak security measures that come out of the package with these systems.

The Wireless Application Protocol (WAP)


WAP operates at a higher level than transmission protocols such as 802.11b. It provides a protocol used to
implement communications in an architecture that requires a gateway to provide the translations between
wireless and wired communication. WAP has faced much criticism since its emergence but is currently the leader
in cell phone wireless configurations in the United States. WAP browsers, WAP-enabled phones, and WAP
development are en vogue, especially in Europe.

WAP includes specifications for application environments, transmission, and session handling, as well as
security functionality. In its transport layer, WAP includes a wireless version of Secure Socket Layer (SSL)
called Wireless Transport Layer Security (WTLS). WTLS helps secure communication from wireless devices
through bearers to WAP gateways (see Figure 1.4). The problem starts at this point. The gateway is a place
where the secure communication is vulnerable, because the gateway has to translate the traffic into communication
configured to the standards of the wired Internet. This is the source of WAP's biggest criticism: it does not provide
end-to-end security, and it opens holes. WAP specifications use a special language for development of Web pages readable
by small devices that communicate on high-latency, low-speed networks.

Figure 1.4. WAP architecture
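The gap can be pictured with a small, hypothetical sketch: the gateway must decrypt WTLS-protected traffic before re-protecting it for the wired side, so plaintext exists, however briefly, on the gateway itself. The XOR routine below is a toy stand-in for WTLS and SSL, chosen only to show where plaintext appears; it is not real cryptography, and the keys and message are illustrative assumptions.

    # Hypothetical illustration of the WAP gateway "gap".
    def toy_encrypt(data: bytes, key: int) -> bytes:
        return bytes(b ^ key for b in data)

    toy_decrypt = toy_encrypt  # XOR is its own inverse

    WTLS_KEY = 0x5A   # shared by device and gateway (assumption)
    SSL_KEY = 0xA7    # shared by gateway and Web server (assumption)

    # Device side: the request is protected with WTLS up to the gateway.
    over_the_air = toy_encrypt(b"GET /account/balance", WTLS_KEY)

    # Gateway side: WTLS is terminated here...
    plaintext_at_gateway = toy_decrypt(over_the_air, WTLS_KEY)
    # ...and only then re-protected with SSL toward the wired server.
    over_the_wire = toy_encrypt(plaintext_at_gateway, SSL_KEY)

    # The sensitive request is readable on the gateway between the two steps:
    print(plaintext_at_gateway)   # b'GET /account/balance'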


Wireless Wide Area Networks
Both 802.11b and WAP are used in varying network arrangements. (For the purposes of this text, when referring
to a network as a wide area network, we mean a wireless wide area network. When referring to a WAN connected
via wires, we call it a wired WAN. The exception to this is that we employ the acronym WLAN for wireless LANs
because this size network is discussed more frequently throughout the book and we want to be clear.) WANs are
typically used in the cases of service providers, not corporate or home networks. Loosely, WANs enable devices
to connect via wireless technologies and protocols to the Internet, an intranet, or an e-mail system. Depending on
the connecting entity, WANs are administered by Internet service providers (ISPs), application providers, or
corporations.

WANs are expanding to provide wireless coverage for more areas of the world. Most major cities enjoy WAN
service, but rural and less developed areas have not yet seen the benefits of WAN. Remote or underdeveloped
areas without wired WANs can benefit from the resources that Internet access brings to an area: infrastructure,
communication, necessary supplies, and aid. Also, these great improvements can be realized without the need for
costly infrastructure (that is, fiber optic cable) being laid over great distances. Certainly, some cost is involved in
implementing a WAN solution to areas with limited resources, but the cost is less than that of wired WAN
solutions.

Local Area Networks


A wireless local area network (WLAN) is an electronic data communications system providing an extension, or
alternative, to a wired LAN. WLANs use a variety of communication mechanisms to replace the traditional
cables and wires of a LAN. In a traditional LAN, data is transmitted as electronic pulses or signals along a
physical wire or carrier. Some systems have a continuous signal or carrier running on the wire, such as a tone on
the phone line, and the data is superimposed or modulated onto the carrier signal. In a simplified example of the
phone, the transmitter makes slight variations to the tone frequency, and the receiver is then set to detect these
variations and retrieve the transmitted data. Similarly, in a WLAN, there are transmitters, receivers, and a carrier
on which data is modulated.
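A toy numerical sketch of this idea follows; the frequencies, sampling rate, and detector are arbitrary illustrative assumptions, not any WLAN standard. A transmitter shifts a tone between two frequencies to encode bits, and a receiver recovers the bits by checking which tone each chunk of samples most resembles.

    # Toy binary frequency-shift sketch (illustrative assumptions only).
    import math

    SAMPLE_RATE = 8000          # samples per second
    BIT_SAMPLES = 80            # samples used for each bit
    F0, F1 = 1000.0, 1200.0     # tone frequencies for bit 0 and bit 1

    def transmit(bits):
        samples = []
        for bit in bits:
            freq = F1 if bit else F0
            for n in range(BIT_SAMPLES):
                samples.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE))
        return samples

    def correlate(chunk, freq):
        # Crude detector: how strongly the chunk resembles a tone at freq.
        return abs(sum(s * math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
                       for n, s in enumerate(chunk)))

    def receive(samples):
        bits = []
        for i in range(0, len(samples), BIT_SAMPLES):
            chunk = samples[i:i + BIT_SAMPLES]
            bits.append(1 if correlate(chunk, F1) > correlate(chunk, F0) else 0)
        return bits

    data = [1, 0, 1, 1, 0]
    print(receive(transmit(data)) == data)   # True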

Currently, two general mechanisms are being employed as the data carrier in WLANs: radio frequency (RF) and
infrared (IR). Both mechanisms allow for the transmittal and reception of electronic data through the air,
minimizing the need for cable and wire connectivity between devices. Several RF technologies and two IR
technologies are being utilized for WLANs. The technologies are Narrowband Technology, Spread Spectrum
Technology, Frequency-Hopping Spread Spectrum Technology, Direct-Sequence Spread Spectrum Technology,
Direct Infrared Technology, and Diffuse Infrared Technology.

Personal Area Networks and Bluetooth


PANs are networks that focus around an individual. Loosely, a PAN could comprise a cell phone in someone's
shirt pocket, his PDA, and his wireless-enabled laptop (see Figure 1.5). The three devices would communicate
among one another, forming an ad hoc PAN. The cell phone could dial in to his ISP, offering Internet
connectivity to both the PDA and the laptop. The laptop could then send a *.pdf document to the PDA, and the
cell phone could disconnect. Each component of the PAN serves a unique purpose, requiring that the three
function together to form an intelligent network, but by distributing tasks and functionality, not requiring one
heavy cumbersome device.
Figure 1.5. A personal area network (PAN)

The technology that might make this possible is Bluetooth. Bluetooth is a once highly acclaimed technology that
has suffered great setbacks. The intention of its major contributors was to develop a standard so that wireless
devices could interoperate easily and cheaply in a short-range distance. Adding Bluetooth to a device was
supposed to add about five dollars to its cost. Estimates have been off, though, and it is actually costing
manufacturers closer to thirty dollars to build the technology into their devices. Perhaps bad luck, perhaps a
fortuitous omen, at a conference in April 2001, Bluetooth supporters intended to set a record and have the largest
number of Bluetooth users connected at the same time. Not only did they fail to reach their record numbers, but
also, because of a system failure, none were able to connect.

Despite setbacks and more than a 30-month delay in mass deployment, Bluetooth slowly gains ground, and
developers have yet to abandon it. Bluetooth differs from 802.11b in several ways:

• It can only connect devices within 10 meters of each other.

• It operates similarly to Infrared Data Association (IrDA) technology.

• Its goal is for devices to form networks among each other quickly and easily.

The security ramifications of this are phenomenal. Security is not built in to devices as far as connection
capability is concerned. It is left to application developers and, perhaps even scarier, users.

Wireless LAN Appeal


Now that we have introduced a few technologies and network arrangements, there is a question begging to be
answered: Why use a wireless LAN? Four main factors describe the appeal of a wireless LAN: mobility,
flexibility, cost, and scalability.

Mobility

WLANs can provide users with access to public and private network resources from anywhere within the
coverage area. The coverage area may vary quite a bit, depending on the system being used. Some large
commercial wireless access providers provide Internet access to large coverage areas. The BellSouth Network
and Mobitex cover entire metropolitan areas in most of the United States. We concentrate here on corporate and
on home or private wireless networks.

Flexibility

WLANs allow connectivity where running additional cables or wires may be unfeasible or cost-prohibitive.
WLANs also free the configuration and location of terminals from being tied to the network access points; users
can rearrange office areas at will and remain connected to the network. Home owners or renters benefit from this
flexibility by not having to run network lines throughout a house or an apartment. Yet, they still have access to
centralized printers or high-speed Internet access points such as Integrated Services Digital Network (ISDN) or a
cable modem. These devices can remain wherever they have been installed, and the user's computer can be
moved anywhere in the house and still retain access.
Cost

Although the initial cost of WLAN equipment is currently higher than that of conventional LAN equipment, in
installations that are fluid (temporary or frequently changing), the extra cost is soon recouped in savings on rewiring and time.

Scalability

WLAN technologies can be configured in varying topologies to meet the needs of the specific application or
installation. These topology configurations are easily changed; new devices or users can be added without
affecting existing users or devices. This is truer for some technologies than others, but in general, WLAN
technologies are more readily scalable than their LAN counterparts.

Case Studies
Throughout this section, we focus mainly on WLANs. We present several case studies that shape our discussions
about security throughout the book. For the sake of clarity, it is helpful to revisit these case studies as you read
the book.

Corporate WLANs may or may not provide access to the public Internet and usually have a range of 150–300
feet from the access points. In a corporate environment this is more than adequate in most situations. It allows
employees to access the network from anywhere in a building or complex, including courtyards, lunch areas, and
the like. The following case studies of this technology describe examples of where this mobility can be of
benefit. We have identified four case studies where WLANs are used in place of a wired network: a hospital, an
office complex, and a university campus for corporate or business application, and a personal or home WLAN.
These case studies serve as the foundation for examining the security aspects of wireless systems in later
chapters.

The Hospital
A hospital complex has installed a WLAN system. This hospital complex includes the hospital itself, external
buildings housing doctors' offices, and buildings associated with external support facilities. Doctors see patients
either in their own office or in the hospital and have complete access to the patients' records on a handheld PDA.

On a typical visit, a doctor, Anne, brings up a chart for her patient, Reggie, and sees that he is in for a follow-up
because of an injury to his knee during a weekend bicycle mishap. Anne pulls up the radiologist's report from the
preceding visit. The report indicates that the distal end of the femur sustained a chip at the knee during the
accident and that the cartilage is slightly separated from the connective tissue. The radiologist recommended
surgery to repair the damage before it becomes worse. Anne approves and seconds the recommendation.

Reggie accesses his own PDA to query his schedule and receive information on his condition from the doctor's
system. Anne's PDA system automatically queries her office to update any changes to her schedule, and she
cross-references the schedule with that of the anesthesiologist and the hospital's surgery room. Within a matter of
seconds, an acceptable date is identified and the surgery is scheduled. The doctor's office system contacts the
insurance company and notifies the appropriate case manager of the surgery, and all relevant parties are notified
of the event via e-mail while the doctor and patient are discussing the procedure. In the interim, Anne prescribes
pain and anti-inflammatory medication for the patient. The prescription is filled out on her PDA and sent to her
office computer. Reggie's preferred pharmacist is already in the system, so the prescription is sent to the
pharmacy and will be waiting for him when he arrives.

Reggie leaves the examining room and walks to the receptionist's desk, where his record is brought up. The
receptionist asks whether he would like the co-pay automatically charged to the credit card on file, to which
Reggie responds yes. This is captured by the microphone on the counter and processed by the speech recognition
system. The credit card is charged, and the insurance company is sent an electronic bill. Reggie picks up a receipt
from the receptionist and is on his way.

Anne now travels to the hospital to make her rounds. She visits a patient, Chris, who is recovering from a car
accident. She recommends a change to Chris's medication, inputting the change in her PDA. In turn, the PDA
queries the hospital system and returns a potentially harmful interaction between the suggested medication and
something provided by the emergency room physician. The PDA displays alternatives, Anne selects one, and the
change is indicated on Chris's chart and displayed on Anne's PDA while being transmitted to the hospital.
Finally, the change is sent to Anne's office system, updating Chris's records.

The nurse who administers the medication carries a notebook, or PDA, on the medication cart. Even though she
has already started her rounds, the change is reflected on her screen as she brings up Chris's chart, where the
change is highlighted. The nurse smiles to herself (glad that she no longer has to read the doctors' illegible
handwriting) and greets Chris warmly as she enters his room.

The Office Complex


An advertising corporation, AdEx Inc., has installed a wireless LAN system throughout its multistory building in
Reston, Virginia. It has installed access points at key locations to provide complete coverage throughout the
building. Employees are provided laptop computers with docking stations at their work areas. Both the docking
stations and the laptops are equipped with wireless LAN access devices. The conference rooms are equipped
with projection systems connected to the LAN so that employees can take their laptop to a conference room,
connect to the projection system over the network, and control the presentation via their laptop.

An AdEx sales team, headed by Kathleen, is proposing a new marketing campaign to a potential new client,
NitroSoft. The team has been working on the presentation for several weeks. Before the presentation, Kathleen
takes the NitroSoft group to lunch. During lunch, the NitroSoft people receive a message on their PDAs
announcing a new acquisition that has relevance to the team's presentation. One of the people in the NitroSoft
group, Louis, mentions the announcement to Kathleen, who takes out her PDA and asks him to send her a copy
of the announcement. Louis sends the copy, along with additional background, to her PDA. Kathleen forwards
the information to one of her staff members, with instructions on how to incorporate the new information into the
presentation.

After lunch, Kathleen and the NitroSoft group return to AdEx and head to the conference room for the
presentation. On the way, Kathleen checks her PDA and receives word that her team will be able to incorporate
the new information but that it will take 20 more minutes. They inform her that the changes fit well in the second
half of the slides. Kathleen tells them that she will begin with the original presentation and then switch to the new
presentation halfway through, if they complete it and are satisfied with the results. Otherwise, she will stick with
the original presentation.

Kathleen and the NitroSoft group arrive at the conference room and settle in for the presentation. The AdEx sales
team continues working as Kathleen begins the presentation. She monitors her PDA and receives confirmation
that the team has incorporated the new information and is satisfied with the result. At a convenient point in the
presentation, Kathleen loads the updated slides and continues. The NitroSoft group is impressed by the efficiency
and speed with which the team was able to incorporate new information. AdEx and NitroSoft close the deal that
day.

The University Campus


A university has implemented a campuswide wireless network through which students and faculty can conduct
university business. Having prepared his lecture in his office, a professor, Steve, grabs his laptop and heads to the
classroom. On arriving at the classroom, he places his laptop on the podium and accesses the lecture hall's
projector over the network so that his notes are projected onto the screen at the front of the classroom.

Students have the option of turning in assignments traditionally (on paper) or electronically (which Steve
prefers). Assignments are due at the start of class, and Steve has his graduate student teaching assistants (TAs),
Brian and Jessie, look over the assignments during the class. Near the end of the lecture, Steve checks his e-mail
and receives feedback from the TAs on which parts of the assignment gave the most problems or warrant
discussion. Steve expands on these aspects while the assignment is still fresh in the students' minds. As class
comes to a close, Steve posts the next assignment, that day's lecture notes, and an answer sheet for the preceding
assignment to the class Web page. Students who have a wireless device or laptop can check the page
immediately and ask Steve any questions they have, or they can access it at their leisure and e-mail the TAs or
him for assistance, if necessary.

One night a week, Brian and Jessie hold a campuswide NetMeeting, where they discuss previous assignments
and answer students' questions. Brian and Jessie each have a NetMeeting-enabled wireless laptop, so they can
participate and respond to students' questions, regardless of where they are, enabling them to be more productive
with their time.

The Home
A home or private WLAN using 802.11b technology has the same range as corporate WLANs, so home users
can access their network from anywhere in their house, yard, or, in most cases, their own neighborhood. Home or
private WLANs using HomeRF technology have a coverage area up to approximately 150 feet. For this example,
it does not matter which system is used because the general capabilities are the same.

Imagine a family in a single-family house in a Chicago suburb. The father, Doug, is a financial advisor for a local
credit union. The mother, Emily, holds a part-time job doing online research for a local law firm and volunteers
at the local day care. They have two children, an 11-year-old son, Joe, and a 14-year-old daughter, Rachel. They
own three desktop computers, and Doug has a laptop, which he uses at home for work. They have a laser printer
for printing reports and newsletters for the day care center, a photo-quality ink jet printer, and a cable modem,
obtained through a special offer with the local cable service.

The printers and cable modem are connected to the parents' desktop, located in the den/office. A wireless access
point is also connected to this computer so that the children's computers and Doug's laptop can make use of the
printers and high-speed Internet access. Joe and Rachel each have a desktop in their rooms for doing homework
and research on the Web for school. Joe's best friend, who lives across the street, has a wireless adapter in his
desktop, so he can connect into the network to utilize the high-speed Internet access for school research, as well
as play the occasional network game with Joe. Doug is the only one who can make use of true mobility with his
laptop and access the network while lounging in bed, on the couch, or on the back deck. However, the entire
family benefits from their computers' capability to utilize the same printers and the cable modem without the
need for running cables throughout the house.

With these case studies in mind, let us next investigate the security concepts necessary for building a robust
security solution throughout the development of a wireless system. Chapter 2, "Security Principles," presents
industry-accepted and widely adopted security concepts and practices. These practices help shape our discussion
and investigation into the specific security issues associated with wireless systems, devices, and applications.

Chapter 2. Security Principles


The early bird gets the worm, but it's the second mouse that gets the cheese.

—Steven Wright

Security is not the result of properly designed and developed software or systems. Rather, it is part of the process
used throughout the entire software lifecycle, from design to development, testing, deployment, and
obsolescence. Security should be considered before any design is contemplated and long before any code is
written or circuits are wired. Too often, security is an afterthought. Developers go straight for the code or the
bench because that is where their efforts can first be recognized and because that is considered the "cool stuff."

In this code-first, think-about-it-later mentality, the first step is to try to implement the core functionality. Now,
we are not saying that following a rapid prototyping methodology is wrong; quite the contrary. Performing up-front
development to ensure that the technically challenging portions of the project are feasible has merit.
However, in a rapid prototyping approach, after the prototype is completed, the final design is developed from
scratch, and the previous development work is often discarded, either in whole or to a great extent. The practice
we discourage is development in which developers begin expanding and retrofitting to add other features, such as
multiplatform support, copy protection, or user interfaces, after functionality is in place. At this stage some
developers begin to think about scalability or security, although in too many cases, unfortunately, this does not
come until much later. Using this latter approach, it is no wonder that software and electronic devices end up with
many security vulnerabilities and that more time and money are spent on testing and patching than on the initial
design and development.

The good news is, this does not have to be the case. In this text, we give developers, consumers, and managers
the information necessary to understand and evaluate the security risks and vulnerabilities of any system, be it
consumer applications, wireless devices, or wired networks. At the heart of all these systems—and, we daresay,
almost any device these days—is the software. Rarely does an hour go by when we do not rely on some piece of
software, either in an application or embedded in firmware in a clock, coffeepot, streetlight, car ignition system,
or home heating system. The list is endless.

With this great reliance on software, you would think that the process of developing and testing the software that
goes into these products would be well-defined and refined. The truth is, software development practices stray
far from this ideal. The consumer demand for more functionality at a lower cost is giving developers the false
notion that they should not take the time to design the software properly, in favor of express coding and
producing to meet delivery deadlines.

This logic is seriously flawed. By taking the time (however much or little is needed) to follow basic
guidelines, developers can be assured that their products will work properly, safely, reliably, and securely, and
will be delivered faster and more cheaply than if they had jumped straight into coding. The ability to identify, evaluate, and mitigate the
risks (security or otherwise) in a given system requires expert knowledge of both security and the system being
evaluated. General knowledge enables you to ask intelligent questions to evaluate at least the high-level risks
and, more importantly, to know when a situation requires the advice of a security expert.

We are not going to talk in depth about software security here. However, you should understand that software is
the piece of any system that makes the whole puzzle either complete or incomplete. Also, software is usually the
component that makes the system susceptible to security vulnerabilities and is the means, if not the cause, of
exploiting hardware vulnerabilities. It will suffice to say that software is the most critical component of any
system, and this criticality deserves appropriate consideration. Consumers are beginning to demand that software
work as advertised, every time, anytime. The Addison-Wesley book Building Secure Software by John Viega and
Gary McGraw covers this topic in detail. We recommend the book highly to anyone in the business of
developing, using, or purchasing software that must work and must work as intended.

Security Principles
Six principles are commonly used to evaluate a system's security or vulnerability. These
principles are Authentication, Access Control and Authorization, Nonrepudiation, Privacy and Confidentiality,
Integrity, and Auditing. They are presented here in no particular order and are heavily interdependent. In fact, the
relative importance of each depends on the system or component being evaluated.

Authentication
Authentication is the principle that users, processes, or hardware components be able to identify other users,
processes, or hardware components in the system as who or what they say they are, and vice versa.

When a wireless device requests service from a local wireless service provider, it presents the system with user
credentials. This varies from device to device, but a cellular phone, for example, sends the device's ECN
(Electronic Control Number) and DCN (Digital Control Number). The local service provider uses these
credentials to authenticate the device as an authorized user of the system. Usually, the DCN is a telephone
number, but it can be thought of as a userid. The ECN is linked to the device and is a digital serial number,
similar to a password. The ECN is used with the intention of confirming that the device claiming to be an
individual's phone with that individual's userid is, in fact, the phone assigned to that individual. Changing a
cellular phone's DCN is relatively easy and, in many cases, can be done via the device's keypad. Changing the
ECN is a much more difficult task, often requiring proprietary hardware and software. Therefore, a service
provider who verifies that both are correct has some assurance that the individual is who she claims to be; that is,
authentication has been established.
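
To make the idea concrete, the following sketch (in Python, with invented credential values) shows the kind of comparison a provider-side system might make between the DCN/ECN pair presented by a device and the pair it has on record. Real cellular authentication is considerably more involved; this is only an illustration.

# Hypothetical sketch: provider-side check of a device's DCN/ECN pair.
# The subscriber database and credential values are invented for illustration.

SUBSCRIBERS = {
    "703-555-0147": "4E2A-91C7-0B33",   # DCN (userid-like) -> ECN (password-like)
}

def authenticate(dcn: str, ecn: str) -> bool:
    """Return True only if the presented ECN matches the one on record for the DCN."""
    expected_ecn = SUBSCRIBERS.get(dcn)
    return expected_ecn is not None and expected_ecn == ecn

# A device presenting the correct pair is accepted; any other pair is rejected.
print(authenticate("703-555-0147", "4E2A-91C7-0B33"))   # True
print(authenticate("703-555-0147", "0000-0000-0000"))   # False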

Why should anyone care about authentication? Better yet, why should anyone sacrifice functionality for accurate
authentication? If a user is using a PDA to transfer funds between a bank account and an investment account with
a broker, the user wants to be sure that the bank knows that it is not someone else falsely claiming to be that user
and transferring funds to his own bank account. Users want the bank to authenticate that the correct user is
making the request.

Access Control and Authorization


Access Control and Authorization is the principle that a process or hardware component be capable of
controlling access to whatever resources that process or hardware component represents or controls.

Access Control and Authorization are closely tied to Authentication. This service provides access control by
requiring a user to provide authentication to verify that he is authorized to use the service. Access control and
authorization can also be seen on the wireless device itself. Many phones have a lockout feature; the user must
provide an access code before the device can be used. This feature provides protection against an unauthorized
person's accessing a cell phone and assuming the owner's identity when using the cellular service.

Bringing this idea home, picture a wireless network setup in your home. You track your finances by using an
accounting program on your desktop computer. You also trade stocks with an online brokerage service and store
other personal data on the machine. You do not want anyone to access your wireless network without your
permission, so you implement access control to authenticate anyone attempting to access your network.
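
To make the home example slightly more concrete, here is a minimal Python sketch of the authorization decision involved: an allow list of known devices and the resources each may use is consulted before a request is honored. The device names and resources are invented for illustration.

# Hypothetical sketch: authorization check layered on top of authentication.
ALLOWED = {
    "dougs-laptop": {"printer", "internet", "finance-share"},
    "joes-desktop": {"printer", "internet"},
}

def authorize(device: str, resource: str) -> bool:
    """A device may use a resource only if it appears in the allow list for it."""
    return resource in ALLOWED.get(device, set())

print(authorize("joes-desktop", "internet"))        # True
print(authorize("joes-desktop", "finance-share"))   # False: authenticated but not authorized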

Nonrepudiation
Nonrepudiation is the principle that a user or process be identifiable and accountable for its actions in a manner
that prohibits the user or process from denying its involvement at a later date.

To explain nonrepudiation, we will describe a familiar transaction, charging something to a credit card. A vendor
requests a credit card, swipes the card in a reader, and enters the transaction amount. The card reader then
contacts the card provider and verifies that the card is valid and the amount requested is acceptable for that
person's credit profile. Finally, an authorization message is returned. The reader prints the transaction on a
carbonized receipt that provides multiple copies of the transaction. The receipt is presented to the user for her
signature. The vendor verifies that the signature matches the one on the card. Both the cardholder and the vendor
retain a copy of the receipt. The dual receipts, with the card provider's authorization and signature, provide
nonrepudiation. The cardholder cannot deny that she made the transaction, because the vendor has a copy of the
receipt with her signature. The vendor cannot modify the transaction (for example, alter the amount), because the
cardholder has a copy of the receipt. The nonrepudiation in this example easily transfers to a requirement for
wireless m-commerce applications. How to implement such a requirement, however, is not as obvious.
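
In an m-commerce setting, the usual building block for such a requirement is a digital signature: the customer signs the transaction with a private key only she holds, and the merchant (or anyone else) can later verify the signature with her public key, so she cannot plausibly deny authorizing it. The following sketch, which assumes the third-party Python cryptography package is installed, illustrates the idea; it is not a complete payment protocol.

# Hypothetical sketch of signature-based nonrepudiation for a transaction record.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

customer_key = Ed25519PrivateKey.generate()          # held only by the customer
customer_public_key = customer_key.public_key()      # shared with the merchant

transaction = b"2002-06-01 pay AdEx Inc. $20.00 card ending 4242"
signature = customer_key.sign(transaction)           # produced on the customer's device

# The merchant keeps (transaction, signature) and can later prove the customer signed it.
try:
    customer_public_key.verify(signature, transaction)
    print("signature valid: customer cannot plausibly deny the transaction")
except InvalidSignature:
    print("signature invalid: reject the transaction")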

Privacy and Confidentiality


Privacy and Confidentiality is the principle that a user, process, or hardware component be entitled to
protect its information from unauthorized disclosure.

Lately, this topic has generated a lot of press associated with the protection of credit card information over the
Web, legal authorities' subpoenaing e-mail and online purchase records, and the monitoring of ISPs to determine
users' surfing habits. It has become clear that consumers, and not just government agencies, are concerned about
privacy and confidentiality on wired and wireless networks.

Privacy and confidentiality are tricky and often contentious issues. Consumers who want to surf the Web
anonymously (or with increased privacy) still need to be able to conduct commerce over the Internet, where
nonrepudiation of a transaction is necessary. At the same time, the transaction must be confidential.

Certain government agencies would prefer that all communications and transactions be visible, at least, to their
specific agency. To make this a reality, several foreign governments have gone so far as to outlaw the use of
cryptography. To counter this, privacy fanatics have circumvented even this extreme measure by utilizing
steganography instead of cryptography. Steganography is the use of techniques to hide information within other,
innocuous-looking data files or streams. Several worldwide conferences on steganography are becoming very
well attended. Although steganography provides excellent privacy and confidentiality (after all, it is hard to read
what you cannot find), it usually requires a relatively large amount of innocuous data to hide a relatively small
amount of payload. Therefore, although intriguing, this technique is not well suited for commercial wireless use,
so we will not explore this aspect of confidentiality directly. Rather, we will point out that this idea lends itself to
other security uses, such as digital watermarking.
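
A toy example shows both the idea and the poor payload-to-cover ratio. The Python sketch below hides one payload bit in the least significant bit of each cover byte, so eight cover bytes are consumed for every payload byte; real steganographic tools are far more sophisticated, but the overhead problem remains.

# Toy least-significant-bit steganography: 8 cover bytes carry 1 payload byte.
def hide(cover: bytes, payload: bytes) -> bytes:
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(cover):
        raise ValueError("cover data too small for payload")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit      # overwrite the low bit only
    return bytes(stego)

def reveal(stego: bytes, payload_len: int) -> bytes:
    bits = [b & 1 for b in stego[:payload_len * 8]]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8)) for i in range(payload_len))

cover = bytes(range(256))                        # stand-in for innocuous cover data
print(reveal(hide(cover, b"hi"), 2))             # b'hi'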

Integrity
Integrity is the principle that a user, process, or hardware component be able to verify that what is sent or
delivered is accurate and has not been altered in some way.

Integrity has always been of prime importance for consumers conducting transactions electronically. For the
most part, it is taken for granted. Today, for example, taxpayers can complete their taxes on the Web and submit
them electronically. How many complete a parallel copy of the taxes manually, or at least offline, to ensure that
the computations are correct and the proper information is being recorded on the proper forms? Certainly not a
majority. Couple this with the thought that these returns are then processed on 25-year-old IRS computer
systems, and the lack of integrity should make you shudder. Consumers demand that the processes and services
they use provide reliability because they deal with critical information that can have serious consequences if its
integrity is not maintained.
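
A common mechanical way to provide this kind of verification is to send a keyed hash along with the data and recompute it on receipt. The short Python sketch below uses the standard library's hmac module; the shared key and message are invented for illustration, and a deployed system would also have to manage the key securely.

# Minimal integrity check with a keyed hash (HMAC-SHA256).
import hmac, hashlib

SHARED_KEY = b"example-shared-key"            # illustrative only

def tag(message: bytes) -> bytes:
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    return hmac.compare_digest(tag(message), received_tag)

form = b"1040: wages=52000 refund=850"
t = tag(form)
print(verify(form, t))                              # True: unmodified
print(verify(b"1040: wages=52000 refund=8500", t))  # False: altered in transit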

Auditing
Auditing is the principle that the activities of a user, process, or hardware component be reviewed to ensure that
whatever was performed was appropriate for the given entity.

Auditing can be both a reactive and proactive process—reactive in that audit logs may be examined at a later date
as a forensic measure to identify the source of a security problem or to determine the extent of the exposure,
proactive in that audit logs may be examined at or near real time to detect abnormal behavior or prevent someone
from attempting to bypass security measures. Clearly, the latter is preferable, but examining logs or monitoring
user activity in real time is resource-intensive. If this type of monitoring is deemed appropriate, what is
monitored must be carefully planned.

Return for a moment to the cell phone ECN/DCN discussion under Authentication. If service providers truly
believed that their ECN/DCN combination for authenticating the end user could not be replicated, they would
not perform as much auditing as they do. Service providers routinely implement an automated auditing system to
monitor user access for anomalies: for example, the same ECN/DCN combination accessing the system from two
places at the same time, or the same ECN/DCN pair accessing the network from distant locations (say, New York
and Miami) within a short time. This activity may be perfectly legitimate, but it falls outside the normal usage pattern
and would be flagged in an audit log to be reviewed by one of the service provider's security or auditing staff.
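
A much-simplified version of that audit rule can be expressed in a few lines of Python. The sketch below flags any ECN/DCN pair seen in two different cities within an implausibly short window; the log records and threshold are invented, and a production system would work against a live event stream rather than a fixed list.

# Toy audit rule: flag the same ECN/DCN pair appearing in different cities too quickly.
ACCESS_LOG = [                      # (minutes since midnight, ECN/DCN pair, city)
    (600, "4E2A/703-555-0147", "New York"),
    (645, "4E2A/703-555-0147", "Miami"),
    (700, "9B11/312-555-0101", "Chicago"),
]
THRESHOLD_MINUTES = 120             # can't plausibly travel between cities faster

def flag_anomalies(log):
    last_seen = {}                  # pair -> (time, city)
    for time, pair, city in sorted(log):
        if pair in last_seen:
            prev_time, prev_city = last_seen[pair]
            if city != prev_city and time - prev_time < THRESHOLD_MINUTES:
                yield (pair, prev_city, city, time - prev_time)
        last_seen[pair] = (time, city)

for anomaly in flag_anomalies(ACCESS_LOG):
    print("review:", anomaly)       # ('4E2A/703-555-0147', 'New York', 'Miami', 45)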

If security could be analyzed and implemented in a vacuum, without other considerations, applying these criteria
would be all that is necessary to implement a secure system in any situation. However, in practical applications,
security is only one aspect of a complete system. To evaluate the effect that implementing security has on a
system, the development or operational principles and the management principles associated with the system
must also be considered. We will now briefly discuss these principles so that you understand the trade-offs made
during the analysis of a system and its subsequent implementation, with the proper balance between these
principles and security.

Development and Operation Principles


Seven principles are commonly used to evaluate a system during design, development, or operation. These
principles are Functionality, Utility, Usability, Efficiency, Maintainability, Scalability, and Testability. These,
too, are in no particular order, may vary in relevance, and are heavily interdependent.

To illustrate this interdependence, we use a common Project Management trigon that shows the software
development trade-off (see Figure 2.1). The attributes of a project may be represented by a point within the
trigon. The customer can choose to optimize any two, but the third is dictated at its worst possible value. (The
closer you move toward a property, the greater that property's value. At the points for Functionality, Cost, and
Time, you have Maximum Functionality, Minimal Cost, and Minimal Time, respectively.) Of course, anyone
who has ever done project management knows that you are always tasked to optimize all three.

Figure 2.1. The Project Management trigon


Functionality
Functionality is the principle that a system or component perform different tasks of relevance toward a goal.

As illustrated in Figure 2.1, functionality is a primary factor in making development trade-offs and the main
factor governing many other principles in a system. It is no wonder, therefore, that developers want to develop
the functionality first (by starting coding or by prototyping circuit boards of a system). To do this, they make
many trade-offs, including sacrificing security. As the illustration indicates, however, functionality can be
instituted in the early stages, but developers will pay later in time or cost, or both.

Utility
Utility is the principle describing the extent to which a system or component meets the goals established.

There is some debate about whether utility is any different from functionality in a development environment. We
believe that it is, and the best way to explain the difference is with the following example. An engineer has to
diagnose the logic within a microchip; the tool she has available is a Swiss Army knife. Although the knife has a
lot of functionality, it does not offer much utility for the problem at hand.

Usability
Usability is the principle that a system or component be intuitive and simple to use.

For example, the first home DOS computers would boot to a prompt C:\> with a blinking cursor when turned
on. To write a letter using a word processor, the user would have to change directories to the location of the word
processor:
C:\> cd c:\progfiles\wp
Then, to start the program, the user would have to type another command:
C:\progfiles\wp> wp
The alternative was for users to learn how to change the path statement in the autoexec.bat file or to create a batch
file to perform the steps for them. On top of this, they would have to learn the 8.3 file-naming scheme, which
would make file management nearly impossible for today's average user. The learning curve to make
these first machines useful was steep indeed.

Unfortunately, these usability issues are not just bad memories of the past. Today, you can find similar cases with
palmtop devices that force the user to learn a new alphabet or cellular devices that require users to enter text from
a numeric keypad.

Efficiency
Efficiency is the principle that a system or component use both internal and external resources effectively.

The priority of efficiency has changed greatly in the past few years. With PC memory and hard drive prices
dropping, the recent past saw little need for efficient code. However, with memory and processing real estate in
mobile devices at a premium, efficiency is moving steadily higher on developers' priority lists.

Maintainability
Maintainability is the principle that a system be easy and cost-effective for a user, or others on her behalf, to
access and modify for upgrades, troubleshooting, and preventative maintenance.

For example, a particular model of a popular sports car manufacturer, which shall remain nameless, required that
the engine be removed to change the rear spark plugs. This resulted in very poor maintainability. Owners had to
pay for the labor and for parts such as gaskets and oil to have this procedure done at 15,000–20,000 mile
intervals to maintain performance. Interestingly, rather than change the design, this manufacturer encouraged the
development of new spark plugs that did not need to be changed for 100,000 miles, thereby increasing
maintainability. Even though this solution would not have immediately come to mind for most people, it did
increase maintainability. This also illustrates that there are many ways to obtain a satisfactory result when you
keep an open mind about trade-offs.

Scalability
Scalability is the principle that a system be extendable or usable for an expanded customer base.

A good example would be that of a local telecommunications provider who advertised the ability to bring high-
speed Internet service to area businesses without the need to run fiber-optic cable. Instead, it claimed to use
wireless microwave links from its network to the customers' premises. The architecture, although great in theory, was not as
scalable as advertised. When the telecommunications provider tried to expand its service beyond a relatively
small customer base, its wireless network was choked by the demand, and service was, shall we say, less than
acceptable.

Testability
Testability is the principle that the requirements of the system be well-defined and specific so that tests can be
created to prove directly that the requirements have been met.

A classic example would be a requirement that the system be easy to use. The phrase easy to use is meaningless
without concrete, specific definition. How would you test this? How can testers know for whom it is supposed to
be easy to use? (The requirements would be starkly different for a computer engineer than for a retired
government clerk.) When it comes to security issues, we have seen requirements such as "the system shall be
secure." Part of the process of ensuring that you have good and valid requirements is to determine how these
requirements will be tested. Again, if the effort is placed up-front, there will be real benefit later in the
development process and in the final product.
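
One way to make such a requirement testable is to restate it in measurable terms and write the test alongside it. The sketch below, using Python's unittest module with an invented measurement function and threshold, tests a requirement phrased as "a new user must be able to place a call in three steps or fewer" rather than "the system shall be easy to use."

# A vague requirement ("easy to use") restated so a test can verify it directly.
import unittest

def steps_to_place_call():
    """Stand-in for instrumentation that counts UI steps in the real system."""
    return 3

class TestUsabilityRequirement(unittest.TestCase):
    def test_new_user_places_call_in_three_steps_or_fewer(self):
        self.assertLessEqual(steps_to_place_call(), 3)

if __name__ == "__main__":
    unittest.main()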

Management Principles
Four principles are commonly used to evaluate a system's business aspects. They are Schedule,
Cost, Marketability, and Margin. At the risk of losing some of the developers reading this book, we feel that we
must cover these areas because looking at the security and development trade-offs without looking at the
business aspects can encourage certain trade-offs that may adversely affect the ultimate system. These business
concerns directly affect the security/functionality trade-offs made during design, development, and production.
The biggest problem is that if the system is not properly designed at the beginning, the time and cost involved in
retrofitting it to perform as desired can be overwhelming. Unfortunately, these trade-offs often come to light
several stages into the project, usually at the testing phase. Our point is this: You cannot test functionality,
security, or any other -ity into a system; it must be designed in from the start and carried throughout the entire
process.

Schedule
Schedule is the principle that the system have a plan for the activities needed to bring the project to completion.
A schedule relates resources to tasks and provides a means to manage resources in relation to time and
effort. It also provides detailed resource requirements, including the number and type of resources for given parts
of a project. A schedule also allows managers to determine critical points or milestones in the development
process and ensure that the project is given adequate attention at these times.

Cost
Cost is the principle that the system have tangible costs associated with its development and maintenance (and
exit strategy) and that these costs be known and linked to specific parts of the development. Knowing these costs
enables managers to anticipate cash flow needs and capital expenditures as a project progresses.

Marketability
Marketability is the principle that the system have a consumer base, that there be a need for this product or
service, and that the product or service have a differentiator to distinguish it in a positive manner from its
competitors.

Margin
Margin is the principle, or more accurately, measure, that the product be sold for an amount greater than the costs
associated with producing, distributing, and selling the product. This difference—between what a consumer is
willing to pay for a given product or service and the cost to deliver that product or service—is the profit or
margin.

Security analysis examines all these common principles and determines the best trade-offs to protect the system
effectively while maintaining control over other principles of interest. To be complete and useful, any system
analysis from a risk perspective must consider all these factors to determine overall risk. Many who claim to
perform security, software, or some other form of risk analysis consider only portions of these principles.
Security or software analysis cannot truly be useful unless all the principles are considered when
recommendations or mitigations are provided.

The Security Analysis Process—I-ADD


At first glance, security analysis and planning can seem to be a daunting task. After all, countless books have
been written and entire careers based on each one of these primary elements. How can all of them be analyzed
simultaneously? Well, they do not have to be. Further, if the security analysis process is begun early in the
project, particularly at the design phase, it is even easier because the analysis can follow, complement, and
bolster the standard design process already in place. There are four phases to a successful security analysis
process. We define this process as the I-ADD security analysis process, or "How I Add Security to a System."
The phases are

1. Identify targets and roles.

2. Analyze attacks and vulnerabilities, generating mitigations and protections.

3. Define a strategy for security, mindful of security/functionality/management trade-offs.

4. Design security in from the start.


The key is that the I-ADD security analysis process is iterative and recursive. These two attributes enable you to
perform a task that would otherwise be nearly insurmountable in scale.

Identify
The first phase in the process is to identify the system's high-level functional blocks. A typical wireless system
has six high-level functional blocks (see Figure 2.2 for an example). After the high-level functional blocks are
identified, an examination of each is performed with the intent of identifying the resource or information targets
within each block that should be protected.

Figure 2.2. A typical wireless system

An alternative way to proceed through this process is to identify the user roles that interact with each block
rather than the targets within it. If you are generating requirements documents, the target-based method is more
useful in producing requirements that are testable. For instance, "The system must protect the user's credit card
information" is better than "The system must protect against a malicious attacker who is monitoring the line and
attempting to capture the user's credit card information."

The first stresses what must be protected, the credit card information, and the second stresses the role, a
malicious attacker.

For a complete evaluation, you must look at both the targets and the roles. Looking at only one or the other can
leave a vulnerability in the system: for example, no "known" threat exists, so no protection in that area is
provided. Conversely, excessive resources may be spent protecting a resource that is not vulnerable to any
threat.

As you progress through the system, information may come to light identifying additional roles or targets that
require protection for previous blocks. These items should be noted under the appropriate block as the process
continues. After the first iteration is complete, the analysis is repeated until no more targets or roles deserving
protection are identified. This concludes the high-level security analysis of the Identify phase.
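
Even a simple data structure makes this bookkeeping concrete: each functional block carries its own lists of targets and roles, and each iteration may append to either list. The Python sketch below is purely illustrative; the block names and entries are invented rather than taken from Figure 2.2.

# Illustrative bookkeeping for the Identify phase: targets and roles per block.
from dataclasses import dataclass, field

@dataclass
class FunctionalBlock:
    name: str
    targets: list = field(default_factory=list)   # what must be protected
    roles: list = field(default_factory=list)     # who interacts with the block

system = [FunctionalBlock("wireless device"), FunctionalBlock("access point"),
          FunctionalBlock("gateway"), FunctionalBlock("wired network")]

system[0].targets.append("user's credit card information")
system[0].roles.append("legitimate user")
system[1].roles.append("malicious eavesdropper in the parking lot")

for block in system:
    print(block.name, "->", block.targets, block.roles)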

Analyze
When the identification process is complete at the highest level, it is time to analyze the vulnerabilities and the
known and theoretical attacks against the targets by the various roles identified. The goal of this phase is to
develop an understanding of which items deserve additional resources to protect, which are "nice to have," and
which can be placed on an "acknowledged, with no action necessary" list.

This phase is accomplished by

• Studying existing known attacks

• Comparing the system to other similar systems to analyze how vulnerabilities were mitigated and
determining the appropriateness of using similar solutions with this system
• Examining current security and technological journals and Web pages for insights into the technology
and research

• Developing an understanding of how potential mitigations will affect other aspects of the system

• Consulting experts in the field

The Analyze phase is closely tied to the next phase. When learning any methodology, it helps to consider each
phase as a separate step, but practice and experience blur the lines between the phases.

Define
The Define phase takes the information from the previous two phases and defines or develops a strategy for
implementing security in the system. This is where all the principles and factors are analyzed and, with that
knowledge, trade-offs are made to provide the system with the necessary balance between all the elements
defining and guiding system development. Usually, this phase is best approached as a team activity, with
someone representing and concentrating on each major element of the system.

For example, a team may be composed of one or more (depending on the system's size and complexity)
engineers representing development efforts, a security engineer, a project manager representing schedule, cost,
user issues, and conflict resolution, and, last but not least, a facilitator who captures actions and ensures that the
process does not become bogged down by conflicting priorities. The facilitator does not participate in the
discussions. Done correctly, the strategy developed will represent a thoughtful analysis of the critical trade-offs
between technology and business objectives and provide a roadmap for successful completion of the system.

Design
With a strategic roadmap in place, the system can be designed from the ground up, incorporating the features and
procedures developed during the Define phase. The design should incorporate all the aspects of functionality
determined appropriate in prior phases. If during this phase something comes to light that alters what appears to
be reasonable, the process can be reiterated (hopefully at a rapid rate) to evaluate how this new information
affects previous assumptions and recommendations. When the Design phase is complete, the resultant design
specification and associated functional description should fully describe the system.

Figure 2.3 shows a graphical representation of the process. You enter the process at the Identify phase and iterate
through the four phases until you have an acceptable design. Then you exit.

Figure 2.3. The security analysis process

Repeat
The next step in the process follows a recursive pattern in which each block is broken down to the next
functional level. Figure 2.4 shows our typical wireless system with the wireless device broken down to the next
functional level. The levels shown are just one possible way to break out the device.

Figure 2.4. A wireless system with the wireless device broken down to the next level

Each block at this level is then examined as a system unto itself, as illustrated in Figure 2.5, just as the high-level
blocks are examined using the I-ADD security analysis process. This recursion process is continued until the
low-level software components are designed or the hardware components identified. The results are then returned
up through the process, verifying that higher-level design requirements have been covered and incorporating any
additional items identified in the lower levels.

Figure 2.5. A wireless system with the I-ADD process imposed on the second level of the
wireless device

The Foundation
Before continuing, you should have a good understanding of the principles of security, development, and
management. The security principles are Authentication, Access Control and Authorization, Nonrepudiation,
Privacy and Confidentiality, Integrity, and Auditing. The development principles are Functionality, Utility,
Usability, Efficiency, Maintainability, Scalability, and Testability. The management principles are Schedule,
Cost, Marketability, and Margin.

You should also understand the I-ADD security analysis phases of Identify, Analyze, Define, and Design and
that they are iterative processes that can be used recursively to divide a system into its basic functional blocks.
These blocks are then analyzed, and the results are returned back up through the process until a complete system
design or analysis is performed.

This provides you with the background necessary to understand the security issues covered in later parts of this
book. We will now be able to talk specifically about the security issues of wireless systems and will apply the
process described here to portions of actual wireless systems in use today. This chapter gives you a firm
foundation on which to build your understanding of wireless security.

Part II: Know Your System


Chapter 3. Technologies
Technology is the knack of so arranging the world that we do not experience it.

—Max Frisch

In the initial phases of developing a comprehensive security solution, you need to know the ins and outs of all
components. Here we present detailed information on wireless technologies such as 802.11b, Bluetooth, and
Wireless Application Protocol (WAP). Each technology occupies a different place on the wireless technology
spectrum and has its own security implications. This chapter shows you the kind of information you should
understand about any wireless technology by walking through what you need to know about these particular
technologies.

802.11 and 802.11b


In 1997, the Institute of Electrical and Electronics Engineers (IEEE) published the first world-recognized
standard for wireless networks, 802.11. About two years later, the IEEE published 802.11b, also known as
802.11 High Rate, which specifies the standards for building wireless systems that operate with data speeds of up
to 11Mbps. The intention of this standard is to give wireless networks the same robustness as that of wired
Ethernet networks. One declared benefit of 802.11b is that administrators who design wireless systems to be
seamlessly compatible with existing wired standards can follow the specifications of 802.11 with the assurance
that the 802.11b standard will be backward compatible.

The basic features of 802.11b are defined by the existing 802.11 standards for architecture and services. The
design paradigms are similar, and the components of 802.11b find parallels in their wired equivalents. The 802.11
standards provide specifications for the lower two levels of the Open System Interconnection (OSI) network
reference model: the physical layer and the data link layer (see Figure 3.1). Any current application, network
operating system, or protocol that exists in compliance with the 802.11 standards should be compatible with the
wireless standards. These components reside in layers above the physical and media access control layers. Their
operation is not affected by differences in the lower layers. To understand 802.11b, you must understand the
building blocks of the 802.11 standard.

Figure 3.1. The OSI model


802.11 System Components
The 802.11 standard defines two categories of equipment: a station and an access point (see Figures
3.2, 3.3, and 3.4). The wireless station is any standard PC that has a Network Interface Card (NIC) that supports
wireless communication. The station has access to the wireless medium and radio contact to an access point. The
access point is the equipment that allows the wireless system to interact appropriately with a wired one; in other
words, the access point performs an important function called bridging. A typical access point comprises a radio,
a wired network interface, and bridging software conforming to the IEEE bridging standard. The access point can
be pictured as a base station for the wireless system. Communication for many wireless stations is funneled to the
access point and directed to the wired network.

Figure 3.2. The BSS infrastructure mode


Figure 3.3. The ESS infrastructure mode
Figure 3.4. The BSS ad hoc mode

802.11 Architecture Modes


The wireless stations and access points are configured in two modes as defined by the specifications:
infrastructure mode and ad hoc mode. In infrastructure mode, all stations in a system connect to an access point,
not directly to one another. In ad hoc mode, the stations interconnect directly, without communicating through an
access point.

Infrastructure mode comprises access points and stations in the same radio coverage that form a basic service set
(BSS), illustrated in Figure 3.2. Several basic service sets connected form a distribution system, creating one
larger network and extending the wireless coverage area. This distribution system is called an extended service
set (ESS). The 802.11 specification does not further detail the architecture of a distribution system. The
individual implementations are left up to system architects. Decisions about how to interconnect BSSs are based
on design requirements, types of stations or devices, and business considerations. If interconnection with wired
systems were a requirement in building a wireless system, infrastructure mode specifications would provide
direction for a viable system. Handoffs can occur between BSSs to extend network capabilities. Alternatively,
the access points in several BSSs can be connected to a wired LAN, further extending its capabilities.

An architecture in ad hoc mode is a set of stations that communicate without an access point (see Figure 3.4).
This on-the-fly mode does not require connection with a wired network and is easily assembled and
disassembled. Each node communicates with the others directly. In ad hoc networks, however, possibilities for
interconnecting with other wired or wireless networks are limited in that there is no master/slave relationship and
each station maintains its own independence. As in infrastructure mode, interconnected stations form BSSs.
802.11 does not specify routing paradigms, data forwarding, or exchanging topology information among BSSs.

802.11b Physical Layer


One of the most valuable additions the 802.11b standard provides is the standardization for the physical layer
support of two new speeds, 5.5Mbps and 11Mbps. The original 802.11 standard specifies two signaling methods,
with data rates of 1Mbps and 2Mbps and operation in the 2.4–2.4835GHz frequency band: frequency-hopping
spread spectrum (FHSS) and direct-sequence spread spectrum (DSSS). The two are not interoperable. In FHSS,
the band is divided into many subchannels. The sender and receiver agree on a hopping pattern, with the intent of
minimizing the chance that two senders will simultaneously use the same subchannel. Subchannel bandwidth
cannot be greater than 1MHz, as regulated by the FCC. These regulations restrict the maximum throughput and
lead to high hopping overhead. At the same time, however, 802.11b is less susceptible to multipath propagation
interference than 802.11.

The DSSS technique, in contrast, allows most subchannels to overlap slightly. Data is sent over a single channel
without hopping. Instead, a technique called chipping is used: each bit of user data is converted into a redundant
bit pattern, called a chip. The redundancy and the spreading of the chips across the channel facilitate error
checking and correction. Retransmission is rarely necessary, even if part of the signal is damaged.

As noted, the value added by the 802.11b standard lies in the standardization of physical layer support for the
two new higher speeds, 5.5Mbps and 11Mbps. In the 802.11b specification, DSSS is the sole signaling method
supported. FHSS is eliminated in the new standard because it cannot support the higher speeds without violating
FCC regulations. The intention is that 802.11b DSSS interoperate with existing 1Mbps and 2Mbps 802.11 DSSS
systems but not with 802.11 FHSS systems.

To increase the data rate in 802.11b, advanced coding techniques are described. In the previous standard, 11-bit
Barker sequences (an 11-chip spreading code) encode all data sent over the air. Each Barker sequence is
converted to a waveform and sent over the air. The waveforms, called symbols, are transmitted at 1MSps (a
million symbols per second) in a 1Mbps DSSS system, and the data rate is doubled in 2Mbps systems. In the
802.11b standard, rather than the 11-bit Barker sequences, Complementary Code Keying (CCK) is specified.
CCK enables the symbol rate to be increased to 1.375MSps.
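
To make the chipping idea concrete, the following Python sketch spreads each data bit with an 11-chip Barker sequence, as 1Mbps and 2Mbps DSSS systems do: a 1 bit is sent as the sequence and a 0 bit as its inverse, so a few corrupted chips can still be voted back to the correct bit. This is a simplified model, not the full 802.11 modulation chain, and the chip ordering shown is only one common statement of the code.

# Toy DSSS chipping: each data bit becomes 11 chips; a majority-style correlation
# decodes the bit even if a few chips are corrupted in transit.
BARKER_11 = [1, -1, 1, 1, -1, 1, 1, 1, -1, -1, -1]   # one common statement of the code

def spread(bits):
    out = []
    for bit in bits:
        sign = 1 if bit else -1
        out.extend(sign * chip for chip in BARKER_11)
    return out

def despread(chips):
    bits = []
    for i in range(0, len(chips), 11):
        correlation = sum(c * b for c, b in zip(chips[i:i + 11], BARKER_11))
        bits.append(1 if correlation > 0 else 0)
    return bits

tx = spread([1, 0, 1])
tx[3] = -tx[3]; tx[17] = -tx[17]          # corrupt two chips "over the air"
print(despread(tx))                        # [1, 0, 1] despite the damage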

802.11b uses dynamic rate shifting to achieve the best available data rate, even in cluttered environments. Data
rates are adjusted automatically to make the best use of the 11Mbps rate. When high interference is present or a
wireless device moves outside the best range for 11Mbps, the rate shifts to a slower speed (5.5Mbps, 2Mbps, or
1Mbps). Dynamic rate shifting automatically bumps back to a higher speed when the device moves back into
11Mbps range or when the interference sufficiently subsides.
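
A greatly simplified model of that decision logic looks like the following Python sketch: the device picks the fastest rate whose signal-quality threshold is met and falls back otherwise. The thresholds here are invented; real adapters use vendor-specific criteria.

# Simplified dynamic rate shifting: choose the fastest rate the signal can support.
RATE_THRESHOLDS = [          # (Mbps, minimum signal quality on a 0-100 scale); invented values
    (11.0, 75),
    (5.5, 55),
    (2.0, 35),
    (1.0, 0),
]

def select_rate(signal_quality: int) -> float:
    for rate, minimum in RATE_THRESHOLDS:
        if signal_quality >= minimum:
            return rate
    return RATE_THRESHOLDS[-1][0]

for quality in (90, 60, 40, 10):
    print(quality, "->", select_rate(quality), "Mbps")   # 11.0, 5.5, 2.0, 1.0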

802.11 Media Access Control Layer


The 802.11 Media Access Control (MAC) layer is designed to support multiple users on a shared medium by
having the sender detect and gather information about the medium before accessing it. The 802.3 Ethernet-based
(wired) LAN specification is also designed to support multiple users on a shared medium and specifies methods
for the sender to sense the medium; its protocol (Carrier Sense Multiple Access with Collision Detection
[CSMA/CD]) details how collisions are detected and handled. In 802.11, collision detection is not possible,
because stations cannot listen and transmit at the same time; their own radio transmission prevents them from
sensing a collision. The protocol specified is slightly different from that in 802.3; it is termed Carrier Sense
Multiple Access with Collision Avoidance (CSMA/CA). CSMA/CA involves sending extra packets, called
explicit packet acknowledgments (ACKs), to confirm receipt of transmitted packets.

In a proper CSMA/CA transmission, the sender senses the medium and, if it detects no other activity, waits a
randomly defined period of time and, if the medium is still free, transmits to the intended recipient. When the
recipient has received the sender's entire transmission, it returns an ACK frame. The process is successfully
complete when the sending station receives the ACK frame. If the ACK frame is not received by the sending
station, either because the original transmission was not received or because the ACK frame was unable to
transmit successfully, a collision is assumed, and the data packet is retransmitted after another randomly defined
period of time.
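
Stripped of all timing detail, the control flow of that exchange can be sketched in Python as follows. The medium object and its methods are invented placeholders standing in for a real radio driver, not an actual API.

# Skeleton of the CSMA/CA send path: sense, random backoff, transmit, wait for ACK, retry.
import random, time

MAX_RETRIES = 7

def csma_ca_send(medium, frame):
    for attempt in range(MAX_RETRIES):
        while medium.is_busy():                    # carrier sense
            time.sleep(0.001)
        time.sleep(random.uniform(0, 0.001 * (2 ** attempt)))   # random backoff
        if medium.is_busy():                       # someone grabbed the air first
            continue
        medium.transmit(frame)
        if medium.wait_for_ack(timeout=0.01):      # explicit packet acknowledgment
            return True                            # success
        # no ACK: assume a collision or loss and retry with a longer backoff
    return False

class LoopbackMedium:
    """Trivial stand-in so the sketch runs: never busy, always acknowledges."""
    def is_busy(self): return False
    def transmit(self, frame): print("sent", frame)
    def wait_for_ack(self, timeout): return True

print(csma_ca_send(LoopbackMedium(), b"hello"))    # sent b'hello' / True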

CSMA/CA effectively handles the transmission and collision problems associated with radio communication. It
does not minimize overhead, however, and by necessity renders 802.11 communication slower than 802.3.
Simply put, wireless communications are slower than wired communications.

Another mechanism defined at the MAC layer is the optional Request to Send/Clear to Send (RTS/CTS) protocol.

Unfortunately, accompanying each benefit is a caution. The 802.11b standard was developed to be seamlessly
compatible with the existing IEEE wired standards but has been criticized as being too compatible. Because the
standard's security requirements are compatible with a wide range of devices, networks, and other technologies,
the standard in its basic form leaves potential systems wide open.

The standards dictate what should be possible—that the application layer and network protocol layer not be
affected by these differences at a data link or physical layer—but they do not operate seamlessly without risk.

802.11b Security and Wired Equivalent Privacy (WEP)


The goal of securing wireless network traffic is to approach as closely as possible the security offered in wired
networks. The 802.11b standard affords this possibility by way of the Wired Equivalent Privacy (WEP) protocol.
WEP offers communication encryption and physical device authentication capabilities to wireless
communications while balancing users' needs for privacy with ease of use. WEP is available in 64-bit and 128-bit
strength. In wired LANs, stealing network traffic is considered difficult because an attacker needs to be in close
physical proximity to the network to gain access. The attacker has to be close enough to a network cable to use
listening equipment to intercept waves emitted as data flows through the network. In wireless networks, however,
the same attacker does not need to be physically close to a cable but can simply be in a parking lot adjacent to the
building where the wireless LAN is installed.

The WEP protocol algorithm is designed on five premises:

1. Reasonably strong. Takes a reasonably long time to crack the encryption.

2. Self-synchronizing. Resynchronizes the connection among devices when communication is inadvertently terminated.

3. Computationally efficient. Is not too taxing on battery power.

4. Exportable. Can be shipped outside the United States under encryption export regulations.

5. Optional. Can be turned on and off at a user's discretion.

The protection provided by the WEP algorithm is all some mobile users require. It automatically synchronizes
itself between the device and the access point. This is helpful because wireless stations frequently drop
communications or vacillate in and out of service, depending on their distance from an access point and the
strength of the signal. The algorithm is efficient and can therefore be implemented in software or hardware. It
can be exported under current U.S. government regulations and is optional in an 802.11 system.

The process is as follows (see Figure 3.5; a simplified code sketch of the exchange follows the numbered steps):

Figure 3.5. The WEP authentication sequence


1. A requesting station sends an Authentication frame to the access point (AP).

2. When the AP receives the initial Authentication frame, it replies with an Authentication frame
containing 128 bytes of random challenge text generated by the WEP engine in standard form.

3. The requesting station copies the challenge text into an Authentication frame, encrypts it with a shared
key, and sends the frame to the responding AP.

4. The receiving AP decrypts the challenge text, using the same shared key, and compares it to the
challenge text sent earlier. If the two match, the AP replies with an Authentication frame indicating
success. If not, the AP sends a negative authentication.
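
The following Python sketch mimics that challenge-response exchange using RC4, the cipher on which WEP is built. It is deliberately simplified: a real implementation prepends a per-frame initialization vector to the key and appends an integrity check value, and, as discussed next, passing this exchange says nothing about the overall strength of WEP. The key and challenge values are invented.

# Simplified model of WEP shared-key authentication (IV and ICV omitted).
import os

def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm, XORed with the data
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

shared_key = b"illustrative-key"            # invented; both sides know it in advance

# Steps 1-2: the AP answers the authentication request with 128 bytes of random challenge text.
challenge = os.urandom(128)
# Step 3: the station encrypts the challenge with the shared key and returns it.
response = rc4(shared_key, challenge)
# Step 4: the AP decrypts the response and compares it with the challenge it sent.
print("authenticated" if rc4(shared_key, response) == challenge else "rejected")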

Most businesses that choose to use 802.11b wireless LANs, however, should not rely on WEP alone. In spring
2001, WEP encryption was determined breakable by researchers at the University of California, Berkeley and at
the University of Maryland. The papers produced by these two groups of researchers outlined the weaknesses
inherent in the creation of keys used in the encryption algorithm for encrypting traffic traversing the wireless
network. Throughout the rest of the year, different groups implemented the attack with collections of WEP data,
and some released tools to facilitate the encryption break.

The encryption algorithm itself is not the problem. The vulnerability lies in the way WEP derives the keys used
by the algorithm, which makes it relatively easy to break. The per-packet keys the mechanism generates are too
closely related to one another. With enough wireless data packets captured, an attacker can determine the key
and crack the encryption. Once the encryption is broken, all data passed on the wireless network becomes
viewable, and the network has been pried wide open.

Often, the wireless access point installed in an office is placed inside the corporate firewall, opening the entire
network to attack. Attacks on wireless networks are very difficult to detect because the attacker needs only a
passive attack to gain access to the system. Merely by listening to packets as they fly through the air,
an attacker can execute her break. Two applications that can be used to break into a wireless network, AirSnort
and WEPCrack, can be used to implement the findings published by the California and Maryland researchers.
These applications boast being capable of resolving a network's WEP keys within seconds of listening to network
traffic. With the introduction of these applications, any high school student with a laptop and a wireless network
card, regardless of her knowledge of technical details, can break in to wireless systems.
The weakness of the WEP encryption implementation is not the only one in 802.11b. There is another concern,
which should garner more attention than it does: most wireless access points and networks are being deployed
without even the limited defense of WEP encryption enabled.

Wireless networks configured merely by plopping an access point into an existing secure wired network should
be deemed insecure. Wireless access points should be used only with the knowledge that they introduce gaping
holes into a system by nature. They can be used, but only after certain precautions are taken. In December 2001,
an IEEE committee approved an interim patch for WEP that thwarts the success of applications like AirSnort or
WEPCrack and other homegrown varieties of WEP encryption breakers. It is more prudent, however, to consider
WEP a weak form of protection against attack. Access points should be treated with the same caution as Internet traffic.
They should be placed outside firewalls and routed through Virtual Private Network (VPN) solutions in all cases.
Basically, networks should not rely on the security provisions that come with 802.11b out of the box. Firewalls
and VPN solutions protect against the problems described here just as they do for wired systems.

In late 2001, RSA released a solution to the weakness present in WEP, the Fast Packet Keying solution, which
uses a technique that rapidly generates a unique key for each wireless data packet. The IEEE committee
approved this fix in early 2002. Although it quells the war-driving experiments of many, it does not solve
wireless LAN security problems indefinitely. Claims were made that this solution solved the weakness inherent
in wireless communication using WEP. These claims are largely valid, but many wireless security proponents now
believe that a more advanced encryption mechanism and key generation scheme should be used.
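The actual Fast Packet Keying algorithm is not reproduced here; the sketch below only illustrates the general idea of per-packet keying, using an assumed hash-based mix of a base key, the transmitter address, and a packet counter (the function name and its parameters are invented).

```python
import hashlib

def per_packet_key(base_key: bytes, transmitter_addr: bytes, packet_counter: int) -> bytes:
    """Derive a fresh 16-byte key for every packet from a slowly changing base key."""
    counter = packet_counter.to_bytes(6, "big")
    return hashlib.sha256(base_key + transmitter_addr + counter).digest()[:16]

base_key = b"long-lived pairwise key"
addr = bytes.fromhex("00a0c914c829")          # transmitter MAC address

k1 = per_packet_key(base_key, addr, 1)
k2 = per_packet_key(base_key, addr, 2)
assert k1 != k2    # consecutive packets no longer share a nearly identical RC4 key
print(k1.hex(), k2.hex(), sep="\n")
```

Because each packet key is the output of a one-way mix rather than a simple concatenation, captured packets no longer hand the attacker a family of closely related keys to analyze.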

Until subsequent versions of 802.11 and WEP are secure, external security measures are absolutely critical.
Almost no security is offered at the data link layer. It is safe to assume that if any wireless access points are
placed inside a firewall on a corporate network, anyone within physical range of your wireless network can act as
a legitimate user on that network. Although 802.11 is the most popular technology, there are other competing and
complementary technologies, such as Bluetooth.

Bluetooth
Unlike 802.11, Bluetooth is a technology that operates solely in ad hoc networks. Infrastructure mode is absent
from Bluetooth discussions, as are long ranges among stations. Interestingly, Bluetooth takes its name from the
Danish Viking king Harald Blatand, who ruled Denmark in the late 900s. He was responsible for
Christianizing Denmark and uniting it with part of Norway. His name indicates nothing about the technology but
rather signifies the importance of countries in this region of the world in the wireless industry. In 1998, a little
more than one thousand years after his death, five companies, following Ericsson's lead, founded the Bluetooth
consortium. These companies, including Intel, IBM, Nokia, and Toshiba, directed the development of Bluetooth
specifications with the intention of Bluetooth's being a low-cost wireless transmission standard. Many more
companies joined what is now termed the Bluetooth Special Interest Group (SIG); the membership totals around
1,000.

Bluetooth is a de facto standard, as well as a specification for small-form factor, low-cost, short-range radio links
among devices. The Bluetooth SIG drives the development of the technology and is attempting to push it to the
general telecom, networking, and computer industry markets.

The Bluetooth SIG's goal is to integrate Bluetooth into everyday devices, not just cell phones and laptops. The SIG
operates under the premise that adding Bluetooth technology to a device should increase its cost by only $5 or so.
Bluetooth should be capable of functioning in such basic implements as a pen and such complicated devices as a
computer or PDA. Bluetooth spares expensive wiring and infrastructure costs but does require stations to be
within close proximity of one another to communicate. It enables devices to interoperate within an approximate
range of 10 meters. The Bluetooth SIG members intend for Bluetooth to be the dominant technology for
connecting all consumer electronic devices. They envision the use of Bluetooth to connect a cordless handset to
its phone, a peripheral to a computer, a PDA to a computer, two PDAs to each other, or perhaps even a remote
control to a TV via a computer.

In general, devices similar to Bluetooth that use infrared (IR) as a transmission medium are reliable, and
building the technology into a device requires little cost. These devices do, however, require a line of sight
between them, which significantly limits their versatility. Bluetooth does not have this same line-of-sight
restriction because, unlike IR-capable devices, it operates over radio instead of light.
Bluetooth Physical Layer
Bluetooth uses the 2.4GHz frequency band with a frequency-hopping scheme for transmission at a rate of 1,600
hops per second. The 2.4GHz frequency represents the range assigned by international agreement as the
communications spectrum for industrial, scientific, and medical (ISM) devices. More familiar devices that
operate in the ISM band include baby monitors, garage-door openers, and certain cordless phones.

Frequency hopping is the method Bluetooth advocates point to for keeping Bluetooth devices from interfering with
one another's communication. This technique is similar to that employed in the 802.11b
standard. The time between two hops in a Bluetooth transmission is called a slot. Each slot utilizes a different
frequency, and each device hops among 79 pseudo-randomly chosen frequencies. In most countries, roughly 80MHz of
bandwidth is available, and the hop carriers are spaced at 1MHz intervals. In countries with smaller bandwidth
allowances, where regulations permit only 23 hop carriers, the carriers are confined to a narrower slice of the band.
Bluetooth transmits a weak signal, only 1 milliwatt; the most powerful cell
phones transmit at about 3 watts. The low power dictates the short 10-meter range within which Bluetooth devices
can communicate. The low power combined with frequency hopping is Bluetooth's native protection against security
breaches and communication interference. Later in this book, flaws in this theory are discussed, as well as the risks.
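As a rough illustration of why hopping limits interference, the toy sketch below assumes a simple seeded pseudo-random generator rather than Bluetooth's real hop-selection kernel, which is derived from the master's address and clock; only the collision statistics it prints are the point.

```python
import random

CHANNELS = 79          # 1MHz-spaced hop carriers in most countries
SLOTS = 1600           # one second of traffic at 1,600 hops per second

def hop_sequence(seed: int, slots: int = SLOTS) -> list:
    rng = random.Random(seed)
    return [rng.randrange(CHANNELS) for _ in range(slots)]

piconet_a = hop_sequence(seed=0xA)
piconet_b = hop_sequence(seed=0xB)

collisions = sum(a == b for a, b in zip(piconet_a, piconet_b))
print(f"{collisions} of {SLOTS} slots collide (about {collisions / SLOTS:.1%})")
# The expected collision rate is roughly 1/79, so neighboring piconets mostly
# stay out of each other's way without any explicit coordination.
```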

The Bluetooth technology is structured around ad hoc piconets, networks of two or more devices that don't require an
infrastructure framework (see Figure 3.6). In a Bluetooth piconet, devices are designated master or slave. In any
given communication session, a device has the potential to be either a master or a slave. By default, the device
that initiates a connection becomes the master. As many as seven devices can be designated
slaves to each master. In some special circumstances (for example, Bluetooth's implementation of profiles,
discussed later in this section), devices may need the capability to toggle between master and slave. In such
cases, master/slave switch operations have to be added to the underlying devices.

Figure 3.6. A piconet

To establish a piconet, an initial device (by default, the master) must first initiate contact with a slave. If the
master already knows the address of the intended station, it transmits a page message. This message simply alerts
the potential slave that the master wants to connect and establish a communications link. If the master does not
know the slave's address, it must first send an inquiry message requesting the slave's MAC address; after
retrieving that information, the master transmits a page message.

The master in a Bluetooth piconet defines its frequency-hopping sequence. This frequency hopping is similar to
the FHSS examined in the 802.11b section. The frequency hopping in this instance, however, is designed to
reduce the likelihood of transmissions from one piconet impeding those of a neighboring piconet. It is possible
for a master to address a specific slave with its communication and not broadcast a transmission to any active
slaves in its piconet. This type of interaction is typically called point-to-point communication. If a master
broadcasts a transmission to all active slaves in its piconet, this is termed point-to-multipoint communication.
Slaves cannot initiate either type of communication. Slaves cannot even initiate communication with each other.
A slave device would have to depart from its current piconet and begin another as a master, at which point it
could contact another slave directly from the previous piconet (if that slave, too, has left the original group).
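The rules just described can be summarized in a small model. The class and method names below are invented; the sketch only mirrors the one-master, seven-active-slave limit and the fact that all traffic is initiated by the master.

```python
# A minimal model of the piconet rules described above: one master, at most
# seven active slaves, and all traffic initiated by the master, either
# point-to-point or point-to-multipoint.
class Piconet:
    MAX_ACTIVE_SLAVES = 7

    def __init__(self, master):
        self.master = master
        self.slaves = []

    def page(self, slave):
        """Master invites a known device into the piconet (a 'page' message)."""
        if len(self.slaves) >= self.MAX_ACTIVE_SLAVES:
            raise RuntimeError("piconet already has seven active slaves")
        self.slaves.append(slave)

    def point_to_point(self, slave, payload):
        print(f"{self.master} -> {slave}: {payload}")

    def point_to_multipoint(self, payload):
        for slave in self.slaves:
            print(f"{self.master} -> {slave}: {payload}")

net = Piconet(master="laptop")
net.page("pda")
net.page("headset")
net.point_to_point("pda", "sync calendar")
net.point_to_multipoint("clock update")
# There is deliberately no slave-initiated send: a slave that wants to talk to
# another slave must leave and start its own piconet as a master.
```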

Recall that in 802.11, BSSs can be joined to form ESSs. This network expansion finds a parallel in Bluetooth
as well: piconets can join to form scatternets (see Figure 3.7). For BSSs to expand into ESSs, they must
interconnect through an infrastructure, a connection to a similar access point or wired network. In Bluetooth,
however, one device that is a member of both would-be joined piconets must participate in each, for instance as
the master of one and a slave of the other. This device is an active participant in more than one piconet. The
Bluetooth consortium has identified several parameters for the existence of scatternets.

Figure 3.7. A scatternet

In a scatternet, there may be at most ten fully loaded piconets (recall that a piconet can contain one master and as
many as seven active slaves). If more than ten piconets are connected, communications can break down
arbitrarily among any of the devices or piconets. Derivatives of this rule stipulate further limitations. A single device
may link two or more piconets into a scatternet. A single device may act as a master in, at most, one piconet but
as a slave in as many as the other nine. The safest and most common configuration for any given device is to act
as the master of one piconet and a slave of another. When a device becomes a slave in more than one
piconet, its role becomes increasingly problematic in that it must maintain the clock values of multiple masters.
Coordinating these clocks is extremely difficult, and the Bluetooth specifications do not lend themselves to
multiple-slave designations.
A limitation of the Bluetooth specifications themselves is that they are general and lack architectural and
application-level recommendations. This generality makes application development, and consequently the
development of security provisions, exceedingly troublesome. For more technical information about
Bluetooth, please see the Bluetooth specifications. For security concerns, the concepts outlined here provide
ample grounds on which to identify risks and construct mitigations at the various levels.

Bluetooth Protocol Architecture


The Bluetooth SIG has identified and described core protocols for Bluetooth architecture systems. These
protocols are designed to be used in conjunction with existing protocols used in other 802.x systems. The
Bluetooth SIG defines four core protocols, as well as several others that support Bluetooth communication. The
four core protocols are

1. Baseband and Link Control (BLC)

2. Link Manager Protocol (LMP)

3. Logical Link Control and Adaptation Protocol (L2CAP)

4. Service Discovery Protocol (SDP)

The BLC layer enables the combined physical links to form a piconet. This layer assists in organizing the
frequency hopping by using page and inquiry messages to determine the best path available. Two types of
physical links are available to Bluetooth devices: Asynchronous Connectionless (ACL) and Synchronous
Connection-Oriented (SCO). In ACL mode, communication is faster because the device is either transmitting or
receiving data at any one time and functions only with data packets. In SCO mode, a device can be transmitting
and receiving at the same time and can use either audio or data packets. The audio and data packets can be
encrypted and can use different levels of error correction.

The LMP layer establishes the physical links and manages encryption or authentication by negotiating baseband
packet sizes and exchanging and checking link and encryption keys. The LMP also maintains the connection
state of a device in a piconet.

The L2CAP layer operates in higher protocol layers than the BLC. It provides data services, both connection-
oriented and connectionless, to the upper-layer protocols with the capability to partition data for sending and
reassemble it upon receipt. L2CAP is defined only for ACL links; no support for SCO links is specified by the
Bluetooth Specification 1.0.

The fourth core protocol defined in Bluetooth is particularly interesting. The SDP layer fosters the creation of
Bluetooth usage scenarios. Bluetooth advocates tout SDP as extremely easy to use when you want to
communicate with another Bluetooth device. SDP provides device information, services available, and specific
information about querying services so that a device connection can be established. If SDP is configured in what
is called discoverable mode, Bluetooth devices are available for other devices to detect. The intended usage
model is that two people point their PDAs at each other and instantly communicate. The critical flaw in this
example is the assumption that both users want to communicate with each other.

Immediately, security-minded professionals should be able to conceive of hundreds of situations in which this
ease of connection is a distinct hindrance to security and privacy. If Barbara is taking the train from New York to
D.C. and falls asleep with her Bluetooth-enabled PDA on her lap, she doesn't want Karen, the technical rep from
her firm's biggest competitor, simply pointing her own Bluetooth-enabled PDA at Barbara's and "communicating"
unbeknownst to Barbara. If Tom approaches a vending machine and is about to make a phone call on his
Bluetooth-enabled cell phone, he doesn't want the vending machine to deduct 50 cents from his checking account
simply because his phone came within range of the Bluetooth-enabled cash handler in the vending
machine.

These situations represent every security-conscious individual's worst nightmare. The likelihood that they will
come to fruition is, as of yet, unknown. Attempts will be made. Of that we can be certain.

Other protocols implemented in a Bluetooth stack are


• Cable Replacement Protocol. Provides transport capabilities for upper-level services.

• Telephony Control Protocol—Binary. Defines signaling for data and speech calls.

• Telephony Control Protocol—AT commands. Defines fax and modem use in Bluetooth devices.

• A set of adopted protocols (PPP, TCP/UDP/IP, OBEX, WAP). Allows Bluetooth to interoperate
with applications that reside in still higher layers on the protocol stack.

For more information on these protocols and Bluetooth's use therein, see the Bluetooth Specification 1.0.

Bluetooth Profiles
Bluetooth is designed to be specific at the lower layers of a protocol stack and more flexible and interoperable at
higher levels. The physical and data link layers define Bluetooth's unique characteristics and operations. The
Bluetooth SIG has developed several profiles to be used in assisting Bluetooth application development. These
profiles are intended to capitalize on Bluetooth's features, as well as enable it to operate with other systems. One
Bluetooth profile available enables developers to integrate WAP-compatible applications with Bluetooth devices.

Current SIG working groups are working on a profile to allow Bluetooth devices to be routable. Should this
profile be approved and implemented, the 10-meter distance limitation could be eliminated. In this intended
profile, if a user is more than 10 meters from an access point but within 10 meters of a user who is closer to the
access point, the transmission from the first user can be routed through the second to the access point and back.
This profile has not yet been published, so the relative security of the routing cannot be assessed. Presumably, this
could present many dangers. The second user, who is functioning as a router, must be open to receiving traffic
and potentially has access to it, given the right equipment and applications.

Another profile worthy of note is the Generic Access Profile. It is discussed next because it is inextricably
linked to discussions of Bluetooth security.

Bluetooth Security
The LMP in the Bluetooth protocol stack handles link-level authentication and encryption mechanisms. These
features are based on a shared secret between two devices. This shared secret key is supposedly generated the
first time two devices forge a connection and communicate. Bluetooth profiles describe how to use the functions
built in to the LMP and BLC protocols compatibly with other non-Bluetooth devices. Three security modes are
possible under the instruction of the Generic Access Protocol:

• Security Mode 1. Non-secure. The device does not automatically initiate any security procedures.

• Security Mode 2. Service-level enforced security. The device does not automatically initiate security
procedures before the L2CAP layer establishes a channel. This level facilitates ease of interaction with
applications that have varied security requirements.

• Security Mode 3. Link-level enforced security. The device initiates security procedures before the link
is established at the LMP level.

It almost goes without saying that mode 1 security is a poor solution for any operations or applications that
require even the beginnings of security. For the purposes of this text, Security Mode 1 will be disregarded.
Bluetooth recommends that either of the other two be implemented in designing Bluetooth architectures and
applications. Security Mode 3 is more straightforward than Security Mode 2 in that it initiates security
procedures before the link is established. Security Mode 3 does not allow for as much flexibility, so it may not be
used as often in systems designed to accommodate different devices, applications, and topologies. Security Mode
2, therefore, provides the most salient topic for examination of Bluetooth security. This mode allows system
architects and developers to construct security paradigms and requirements without eliminating the possibility of
integrating with devices and applications that have different security requirements.
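A small sketch can make the difference among the three modes concrete. The function names below are invented and the sequence is simplified; it mirrors only when, relative to link and channel establishment, each mode initiates security procedures.

```python
from enum import IntEnum

class SecurityMode(IntEnum):
    NON_SECURE = 1       # Security Mode 1
    SERVICE_LEVEL = 2    # Security Mode 2
    LINK_LEVEL = 3       # Security Mode 3

def connect(mode: SecurityMode, service_requires_security: bool) -> None:
    if mode is SecurityMode.LINK_LEVEL:
        print("authenticate and encrypt at the LMP level, before the link is established")
    print("L2CAP channel established")
    if mode is SecurityMode.SERVICE_LEVEL and service_requires_security:
        print("authenticate and encrypt now, because this particular service demands it")
    elif mode is SecurityMode.NON_SECURE:
        print("no security procedures initiated at all")
    print("service traffic begins\n")

connect(SecurityMode.LINK_LEVEL, service_requires_security=False)
connect(SecurityMode.SERVICE_LEVEL, service_requires_security=True)
connect(SecurityMode.NON_SECURE, service_requires_security=False)
```

The flexibility of Mode 2 is visible here: the decision to protect traffic is deferred until a specific service asks for it, which is exactly what makes it both more adaptable and easier to get wrong.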

Before discussing security design paradigms in Bluetooth systems, several points need to be established. In
Bluetooth's base form, its technology and specifications do not include inherent security functionality. Options
for designing implementations do provide for more secure methods of protecting devices, applications, and
architectures. However, each option requires significant expertise on the part of system architects and application
developers.

You can design and produce an application without security features, to be used with Bluetooth-enabled devices.
A device left configured at mode 1 security, for instance, with no additional security implemented, is wide open
to attacks at many levels. A device with mode 2 security, however, can be constructed to limit the risk posed to
its users.

Finally, it is important to note that the security paradigms set forth here are merely options available and are
intended to provide guidelines for establishing appropriate security architectures in other wireless systems. A
wireless architect would be remiss if he did not examine his system with respect to the security procedures
explained in Chapter 2, "Security Principles." The system must be dissected before appropriate security methods
can be determined. That said, the Bluetooth specifications do not provide for security design suggestions but do
provide the foundation on which to build relatively secure systems and communications.

There are two types of relationships between devices. In the first type of relationship, devices can be configured
to always trust and recognize each other, perhaps including unrestricted access to each other's available
resources. Your cell phone, for instance, can be configured to always recognize and authenticate fully with your
PDA and laptop. The second relationship type requires periodic or repeated authentication between your device
and others that do not operate under the same pattern of trust. This second
relationship between devices establishes nonpermanent trust. Access to services on these devices is restricted and
subject to consideration of context and resources. Whether permanent or nonpermanent trust is appropriate
depends on the system configuration.
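A rough sketch of these two relationship types follows; the device names, service names, and authorization logic are invented and stand in for whatever policy a particular implementation enforces.

```python
# Permanently trusted devices skip re-authentication and get full access; any
# other device must authenticate for each session and sees only a restricted
# set of services.
trusted_devices = {"my-pda", "my-laptop"}

def authorize(device: str, service: str, authenticated_this_session: bool) -> str:
    if device in trusted_devices:
        return f"{device}: full access to {service}"
    if not authenticated_this_session:
        return f"{device}: must authenticate before touching {service}"
    restricted = {"business-card-exchange", "clock-sync"}   # per-session, limited services
    if service in restricted:
        return f"{device}: temporary access to {service} for this session only"
    return f"{device}: access to {service} denied"

print(authorize("my-pda", "file-transfer", authenticated_this_session=False))
print(authorize("strangers-pda", "file-transfer", authenticated_this_session=True))
print(authorize("strangers-pda", "business-card-exchange", authenticated_this_session=True))
```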

WAP
The Wireless Application Protocol (WAP) is considered by some to be the standard in wireless communications.
WAP specifications were developed by members of the WAP Forum. (Those who sit in opposition to the
assertion that WAP is or will become the standard in wireless communications find themselves on the WAP Is
Crap bandwagon.) The WAP Forum was started by Phone.com (then Unwired Planet, now Open Wave) and
joined shortly thereafter by Nokia, Ericsson, and Motorola. Although its pricey admission fee of $27,500 did
prevent other companies from initially investing in the development of the WAP standard, the WAP Forum is
open to any group that pays the admission fee. Currently, there are nearly 1,000 members, including a variety of
smaller companies representing different interests from those of the four founding members.

There is great debate in the development community over whether WAP, because it is a protocol standard like 802.x,
should be open for comment and criticism from individuals other than members of the WAP Forum. Still, the
members keep their doors closed to outside comments. WAP is in its early stages, with planned updates to be
released periodically over time. Whether or not its critics are right, and whether or not it ultimately succeeds,
WAP is a technology that has had and will continue to have an effect on the wireless industry.

The WAP Forum published this global wireless protocol specification, based on existing Internet standards such
as XML and IP, for all wireless networks. WAP is primarily used in Europe, although it has achieved small penetration
in U.S. wireless markets. More than 75 percent of cellular phones shipped worldwide include it. Fewer wireless-
enabled PDAs, however, implement WAP than cell phones do. This could be due, in part, to the main
products produced by the founding members of the WAP Forum or could be merely coincidental.

Several components of WAP are essential to understanding its security features and limitations. First is its
protocol stack. WAP functions higher on the protocol stack than does Bluetooth or 802.11b. Because these three
technologies do not function at the same level, drawing salient parallels among the three is difficult. In wireless
markets, however, these three are the forerunners in the race for the most pervasive wireless system paradigm.

After studying WAP's protocol stack, it is imperative that you become familiar with the workings of WAP
architecture. Its devices, languages, gateways, and networks lend insight into how to best prepare a system for
protection against its security limitations. First, we will examine the protocol stack, shown in Table 3.1.

Table 3.1 illustrates a mapping between wired Internet components and WAP components. Although the WAP
stack is, in large part, derived from the ISO OSI reference model, it describes only five layers to the OSI model's
seven. It is similar to the Web or Internet model, as shown in Table 3.1. The WAP specifications define each
layer of the protocol stack by designing mechanisms that facilitate communication from smaller devices through
slower connection speeds over networks with high degrees of latency, with the intent of enabling them to
integrate with other wired infrastructures.

Table 3.1. The WAP protocol stack

Functionality         Internet      WAP

Application layer     HTML,         Wireless Application Environment (WAE):
                      JavaScript    WML, WMLScript

Session layer         HTTP          Wireless Session Protocol (WSP)

Transaction layer     HTTP          Wireless Transaction Protocol (WTP)

Security layer        TLS-SSL       Wireless Transport Layer Security (WTLS)

Transport layer       TCP,          Wireless Datagram Protocol (WDP),
                      UDP           User Datagram Protocol (UDP)

Network layer         IP            Bearers: GSM, SMS, USSD, GPRS, CSD, CDPD, etc.

The top layer in Table 3.1, the Wireless Application Environment (WAE), provides a framework for the
development of portable applications and services. The WAE provides residence for WAP's native languages: the
Wireless Markup Language (WML) and its partner script, WMLScript. The two languages are based on
ECMAScript (the standardized language on which JavaScript is based) and are designed to allow development in
small chunks to save costly transmission time. Both languages are discussed in further detail in Chapter 5, "Languages."

The next lower layer in the protocol stack is home to the Wireless Session Protocol (WSP). This layer provides
definition for exchanging content by way of sessions between clients and servers. In the Internet realm, this is
handled by HTTP. HTTP also handles the transactions in wired network applications, but in WAP, transactions
are handled by the Wireless Transaction Protocol (WTP). WTP offers several methods by which devices can
perform transactions; variations among methods are mostly in degree of reliability.

Of particular interest is the Wireless Transport Layer Security (WTLS). This wireless equivalent to Transport
Layer Security (TLS) or Secure Socket Layer (SSL) provides authentication, privacy, and secure connections
between applications. In WAP specifications, as in Internet communications, this layer is optional. The problem
with WTLS is that it does not provide end-to-end security, as might be expected. At a certain point during a
communication cycle structured according to WAP specifications, a server, called a WAP gateway, has to
decrypt the WTLS packets and encrypt them in TLS or SSL to transmit them securely across network channels
that follow wired Internet specifications. For an instant, that information resides in the clear on the gateway and
is vulnerable to attack.
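The following sketch exaggerates the simplicity of the gateway, and its toy XOR routine stands in for both WTLS and TLS record protection (the variable and function names are invented), but it shows exactly where the plaintext necessarily appears.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for WTLS/TLS record encryption (the same call encrypts and decrypts)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

wtls_key = b"handset-to-gateway-session-key"
tls_key = b"gateway-to-webserver-session-key"

# Handset side: the request leaves the phone protected under WTLS.
request = b"GET /account/balance?pin=1234 HTTP/1.0"
over_the_air = xor_cipher(request, wtls_key)

# Gateway side: decrypt the WTLS leg, then re-encrypt for the TLS leg.
in_the_clear = xor_cipher(over_the_air, wtls_key)   # plaintext lives here, however briefly
print("gateway sees:", in_the_clear.decode())
to_web_server = xor_cipher(in_the_clear, tls_key)
```

Whoever controls, or compromises, the host running this translation can read or alter every request and response that passes through it, which is why the gateway deserves as much scrutiny as the endpoints.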

The next lower layer in the WAP stack is the transport layer. The Wireless Datagram Protocol (WDP) serves as
a divider between the upper layers of the protocol stack and the bearer services provided by the service operator.

WAP Overview
The easiest way to explain WAP is to follow a communication cycle. This example describes how a user with a
WAP-enabled cell phone can execute an application that runs via a server-side script on a standard Web server.
Figure 3.8 illustrates the process. The user presses a key on her phone that has an assigned URL request. The
user agent on the phone sends a URL request to a WAP gateway in the format specified by WTP. The WAP
gateway receives the request and translates it into an HTTP request for the same URL. If the transmission is sent
using WTLS, the gateway also translates the data into HTTPS. The gateway forwards the HTTP or HTTPS
request for the specified URL to the Web server. The Web server processes the request. If the URL refers to a
static file, the Web server retrieves that file. If the URL refers to a script application, however, the Web server
runs the application and returns its output. This completes one half of the communication cycle. Now the data
begins its return trip to the phone.

Figure 3.8. WAP architecture

The Web server passes the requested file or output, along with any HTTP or SSL headers, via HTTP back to the
gateway. Again, the gateway performs translations. If the returned information is in a language readable by the
phone (in this case, WML), the data is simply forwarded. If the data is in HTML or another language not
readable by the phone, a translation server must translate the information into WML. Sometimes this translation
is performed on a server other than the gateway, and other times it is performed by the gateway itself. (The
search engine Google, www.google.com, currently searches all HTML pages and performs the translation for the
user. Other wired Internet engines currently search only the smaller subset of Web pages coded in WML and do
not perform translations on behalf of mobile device users.)

Another translation may occur on the WAP gateway when the data is sent via SSL. In this case, the WAP
gateway decrypts the SSL-encrypted data and encrypts it in WTLS standard. After the appropriate translations
are performed, the WAP gateway verifies the WML content and encodes it, along with companion HTTP
headers, into a binary form. This binary form serves one purpose: to minimize bandwidth usage. Now the
gateway sends the data as a WAP request over WTP via bearers, finally ending at the phone.
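As a rough illustration of why the binary encoding saves bandwidth, the sketch below replaces recurring tag strings with invented one-byte tokens; the real encoding is the tokenized WAP Binary XML (WBXML) format defined in the WAP specifications, not this table.

```python
WML_TOKENS = {          # hypothetical single-byte codes for common WML tags
    "<wml>": 0x01, "</wml>": 0x02,
    "<card>": 0x03, "</card>": 0x04,
    "<p>": 0x05, "</p>": 0x06,
}

def encode(markup: str) -> bytes:
    out = bytearray()
    i = 0
    while i < len(markup):
        for tag, code in WML_TOKENS.items():
            if markup.startswith(tag, i):     # a whole tag collapses to one byte
                out.append(code)
                i += len(tag)
                break
        else:
            out.append(ord(markup[i]))        # ordinary text passes through as-is
            i += 1
    return bytes(out)

deck = "<wml><card><p>Balance: $42.17</p></card></wml>"
encoded = encode(deck)
print(len(deck), "characters of WML ->", len(encoded), "bytes over the air")
```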

After the phone receives the WAP response, it begins another set of conversions. It parses the WML response
and displays the first card of the WML deck on the phone's display screen.

Wireless Application Environment (WAE)


The application environment is the only place security can be anchored with any strength in wireless systems.
The WAE is where this must take place in a WAP architecture. The WAE gives developers the tools to construct
applications suited for small, limited platforms and wireless communication. Two user agents on the devices
compose the WAE: The WAE user agent includes the microbrowser and the text message editor, and the
Wireless Telephony Application (WTA) user agent is responsible for the receipt and execution of telephony
functions.

The WAE specifications do not provide guidance on implementation or design of user agents. They simply offer
formats that can be used for images or text messages with which user agents must comply. The different
components of a user agent can vary greatly among manufacturers of both devices and software resident on those
devices.

When exploring the WAE, assessing its languages and their potential security risks is important. WML and
WMLScript are investigated in greater detail in Chapter 5. In brief, WML is analogous to HTML and based on
XML. It uses a deck of cards metaphor. Instead of pages, as they are called in HTML, contained slices of WML
are called cards. Each card contains a small amount of text or data to provide smaller chunks of information that
need to be displayed at any given time. A group of cards that are packaged together for transmission is called a
deck. The encoded WML sent to a user agent from the WAP gateway is decoded into a deck of cards for display
to limit network roundtrips.
WMLScript is based on ECMAScript (the same language on which JavaScript is based). WMLScript enables
WML to offer value-added services. WMLScript provides the functionality to validate user input and access
local functions in a wireless device (for example, WTAI) and supports local messaging in the form of alerts or
errors resident on a device to save communication time. WMLScript supports six standard libraries, which are
explained in Chapter 5. WMLScript is not embedded in WML; instead it is called via URLs, as in <a
href="calling.wmls#calculate(2,3)">. No type checking is done at either compile time or
runtime, and no variable types are explicitly declared. WML/WMLScript developers can and must develop their
own libraries to maximize the capabilities of these minimalist languages.

Another significant component of the WAE is the Wireless Telephony Application (WTA). The WTA provides
access to the phone's telephony facilities, allowing service providers to accept and initiate calls, send and receive
text messages, add, search, and remove phonebook entries, examine call logs, send tones during calls, or press
keys on a keypad during a call. It allows the Internet to access mobile phone functionality resident either on the
client itself or in the mobile network. A framework for WTAs allows access to telephony functionality from
within WMLScript applets. This enables operators to develop secure telephony applications integrated into
WML/WMLScript services. For example, services such as call forwarding may provide a user interface that
prompts the user to make a choice between accepting a call, forwarding it to another person, or forwarding it to
voice mail.

Each of the telephony features appears attractive to would-be attackers of wireless systems. To protect against
the dangers of unauthorized use, WTA services assume that users will visit only trusted WAP gateways, which
then connect to WTA servers. The WTA servers regulate the WMLScript functions that can be passed to a
device that accesses WTA functions and store the libraries used to invoke WTA functions. These servers are
controlled by wireless operator networks and are assumed to be trusted.

The WTA and WAE specifications do not dictate any native security on the mobile device beyond this minimal
precaution. If a WTA server or WAP gateway were compromised, the regulation of WTA functions for phones
would become ineffective and would leave all devices communicating through the system open to compromise. It would be
possible for scripts to erase phonebooks, steal phonebook data, place unwanted phone calls, or otherwise tamper
with telephony functions. The Wireless Telephony Application Interface (WTAI) facilitates the interoperation of
the WTA user agent with either the telephony functions residing on the device itself or those housed on the
wireless operator's network.

Another security precaution in the WTA specifications relies on device manufacturers (both hardware and
software) to appropriately assign permissions for scripting on the phone. Three permission types are available for
access to WTA functionality:

1. Blanket permission to access all functions within a WTAI library

2. Context permission to run a given WTA function within the current execution context

3. Single-action permission to access a given WTA function once

The WAP specifications do not require these permissions to be established or configured at a certain point in the
creation of a device or application. Caution in its implementation is left up to developers. WTA specifications do
not even specify default settings, so if left unconfigured, the telephony functions appear wide open. The settings
are potentially determined by the wireless service provider. If history is any indication, service providers will
preconfigure devices with liberal permissions to permit access to their own scripts without regard for other
potentially malicious scripts.

To understand the role of WTA and WTAI more clearly, it is helpful to step through a communication procedure.
When there is a need to access a WTA function, the WTA user agent on the mobile device sends a request to the
WAP gateway with the name of the library and specific function desired. The gateway then solicits help from the
WTA server to obtain the requested function and determine permissions for its use. After receiving a response
from the WTA server, the gateway forwards the appropriate code back to the WTA user agent on the mobile device. On the
device, the code runs, and the WTAI allows it to interface with the telephony functionality on the phone or in the
network. WTA functions can be stored (in an abbreviated subset) on the device itself to save
expensive communication trips to the WTA server. The danger in this is that if the software on the device is
tampered with, the permissions and regulatory actions normally performed by the WTA are left to the phone's
security.

This possibility presents an interesting dilemma. Trust the WTA server, and store more functions on it
(allowing it to control all telephony activity on a device), or trust the phone hardware and software manufacturers
to implement permissions appropriately, and leave more WTA functions on the phone (affording less trust to a
WTA server). There are no easy answers. Without any foundational security model, the severity of attacks
against wireless devices will increase as these devices become more critical to users and businesses in storing
and processing confidential information. Telephony functions will likely be an area of amateur attacks. Much
like e-mail viruses, the types of attacks that target telephony functions will be directed at damaging information,
usurping pricing models, or otherwise wreaking havoc on the current structure. E-mail viruses do not (typically)
exploit sensitive data or seek to steal or alter sensitive data. That class of attack falls elsewhere in our discussion
of WAP security, beyond the exploitation of WTA functionality.

WAP Security
After close inspection of the technical specifications for all aspects of WAP, paying special attention to the
WTLS specifications, it becomes increasingly apparent that one major design flaw exists. There is a glaring lack
of security design specifications in the WAP framework. The security built in to WAP systems is left entirely to
architects and developers. WTLS provides an important piece of the puzzle, but in no way does it provide end-to-
end security—in theory or in practice. Wired system architects are no strangers to insecure platforms. Smart
wired system architects recognize that even if a platform is labeled or advertised as secure, it shouldn't be
trusted—not if any information it touches holds value or is sensitive.

WAP has the same limitations as other wireless technologies. It is limited by size, space, speed, and cost. The
requirements for building a secure system must be tailored when these factors come into play. As some Web
application developers are tricked into a false (not to mention, dangerous) sense of security by employing SSL
and encryption, so too can wireless developers fall victim to a similar fate. WTLS certainly adds a layer of
protection to transmitted data in WAP systems. However, it does not provide any assurance against malicious
content that runs on the device or for online application exploits of WAP servers. For this reason, it is necessary
to focus the discussion of WAP security on the WAE layer, and specifically on WML scripts that run in the
application layer of wireless devices. The scripting threat to WAP-enabled wireless devices is likely imminent.
Malicious scripts can infect mobile devices just as they do devices plugged in to the wired Internet.

Many security features in WAP are heavily rooted in trust of entities out of your control. WAP gateways are
trusted to decrypt and encrypt data, rendering it open to compromise from various methods of attack. A WAP
gateway can be a simple NT server with the necessary applications running, sitting in an unlocked closet in an
unlocked hallway of a national wireless service provider's regional office. There is no guarantee that the security
proffered by the regional office is sufficient to protect data to the standard you require. Too many links in this
chain are variable and unknown for you to assume that all of it can be trusted.

WTLS is often employed under the assumption that it makes the system secure end to end. Partial security is, in
reality, insecurity. The security
features inherent in the WAE place requirements on WML and WMLScript, as well as on WTA functionality,
but their effectiveness is directly tied to the state of their implementation. For WTA functionality, either the device
manufacturers or the WTA server must be trusted, both of which are typically out of a developer's control. WML
and WMLScript are limited subsets of languages with known security problems. Because these already faulty
languages have been stripped down, security paradigms for development must be altered. In their current state, WML and
WMLScript have to be coded extremely securely to offer the level of security required in most
corporate wireless systems and demanded by consumers who are paying bills, checking secure e-mail, or
initiating stock trades via their mobile devices.

In too many places in the WAP architecture, the sole option for designing security is to trust an entity that is out
of your control. The only viable solution is to develop security into applications so that your own resources are
protected from system compromises and inadvertent or malicious attacks.

Chapter 4. Devices
Everything that can be invented has been invented.
—Charles H. Duell, commissioner, U.S. Office of Patents, 1899

Device security begins with one simple concept: Treat your device as you do your wallet. The single biggest
security risk in using a mobile or wireless device is that it can be easily lost or stolen because of its small form
factor. Some devices have a locking feature that allows them to be secured by a simple PIN. This does not
protect the data stored on the device but does prevent the person sitting next to you in a dark movie theater from
accidentally making a call on your phone.

Devices are a challenging area of our investigation into wireless security. They are, by far, the most likely of all
components in our research to evolve and change rapidly. The voluminous impending changes make this chapter
difficult to structure. We have not yet progressed to a time when all wireless networks are standard and
compatible (as wired networks are). We have 802.11x networks, Bluetooth on the horizon, and iMode in Asia,
but none has proven its worth or market share across the globe. Devices are still varied and nonstandard. Phones,
PDAs, and wireless-enabled laptops are three categories of devices. Within each category, you find myriad
differences and nuances, making each device unique and making security planning for such a wide audience
exponentially more difficult.

This chapter investigates a few PDAs and a phone. Because OSs are largely tied to vendors at this stage of
wireless technology evolution, we define categories of PDAs or phones by their OS/vendor labels. Wireless-
enabled laptops are not explored here because they do not represent the same limited-device platforms as the
former categories. They do represent a significant portion of devices used to connect wirelessly to networks and
the Internet, but their uniqueness lies in their connection methods, not in the devices themselves. PDAs and
phones represent a distinct breed of devices.

Personal Digital Assistants


Personal Digital Assistants (PDAs) first appeared on the market in the form of electronic organizers.
Organization remains their primary function; other features have been added as demand changes. The most basic
organizers, often termed PIMs (Personal Information Managers), include a calendar, an address book, and a
primitive notepad. PDAs serve as organizers almost without exception. The Franklin Day-Planners of old have
been replaced with this new electronic way of managing personal information. Most PDAs also have the
capability to synchronize with a PC, whether to upload or download information from a proprietary application
for storage and backup purposes or to synchronize with popular PC organizers such as Microsoft Outlook or
Lotus Organizer. This enables users to avoid the hassle of maintaining both paper and electronic versions of
business or personal contacts, meetings, or appointments.

The ability to tote these PDAs anywhere you go makes them mobile, but not necessarily wireless. The ability to
connect without docking with a PC makes a PDA wireless. The devices we cover in this chapter are wireless,
whether inherently so or enabled via an add-on such as a wireless modem. More recently, PDA vendors are
offering add-ons to bolster functionality. These additional components can play music, add memory, function as
a modem, or add storage space.

Memory expansion is an attractive feature in many PDAs. Memory added to a mobile device is called
flash memory. It can be programmed and erased repeatedly and consumes power only when accessed. Obviously,
this is an important feature for devices with constrained battery power. Flash memory also retains its information
after the device is turned off. Upgrade capacity is roughly proportional to a card's physical size: flash memory
extensions the size of a postage stamp typically add 8MB of memory to a device, but larger-format extensions
can accommodate more than 64MB of memory. The catch is that the cost of a 64MB flash memory extension
can sometimes exceed the cost of the PDA itself. See Table 4.1 for the sizes of various memory cards.

Table 4.1. Memory Card Sizes

Card                   Size (cubic mm)

Springboard            23,085

PC card                14,552

CompactFlash           7,740

Average credit card    4,644

MemoryStick            3,010

SecureDigital          1,613

SmartMedia             1,265

MultiMediaCard         1,075

A PCMCIA card is an option when seeking to expand a PDA's functionality. PCMCIA cards, or PC cards for
short, have long been industry-standard extensions for desktop systems. The physical hardware interface is not
integrated into mobile devices as of yet because of the battery power required to supply the connector. To solve
this problem, many devices feature sleds, an add-on that fastens to the PDA and allows PC card integration. PC
cards are very taxing on batteries and are not seen as often in PDAs with smaller batteries. Sometimes PC cards
exist as adapters for still other expansion cards, which are smaller. A PC card–to–Memory Stick adapter allows a
Memory Stick to be inserted into a PC card. The Memory Stick can be inserted into a PC card in a laptop, for
instance, and data can be transferred manually between the laptop and a PDA with a PC card adapter. Adapters
are available for several additions, such as Secure Digital, SmartMedia, MultiMediaCard, and CompactFlash
cards. For more information on PC cards, see http://www.pc-card.com. For more information on Memory Stick,
see http://www.sony.com.hk/Electronics/pr_t/tec/memory.

One popular expansion module is Handspring's proprietary interface, Springboard. Boasting the largest physical
capacity of the expansion cards, it also provides the highest data transfer rate. The data transfer rate is made
possible by the attachment of I/O devices directly to the processor bus. Some Springboard modules support their
own separate batteries to support functionality that requires more power. For more information on Springboard
modules, see http://www.handspring.com/developers.

The CompactFlash card is supported across the board on most PDAs. It can be attached by insertion into a PC
card adapter and provides additional memory to add-ons such as portable MP3 players or bar code scanners.
CompactFlash cards also provide a platform on which additional functions can be built, such as serial ports,
Ethernet cards, GPS devices, or modems. For more information on CompactFlash, see
http://www.compactflash.org.

Secure Digital (SD) Memory Cards offer high storage capacity (32MB and 64MB currently and 128MB and
256MB scheduled for release), fast data transfer, and limited security. Their purpose is to store information
downloaded from a desktop system and then transferred to a smaller device such as a PDA. For more information
on SD Memory Cards, see http://www.sdcard.org.

Two other forms of expansion technologies are the MultiMediaCard, which acts as a storage medium for MP3
players, and SmartMedia (or Solid State Floppy Disk Card [SSFDC]), which acts as extra file storage space.
Information about the MultiMediaCard can be found at http://www.mmca.org and about the SmartMedia card at
http://www.ssfdc.or.jp/english.

Not all expansion interfaces are supported on all platforms. RIM's BlackBerry supports none of the expansions
discussed here. High-end models of the EPOC platform support CompactFlash and MultiMediaCards. The
Pocket PC offers limited integration with PC cards, CompactFlash, MultiMediaCards, and Secure Digital
Memory Cards. Palm OS devices have some integration capabilities with all expansions mentioned except PC
cards (although Springboard expansion modules can be used only on Handspring proprietary devices, which do
use the Palm OS).

Palm OS Devices
The Palm OS allows users to browse the wireless Web and the actual Web only in what it calls Web clipping, a
browsing technique that strips graphics and complex functionality out of Web pages, leaving bare text to be
displayed on a mobile device. Applications developed for Palm OS devices have become available in full force.
They range from games, to office automation tools, to music players, to reference tools. The biggest advantage
for Palm is its name brand recognition. Unfortunately, its production of new and exciting products has slowed as
of late. Where Palm is slowing, other vendors are picking up the pace.

Two types of applications can be run on a Palm OS device: Web clipping applications (WCA) and regular GUI
applications. On top of either of these, a conduit is sometimes necessary. A conduit is a tool used to synchronize
data between a desktop application and a Palm OS device. WCAs are sets of HTML pages compressed into a
proprietary format called a Palm Query Application (PQA) and downloaded to a device. Users input information
to the HTML forms in these pages, and the WCA sends the request to the Palm.net proxy server. The Palm.net
proxy server translates it into an HTTP request that is then forwarded to a company's Web server. That server is
responsible for processing the information and returning the appropriate page. The returned page is relayed to the
proxy server, compressed, and downloaded to the device. We are not sure of the degree of trust that should be
afforded this proxy server, but applications developed in any environment should be constructed so that a
compromise of sensitive data would not occur should the Palm.net proxy server be compromised.

WCAs are not standalone applications on a Palm OS device. They run inside a Palm OS application called the
Web Clipping Application Viewer. The viewer is automatically launched when a WCA is invoked, or clicked by
the user.

GUI applications that run on Palm OS devices are single-threaded and event-driven. Only one application can
run at any given time. The OS automatically closes one application when another is invoked. Palm OS
applications are compiled into Palm Resource Files (PRC files) and then downloaded to the handheld. The same
PRC can run on any device licensed to run the Palm OS. Some of the devices have individual characteristics for
which you can specifically program.

A few manufacturers license the Palm OS software and build devices around it. Palm's own devices, most
recently the Palm VIIx, have only slightly different features from other vendors' devices powered by the Palm
OS. Handspring, for instance, uses the Palm OS on its devices but adds functionality to attract portions of the
market share. The most recent version of the Handspring products, the Visor Edge, has a 33MHz Motorola
Dragonball processor. The Dragonball is the only processor that works with the Palm OS. The Visor Edge has 8MB of
RAM and features Palm OS 3.5.2H (not the most recent OS). The battery life on Palm OS devices is measured in weeks
and varies, based on the screen brightness or applications run.

Input on typical Palm OS devices is accepted through a touch screen interface with a pen (called a stylus) used to
touch options on the screen. Users can enter text either by selecting letter images on an on-screen keyboard or
by writing in Palm's signature Graffiti language. Graffiti defines pen strokes used to write letters. The device
recognizes the stroke patterns and displays the corresponding letters, numbers, or characters on the screen.

Network connectivity in Palm devices is offered in several forms. Connection via IrDA is integrated into the
device. Serial or USB connections are possible via a cable by docking in a cradle. Connections are possible via a
PC card or CompactFlash for 56Kbps modems and via PC cards, CompactFlash, or cradles for Ethernet.
Springboard modules can provide connections to a cellular wide area network.

The Palm OS is logically similar to that of a traditional PC. At its base you find the device hardware and third-
party hardware that can be added. Just above is the hardware abstraction layer, a software layout of how the
hardware works. On top of this are the kernel and system services. The system services are a group of managers
that give Palm OS its basic functionality:

• Graffiti manager

• Resource manager

• Feature manager

• Event manager

• Serial manager
• Sound manager

• Modem manager

Above these managers and system services are the system and third-party libraries. The system libraries include
important components such as TCP/IP and floating-point support; they also allow developers to extend
functionality of the OS. The third-party libraries can be communications-related or language-specific, such as
Java libraries. Atop the libraries sits the application toolbox, and at the very top sit the device applications, such
as the address book, mail functions, and calendar, as well as any third-party applications added to the device.

Palm Security
The last version of the Palm OS does encrypt passwords, but the encryption algorithm used has proven easily
breakable. There is also a backdoor for the passwords, which can easily be used to circumvent the limited
security a password provides. The backdoor is intentionally provided so that application developers can debug
code. However, anyone performing source-level or assembly-level debugging can access information such as an
encoded form of the system password and all information stored in the system database and can install or delete
applications. The debugger is activated by a short Graffiti stroke pattern and is easily utilized.

As mentioned, according to Palm documentation, Palm OS 4.0 provides mitigation for this security problem. It
also includes support for color displays and other supposed security enhancements. During 2001, Palm, Inc.,
announced its intention to spin off its OS department. The effect this spin-off will have on Palm OSs in general
should prove interesting.

Palm OS data transfers are not encrypted by default. Additional applications can be used to introduce this
functionality. Although some bolster the security offered, none provide an entirely robust security solution.

Palm OS 4.0
The newest version of Palm's operating system offers more expansion capabilities for secondary storage and has
new APIs from the old versions. Palm OS 4.0 applications are copied into main RAM during execution and are
removed automatically when terminated. Palm OS 4.0 supports multiple expansion cards in one device and
extends the synchronizing API to provide access to expansion file systems. The install tool that comes with the
OS supports non-Palm file types, enabling users to view different media types, such as music or graphic and
picture files. A password is required to view details on event alarms, and the device can be automatically locked
after a specified period of inactivity.

Pocket PC Devices
What began as the Windows CE operating system quickly changed into the Microsoft Pocket PC OS after CE's
unpopularity became glaringly apparent. Microsoft dropped the Windows CE label after version 3.0 and began
marketing its device and operating system as the Pocket PC only. (Although some devices are called Pocket PCs,
for the purposes of this section, the term Pocket PC will be used to designate the OS itself.) The Pocket PC can
surf the actual Web and can stream Windows audio and video. Surprisingly, Palm has a bigger market share,
even though it does not provide full Web browsing capability. Microsoft targets the business market, whereas
Palm markets itself as a personal organizer. Where Palm boasts name brand recognition, these devices boast
functionality. The battery life, however, on a Windows CE device is far shorter than that of a Palm OS device.
The battery life is measured in hours and maybe days, versus Palm's weeks.

Compaq's iPaq PDA runs on the Microsoft Pocket PC OS and uses the Intel SA-1110 206MHz processor, much
faster than the Handspring Edge processor. It has 16MB of flash memory and 32MB of SDRAM. It integrates
seamlessly with PC cards and CompactFlash and can run slimmed-down versions of Microsoft's Office products,
Pocket Excel and Pocket Word. One nice feature of this OS is that, unlike the Palm OS, it supports a variety of
processors. Processor-specific information is confined to a few places in the Pocket PC OS and is filled in
once a processor is chosen for a given device that will use the OS.

Pocket PC supports both 1K and 4K page sizes so that it can be used on processors that support either size.
Applications developed for the Pocket PC OS must be compiled with a processor-specific compiler. If an
application is designed for use with more than one processor, it must be compiled separately for each processor
or processor family. The OS supports 32 concurrent processes. Communication among
processes is facilitated through Windows messaging, similar to how it works in other Windows OSs.

A Pocket PC system supports RAM, ROM, and flash memory. RAM is used to provide buffers for application
data and to run applications. The OS provides battery-backed RAM, which gives the devices their instant-on
feature. ROM is used to hold programs; its contents are persistent, and it can be likened to a file system or disk
storage space in a traditional PC. ROM contains the files that make up the OS. To prevent inadvertent overwrites
of OS-critical files, upgrading the operating system requires physically inserting a new ROM chip. The
alternative memory option is flash memory, which can be upgraded in place, but the OS's entire image must be
rewritten, byte by byte.

The GWES (Graphics, Windowing, and Events Subsystem) provides services for the device. It includes power
management, window and dialog management, user input services, and the graphics interface. The Graphical
Device Interface (GDI) helps render text and graphics. GDI functions can be used to create windows and
dialogs and to interact with the user. Standard functions for each of these are available. The input support in GWES
allows the use of various input mechanisms, such as a keyboard, a touch pad, or even a thumbwheel.

Power management is an optional module. Most devices that feature the Pocket PC OS require power
management, however, because they run applications that are very taxing on the battery. The power
management of an extension module may be distinct from that of the device itself. There are five possible
power states:

1. No power. Only before the first configuration.

2. On. Full power.

3. Idle. On but inactive.

4. Suspend. The device is turned off and draws minimal power but retains settings and programs. Power to the
peripherals and the CPU is suspended.

5. Critical off. A low battery cuts off power to the CPU and peripherals until the device is recharged.

The Pocket PC file system allows access to data stored in all three types of memory, as well as extension devices
as files or folders. The file system is based on file allocation tables (FAT). It works with extension storage cards
that can be partitioned into sections named volumes. The volumes are accessible from the CE root directory. The
Registry is part of this OS's file system as well. It is used by the OS and its modules, device drivers, and
applications. Configuration information is typically stored in the registry and accessed upon initiation of an
application. That same configuration information is also stored back to the registry upon termination of the
application.

Network connectivity is possible in various ways with these devices, as with Palm OS devices. IrDA
connectivity is integrated into devices, and serial or USB connections are possible with a cable or by docking in a
cradle. PC cards and CompactFlash enable Pocket PCs to connect via 56Kbps modems or Ethernet, not via a
cradle, as in Palm OSs. Whereas Palms cannot connect to wireless LANs, Pocket PCs can, with the use of a PC
card expansion. Also, using a PC card, CompactFlash, or an IrDA link to a cell phone, Pocket PCs can connect to
cellular networks.

BlackBerry (RIM 950 and 957)


Patented by Research in Motion Ltd. (RIM), a Canadian company, the BlackBerry device comes in two current
models: the RIM 950 and the RIM 957. The 950 is approximately the size of a pager, with a small text screen.
The 957 is about palm-size, and its difference from the 950 is simply a larger screen. Both devices have 32-bit
Intel 386 microprocessors. The 950 comes with either 2MB or 4MB of flash memory, and the 957 comes with
5MB. Flash memory preserves battery life and retains all information if the battery dies or is removed.

The BlackBerry solution is a device predominantly used for mobile e-mail access. It does not provide the extra
functionality that other PDAs do (such as browsing the Web or reading files) but does what it does very well.
Extra functionality can be added to a BlackBerry device by using its SDK and developing applications.
BlackBerry added support for Java 2 Micro Edition in late 2000. Some applications, such as mini-browsers, are
publicly available for use on BlackBerry devices. Following are three popular browsers available that were
developed after-market:

• GoAmerica (HTML, WAP). www.goamerica.net/html/developers

• Neomar (WAP). www.neomar.com/developers/index.html

• Novarra (HTML, JavaScript, WAP). www.novarra.com

BlackBerry uses a push architecture instead of the traditional pull architecture in most other mobile devices.
Devices using a pull architecture periodically connect to an e-mail server to check for messages. In BlackBerry's
push model, new e-mail is forwarded to the device, and the user is notified that she has new mail. A copy of e-
mails residing on corporate servers or a desktop PC is sent to the device. BlackBerry currently works out-of-the-
box only with Microsoft Exchange and Lotus Notes. It began its implementations with Microsoft Exchange and
added integration support for Lotus Notes in January 2001.

To save memory and battery life, only the first 2K of an e-mail message are delivered to the device. Users can
choose to receive the rest of the message if they want to, although most e-mail messages are less than 2K.
Attachments are not delivered to the device, either. They remain on the corporate mail server or in the user's e-
mail account on a PC. If a message arrives with an attachment, however, and the user chooses to forward it to
another person, the attachment is forwarded along with the message, even though the user never viewed it on her
BlackBerry.

The BlackBerry Enterprise Server is available at a corporate level. It enables messages to be forwarded at the
server level instead of the desktop level. It also affords mail administrators the ability to manage and control batch
and security policies and includes support for Simple Network Management Protocol (SNMP).

Input on a BlackBerry is very different from input on other PDAs. Instead of a pen and touch screen, the devices
have a QWERTY keyboard for text input and a thumbwheel for scrolling through menus and selecting items.
Although the keyboard takes getting used to because of its small size, when a user becomes accustomed to it,
typing e-mails is natural and easy.

The AutoText feature makes input much easier. When enabled, AutoText allows users to specify shortcuts for
commonly used words or keystroke patterns. For example, when composing an e-mail, if you type a period
followed by two spaces, the next letter is automatically capitalized. Certain contractions are also commonly used
AutoTexts; for instance, if a user types dont, the device automatically adds an apostrophe, changing it to don't.

BlackBerry devices offer fewer total means of network connectivity than Pocket PCs or Palm OS devices, but
more of those means are integrated into the device itself. IrDA, a 56Kbps modem, and a connection to a
cellular network (specifically Motient's) are integrated in the BlackBerry device. It is possible to connect to a
desktop via a serial connection and docking cradle. Support for USB and Ethernet and connections to wireless
LANs are not possible with the current devices.

The BlackBerry OS architecture is a closed architecture. It is difficult to gain specific information about the
architecture itself other than the API descriptions provided in the SDK. For this reason, our discussion of a
BlackBerry device's architecture is limited. There are obviously advantages and disadvantages to having a closed
architecture. It is challenging to learn about the device, to find its security weaknesses and protect against those
weaknesses. On the flip side, however, the security weaknesses are not easy to discover and, theoretically, less
likely to be exploited. This last reason does not afford much comfort, though. We must operate as though the
device deserves limited trust but is not wholly trusted.

BlackBerry APIs
Information pertaining to the hardware in BlackBerry devices can be organized in several ways. It is best to work
backwards to the hardware from the APIs to see these categories. The three main categories of APIs are

1. Database API

2. Radio API
3. User Interface (UI) Engine API

Table 4.2. Commonly Used BlackBerry APIs

Category APIs

Database Address Book

Radio Message (View, Compose)

UI Engine AutoText, Options, Ribbon

In each category there are important main functions to know. The most important API in the Database group is
the one that allows operation of the Address Book. The database and file system are relatively sophisticated. The
database manager allows adding, deleting, and navigating through database records. All the data is stored in flash
ROM, which we define as nonvolatile, permanent memory, and data is retained even when the battery is
removed or the device is powered off. Flash is quick to read but slow to write to.

All applications can use the same Address Book. The Address Book integrates with the Message API. Table 4.2
shows commonly used BlackBerry APIs. The Radio API group houses the Message API. The Message API has
two main capabilities: viewing and composing a message. Messages are created the same way BlackBerry
creates e-mail. The Compose API talks to the UI engine, the third API group. The UI engine includes
the AutoText feature and the Options and Ribbon features. The Options API allows applications to use the same
settings, such as notification type (beep, vibrate), date, time, and auto on/off. The UI Engine API contains
function calls that allow applications to create custom screens, menus, dialog boxes, status boxes, and fields. The
UI can return a message or any event based on user input or action.

As many as 31 applications can run simultaneously on a BlackBerry, with just one in the foreground at any given
time. Each of the other applications remains silent until a trigger event wakes it and delivers it to the foreground.
Trigger events can be activity on the serial port, a user pressing a key, a timer counting down to 0, a data packet
received, or movement of the thumbwheel by the user. Applications act on the events and then return to the
background.

The hardware in BlackBerry's architecture fits with the APIs as described here. The Database APIs work with the
Files, Memory, and Scheduling components. The Radio APIs work with the actual radio and serial port; the UI
Engine works with the Screen, Keypad, and Thumbwheel.

BlackBerry Security
Although receiving personal or internal corporate data via the Internet does not introduce a hole into a corporate
firewall, it does introduce a point of attack. BlackBerry devices receive e-mail sent over the Internet, which is
publicly accessible. To mitigate this problem, BlackBerry encrypts its communication. The keys for encrypting
and decrypting are stored only on the device and at the desktop. Protection is also afforded to the BlackBerry
redirector, which forwards incoming mail to the device. A malicious person simulating RIM device commands to
the redirector will be unsuccessful without knowledge of the shared key. The redirector responds only to
commands encrypted to the key it shares with the device. The encryption method used in encrypting BlackBerry
e-mails is triple-DES encryption. This encryption standard is considered the most widely accepted of industry-
approved encryption algorithms, provided that the keys are created appropriately (see Chapter 6, "Cryptography,"
for a discussion of triple-DES). When keys are generated for encryption algorithms, the key-generation process
must gather a certain amount of randomness (explained in detail in Chapter 6).
BlackBerry gathers randomness in creating its private and shared keys by asking the user to move the mouse.
This is a common method for generating randomness. It cannot be assumed to provide enough entropy on its
own, but it represents effort on RIM's part to collect randomness from a human-generated source.
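
To make the idea concrete, the following Java sketch generates a triple-DES key from a cryptographically strong
random source and uses it to encrypt and decrypt a short message. It relies on the standard Java Cryptography
Extension, not RIM's own (unpublished) implementation, and the class name and message are invented for
illustration.

import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

public class TripleDesSketch {
    public static void main(String[] args) throws Exception {
        // A cryptographically strong PRNG; a real system should mix in entropy
        // from an unpredictable source (RIM uses the user's mouse movement).
        SecureRandom random = new SecureRandom();

        // Generate a 168-bit triple-DES ("DESede") key from that randomness.
        KeyGenerator keyGen = KeyGenerator.getInstance("DESede");
        keyGen.init(168, random);
        SecretKey key = keyGen.generateKey();

        // Encrypt a short message; only holders of the shared key can decrypt it.
        Cipher cipher = Cipher.getInstance("DESede/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] iv = cipher.getIV();
        byte[] ciphertext = cipher.doFinal("Meet at noon.".getBytes("UTF-8"));

        // Decrypt with the same key and initialization vector.
        cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        System.out.println(new String(cipher.doFinal(ciphertext), "UTF-8"));
    }
}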

Another significant component in an encryption process is the distribution of the keys. We have noted that the
keys are stored only on the device and on the desktop, but the process for getting the keys to the device is
important. If the keys are sent over a wireless link or an Internet connection, they can be captured. BlackBerry
transfers keys only while the device is docked in its cradle and directly attached to the desktop via a serial cable.
This is an acceptable method for transferring keys. Although BlackBerry's method is not sufficient for protecting
matters of national security, this method does speak to BlackBerry's commitment to develop appropriate security
into its devices.

The device can be password-protected if this feature is activated. Once a password is established, the user can
easily lock the handheld to protect the device from unwanted use. This password also prevents access via the serial port and
docking cradle. A timeout can be set on the device so that after a specified amount of inactivity in an unlocked
state, the device locks itself, requiring the password for unlocking.

An interesting feature BlackBerry includes is that if an incorrect password is entered ten times, the device's
memory is automatically erased, rendering it unusable. The applications can be reloaded at the desktop, but the
data stored on the device is erased. BlackBerry documentation notes that the password is obfuscated on the
device so that if someone were to download the contents of memory from the device, the password would be
unattainable. We are unable to verify that this is the case. When a corporation uses the BlackBerry Enterprise
Server, administrators can require passwords to be used on devices and can specify a minimum password length
for all users.

The piece of the puzzle here that is most unrelated to BlackBerry devices themselves but can, perhaps, open the
most doors to attack is the user's desktop. Leaving the desktop unattended, unlocked, or on without proper
physical security can expose the device and its contents to compromise. This does not supersede prevention that
should occur at the desktop level in general. It simply follows that if a desktop is not properly secured, the device
cannot be assumed secure either.

We do not investigate the BlackBerry Enterprise Server here, but you should note that it requires integration with
a corporate firewall and introduces administrative security issues that should be examined before
implementation. For more information, see
http://www.blackberry.net/international/uk/solutions/pdfs/BlackBerry_Enterprise_Server_for_Exchange_GPRS_
Technical_White_Paper.pdf.

Each type of device is unique. Knowledge of one device's intricacies does not translate into working knowledge
of another's. This chapter presents a range of considerations. It is imperative that you gather specific information
about devices used in your system. Security measures or functionality present in one may not be present in
others. Verifying all components of a device is up to the system architects and application developers because no
one can predict those that will have security implications and those that will not.

Chapter 5. Languages
The secret of eternal youth is arrested development.

—Alice Roosevelt Longworth (1884–1980)

By now you should have noticed a theme—there is more than meets the eye when considering wireless systems.
So, too, is the case with wireless languages. Wired languages and wireless languages do not differ in many ways,
but in their differences lies dangerous ground. If developers are not aware of the differences, applications can be
wide open to attack. Wireless application languages are specially designed to accommodate smaller display
screens, slower networks, and devices with less memory. Typically, intensive processes are run on servers, rarely
on devices themselves. Although this applies more to applications that will be run on PDAs or cell phones
instead of wireless laptops, it is the focus of this chapter. The languages we cover are Java 2 Micro Edition
(J2ME), Wireless Markup Language (WML), and its partner script, WMLScript. (Both WML and WMLScript
are used in WAP systems.)
Now, if you are a project manager or anyone whose job function is at a higher level technically than a developer,
this chapter may seem irrelevant for your security planning purposes. The devil is in the details here, and
someone on a development team has to be responsible for expertise in wireless languages. This is a good chapter
to refer a colleague and developer to—you can combine your recently acquired expertise in security principles as
they apply to wireless systems with that developer's knowledge of wireless programming.

This chapter does not teach the languages to the extent that you can become proficient, or even to the extent that
you can use it as a primary resource in learning about the languages. This chapter is included for completeness. It
is our firm belief that understanding the ramifications of the programming language you choose for development
is an important part in developing a holistic security solution for a given project or environment. To this end, we
describe the languages in their most basic forms, discuss notable details, and provide insight into their security
features and drawbacks. A background in programming languages such as HTML, Java, C, or C++ is necessary
to achieve competency in the languages discussed here.

J2ME, WML, and WMLScript have several things in common. They are all subsets (whether or not directly
derived) of traditional programming and Internet languages. J2ME is a subset of Java 2, WML is based on both
HTML and XML, and WMLScript is based on ECMAScript, the standardized form of JavaScript. They differ in their
scope, functionality, and common uses. First, we will take a look at WML.

Wireless Application Protocol (WAP)


As you learned in Chapter 3, "Technologies," WAP specifications define a protocol stack similar to the Internet
protocol stack. Each layer of the WAP protocol stack (Application, Session, Transaction, Security, Transport,
and Network) represents an area where vulnerabilities can exist. The Wireless Application Environment (WAE),
residing at the top of the WAP protocol stack, is the area of the WAP protocol stack on which we focus the bulk
of our discussion here (see Table 5.1). Application issues are of interest for a variety of reasons. Although
noteworthy security concerns are associated with all facets of WAP, the issues implicit in the lower layers of the
protocol stack can be attended to similarly to wired Internet issues. Because those issues lower on the protocol
stack can be mitigated, as WAP evolves, the Application layer becomes the most likely breeding ground for a
new generation of attacks.

Faulty programming in the application environment and on the client can lead to serious problems in the wireless
world, just as it can with any existing Web programming. In general, WAP developers should avoid translating
existing Web applications to wireless ones by attempting modifications to the original code. Taking shortcuts can
lead to vulnerable and poorly designed applications, at least where their usability is concerned.

Table 5.1. The WAP Protocol Stack

Functionality Internet WAP

Application HTML, JavaScript Wireless Application Environment (WAE): WML, WMLScript

Session HTTP Wireless Session Protocol (WSP)

Transaction HTTP Wireless Transaction Protocol (WTP)

Security TLS-SSL Wireless Transport Layer Security (WTLS)

Transport TCP, UDP Wireless Datagram Protocol (WDP), User Datagram Protocol (UDP)

Network IP Bearers: GSM, SMS, USSD, GPRS, CSD, CDPD, and the like

The WAE houses several services and functions to interact with its user agents: Wireless Markup Language
(WML) and its partner script, WMLScript—based on HTML and JavaScript, respectively. WMLScript enhances
WML's standard browsing and presentation capabilities by providing client-side scripting functionality to WML.
It supports advanced user interaction, validity checks for user input, and local messaging (error messages, alerts,
and the like) and provides a mechanism to access the facilities resident on the client. It is designed for use on a
WAP-enabled device, specifically on thin clients with low bandwidth communications capabilities.

WML was created to facilitate displaying Web pages on wireless devices with small display screens. If you recall
the discussion of WAP in Chapter 3, you will remember that WML is analogous to a watered-down translation of
an HTML page. In addition to small display screens, however, WML accounts for the fact that input to these
devices is difficult and varied, memory is not available in excess, and CPU cycles are limited (for a discussion on
how these limitations affect the implementation of cryptographic security solutions, see Chapter 6,
"Cryptography").

WAP Browsers
More and more pages on the wireless Web are being written in WML to avoid having to translate to or from
HTML. WML pages are not to be thought of as a substitute for standard Web sites. They provide only a limited
subset of information and graphics to wireless device browsers. To view a WML page, a device must have a
browser, an application used to interpret and render the WML code on a display. In the wired world, browsers
are required to interpret HTML; some popular browsers are Netscape Navigator, Internet Explorer, and Opera.
Several types of browsers, produced by various companies, can render WML. Each phone manufacturer decides
the type of browser to install. Each PDA manufacturer decides either to install a given browser or to offer it as an
add-on. As the mechanisms by which users update software on these devices become easier to use and more
prevalent, the capability to upgrade these browsers becomes more commonplace.

To understand certain risks associated with viewing WML pages on a WML browser, we must first visit the risks
associated with HTML browsers. In the case of HTML browsers, vendors often release security patches or
updates to existing versions and, from time to time, full-blown new versions. Because each version of a browser
is in use for some time, attackers take aim at holes inadvertently left by vendor programmers. By exploiting these
holes, attackers gain sensitive information from users' systems. Sometimes the holes are so unfortunately placed
and the attacks crafted in such a way that an attacker can take full control of a user's machine. Because HTML
browsers can be customized when installed on a machine, it is possible to disable some of the functionality (that
is, scripting or Java) that could enable exploits to come to fruition. By turning off scripting or Java, users can
protect themselves from certain attacks, both known and unknown.

In a WML browser, these options are not yet as easily configurable by the end user. As time progresses,
however, such configurability will become the status quo, and tailoring WML browsers to meet the security
needs of business and personal users will become possible.

Consistent with one theme of this book, every security feature carries functionality drawbacks. Disabling
features that could leave you open to an attack means giving up functionality. In a corporate
environment, a blanket policy requiring that all users disable Java and scripting could force functionality loss that
is counter to business productivity. If staff who follow this policy cannot access critical applications or retrieve
necessary information, that is a problem. The appropriate course of action in determining which
functionality to disable and which to leave vulnerable is to weigh all components of the issue, as discussed in
Chapter 2, "Security Principles," and Chapter 11, "Analyze Mitigations and Protections." The answer to this
question is found in examining the information to be protected, the risk if it is exploited, and numerous other
factors.

If we could infallibly predict the occurrence of an attack, we could patent a Predicting Attacks formula and retire
comfortably tomorrow. The factors that come into play in assessing the likelihood of an attack's causing damage
are dynamic and often unpredictable. The uncertainty of these factors makes calculating the likelihood of damage
with any degree of certainty virtually impossible. Not enough variables can be isolated to solve an equation and
come to a meaningful conclusion. Chapter 6 discusses in detail the process necessary to determine an appropriate
course of action, but it cannot begin to calculate the likelihood of an attack in any specific percentage.

Obviously, using a browser for general home purposes is worth the risk of an attack. The information that can be
gained, the applications that can be executed, and the convenience that is drawn from using an HTML browser to
browse the Internet are usually viewed as significant enough to warrant risking an attack. If, however, you
believe that the potential exists for anyone to attempt to compromise your computer through a browser hole, you
may feel differently. By being aware of attacks, protecting your data, and keeping your browser up-to-date, you
minimize this risk.

What is the risk in using WML browsers? It is much the same as in using HTML browsers. When attackers can
exploit holes left by insecure code and architecture of a browser, wireless devices become open to attacks
ranging from information leakage to complete control by a malicious party. Keeping the most current version of
a browser is essential. Right now, the set of information available via these browsers is smaller than that which is
available via HTML browsers, by its very nature. This is changing as quickly as the rest of technology.

Another significant risk in using WML, from a security perspective, is in the translation that occurs between
HTML and WML. It is assumed that translation servers are trusted, but this assumption can lead to a false sense
of security. To alleviate just one problem in a WAP system, clients could be required to accept only WML pages,
avoiding the need to pass through a translation server in addition to an already risky WAP gateway.

Wireless Markup Language (WML)


Although WML is similar to HTML (closest to version 4.0) and XML, programming in it requires the use of
different tags and structures. WML adopts its application model from HDML (Handheld Device Markup
Language). It does require the XML-specified Document Type Definition (DTD) at the beginning of its code,
including the official public identification of the Standard Generalized Markup Language (SGML) and the
syntax definition for the document.

WML is, however, organized in a much different fashion. Converting a document from XML or HTML to WML
is nontrivial. WML is ordered in decks of cards instead of groups of pages. When a user on a PC connected to the
Internet types in a URL to request a Web page, the user's browser contacts the host server for the requested page
and renders it via the browser on the user's PC. When a WAP-enabled cell phone user, for example, requests a
Web page, the browser requests that page, termed a card, and returns not only the requested card but a set of
cards, called a deck, or a group of cards associated with that single URL. The requested card is returned as the
top card in the deck and is rendered by the cell phone browser for the user to view.

Because an entire PC-sized Web page cannot be rendered, it must be broken intelligently into smaller subparts
(the cards) and delivered as a package (the deck) that can define user interactions and directions in displaying its
information. The tag, <wml>...</wml> defines a deck and holds a group of cards together. The metaphor of
decks and cards describes how the data is displayed when rendered in a browser, not how it is stored. A deck can
be one file, beginning and ending with the <wml>...</wml> tags and containing <card>...</card>
tags throughout to define the segmenting of the display on a wireless browser. The following is an example of a
simple Hello World page in WML:
<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN"
"http://www.wapforum.org/DTD/wml_1.1.xml">
<!-- Hello World in WML -->
<wml>
<card id="Card1" title="WML example">
<p>Hello World</p>
</card>
</wml>
Cards are named with an id in the <card>...</card> field so that they can be identified easily inside a
deck. The reason cards are sent in decks and not individually is that the most costly parts of WAP data exchange
are the communication round-trips. Anything that can be done to minimize the number of times a communication
round-trip must be made decreases the cost of a system. Each card displays hyperlinks just as HTML pages do.
The tricky part for programmers is to organize the decks as intuitively as possible. When calling up a page such
as www.yahoo.com, for instance, the developer might guess that the user will want to view a subdirectory listing
of a certain category, say, Local News, for the user's zip code. The developer would include with the initial deck
some cards associated with top news stories in anticipation that a user will need to access those pages next.
Alternatively, the developer might guess that the user will, instead, be searching for another lower-level directory
listing of News so that she can choose among Associated Press news, Reuters news, or other newswire services,
and the developer would send different cards in the initial deck. This deck might contain a card for the
Associated Press and a card for each news agency for the user to select.

The two methods of sending cards to a deck—estimated target hits down a vertical clickstream (that is, News,
Local, Zip Code, Cards for each story) or a horizontal clickstream (that is, News, Cards for AP, Reuters, and so
on)—are useful in different situations. WML specifications make no design recommendations for organizing
cards and decks. Making these judgments is up to developers. Adept understanding of user behavior and choices
leads to cheaper applications. Developers who correctly anticipate the cards a user will likely need to access,
based on links clicked from a previous card, will have more user-friendly and thus successful applications. This
is not as much of a concern on the Internet as browsed by wired PCs. Except in the case of old billing models and
slower speed connections, the time required to render a page on a wired PC is negligible and does not affect a
Web site's organization in the same way.

When a user clicks a link, one of two actions can ensue. If developers and designers correctly anticipated the
clickstream choice, the user clicks a link that refers to a card in the same deck. The initial card is moved to the
rest of the deck, and the newly requested card is rendered at the top of the deck in the client's browser for the user
to see. If developers or designers incorrectly guessed which cards the user will request next, the initial deck is
discarded from memory, and a second deck is sent, with the requested card on top.

There are, of course, self-contained decks, those that perform a limited function and do not contain links to cards
outside the deck. Browsing is much less taxing on a system and faster for users when this is the case.

Unlike HTML, WML is translated into binary code at the WAP gateway before being sent to the WAP device.
To fit the narrow bandwidth typical in wireless communication, both WML and WMLScript code are compiled
into bytecode on the server and then sent to the user agent. The bytecode is a binary representation of the text-
based WML and WMLScript code. It is structured very similarly to binary executable files. The advantage of a
bytecode file over a binary executable file is that it is machine-independent. It is compiled once and run on any
machine capable of running a WMLScript interpreter.

WML browsers, WMLScript libraries, and a WMLScript interpreter are physically placed on the mobile client by
its manufacturer. Because WMLScript compilation units are compiled into the WMLScript bytecode before they
can be run on a WAP client, each client has a WMLScript interpreter that interprets the bytecode it receives.

The card/deck structure brings no direct security concerns to the table. What the structure does do, however, is
introduce a new variable into the equation that inherently requires more time and effort on a developer's part. As
with HTML, WML is a formatting language. Rendering the pages is up to the browser, not the language that
describes how data will be displayed. The browser is where the security concerns lie (discussed later).

Protecting Deck Access Using the access Element

WML provides a basic method of protecting access to a deck. The access element allows a programmer to
designate those entities that have access to a given deck. A browser attempting to access a card from a deck
outside its current one enforces access control by this element. The browser must evaluate the permissions
defined by the new deck to determine whether the existing deck can access its contents. The security concern
brought about by this function is that it does not protect the data within the deck. The browser has access to the
data while checking permissions. If the browser is trusted, this is not a problem. However, because the user has
little control over the browser, a rogue browser could be created that would compromise the integrity of the data.

The method an access element uses to specify permissions is based on the domain and path of the originating
request. The domain and path attributes are used to set these rules. See Singhal (2001) for examples of these.
If neither a domain nor path is specified, the default access parameter is that only cards from the domain of the
deck itself are allowed access to it (as long as the access element is invoked). If the access element is
omitted, the deck is considered accessible from any source. Domains should be specified at the broadest level to
which access is permitted. If you want to restrict a deck to be accessible by virginia.usa.com, maine.usa.com, and
washington.dc.usa.com, you need to specify
<access domain="usa.com"/>
Be aware that this also grants access to any originating domain that concludes with usa.com. If you want to
restrict the deck further to include vienna.virginia.usa.com and oakton.virginia.usa.com but to exclude
portland.maine.usa.com, you need to specify
<access domain="virginia.usa.com"/>
This excludes anytown.anyotherstate.usa.com because an originating domain must end with the
specified domain exactly.

Protecting Deck Access Using the path Element

The second way of limiting access is by specifying a path. The path attribute designates access to decks in
given paths on the application server. The path is derived from the prefix of the requesting or originating card.
The beginning of this path must be an exact match as well. If you have, for example, a WML document residing
at http://usa.com/states/list.wml and the following specification is set, any document that falls below the
/states tree and is also below the usa.com domain can access the deck:
<access path="/states"/>
Coding with Due Diligence

Recall the Chapter 2 discussion of security concepts. Access control is not security. Much more is involved in
protecting data, resources, systems, and applications. Responsible coding is a large part of developing secure
applications. This book does not endeavor to discuss every element of WML or J2ME (or any other wireless
programming language, for that matter) with respect to security considerations. Instead, we recommend that
coders know the languages inside out, be aware of the common pitfalls, and understand appropriate methods for
building secure applications. Although we explain the architectural considerations necessary for making
decisions about designing secure wireless applications, due diligence dictates that coding be taken seriously. The
importance of investing in coders with appropriate security background cannot be stressed enough. We
recommend the following resources to bolster the actual coding of secure wireless applications.

Wireless Application Protocol Specifications

The specifications for WML are available at http://www.wapforum.org/what/technical.htm. Although they are a
good resource for learning the building blocks necessary to code in WML, they do not provide solid security
architectural design recommendations. A little knowledge can be dangerous. An understanding of the
specifications outlined by the WAP forum is essential but not nearly enough to make you adept at writing secure
code.

Device-Specific Books

After you decide that you need to code to a certain device, you should certainly learn more about the security
design recommendations for that device. Books we find helpful include

• Palm Programming: The Developer's Guide by Neil Rhodes and Julie McKeehan

• GPRS and 3G Wireless Applications: The Ultimate Guide to Maximizing Mobile Internet Technologies
by Christoffer Andersson

• The Wireless Application Protocol: Writing Applications for the Mobile Internet by Sandeep Singhal et
al.

Other helpful books are appearing on shelves daily. At the risk of repeating a lesson surely taught in
programming classes, understanding the relationship between the language and the device on which it will be
used is crucial. (Although this is suggested here in the context of WML programming, examining these resources
is important, no matter which language is chosen for development.)

General Software Security Books

After you master the mechanics of coding in a new environment, you must incorporate certain security design
paradigms necessary to building applications in general, whether or not they are for the wireless world. When
reading a book on general software, you must bear in mind the limitations the wireless realm brings to the table.
As recommended earlier, Building Secure Software by John Viega and Gary McGraw is a valuable resource in
this domain. It focuses on the security aspect critical to many applications. (Regarding device-specific books,
note that this is also applicable to J2ME-developed applications or applications in any other wireless language.)

WML by itself affords limited function. It describes formatting, contains text, provides links for navigation, and
contains commands that can be used to run scripts. It does not allow programmers to perform all the functionality
necessary to develop robust applications. For this reason, WMLScript was introduced.

WMLScript
WMLScript is based on ECMAScript and JavaScript. According to its specifications, WMLScript uses similar
syntax and constructs and provides semantically equivalent functions. The theory behind WMLScript is different
from that supporting JavaScript. It is designed for capability-limited devices.

WMLScript gives WML added functionality just as JavaScript adds functionality to HTML. With WMLScript, WML decks can
perform processing they would otherwise rely on a server to perform. WMLScript enhances WML by providing
browsing and presentation functionality that improves WML pages from a usability standpoint. The following is a
list of functions that WMLScript adds to WML, according to its specifications:

• Checks the validity of user input

• Provides access to the device's facilities (for example, on a phone, allows the programmer to make
phone calls, send messages, add phone numbers to the address book, access the SIM card, and so on)

• Generates messages and dialogs locally, thus reducing the need for expensive round-trips to show
alerts, error messages, confirmations, and the like

• Allows extensions to the device software and the capability to configure a device after it has been
deployed

• Provides more advanced user interface functionality

• Reduces the amount of bandwidth needed to send data between the server and the client

WMLScript libraries add significant functionality and are present in devices designed to interpret and render
WML and WMLScript. The default libraries, however, can and should be expanded on to include custom
libraries that enable applications to access the features of a device that make it unique. The feature that
distinguishes a Handspring Visor from a Palm Vx cannot be easily invoked with generic libraries. Coders with a
certain amount of prowess can supply device-specific libraries to make the most of their devices' special properties. If
you are developing applications for cell phones, perhaps tailored WMLScript libraries can define specifics
related to call control, synchronization with a PDA, or access to various storage areas in the device.

WMLScript is similar in syntax to C++ or Java. It is case-sensitive, and statements are terminated with a
semicolon. As in C++, a double slash indicates a line comment, and text beginning with /* and ending
with */ denotes a block comment. WMLScript is a procedural language that is extendable by adding the
customized libraries just mentioned. It is not object-oriented, nor does it have an object model. Instead of
ECMAScript's 64-bit floating-point math, WMLScript features 32-bit integer math with optional floating-point
support. Unlike WML, it does not support global variables and does not have appropriate error handling. The
error-handling mechanism is based only on the (invalid) error value. The invalid data type is distinct
from strings, integers, and so on, and carries with it its own rules. In most cases, the presence of invalid in an
expression causes the result of an expression to be invalid.

WMLScript supports standard libraries of WMLScript procedures installed on the clients themselves. It does not
support certain more advanced features of JavaScript that require more computational or bandwidth capabilities.
WMLScript can call remote scripts via URLs but cannot invoke libraries physically located on the client besides
those in its own libraries. WMLScript compilation units, resources, and functions are accessed by using URLs
because they are not embedded in WML code. A user agent can make a call to an external WMLScript function
by providing the compilation unit's URL and the function name and parameters as the fragment anchor. The
requested URL must be escaped according to the URL escaping rules because no compile-time automatic
escaping, URL syntax, or URL validity checking is performed.

Telephony Functions

One of WMLScript's main additions to WML is that it allows applications to access client functions. Of
particular interest are those mechanisms designed to interact with the client's telephony-related functions. These
mechanisms, called Wireless Telephony Applications (WTA), include features that, from a security standpoint,
need to be protected from unwanted use or access in applications and networks. The WTA functions are

• Accepts or initiates a call

• Sends or receives text messages

• Adds, searches, or removes phonebook entries

• Examines call logs

• Sends Dual Tone Multi-Frequency (DTMF) tones during an active call

• Presses keys on a keypad during a call

WMLScript invokes the WTA Interface (WTAI) to manipulate telephony functions of the phone. WTAI
functions are included in WMLScript libraries and are called as WMLScript functions. These types of functions
have parallels in the wired Internet world but invite different exploits in the wireless world. A whole host of new
attacks can be designed to use these seemingly benign functions in a malicious way. WTAI functions that access
phonebook entries provide WMLScript access to storage areas in the phone. Whether this simple access to an
indexed array such as the phonebook can be extrapolated into misuse has to be investigated. Certain exploits can
be constructed by manipulations that are not intended but also not prevented.

For a good description of the literals, variables, type conversions, operators, identifiers, statements, and functions
used in WMLScript, see Chapter 9, "Scripting and Using WMLScript and WTAI," of The Wireless Application
Protocol: Writing Applications for the Mobile Internet by Sandeep Singhal et al. These are worth investigating
but do not differ enough from other languages to warrant significant concern on our part for their wireless
implications.

Risks and Exploitation

Current WMLScript applications are predominantly benign, accessing WTAI functions for legitimate purposes
such as placing a call from a link on a WML page, directing incoming calls to voice mail upon a user's request,
or forwarding an incoming call to a different phone number. These benign uses, however, come dangerously
close to malicious activity.

In the wired Internet world, different browsers behave differently. Wireless phone browsers are not an exception.
Discrepancies run rampant among browsers, phone manufacturers, and applications alike. The major WAP-
enabled phone manufacturers such as Nokia, Ericsson, and Motorola provide SDKs that specify differences in
countless implementations. As in the wired Internet arena, time to market for WAP applications is decreasing
rapidly. While developers race to produce new applications, the myriad inconsistencies require layers of minute
coding differences that can be easily overlooked, leading to serious security flaws.

This complicates the problem originally facing developers, that there are many functions on which malicious
users can attempt to get their hands. Placing unauthorized long-distance calls and broadcasting or erasing
phonebook entries are only two of the problems that must be prevented with safe coding practices.

For example, the intended implementation of the WTAI function WTAVoiceCall.setup, which makes a call, is to
display the phone number before the call is made. If the WAP browser being used does not prompt the user
before placing the call, the call is automatically activated. In Japan, a prank was played on phone users who
pressed a number in response to a voice mail message that automatically dialed the Japanese police emergency
number without the callers' knowledge. Japan uses i-mode technology, which is different from WAP, but the
prank represents an early exploitation of a native device capability.

The specifics of the browsers for which your team develops must be investigated. If a browser does not protect
against certain potentially malicious activities, prevention mechanisms must be built in to the applications.

In the security model, some risks will be avoided. Calling functions that are not external, passing illegal
parameters, and accessing a WMLScript called from an unauthorized domain and path will be prohibited. DNS
spoofing, however, can circumvent the last of those three, possibly sending unwanted information to an unknown
location. A smart WAP programmer recognizes that to be completely safe, certain additional safeguards should
be taken in WMLScript usage—not only in the use of WTAI functions but also with respect to all areas of the
client—so that even trusted servers and gateways cannot be the scene of an unwelcome attack.

The restrictions on WMLScript span beyond the realm of WTAI functions. WMLScript has few limitations and
access control parameters. It is up to developers, service providers, and gateway hosts to control which parts of
the client are protected and under what conditions they can be accessed. WMLScript libraries have limited
functions and have methods for controlling their use. The problem is that the controls are not required; they
are merely suggested.

Securing information on a phone is possible, but loose coding practices open doors to many forms of attacks.
These range from minor infractions, such as dialing fake phone numbers to increase long distance bills, to major
problems, such as deletion of confidential information on the phone (the user's unique ID, for example) or, in the
case of smart card–enabled phones, stealing credit card information or other sensitive data.

J2ME
Now that we have investigated a popular wireless markup language, we will look at a more robust language
designed to run applications on limited devices without regard to operating system. Java is a versatile
programming language. Sun's Java 2 Programming Language consists of three versions:

1. Java 2 Enterprise Edition

2. Java 2 Standard Edition

3. Java 2 Micro Edition

The version we focus on in this text is, of course, Java 2 Micro Edition (J2ME). By way of introduction, we take
a brief look at the overall structure of Java. For various reasons, Java is a highly utilized language for many
applications. It is object-oriented, has automatic garbage collection and exception handling, and allows
multithreading. For those familiar with C++, it has similar syntax and is also object-oriented.
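
A few lines of ordinary Java show several of these features together (object orientation, exception handling, and
multithreading); the class is an invented example:

public class FeatureTour {
    public static void main(String[] args) {
        // Multithreading: run a short task on its own thread.
        Thread worker = new Thread(new Runnable() {
            public void run() {
                System.out.println("worker thread running");
            }
        });
        worker.start();

        // Exception handling: failures surface as typed exceptions.
        try {
            worker.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        // Automatic garbage collection: no explicit free is needed; the object
        // becomes eligible for collection once it is unreachable.
        Object scratch = new Object();
        scratch = null;
    }
}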

We mentioned that Java is designed to run without regard to the operating system. In wireless programming, this
is not necessarily an advantage. In the wireless world today, application development is device-specific and
operating system-specific more often than not. The ability to port one application across operating systems is not
often seen, and J2ME is not available on all devices or platforms. Thus, hand porting and device-specific code
remain a viable way of implementing wireless applications. This is rapidly changing with the release of Motorola
phones that support J2ME and PersonalJava.

Components of Java
For those of you not acquainted with Java, this section provides a high-level overview so that you can gain a
basic level of familiarity with Java terminology and functionality. The two major components of the Java
Runtime Environment (JRE), the part of Java that makes a program run, are its execution engine and several
runtime libraries. The Java Virtual Machine (VM or JVM) is the core of the execution engine. Virtual machines
enable coders to run a program on any architecture just by porting the VM. The capability to move a VM from
one architecture to another to run a program on multiple physical or logical architectures is why Java is so
portable. The VM alone does not provide complete portability. Portability is achieved through the assistance of
the class file format and the standard runtime libraries.
The JVM gets its specifications from the Java Virtual Machine Specifications (JVMS), available from
www.sun.com. The JVM is given instructions for operations termed bytecodes. Bytecodes can be separated into
two categories:

1. Bytecodes for operations that any machine architecture supports (for example, reading and writing
memory, simple arithmetic operations)

2. Bytecodes specific to the Java language

The javap tool, available as part of Sun's Java Development Kit (JDK), enables you to see how Java code is
translated into bytecode.
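
For instance, compiling the small (invented) class below and then running javap -c on it shows both kinds of
bytecodes: general-purpose arithmetic instructions and Java-specific method-invocation instructions.

// Greeter.java
public class Greeter {
    public static void main(String[] args) {
        int a = 2;
        int b = 3;
        int sum = a + b;                        // load and add: arithmetic bytecodes
        System.out.println("sum = " + sum);     // method-invocation bytecodes
    }
}

// To compile and disassemble:
//   javac Greeter.java
//   javap -c Greeter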

The Java class file format is the required format for bytecode to be input into the VM. The class file format
greatly aids in portability. The JVMS ensures that a class file looks the same from one architecture to another.
If you input to the VM a class file compiled on architecture A, it can be moved to a machine of architecture B
without modification.

The garbage collector is an important feature of Java. There is no Java keyword used for deallocating objects.
Instead, the garbage collector is in charge of reclaiming unused memory. Certain pros and cons are associated
with automatic garbage collection, rather than requiring programmers to deallocate objects explicitly. Java
programmers have little control over garbage collection, but they can develop certain habits to make the process
more efficient. For instruction on these habits, you can consult a well-respected Java programming resource.
Especially when designing a wireless program, you will find this worth understanding and implementing
correctly. Making applications efficient becomes increasingly important as the limitations on the device itself,
due to architecture, become more stringent.
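
One commonly recommended habit, sketched below in plain Java as an invented example, is to reuse a single
StringBuffer rather than creating throwaway String objects in a loop, and to drop references to large objects
once they are no longer needed; both keep garbage-collection pressure low on a memory-constrained device.

import java.util.Vector;

public class GcHabits {
    // Build one result string with a reusable buffer instead of repeated
    // String concatenation, which would create many short-lived objects.
    public static String join(Vector items) {
        StringBuffer buf = new StringBuffer();
        for (int i = 0; i < items.size(); i++) {
            buf.append(items.elementAt(i));
            if (i < items.size() - 1) {
                buf.append(", ");
            }
        }
        return buf.toString();
    }

    public static void main(String[] args) {
        Vector names = new Vector();
        names.addElement("alpha");
        names.addElement("beta");
        String joined = join(names);
        System.out.println(joined);

        // Release references as soon as the data is no longer needed so the
        // garbage collector can reclaim the memory earlier.
        joined = null;
        names = null;
    }
}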

The class loader is the component that helps the VM locate the classes it needs for a given application. There are
two categories of class loaders. System class loaders load the essential runtime classes and, when necessary, the
application classes. The classes are put into memory so that the virtual machine can access them. User class
loaders (nonsystem class loaders) provide application-specific methods of finding class files. For example, a
WAP browser could use a class loader that retrieves classes over the Wireless Transaction Protocol (WTP) from
a WAP gateway and a Web server. Class loaders cache loaded classes to save the costs associated with retrieving
classes each time they are referenced.
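
As a rough illustration of a user class loader, the following Java sketch locates class files in an
application-specific way (here, simply a directory on disk; a WAP-oriented loader would instead fetch the bytes
over the network) and hands the bytes to the VM. The class and directory names are invented:

import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.io.InputStream;

public class DirectoryClassLoader extends ClassLoader {
    private String baseDir;

    public DirectoryClassLoader(String baseDir) {
        this.baseDir = baseDir;
    }

    // Called by loadClass() when the parent loaders cannot find the class.
    protected Class findClass(String name) throws ClassNotFoundException {
        try {
            String path = baseDir + "/" + name.replace('.', '/') + ".class";
            InputStream in = new FileInputStream(path);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] chunk = new byte[4096];
            int read;
            while ((read = in.read(chunk)) != -1) {
                out.write(chunk, 0, read);
            }
            in.close();
            byte[] bytes = out.toByteArray();
            // defineClass hands the bytes to the VM, which runs the class
            // verifier on them before the class can be used.
            return defineClass(name, bytes, 0, bytes.length);
        } catch (Exception e) {
            throw new ClassNotFoundException(name, e);
        }
    }
}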

The class verifier processes a class after the class loader loads it. The verifier serves an important purpose: It
checks the bytecode for illegalities (ensuring that classes are well formed and that the bytecode follows the Java-
specified typing rules), checks that enough stack is allocated for each section of code, and so on. The verifier is
one defense against malicious programs. It can prevent unwanted or unrecognized classes from being placed on a
device without the intended application's prior consent.

The last component of the execution engine we will visit in this chapter is the native code interface, which ties
Java to the architecture's operating system. It defines how the virtual machine calls native code
and how native code makes calls back into the execution engine to create new objects, invoke methods, or set
and get field values.

The last component in our brief Java tutorial is the Java runtime libraries. We mentioned earlier that these
libraries aid the VM and the class file format in establishing Java's hallmark—portability. Java depends on these
libraries for interacting with the system and to provide useful shortcuts to standard programming tasks. This
veritable shorthand saves programming time and allows for standardization across developers' implementations.
There are various groups of runtime libraries, and the functions of some blur the lines between Java and the
libraries themselves. The core of the Java runtime library is the java.lang package. Some of its classes are
referred to directly by the JVM. They are usually initialized as defaults, along with other core classes, at start-up:

• Object. Is the root of all class and interface hierarchies

• Class. Defines information about a class

• ClassLoader. Loads classes


• String. Stores string constants

• Throwable. Declares and throws exception information
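
A short (invented) Java example exercises each of these core classes in turn:

public class CoreClassDemo {
    public static void main(String[] args) {
        String greeting = "hello";                         // String: character data
        Class type = greeting.getClass();                  // Class: describes a type
        System.out.println("type is " + type.getName());

        ClassLoader loader = CoreClassDemo.class.getClassLoader();
        System.out.println("loaded by " + loader);         // ClassLoader: loads classes

        try {
            Object obj = Class.forName("java.util.Date").newInstance();
            System.out.println("created " + obj);          // Object: root of all classes
        } catch (Throwable t) {                            // Throwable: error reporting
            t.printStackTrace();
        }
    }
}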

Flavors of Java
Over the past several years, Java has unfolded into many different flavors. Figure 5.1 illustrates the relationship
among some. Early versions of Java focus on different kinds of programming. Java 1.2, the current version
(familiarly called Java 2 as a marketing ploy more than a version control number), was designed for Enterprise
programming. Java 1.02, the first public version, focuses on client programming; Java 1.1 focuses on server
programming.

Figure 5.1. Different flavors of Java

Java 2 has a naming convention distinct from that of Java 1.x. Recognizing the need for different sizes and functions of
Java, Java 2 is available in several flavors. Within the Java 2 version are several groups of core APIs, which
make up the different editions. The veritable plethora of programming interfaces is overwhelming. Many of these
APIs are defined and controlled by entities outside Sun's realm. Sun recognized that it could not develop APIs
fast enough to keep up with demand, so it enabled these external groups to do so by developing the Java
Community Process. External groups develop APIs that are publicly available as long as they comply with Sun's
Java API requirements. Sun does not, however, include every API in its core. Sun decided to formalize the usage
of its extra APIs by defining standard extensions. Some extra sets of APIs that are useful only to subsets of the
Java community are available as extensions to the core APIs—at an extra charge. Java 2 is also divided into three
editions:

• Java 2 Standard Edition

• Java 2 Enterprise Edition

• Java 2 Micro Edition

Java 2 Standard Edition


J2SE targets much the same programs as earlier versions: basic client or server applications that do not require
special APIs or interoperability with other object models or languages. J2SE is much too large for limited-
capability devices such as wireless clients. One noteworthy feature of J2SE is its HotSpot execution engine (see
http://java.sun.com/products/hotspot for a detailed description), designed to improve Java's performance. HotSpot
is more discriminating in optimizing code than its predecessors and includes an improved garbage collector. Two
versions are available: the HotSpot Client VM, geared toward client machines where applications must start
quickly, and the HotSpot Server VM, geared toward server-side applications that are more intensive and taxing
on a machine, where spending more time on optimization at start-up pays off.

Java 2 Enterprise Edition

J2EE subsumes J2SE. It is designed to meet the needs of enterprisewide programming and applications. J2EE is
ideal for applications that must be available to thousands of clients, are largely server-based, and must be capable
of interacting with legacy systems. J2EE allows large-scale applications to be deployed more easily than J2SE
and with increased security. At the risk of stating the obvious, J2EE is also far too large for use on wireless clients. For
more information on J2EE, consult its home page at http://java.sun.com/j2ee.

Java 2 Micro Edition

Finally, the Java 2 edition important to wireless developers: Java 2 Micro Edition. J2ME selectively rewrites and
removes components of the core runtime environment to make it easily portable to smaller, constrained
devices. Sun Microsystems' J2ME made its debut in June 1999. Accompanying that announcement was the
introduction of its counterpart, a new virtual machine that could run simple Java programs on Palm OS devices.
The specifications were not quite ironed out at that point, so the actual release (and availability to the public) did
not come until the summer of 2000, after the specifications were finalized.

The new virtual machine, called the KVM, is optimized for small devices and makes possible much of the
wireless development that occurs today. We will investigate the KVM shortly. The KVM (an abbreviation for an
early name, Kauai VM) runs well in a constrained environment. You can still use the full JVM in J2ME, and you
would choose to do so when working in a 32-bit environment with a generous amount of memory. The KVM is
used in J2ME architectures that are either 16-bit or 32-bit and have limited memory.

Class Differences

J2ME is not a new form of Java. If an application is developed for J2ME, it is compatible with J2SE or J2EE,
provided that the environments include the same extra APIs. J2ME works (mostly) seamlessly on a variety of
devices. A big factor in making this edition of Java fit on smaller platforms is reducing the number of classes
used. Some classes are left out by default, and others are trimmed through a process that eliminates
redundancies in methods. The result is a true subset of the J2SE runtime classes. The differences do not
end with a mere reduction in size, however. If necessary, any classes that are part of the classic runtime
environment can be stored in the VM's internal format instead of the normal class file format. Note, however,
that user-defined classes must be readable in the normal class file format. Some of the classes that are eliminated
at the start in J2ME provide ways of interacting with the external devices. J2ME therefore includes new classes
tailored to meet the needs of the smaller devices for which it is selected.

Beyond the new and changed set of classes, we will discuss three components of J2ME:

• The virtual machine (KVM)

• Configurations

- Connected Limited Device Configuration (CLDC)

- Connected Device Configuration (CDC)

• Profiles

- Mobile Information Device Profile (MIDP)

- Others under development (PDA, Foundation, Personal)


Each item listed here contributes to enabling portability across a variety of limited resource devices.

K Virtual Machine

Our perusal of the smallest of the Java 2 versions begins with the KVM. The KVM is a small-footprint virtual
machine for resource-constrained devices. It accepts almost the exact same set of bytecodes and class file format
that the regular VM does. The goals in designing the KVM were that it be easy to understand and maintain,
highly portable, and limited in size without sacrificing essential Java features. The KVM is a leaner version
of the JVM. It does not offer dynamic compilation or other advanced performance-optimization techniques;
without an optimizing compiler, the KVM runs at only roughly half the speed of JDK 1.1 software. What the
KVM does offer is portability. Its release includes three ports:

• Win32

• PalmOS (3.01 and up)

• Solaris

Sun's external partners have ported it to still other platforms, bringing the total to just under 30. Unlike some other
limited Java implementations, the KVM supports dynamic class loading and regular class files, as well as a full
Java technology bytecode set and JAR file formats.

As far as compatibility with the JVM is concerned, the KVM's general goal is full compatibility with the Java
Virtual Machine and Java Language Specifications. The main differences are found at the language level. No
hardware floating-point support exists on most resource-constrained devices because of space limitations, so the
Connected Limited Device Configuration (CLDC), which runs on top of the KVM and is discussed in detail in
the next section, does not include floating-point support. The exceptions to total compatibility also include the
fact that device limitations prevent the use of the full J2SE platform security model and that the libraries included
with CLDC are limited. For those of you more familiar with full-blown Java, the implementation differences are
that JNI, reflection, thread groups, weak references, and finalization are not included; error handling support is
limited; and bytecode verification has a new implementation.

The KVM garbage collector component is small and simple. It is nonmoving, nonincremental, single-space,
designed to limit recursion, and optimized for small heaps (32–512K). Sun intends to release an alternative, more
advanced garbage collector in the future. The advantages of the KVM garbage collector are that it does not move
objects, allowing for a simple and clean code base, and that it uses less memory. The disadvantages inherent to
the KVM collector are that object allocation is slower, memory fragmentation can cause it to run out of heap,
and, because collection is nonincremental, garbage collection on large heaps induces noticeable latency.

Configurations

To conceptualize the logical organization of J2ME, you have to understand more than just its virtual machine.
You must understand the J2ME concepts of Configurations and Profiles. A configuration is composed of a
virtual machine, core libraries, classes, and APIs. It specifies a general runtime environment for consumer
electronic and embedded devices and acts as the Java platform on the device. A profile is an industry-defined
specification of the Java APIs used by manufacturers and developers to address specific types of devices.

Configurations define the minimum capabilities and libraries for a virtual machine for the Java platform that will
be available on all devices belonging to a family. All these devices will possess similar memory requirements
and processing power. There are currently two specified configurations in J2ME: the Connected Limited Device
Configuration (CLDC) and the Connected Device Configuration (CDC). Both configurations are the results of
Java Community Process efforts.

Connected Limited Device Configuration (CLDC)

The CLDC has standardized a portable, minimum-footprint Java chunk for resource-constrained devices. The
CLDC configuration provides for a virtual machine and a set of core libraries for use in certain profiles that
define requirements for given devices. One such profile, the Mobile Information Device Profile (MIDP), uses
this configuration in designing applications for wireless or handheld devices. According to Sun's
community Web site describing the CLDC, the devices generally targeted with CLDC are characterized as
follows:

• 160–512K total memory (including both RAM and flash or ROM) is available for the Java platform.

• Power is limited (often powered by battery).

• Connectivity to a network is often intermittent (and/or wireless).

• Bandwidth is expected to be limited, 9600bps or less.

• User interfaces are primitive, with variable degrees of usability.

The most recent CLDC version includes several improvements over its immediate predecessor. It has a faster
bytecode interpreter, an exact, compacting garbage collector, Java-level debugging APIs, and preverifier
improvements, among others. This release also includes an implementation of CLDC for the Linux operating
system. The static size of the CLDC platform, including both the VM and the libraries, is usually less than 128K.
In addition to virtual machine and language features, CLDC specifications cover input/output, networking
support, internationalization, and a security model. Several components that you might expect to be there are
intentionally omitted: application installation, user interface support, event handling, a high-level application
model, and database support. These features are, instead, defined in a profile.

CLDC security is different from traditional Java security. CLDC defines low-level virtual machine security that
protects the device from harm and guarantees a certain level of security by the class file verifier. Because the size
of the J2SE class file verifier is larger than the entire KVM, CLDC/KVM has its own class file verifier. This
class file verifier introduces a two-pass method. To save time and processing cycles, the verification is done on a
desktop or server computer where the class files are compiled, rather than on the device. This off-device
verification is termed preverification. The device performs some simple operations to confirm that the class file
was verified and that it is still valid. The CLDC class file verification does not require code signing.

CLDC security also defines a sandbox model, similar to that of other Java editions, to provide application-level
security. The sandbox model defined in J2SE is far too large for limited devices. The sandbox security in CLDC
does serve similar functions, though. It requires that class files be properly verified and affirmed to be valid Java
applications. Only the predefined set of Java APIs used in CLDC is allowed to execute. In the sandbox model,
only the device can download and install applications. Applications are not allowed to download individual
classes.

The classes CLDC inherits from J2SE are in three packages: java.lang.*, java.io.*, and
java.util.*. They are listed in Table 5.2. All new classes introduced are parts of
javax.microedition.io.*.

Table 5.2. Inherited Classes

Package    Classes

java.lang  Boolean, Byte, Character, Class, Integer, Long, Math, Object, Runnable, Runtime, Short, String,
           StringBuffer, System, Thread, Throwable

java.io    ByteArrayInputStream, ByteArrayOutputStream, DataInput, DataInputStream, DataOutput,
           DataOutputStream, InputStream, InputStreamReader, OutputStream, OutputStreamWriter,
           PrintStream, Reader, Writer

java.util  Calendar, Date, Enumeration, Hashtable, Random, Stack, TimeZone, Vector

Standard J2SE networking, I/O, and storage libraries are too large for CLDC devices. The original classes assume
that TCP/IP is available and are not easy to extend to support new protocols (for example, Bluetooth). Instead,
CLDC introduces a new generic connection framework. It makes supporting different types of networking
protocols easier, is more easily extensible, and is upward compatible with the standard Java class libraries. A
general connection, for example, would take the form of

Connector.open("<protocol>:<path>;<parameters>");

A connector used for HTTP would take the form of

Connector.open("http://www.usa.com");

A connector used for opening a serial port would take the form of

Connector.open("comm:0;baudrate=9600");
CLDC does not define any network protocols, just the framework for their interaction. Profiles define the
specific protocols that device categories support.
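
To make the framework more concrete, here is a minimal sketch of fetching a small resource over HTTP by way
of the generic connection framework. Note that HttpConnection itself is defined by the MIDP profile rather than
by CLDC, the URL and class name are purely illustrative, and error handling is kept to a minimum; treat this as
a sketch of the pattern, not production code.

import java.io.IOException;
import java.io.InputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;

public class FetchSketch {
    // Reads up to 1K of a resource via the generic connection framework.
    public static byte[] fetch(String url) throws IOException {
        HttpConnection conn = (HttpConnection) Connector.open(url);
        InputStream in = null;
        try {
            in = conn.openInputStream();
            byte[] buffer = new byte[1024];
            int total = 0;
            int read;
            while (total < buffer.length
                    && (read = in.read(buffer, total, buffer.length - total)) > 0) {
                total += read;
            }
            byte[] result = new byte[total];
            System.arraycopy(buffer, 0, result, 0, total);
            return result;
        } finally {
            if (in != null) {
                in.close();
            }
            conn.close();
        }
    }
}

Because the protocol is named only in the URL string passed to Connector.open, the same calling code can, in
principle, be pointed at a different protocol handler without changing its structure.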

Connected Device Configuration (CDC)

The CDC defines a set of items that compose a portable Java chunk for consumer electronic and embedded
devices. The CDC configuration provides for a virtual machine and a set of core libraries appropriate for use with
industry-defined profiles for less limited devices, such as the Foundation Profile. According to the Sun
community Web site describing the CDC, target devices are characterized as follows:

• They are powered by a 32-bit processor.

• They have 2MB or more of total memory (including both RAM and flash or ROM) available for the
Java platform.

• They require the full functionality of the classic JVM.

• Connectivity to a network is often intermittent (and/or wireless).

• Bandwidth is expected to be limited, 9600bps or less.

• User interfaces are primitive, with variable degrees of usability.

As you can see from this list, the differences in the target devices for CLDC and CDC are that CDC target
devices are powered by more powerful processors, have more total memory, and make use of the full-blown
JVM. The CDC contains the CVM, a virtual machine that provides the full functionality of the classic JVM,
along with the class libraries and APIs necessary to get the system up and running. For an application to be
meaningful, this configuration requires the help of a
profile. At this time, only one profile, the Foundation Profile, is used in CDC. Because CDC subsumes CLDC,
any CLDC-compliant profile can be used in CDC.

The CDC class library contains the following packages from J2SE:

• java.lang

• java.util

• java.net

• java.io

• java.text

• java.security

CDC uses a subset of the J2SE APIs, eliminating nonessential ones to conserve space and performance cycles.
CVM combines minimalist features inherent to the KVM with more enhanced features of the JVM, such as a
precise memory system, an advanced garbage collector, and better Java synchronization. CVM is implemented in
C and designed to be ported to multiple platforms. CVM can run mostly preloaded classes alongside dynamically
loaded classes, which allows for quicker start-up time, less latency, and the capability to execute bytecodes out of
ROM. CVM provides interfaces between garbage collection, the type system, and the interpreter. These
interfaces are clearly defined and well separated.

Now that you have learned about configurations, it is appropriate to introduce two profiles that give these
configurations specificity and increased ease of portability.

Profiles

A profile is more application-oriented, whereas a configuration is more device-oriented. As mentioned earlier, a


profile is an industry-defined specification of the Java APIs used by manufacturers and developers to address
specific types of devices. A profile supplements a configuration to provide capabilities common to a certain
device category. Currently, two profiles have completed the Java Specification Request (JSR) process and are
part of the approved J2ME specifications. The Mobile Information Device Profile (MIDP) is used with CLDC, and
the Foundation Profile is used with CDC.

Mobile Information Device Profile (MIDP)

The CLDC does not define any user interaction parameters in its specifications. This is left to profile definitions.
The MIDP is designed to run on Mobile Information Devices (MIDs). A MID is defined as having the following
features:

• A display that is at least 96 pixels by 64 pixels

• A touch screen, keypad, or keyboard

• A wireless network connection (either always on or intermittent)

• 128K of nonvolatile memory for MIDP itself, 8K of nonvolatile memory for persistent application data, and
32K of volatile memory for the Java runtime

• The capability to run the VM and low-level APIs for obtaining user input

The MIDP adds APIs to the CLDC for the following functions (a minimal MIDlet sketch follows this list):

• Displaying text and graphics

• Responding to user events

• Defining and controlling applications

• Storing data in simple databases

• Network connectivity via a subset of HTTP

• Timer notifications
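
To give a feel for how these APIs fit together, the following is a minimal MIDlet sketch using the
javax.microedition.midlet and javax.microedition.lcdui packages. The class name and the displayed strings are
purely illustrative.

import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Form;
import javax.microedition.midlet.MIDlet;

// A minimal MIDlet: the life-cycle methods are called by the device's
// application manager, and the Form displays a single line of text.
public class HelloMIDlet extends MIDlet {
    public void startApp() {
        Form form = new Form("Hello");
        form.append("Hello from MIDP");
        Display.getDisplay(this).setCurrent(form);
    }

    public void pauseApp() {
        // Nothing to release in this sketch.
    }

    public void destroyApp(boolean unconditional) {
        // Nothing to clean up in this sketch.
    }
}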

The MIDP necessarily excludes details for

• Downloading applications

• Installing applications

• Network security

The Foundation Profile

According to its specifications, the Foundation Profile is simply a set of Java APIs tailored to complement the
CVM in the CDC. Together, these provide a complete J2ME runtime environment for devices that are limited but
have 2MB or more memory and a 32-bit processor. The profile tweaks some of the packages in J2SE to provide
necessary support for critical functions on limited devices.

The Future of J2ME


J2ME targets a wide range of uses and devices that differ greatly from one another. Currently, profiles are
developed for specific categories of devices. The MIDP targets devices such as cell phones. The intention behind
profiles is that J2ME application developers utilize certain profiles so that their applications can port easily
across any device that implements the profile. The proposed inclusion of building blocks will define an API
derived from J2SE or J2EE APIs for use in J2ME. According to the Java Community Process Java Specification
Request for this change, a building block might define a specific set of classes from java.io. The building
blocks will then be available for Profile Expert Groups to use in developing new profiles. The longer-term intent
is that building blocks will eventually make configurations obsolete. At the time of this writing, both
configurations discussed here are standard parts of J2ME, as are several profiles, but Sun is still reviewing the
building-blocks proposal. Several profiles are
currently in various stages of approval. There are proposals for a PDA profile, a Personal profile, an RMI profile,
and a Java TV profile. Refer to Sun's J2ME Web site for updates on these proposed profiles, at
http://www.sun.com/software/communitysource/j2me.

Part III: Protect Your System


Chapter 6. Cryptography
Only amateurs attack machines; professionals target people.

—Bruce Schneier

The purpose of this chapter is not to make you an expert in cryptography but to give you a basic overview,
focusing on the most important issues to wireless developers. After reading this chapter, you should have a
general understanding of the following:

• What applied cryptography is (and is not)

• What it can (and cannot) accomplish

• How it should (and should not) be used

• What a secure encryption mode looks like

• Which common pitfalls are associated with the use of cryptography in secure applications

This chapter is based on the unpublished work Introduction to Applied Cryptography by Tadayoshi Kohno
(tkohno@acm.org). We have simplified certain concepts to make them more understandable to the
noncryptographer. Therefore, it would be a mistake to read this chapter and then go off and spin your own
cryptographic algorithms. Portions of this chapter may delve beyond the extent of your mathematical knowledge,
or some concepts may be difficult to grasp. However, there are two primary lessons to be learned and kept in
mind when working with cryptographic issues:

1. Cryptography is not security. In particular, application security is more than just cryptography.
Strong cryptography is usually a prerequisite for secure applications, but the mere use of cryptography
cannot guarantee that an application will be secure.

2. Cryptography is not easy. Because cryptography is so difficult (yet so important for the security of
many applications), developers should not invent their own cryptographic algorithms. They should use
only well-known, trusted algorithms in their applications. When this is not possible, they should seek
the advice of experienced cryptographers.

Applied Cryptography Overview


To demonstrate the concepts in this chapter, we will use the office complex case study from Chapter 1, "Wireless
Technologies."

The Office Complex Case Study


An advertising corporation, AdEx Inc., has installed a wireless LAN system throughout its multistory building in
Reston, Virginia. It has installed access points at key locations to provide complete coverage throughout the
building. Employees are given laptop computers with docking stations at their work areas. Both the docking
stations and the laptops are equipped with wireless LAN access devices. The conference rooms are equipped
with projection systems connected to the LAN so that employees can take their laptop to a conference room,
connect to the projection system over the network, and control a presentation via their laptop.

An AdEx sales team, headed by Kathleen, is proposing a new marketing campaign to a potential new client,
NitroSoft. The team has been working on the presentation for this account for several weeks. Before the presentation,
Kathleen takes the NitroSoft group to lunch. During lunch, the NitroSoft people receive a message on their PDAs
announcing a new acquisition that has relevance to the team presentation. One of the people in the NitroSoft
group, Louis, mentions the announcement to Kathleen, who takes out her PDA and asks him to send her a copy
of the announcement. Louis sends a copy to her PDA, with additional background information. Kathleen
forwards it to one of her staff members, with instructions on how to incorporate the new information into the
presentation.

After lunch, the group returns to AdEx and proceeds to the conference room for the presentation. On the way,
Kathleen checks her PDA and receives word that her team will be able to incorporate the new information but
that it will take 20 more minutes. They inform her that the changes fit well in the second half of the slides.
Kathleen says that she will begin with the original presentation and switch to the new presentation half way
through if they complete it and are satisfied with the results. Otherwise, she will stick with the original
presentation.

Kathleen and the NitroSoft group reach the conference room and settle in for the presentation. The AdEx sales
team continues working as Kathleen begins the presentation. As she talks, she monitors her PDA and receives
confirmation that the team has incorporated the new information and is satisfied with the result. At a convenient
point, Kathleen pauses, loads the updated slides, and switches to the new presentation. The NitroSoft group is
impressed by the efficiency and speed with which the team incorporated the new information. AdEx and
NitroSoft close the deal that day.

Imagine that the relationship between AdEx and NitroSoft is not cordial. In fact, let us say that AdEx and
NitroSoft are hostile to or competing with each other. Louis and NitroSoft need to communicate securely.
Unfortunately, they must communicate over inherently insecure (and potentially hostile) channels (AdEx's
network). Kathleen is a malicious party who wants to spy on or tamper with their conversations. Figure 6.1
depicts the relationship we will use for examples throughout this chapter.

Figure 6.1. The overt relationship between Louis, NitroSoft, AdEx, and Kathleen

What does communicating securely mean for Louis and NitroSoft, particularly if they must communicate over
hostile channels? It means that even if their messages are transported over an insecure medium (such as AdEx's
network or the Internet), it should be as if they were communicating directly with each other over a dedicated
and physically secure channel that they completely control. If the communications channel is secure,

• No one can listen to their communications.

• No one can modify or tamper with their communications.

• Both know that they are communicating with each other and not with an impostor.

• Louis cannot deny that he sent a message to NitroSoft, or vice versa, at a later date.

These requirements for secure communications serve as goals for applied cryptography. They are also, not
surprisingly, four of the six security principles discussed in Chapter 2, "Security Principles": Privacy and
Confidentiality, Integrity, Authentication, and Nonrepudiation. Here is the meaning of these principles in the
context of the office complex scenario:

• Privacy and confidentiality. Kathleen should not be able to learn anything about the contents of a
message Louis sends to NitroSoft.

• Integrity. Kathleen should not be able to trick NitroSoft into believing that information she sends came
from Louis; she should not be able to modify, undetected, a message Louis sends to NitroSoft.

• Authentication. Louis should be able to convince NitroSoft that it is communicating with him;
Kathleen should not be able to trick NitroSoft into believing that she is Louis.

• Nonrepudiation. After committing to a transaction, Louis should not, at a later date, be able to claim
that he did not commit to that transaction.

Cryptographers use many tools to accomplish these security goals. An encryption algorithm is a cryptographic
protocol Louis and NitroSoft can use to establish privacy. Figure 6.2 depicts the process. Encryption is the
process that transforms an understandable message into a form that only a legitimate recipient can read. The
initial (readable) message is called the plaintext, and the resulting garbled (unreadable) message is called the
ciphertext. Decryption is the process of retrieving the original plaintext message from the ciphertext.

Figure 6.2. The encryption/decryption process

Traffic Analysis
It is worth emphasizing why these security principles are called goals and why encryption in and of
itself may be insufficient. Take Privacy and Confidentiality, for example. The preceding states that
Kathleen should not be able to learn anything about the contents of a message Louis sends to
NitroSoft. You may be saying that, if the message is encrypted properly, this is true. Not necessarily so: you must
also consider traffic analysis. Traffic analysis is the art and science of examining communication patterns to
derive meaning from otherwise meaningless communications.

For example, let us say that Louis must receive approval for any deal before he can sign. Further, let
us assume that NitroSoft has a process in place for evaluating proposals and receiving approval for
the expenditure. The approval process takes ten minutes, and if the proposal is approved, notifying
Louis and processing the necessary paperwork for signatures take five more minutes. By timing the
response, Kathleen can derive information from the exchange without being able to read the message.
This is a simple example, but the point is that you can gain a lot of information—by watching traffic
volume, the length of messages, and the like—which encryption by itself does not protect.

Primitives and Protocols


Primitives are algorithms or procedures to accomplish a computing task, such as converting plaintext to
ciphertext or converting ciphertext to plaintext. Protocols are processes or procedures to accomplish
communications between entities. An encryption protocol is the process of encrypting plaintext into ciphertext so
that the intended recipient can decrypt the ciphertext and retrieve the plaintext. There are numerous
cryptographic algorithms and protocols. The following examples are frequently used cryptographic methods that
do not fall strictly into the encryption or decryption category:
• A message authentication code (MAC) is an algorithm to generate a code that can be used by a protocol
to establish the authenticity of a message between two entities.

• A key agreement algorithm is part of a protocol that can be used by two entities to compute a shared
secret (a key or another token known only to them) even if they cannot communicate over a secure
channel.

• A digital signature is an algorithm used by an entity to sign an electronic message; the protocols of this
algorithm are the process of signing a message and verifying the signature at the destination.

A set of cryptographic protocols forms a library of useful cryptographic tools. These tools, used appropriately,
can help a developer create secure applications. The thing to remember is that cryptographic primitives are used
to produce cryptographic protocols; these protocols are used in turn to generate secure applications. Figure 6.3
depicts this relationship. Developers should not use primitives; used by themselves, primitives may be insecure.

Figure 6.3. The relationships between primitives, protocols, and applications

Symmetric and Asymmetric Algorithms


Conventional cryptographic algorithms use what are called keys. A key is electronically represented information
that affects the execution of an encryption or decryption algorithm. A keyed algorithm produces different output
for different keys. Figure 6.4 depicts an encryption algorithm that takes two inputs, the original plaintext and an
encryption key, and produces one output, the ciphertext. Using a different key as one of the inputs produces a
different ciphertext. The decryption algorithm also takes two inputs, the ciphertext and a decryption key, and
produces a single output, the plaintext (if the decryption key is correct).

Figure 6.4. The encryption process with keys


An encryption algorithm is symmetric if the encryption key is the same as the decryption key. Using a symmetric
protocol, Louis and NitroSoft share the same symmetric key, or shared secret. To send a private message to
NitroSoft, Louis encrypts the message with the shared symmetric key, and NitroSoft decrypts that message with
the same symmetric key.

An encryption algorithm is asymmetric if the encryption and decryption keys differ. One use of asymmetric keys
is public/private key cryptography, in which each person has a private and a public key. Using a public/private
key protocol, if Louis wants to send a private message to NitroSoft, he encrypts the message with NitroSoft's
public key, and NitroSoft then decrypts the message with its private key. Although NitroSoft must maintain the
secrecy of its private key, it can give the public key to anyone it chooses (including Kathleen). NitroSoft could
even publish the public key in a public directory or phone book. A big advantage of asymmetric cryptographic
protocols over symmetric cryptographic protocols is that Louis can obtain NitroSoft's public key from a public
directory and send NitroSoft a secret message without ever meeting face to face with anyone at NitroSoft (that is,
without ever agreeing on a shared secret symmetric key).
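
To make the distinction concrete, the following sketch uses the standard J2SE cryptography APIs (javax.crypto
and java.security), assuming a JCE provider that supports AES and RSA. These libraries are far too large for
CLDC-class devices, and the key sizes, mode/padding defaults, and message are illustrative only; this is a sketch
of the two key models, not a recommendation.

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class SymmetricVsAsymmetric {
    public static void main(String[] args) throws Exception {
        byte[] message = "Proposal terms".getBytes("UTF-8");

        // Symmetric: one shared secret key both encrypts and decrypts.
        SecretKey shared = KeyGenerator.getInstance("AES").generateKey();
        Cipher aes = Cipher.getInstance("AES");   // provider default mode/padding
        aes.init(Cipher.ENCRYPT_MODE, shared);
        byte[] symCiphertext = aes.doFinal(message);
        aes.init(Cipher.DECRYPT_MODE, shared);
        byte[] symPlaintext = aes.doFinal(symCiphertext);

        // Asymmetric: encrypt with the recipient's public key,
        // decrypt with the matching private key.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(1024);                     // illustrative key size only
        KeyPair pair = kpg.generateKeyPair();
        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.ENCRYPT_MODE, pair.getPublic());
        byte[] asymCiphertext = rsa.doFinal(message);
        rsa.init(Cipher.DECRYPT_MODE, pair.getPrivate());
        byte[] asymPlaintext = rsa.doFinal(asymCiphertext);

        System.out.println(new String(symPlaintext, "UTF-8"));
        System.out.println(new String(asymPlaintext, "UTF-8"));
    }
}

Encryption modes (and why the default mode matters) are discussed later in this chapter.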

Cryptographic Attacks
To understand fully how to use cryptography in a wireless (or other) application, a developer must understand
who her application's adversaries are. In the example, the developer must understand how Kathleen thinks and
which techniques she might use to circumvent the security of an application.

Cryptography plays only a small role in an application's security. Certainly, if an application uses bad or weak
cryptography, it will be exploitable. However, even if an application uses strong cryptography, it may still be
insecure. For example, a privacy application might use a strong encryption algorithm but store the algorithm's
decryption key in plaintext in a buffer that is later reused to pad transmitted data, thus leaking the key to
Kathleen. (Application security is stressed in later chapters.)

Types of Attacks

Consider how Kathleen could attempt to break (attack) an encryption algorithm. Recall that an encryption
algorithm is a protocol utilizing keys designed to protect the privacy of information. Kathleen can break an
encryption algorithm if she can learn any information about the plaintext corresponding to encrypted ciphertext.
By this, we mean learning any information about the plaintext: the number of 1 bits in the plaintext, the value of
the fifth bit, or the entire message.

To obtain information about the plaintext, Kathleen can try every possible decryption key until she finds the
correct one. This is called a brute force or exhaustive key search attack. When Kathleen learns the decryption
key, she can decrypt the entire ciphertext and any subsequent ciphertext transmitted.

Rather than mount a brute force attack, Kathleen could try to exploit an obscure property of the algorithm itself.
Such an attack is referred to as a smart or shortcut attack. Although shortcut attacks typically require less time
than brute force attacks, they may be less practical in the real world (for example, they can require an exorbitant
amount of memory or observed ciphertexts).

What would Kathleen's attacks look like? There are two general types of attacks: passive attacks and active
attacks. In a passive attack, Kathleen would simply listen to and record the communications between Louis and
NitroSoft. After collecting enough data, she could perform some computation and try to break the protocol
between Louis and NitroSoft. In an active attack, Kathleen would actually interfere with the communication
between Louis and NitroSoft. She could do the following:

• Prevent NitroSoft from receiving some of Louis's messages.

• Modify some of Louis's messages in transit.

• Save some of Louis's messages and resend them at a later date (commonly referred to as a replay
attack).

• Pretend to be Louis to NitroSoft and NitroSoft to Louis (the man in the middle).

Costs of Attacks

Almost all modern cryptographic protocols are breakable in theory. Given enough time and resources, you could
break virtually any cryptographic protocol. The question is not whether someone can break a cryptographic
protocol but whether it is practical in real life. This aspect of breaking cryptography leads cryptographers to
classify an attack against a cryptographic algorithm by the attack's requirements. The more practical an attack,
the more susceptible an algorithm is to that attack.

The other aspect is cost: how much memory or how long it will take. Does the attack algorithm run in minutes,
days, years, or centuries? The more time required, the less vulnerable the cryptography. Thus, the measure of an
attack's cost gives an indication of the attacked encryption algorithm's strength.

Of course, even if all known attacks against an algorithm currently require too much time or memory to execute,
advances in scientific research or computing power may well yield considerably faster attacks. For example, in the
1970s, single 56-bit key DES was considered secure enough to protect sensitive information. Now these keys can
be brute-force attacked in hours on a network of standard desktop personal computers.

Another aspect of an attack's requirements is the information necessary. Because we are considering attacks
against an encryption algorithm, we might ask how much ciphertext is required for analysis or whether
knowledge of part of the plaintext is required. The more information required by an attacker, the less likely an
attacker will be able to mount a successful attack.

Large Numbers

We close this section with a discussion of large numbers. In the preceding subsection we mentioned that a
cryptographic algorithm may be considered secure if it is unfeasible for an attacker to mount an attack in
practice. Perhaps it takes too much time, consumes too much memory, or requires too many ciphertext samples.
But how much is too much? We will answer that question in the section "Choices." For now, let us point out that
too much is usually a very large number, on the order of 2^128. So that you can put these large numbers in context,
and for future reference, see Table 6.1. This table is taken from Applied Cryptography: Protocols, Algorithms,
and Source Code in C, by B. Schneier, John Wiley and Sons, second edition, 1996.

Table 6.1. Examples of Large Numbers

Physical Analogue                 Number

Years until next ice age          2^14

Age of the Earth in years         2^30

Age of the Universe in years      2^34

Number of atoms in the Earth      2^170

Number of atoms in the Sun        2^190

Number of atoms in the galaxy     2^223

Symmetric Cryptography
Symmetric cryptography is a cryptographic method employing a single key for both encryption and decryption.
The use of a single key makes the decryption process a simple reversal of the encryption process. Symmetric
cryptography is the cryptographic method most often thought of when people consider classic cryptographic
techniques, such as one-time pads (which we cover in the section "Stream Ciphers").

Symmetric Primitives
In this section we will describe some of the most important primitives and protocols used in today's
cryptographic applications. Recall that cryptographic primitives are the building blocks for cryptographic
protocols and for secure applications. Although cryptographic primitives by themselves do not provide useful
functionality and should not be used by application developers, an understanding of these primitives is helpful in
understanding the protocols.

Block Ciphers

Block ciphers are among the most important, powerful, and useful cryptographic primitives. They serve as the
backbone for many cryptographic protocols. Although most of you do not need to understand the inner workings
of block ciphers, you should understand the external behavior of block ciphers.

A block cipher consists of a pair of algorithms: an encryption algorithm and the inverse, a decryption algorithm.
These algorithms, respectively, accept a plaintext (or ciphertext) message and a key as input and produce a
ciphertext (or plaintext) message as output. For any given block cipher, the size of the plaintexts and ciphertexts
is the same fixed value (called the cipher's block length). For example, a block cipher with a 64-bit block length
converts 64-bit (8-byte) plaintext blocks into 64-bit ciphertext blocks. To encrypt messages longer than the block length, you must wrap
the block cipher in a higher-level protocol, which we will cover shortly.

For any given key and plaintext block, a block cipher encryption algorithm will always produce the same
ciphertext block. Similarly, for any given key and ciphertext block, a block cipher decryption algorithm will
always produce the same plaintext block. The block cipher's encryption and decryption algorithms are
complementary. Different keys (with very high probability) produce different encryptions and decryptions. A
plaintext encrypted under one key yields a different ciphertext than the same plaintext encrypted under a
different key. For any given key, a block cipher should appear to be a random mapping between possible
plaintexts and ciphertexts. That is, an attacker should not be able to tell the difference between a block cipher
encryption and a purely random string.

Obviously, keys play an integral role in the security of a block cipher. If an attacker acquires a key, she can
encrypt or decrypt blocks with that key. We are assuming that the encryption algorithm is known or
determinable; in most cases, this is true. Therefore, not only must an application keep its keys secret, but it must
also be prohibitively difficult for an attacker to randomly guess a particular key or try a brute force attack. One
way to ensure that an attacker will have difficulty guessing a particular key is to increase the number of bits in a
key, making the number of possible keys very large.

If a block cipher uses k-bit symmetric keys, there will be 2^k possible keys for that block cipher. This means that
an attacker may have to try up to 2^k keys before she finds the correct one. In reality, the attacker may be lucky
and the first key will be correct, or she may have to try all 2^k keys. On average, the attacker would have to make
2^(k-1) guesses. This is for strong block ciphers, where brute force is the most efficient attack; weaker block ciphers may
have more efficient attacks. Key length has less effect on these other attacks. These issues are not specific to
block ciphers; key length affects the strength of all ciphers.

Although there are many block ciphers, we shall give further consideration to the three most popular: DES,
3DES, and AES. Published in the 1970s, DES is the U.S. Government's Data Encryption Standard. DES has a
block size of 64 bits and a key size of 56 bits. Despite its relatively small block and key sizes, DES is an
incredibly well-engineered cipher. Although there are a number of theoretical attacks against DES, the most
practical attack is still a brute force attack of trying all 2^56 (more than 72 quadrillion) keys.

Even with this large number of keys, brute force attacks against DES are now possible with today's networking
and desktop computing power. Thus, the adoption of 3DES (or triple-DES). Triple-DES consists of three chained
DES operations (see Figure 6.5):

Figure 6.5. The triple-DES process


1. Encrypt the plaintext with key K1.

2. Decrypt with key K2.

3. Encrypt again with key K3.

This is known as three-key 3DES. There is also a two-key 3DES, in which K1 and K3 are the same. The key size
for two-key 3DES is 112 bits, and the key size for three-key 3DES is 168 bits. Unfortunately, certain attacks
against two-key 3DES are significantly faster than brute force, which proves that even if you take a very strong
primitive and implement it correctly, the construction built from it may still be vulnerable.

In the mid 1990s the U.S. Government's National Institute of Standards and Technology (NIST) began the
process of selecting a new U.S. Advanced Encryption Standard (AES). Because of DES's vulnerabilities (its small
key and block sizes) and its slow performance, AES is intended to replace DES and to be both computationally
efficient and secure. After several years and numerous technical meetings, NIST picked a cipher called Rijndael
(pronounced "rhine-dahl") to be the AES. Rijndael is configurable; versions of Rijndael exist for any
combinations of appropriate key sizes (128, 192, or 256 bits) and block sizes (128, 192, or 256 bits).

Stream Ciphers

Stream ciphers are another very popular encryption primitive. They are typically much faster and simpler than
their block cipher cousins (especially when implemented in hardware) and are therefore commonly used in
systems that cannot handle the overhead associated with block ciphers (such as high-speed telecommunications
systems). Unlike block ciphers, there is no clear, generally accepted, popular stream cipher. In fact, many of
today's commonly used stream ciphers are proprietary.

Modern stream ciphers are modeled after the Vernam one-time pad (see Figure 6.6). The sender and receiver
share a long sequence of random binary digits (the one-time pad). When the sender wants to send a secret
message to the receiver, the sender XORs the first bit of the plaintext message with the first bit of the one-time
pad. Then she XORs the second bit of the plaintext message with the second bit of the one-time pad, and so on.
This process is continued until the end of the message is reached.

Figure 6.6. A one-time pad

XOR, or Exclusive OR, is represented by the symbol ⊕ and has the following properties:

0 ⊕ 0 = 0

1 ⊕ 1 = 0

0 ⊕ 1 = 1

1 ⊕ 0 = 1

As you can see, XOR is a change indicator, meaning that the result is 1 only when the two bits are different.

To maintain the security of the system, a one-time pad (as the name implies) requires that its bits never be
reused. After a bit is used from the one-time pad, it must be discarded. Provided a one-time pad is never repeated
and is truly random, its encryption algorithm is perfectly secure. This means that even if an attacker has an
unlimited amount of computing power at her disposal, she will be unable to learn anything about the plaintext
message.

Although theoretically optimal from an encryption standpoint, in practice the one-time pad is not very useful.
Because it must never be reused, it must be as long as the secret message to be encrypted. In fact, it must be as
long as all the messages that may ever be sent. Even with great forethought, this is virtually impossible to do in
practice.
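
The one-time pad is simple enough to sketch in a few lines of Java. The pad here comes from java.security.SecureRandom
purely for illustration; a computationally generated pad is not a true one-time pad, and the names and message are
hypothetical. Note that the same XOR routine performs both encryption and decryption.

import java.security.SecureRandom;

public class OneTimePadSketch {
    // XORs each message byte with the corresponding pad byte. Running it
    // again on the ciphertext with the same pad recovers the plaintext.
    static byte[] xor(byte[] data, byte[] pad) {
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            out[i] = (byte) (data[i] ^ pad[i]);
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        byte[] plaintext = "attack at dawn".getBytes("UTF-8");
        byte[] pad = new byte[plaintext.length];   // pad must be as long as the message
        new SecureRandom().nextBytes(pad);         // and must never be reused

        byte[] ciphertext = xor(plaintext, pad);
        byte[] recovered = xor(ciphertext, pad);
        System.out.println(new String(recovered, "UTF-8"));
    }
}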

Modern stream ciphers are an attempt to provide a practical version of the Vernam one-time pad. The heart of
the modern stream cipher is a keystream generator. The keystream generator takes a small key (or seed) and
generates a longer keystream. The resulting keystream is then used like the one-time pad. The premise behind
stream ciphers is that it is much easier for two parties to agree on a small secret key (to produce the keystream)
than to agree on an entire one-time pad. Unfortunately, these modern stream ciphers are not perfectly secure, and
an attacker with enough resources can break the encryption. Most of the vulnerabilities lie in the fact that
producing truly random bit streams using a computational method is very difficult. Developers and consumers
should be very leery of stream ciphers or algorithms that claim to produce truly random numbers or random streams.
The truth of the matter is that the best that can be accomplished computationally are pseudo-random numbers or
pseudo-random streams.

Hash Functions

Cryptographic hash functions are another extremely important cryptographic primitive. Hash functions take as
input a long string of bits (such as a message or file) and produce as output a smaller, compressed string of a
fixed size. For example, the SHA-1 hash function produces 160-bit hash values, and the MD5 hash function
produces 128-bit hash values. Unlike block ciphers and stream ciphers, hash functions do not utilize keys. This
means that, given a specific input, anyone can compute the hash of the input.
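
Computing a hash is a one-line affair in most libraries. The sketch below uses J2SE's java.security.MessageDigest
with the SHA-1 algorithm; the input string and class name are arbitrary.

import java.security.MessageDigest;

public class HashSketch {
    public static void main(String[] args) throws Exception {
        byte[] input = "The quick brown fox".getBytes("UTF-8");

        // Anyone can compute the hash: no key is involved.
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        byte[] digest = sha1.digest(input);        // 160-bit (20-byte) result

        // Print the digest as hexadecimal.
        StringBuffer hex = new StringBuffer();
        for (int i = 0; i < digest.length; i++) {
            hex.append(Integer.toHexString((digest[i] & 0xff) | 0x100).substring(1));
        }
        System.out.println(hex);
    }
}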

Most cryptographic hash functions share the following properties:

• Pre-image resistance. It should be difficult for an attacker, when given a hash value, to identify a
stream that will produce that hash value. That is, if Z is an output of a hash function, it should
be hard for an attacker to find a value X so that X hashes to Z.

• Second pre-image resistance. Given one stream and its hash value, it should be difficult for an attacker
to find a second stream that produces the same hash value. That is, given a value X that hashes to Z, it should be hard for
an attacker to find another value Y that also hashes to Z.

• Collision resistance. It should be difficult to find two different inputs that hash to the same value. That
is, it should be hard for an attacker to find two values, X and Y, that hash to the same value Z. Here the
attacker has complete control over the choice of both X and Y.

As with block ciphers and stream ciphers, hash functions are susceptible to brute force attacks. To brute-force a
pre-image or second pre-image of a hash value Z, an attacker would repeatedly pick input strings and hash those
strings until one of those strings hashes to Z. If the target hash function produces an n-bit hash value, an attacker
would need to try nearly 2^n strings before finding one that hashes to Z.

To brute-force a collision, an attacker would have to pick random input strings repeatedly until two of those
strings hash to the same value. Because of a mathematical property dubbed the birthday paradox, an attacker
would have to hash approximately 2^(n/2) strings before finding two strings that hash to the same value. This
means that, to make these brute force attacks impractical, secure applications should use hash functions with
hash sizes of 160 bits or more. (Again, this number will increase as the processing power and memory of readily
available computing platforms increase.)

The Birthday Paradox


The birthday paradox is a problem of probability. Given a room with more than 23 randomly selected
people, the probability that 2 people in the group have the same birthday is 50 percent. In other
words, there is a 50/50 chance that 2 of those people have the same birthday. The paradox comes in
because we normally consider only matching our birthday to someone else's, which is a 1/365 chance
of matching, or .27 percent. Put another way, the probability of failure is (1 – .27%) = 99.73%. Each
additional person we ask increases the chance by only 1/365 (an addition function), so asking 23
people increases the chance of matching to only 1/365+1/365 (repeated 21 more times), which equals
23/365, or nearly 6.3 percent—a very low percentage.

However, when you start comparing pairs in the group (that is, looking for any matching birth date),
it becomes a different story because each person is comparing her birth date against the other 22
people (a multiplication function), or (23*22)/2 = 253 pairs. Although the probability of failure for any single
pair is 99.73 percent, over all the pairs it is the single-pair probability of failure raised to the number of pairs,
or .9973^253 = .5046. The probability of success is therefore 1 – .5046 = .4954, or approximately 49.5
percent.

In a large set of possible strings, finding a second string that hashes to the same value as a given
string has a very low-percentage probability of success. Finding a pair of strings that hash to the same
value within a large set of possible strings is not as difficult a task. However, if the set of possible
strings is large, you still have to try many strings before the probability of success becomes
reasonable.

Recently, the U.S. Government published three additional hash algorithms: SHA-256, SHA-384, and SHA-512.
As their names imply, these new hash algorithms respectively produce 256-bit, 384-bit, and 512-bit hash values.
Because the strength of an n-bit hash function against a brute force collision attack is about 2^(n/2) and the strength
of a block cipher with a k-bit key against a brute force key search is about 2^k, applications requiring block ciphers
with k-bit keys should use hash functions with 2k-bit hash results. For example, an application using 128-bit key
AES should use SHA-256, and an application using 256-bit key AES should use SHA-512.

Symmetric Protocols
Earlier, we said that a block cipher's encryption algorithm takes a key and an n-bit plaintext and produces an n-
bit ciphertext. How do you use a block cipher to encrypt more than n bits? The answer is to wrap the block cipher
primitive within a higher-level protocol. This higher-level protocol is called an encryption mode.

The Electronic Code Book (ECB) mode is perhaps the most intuitive way to use a block cipher to encrypt more
than n bits of plaintext. Figure 6.7 depicts this process:

Figure 6.7. The ECB encryption mode


a. Pad the plaintext so that it is a multiple of n bits.

b. Split the resulting padded plaintext into n-bit chunks.

c. Encrypt each chunk independently.

Because the ECB mode is so intuitive, many applications use it to encrypt their data. Unfortunately, the ECB
mode is also insecure; it leaks information about the encrypted plaintext. For example, recall that a single
plaintext block always encrypts to the same ciphertext block (using a fixed key). This means that the message
ABAB with a block cipher in ECB mode (where A and B represent n-bit plaintext blocks) encrypts to XYXY
(where X and Y represent n-bit ciphertext blocks). An attacker looking at the ciphertext will learn that the first
plaintext block is the same as the third and that the second is the same as the fourth. This gives the attacker
information about the plaintext.

Another aspect of the ECB mode that the developer should be aware of is that n-bit blocks can be moved around
within the message and still decrypt properly, although the message's meaning is changed. This is not a flaw in
ECB itself, though; it is a problem with how applications use it. Encryption modes are tools to accomplish
privacy, not integrity. Unfortunately, many applications mistakenly assume that encryption protocols will
automatically prevent an attacker from tampering with encrypted messages. (More on this in the section
"Common Problems.")

CBC

The Cipher Block Chaining (CBC) mode is a more secure way to encrypt long plaintexts. Figure 6.8 depicts this
process:

Figure 6.8. The CBC encryption mode


a. Pad the plaintext so that it is a multiple of n bits.

b. Split the resulting padded plaintext into n-bit chunks.

c. Pick a random n-bit value called an initialization vector (IV).

d. XOR the first plaintext block with the IV.

e. Encrypt the resulting value using the block cipher.

f. The resultant ciphertext block is output and XORed with the next plaintext block.

g. Steps e and f are repeated until the end of the plaintext message is reached.

h. The message sent is both the random IV and the ciphertext message.

The important aspect of the CBC mode is that the same plaintext block or message always encrypts to different
ciphertext blocks or messages (provided that a different IV is chosen for each encryption).
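
A hedged sketch of CBC mode using the J2SE javax.crypto API follows. The provider handles the chaining
internally; the application's responsibilities are to pick a fresh random IV for every message and to send the IV
along with the ciphertext. The algorithm name, key size, and message are illustrative.

import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

public class CbcSketch {
    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] plaintext = "Same plaintext, different ciphertext each time".getBytes("UTF-8");

        // Step c: pick a random IV (one AES block, 16 bytes) for this message.
        byte[] ivBytes = new byte[16];
        new SecureRandom().nextBytes(ivBytes);
        IvParameterSpec iv = new IvParameterSpec(ivBytes);

        Cipher cbc = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cbc.init(Cipher.ENCRYPT_MODE, key, iv);
        byte[] ciphertext = cbc.doFinal(plaintext);
        // Step h: transmit both ivBytes and ciphertext; the IV need not be secret.

        cbc.init(Cipher.DECRYPT_MODE, key, iv);
        byte[] recovered = cbc.doFinal(ciphertext);
        System.out.println(new String(recovered, "UTF-8"));
    }
}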

CTR

Counter (CTR) mode is another secure use of a block cipher to encrypt a long plaintext message. There are two
general CTR mode constructions: randomized and stateful. The idea behind CTR mode is to use a block cipher as
a stream cipher to generate a keystream. This keystream is XORed with the plaintext to produce the ciphertext.
Both versions of CTR mode begin with an n-bit counter or seed i. Figure 6.9 depicts the process of a stateful
CTR mode:

Figure 6.9. The stateful CTR mode


a. The plaintext message is broken into j blocks of n bits each. The last block is padded so that the total is a
multiple of n bits.

b. Initialize i to some value (usually 0).

c. Encrypt i with a block cipher using key k.

d. XOR the output with the jth plaintext block.

e. Append the result to ciphertext results, and increment i and j.

f. Repeat steps c–e for all j blocks.

The initial counter i must be included with the ciphertext message or must be agreed on ahead of time by the
sending and receiving parties. Of course, the sending and receiving parties should be the only ones who know the
shared secret k.

The two versions of CTR mode differ only in how the value i is chosen. In the randomized version, a different
random i is chosen for each message. In the stateful version, i is set to 0. With both versions every time a new
block is encrypted, i is incremented. The value or state of i is maintained between plaintext messages. As with
any stream cipher, care must be taken so that the keystream is not reused. In the stateful version, this is
accomplished by ensuring that the counter i never wraps or by changing the key k before i wraps. In the
randomized version, this is accomplished by changing the encryption key with enough frequency to ensure that
the probability that the keystream will repeat is very low.
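
Because the counter construction is easy to get wrong, here is a hedged sketch that follows the steps above
literally: the block cipher (AES via javax.crypto, used in raw ECB mode purely as the underlying primitive)
encrypts successive counter blocks, and the results are XORed with the plaintext. Real applications should use a
provider-supplied CTR implementation where one is available; this sketch ignores padding and counter
wrap-around for brevity, and its names are illustrative.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class CtrSketch {
    // Encrypts (or, run again, decrypts) data by XORing it with a keystream
    // built from AES encryptions of successive counter blocks.
    static byte[] ctr(SecretKey key, long initialCounter, byte[] data) throws Exception {
        Cipher aes = Cipher.getInstance("AES/ECB/NoPadding"); // raw block cipher primitive
        aes.init(Cipher.ENCRYPT_MODE, key);

        byte[] out = new byte[data.length];
        long counter = initialCounter;
        for (int offset = 0; offset < data.length; offset += 16) {
            // Step c: encrypt the counter, encoded in the last 8 bytes of a 16-byte block.
            byte[] counterBlock = new byte[16];
            for (int i = 0; i < 8; i++) {
                counterBlock[15 - i] = (byte) (counter >>> (8 * i));
            }
            byte[] keystream = aes.doFinal(counterBlock);

            // Step d: XOR the keystream with this plaintext chunk.
            int chunk = Math.min(16, data.length - offset);
            for (int i = 0; i < chunk; i++) {
                out[offset + i] = (byte) (data[offset + i] ^ keystream[i]);
            }
            counter++;                                         // step e
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] plaintext = "CTR mode turns a block cipher into a stream cipher".getBytes("UTF-8");
        byte[] ciphertext = ctr(key, 0L, plaintext);
        byte[] recovered = ctr(key, 0L, ciphertext);           // the same operation decrypts
        System.out.println(new String(recovered, "UTF-8"));
    }
}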

MAC

Privacy and integrity are two goals of applied cryptography. Encryption protocols are tools to accomplish
privacy. We have noted that encryption does not accomplish integrity. This deficiency requires another class of
protocols: message authentication codes (MACs). MACs are algorithms designed to protect the integrity of
messages. If the transmissions are MACed, the receiver can detect whether a malicious attacker has modified a
message. Most applications that require secure communications should use both an encryption protocol and a
MAC.

A MAC consists of two parts: a tagging algorithm and a verification algorithm. The tagging algorithm attaches a
MAC tag to a message, and the verification algorithm verifies a message's MAC tag. As with symmetric
encryption modes, there are numerous MACs, some based on block ciphers and others on hash functions. The
important point is that the chosen MAC needs to be resistant to tampering and sensitive to any modification to
the message—cryptographically secure.
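
The javax.crypto.Mac class wraps the tagging side of this in a single call; verification is simply recomputing the
tag and comparing. The sketch below uses HMAC with SHA-1, one common hash-based MAC, with an
illustrative message, and it assumes that sender and receiver already share the MAC key.

import java.util.Arrays;
import javax.crypto.KeyGenerator;
import javax.crypto.Mac;
import javax.crypto.SecretKey;

public class MacSketch {
    public static void main(String[] args) throws Exception {
        SecretKey macKey = KeyGenerator.getInstance("HmacSHA1").generateKey();
        byte[] message = "Deliver 500 units".getBytes("UTF-8");

        // Tagging algorithm: the sender computes the tag and sends it with the message.
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(macKey);
        byte[] tag = mac.doFinal(message);

        // Verification algorithm: the receiver recomputes the tag over what arrived
        // and compares it with the tag that came with the message.
        mac.init(macKey);
        byte[] expected = mac.doFinal(message);
        System.out.println(Arrays.equals(tag, expected));   // true if the message is intact
    }
}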

Asymmetric Cryptography
This section deals with asymmetric cryptography, cryptographic methods in which the encryption key is different
from the decryption key. With this type of cryptography, the decryption process is not simply an inverse of the
encryption process but a totally different mathematical transformation. Asymmetric cryptography is a very
powerful technology because a key need not be shared between two communicating parties. This eliminates the
concern of how a common key is shared.

Asymmetric Primitives
As with symmetric cryptography, asymmetric primitives are the building blocks for asymmetric cryptographic
protocols and for secure asymmetric applications. Recall that although cryptographic primitives by themselves
do not provide useful functionality, an understanding of these primitives is helpful in understanding the protocols.

RSA

RSA is perhaps the most well-known asymmetric or public key technique. There are two families of RSA
primitives: for encryption and for digital signatures. Although these two families are fundamentally the same
(their underlying basic algorithms are identical), we distinguish between these two families because they
accomplish different goals.

The details are not necessary for understanding the uses and importance of the RSA algorithms. Although you
are encouraged to read this section, grasping the mathematics is not necessary. Many software libraries handle
these details transparently, and most applications developers never have to work directly with the RSA
primitives.

Before you can use the RSA primitives, you must generate public and private RSA keys. The process to generate
these keys follows:

1. Select two large random prime numbers, and name them p and q.

2. Set the values n = pq and φ = (p – 1)(q – 1).

3. Pick a random integer e greater than 1 and less than φ so that the only common divisor between e and φ is 1.

4. Derive the integer d between 1 and φ so that ed ≡ 1 mod φ (that is, the remainder of ed divided by φ is 1).

5. The public key consists of the integers n and e, and the private key consists of the integers n and d.

When people refer to the size of an RSA key, they mean the length of the integer n in bits (that is, a 1,024-bit
RSA key means an RSA key where n is 1,024 bits long). If you are astute, you may have noticed that the
preceding procedures deal with integers. Likewise, the RSA encryption and signature primitives operate on
integers. This, in and of itself, is not very useful. How often do you need to send or verify only integer data?
Therefore, the higher-level protocols discussed in the next section handle the conversion between the data to be
sent and an integer message, on which the RSA primitives operate.

To encrypt an integer m with the public key (n, e) resulting in the integer result c (where m and c are integers between 0 and n – 1), you compute c = m^e mod n.

To decrypt an integer c retrieving the integer m with the corresponding private key (n, d), you compute m = c^d mod n.

The RSA signature and verification primitives are actually the encryption and decryption primitives in the reverse order. Therefore, to sign or produce a signature s of an integer m, both between 0 and n – 1, encrypt m with the private key (n, d): s = m^d mod n.

To verify the signature s, decrypt s with the public key (n, e): m = s^e mod n.

Because encryption and signature schemes are designed to meet different goals, and to avoid confusion, it is best
to think of the encryption and signature families of RSA primitives as distinct. Additionally, although these
primitives serve as the foundations for secure encryption and signature algorithms, by themselves they are
insecure. Consequently, applications should not directly invoke the preceding RSA primitives. Rather,
applications should use the RSA-based encryption and signature protocols discussed in the next section.
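
To make the arithmetic in steps 1 through 5 concrete, the following toy Python walk-through uses deliberately tiny primes. It exercises the raw primitives only; as just noted, real applications must use full-size keys and the protocols described in the next section.

    p, q = 61, 53                 # step 1: two "large" primes (tiny here)
    n = p * q                     # step 2: n = pq = 3233
    phi = (p - 1) * (q - 1)       # step 2: phi = 3120
    e = 17                        # step 3: shares no divisor with phi except 1
    d = pow(e, -1, phi)           # step 4: d = 2753, so that e*d mod phi == 1

    m = 65                        # an integer message with 0 <= m < n
    c = pow(m, e, n)              # encryption primitive: c = m^e mod n
    assert pow(c, d, n) == m      # decryption primitive: m = c^d mod n

    s = pow(m, d, n)              # signature primitive: s = m^d mod n
    assert pow(s, e, n) == m      # verification primitive: m = s^e mod n

(The modular-inverse form of pow requires Python 3.8 or later.)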

Discrete Logarithms

Another class of asymmetric primitives is based on the discrete logarithm and the Diffie-Hellman problems.
These primitives work because of the difficulty computing the logarithm of elements in certain groups. (A group,
as referenced here, is a set of elements related to each other in a mathematical way.)

The discrete logarithm problem is this: Given a prime p, a generator g of the multiplicative group Zp*, and an element x in Zp*, find the least positive integer a so that x = g^a mod p.

Now, if that made perfect sense to you, you can skip this explanation. If you responded with a "Huh?," read on.
Okay, what does it mean in layman's terms? We will break down each portion:

1. A prime p in cryptographic terms is usually a very large number divisible by only 1 and p.

2. Recall that modulo, or mod, produces the remainder after an integer division operation (that is, 20
modulo 5 = 0, 17 modulo 5 = 2, and 2 modulo 5 = 2).

3. A generator g of the multiplicative group Zp* says that there is a generator (function) g^a modulo p, where a may be in the range 0, 1, . . . p – 2, that produces all the elements in Zp*. The elements may not be produced in order, but all elements are produced when a takes on values through the entire range.

4. The multiplicative group Zp* is the set or group of positive numbers that is closed under multiplication
with respect to the modulo of a prime p, or the set of positive numbers {1, 2, . . . p – 1}. Any member,
when multiplied by another member, produces a product. This product modulo p falls within the set.
For every element x in {1,2, . . . p – 1}, there is an inverse y in the set, so xy mod p = yx mod p = 1.

5. Find the least positive integer a so that x = g^a mod p.

The discrete logarithm problem can be stated as follows: Given an integer a, you can easily compute x = g^a, but no one knows how to efficiently compute a, given x and g. In fact, this is regarded as a very hard problem!

The Diffie-Hellman problem is this: Given a prime p, a generator g of Zp*, and elements g^a mod p and g^b mod p, find g^ab mod p.

Examining the problem from a different perspective, given only g^a and g^b (that is, you are not given a or b), it is very hard to compute g^ab. Of course, given g^a and b (or given g^b and a), it is easy to compute g^ab. The difficulty of computing g^ab given only g^a and g^b serves as the basis for many asymmetric cryptographic protocols.

For traditional discrete logarithms, the private key is an integer s between 1 and p – 2, and the public key is g^s mod p. The prime p and generator g are not secret—they are known to all parties. When people refer to the size
of a traditional discrete logarithm key, they mean the length of the prime p in bits (for example, a 1,024-bit
discrete logarithm key means a key where p is 1,024 bits long).

Numerous discrete logarithm-based primitives exist. There are primitives for secret value derivation (so that two
parties can compute a shared secret symmetric key), for digital signatures (so that an individual can electronically
sign a message), and for signature verification (so that a recipient can verify a signature on a message). Each of
these primitives includes many exponentiations (raising g or another element in Zp* to some power).

Elliptic Curve Discrete Logarithms

Elliptic curve–based cryptographic primitives are very similar to the discrete log primitives, except that the
underlying mathematical group is an elliptic curve, instead of the positive integers modulo a prime number. Just
as with the preceding group, you can define the discrete logarithm problem for elliptic curve groups: Given a
cyclic subgroup G of an elliptic curve, a generator g of G, and an element x in G, find the least positive integer a so that x = g^a.

For elliptic curve cryptography, the generator g and the elliptic curve are publicly known. The private key
consists of an integer s between 1 and |G| – 1, and the public key consists of the curve point g^s. When people refer to the size of an elliptic curve key, they mean the number of bits necessary to represent the subgroup G. For example, if G has about 2^160 elements, the key size is 160 bits.

Generally speaking, computing the discrete logarithm for elliptic curve groups is more difficult than for Zp*. This
means that you can use smaller elliptic curve groups and achieve the same effective level of security as with the
larger Zp* groups. In practical terms, this translates into smaller key sizes and less network traffic. Elliptic curve
implementations can also take up less space in dedicated hardware than traditional discrete logarithm or RSA
cryptographic primitives. Therefore, the primary motivation for using elliptic curves is efficiency. Because
efficiency is of greater concern with wireless systems than wired, this explains the interest in using elliptic curve
algorithms with wireless systems.

Other Public Key Cryptographic Primitives

Other public key cryptographic primitives are in use or under development. Two that have drawn the interest of
the wireless community are NTRU and XTR. Although relatively new, both have gained considerable attention
throughout the cryptographic communities. The reason for bringing these to your attention is not to discuss
further the details behind these two systems but to point out that the field is changing and which primitive will
eventually become adopted as the standard for wireless systems remains to be seen. Further, as computing power
and resources available on wireless devices increase, we are likely to see a progression to other, more secure
algorithms in the future.

Asymmetric Protocols
Asymmetric protocols, like symmetric protocols, provide a method of utilizing asymmetric primitives to create
cryptographic procedures to enable protected communications between participating parties. We shall explore
four common asymmetric protocols that utilize the primitives as building blocks for useful functionality:
Encryption, Digital Signatures, Key Establishment, and Certificates.

Encryption

To send a secret message, the sender encrypts the message with the recipient's public key, and the recipient
decrypts the message with her private key. Simple, right? Not quite. The most common asymmetric encryption
algorithm is RSA. However, recall that the RSA encryption primitive by itself is insecure. In particular, RSA
works on integers. Most useful messages are noninteger, so conversion of the message to integer form must be
done. The RSA encryption primitive must be "wrapped" in a more sophisticated and secure protocol. For RSA,
such a protocol is the Optimal Asymmetric Encryption Padding (OAEP) mode.

To encrypt a message M with the OAEP mode, depicted in Figure 6.10, you do the following:

Figure 6.10. Encrypting with the RSA OAEP mode

1. Apply an OAEP-specific encoding to M to obtain an OAEP-encoded message (this step is critical).

2. Convert the encoded message to an integer m.

3. Apply the RSA encryption primitive to obtain the integer c.

4. Convert the integer c back to a noninteger ciphertext C.

To decrypt ciphertext C with the OAEP mode, depicted in Figure 6.11, you do the following:
Figure 6.11. Decrypting with the RSA OAEP mode

1. Convert C to an integer c.

2. Apply the RSA decryption primitive to obtain the integer m.

3. Convert m from an integer to an OAEP-encoded message.

4. Apply an OAEP-specific decoding operation to obtain the plaintext M.

We caution developers that many cryptographic libraries support older and insecure versions. Further, there is no
guarantee that since the writing of this book another protocol has not superseded OAEP. Developers must ensure
that they know which mode is being used in libraries they are using and that these libraries contain the latest and
most secure protocols and modes.
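
As one example of what such library use can look like, the sketch below assumes the third-party Python cryptography package (a recent version); the package performs the OAEP encoding, integer conversion, and RSA primitive internally, so the application never touches the raw primitive.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    ciphertext = public_key.encrypt(b"meet in the NitroSoft lobby", oaep)
    assert private_key.decrypt(ciphertext, oaep) == b"meet in the NitroSoft lobby"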

Digital Signatures

Another big advantage of public key cryptosystems over private key cryptosystems is that public key
cryptosystems enable parties to create digital signatures of electronic messages. A digital signature is analogous
to a physical signature. Using the office complex case study, if Louis signs a message, NitroSoft (or anyone else)
can examine the message and be reasonably assured that Louis actually signed it (and that the signature was not
forged by Kathleen). The presence of the signature provides evidence that Louis saw and agreed to the contents
of the message before it was signed. In this way, digital signatures can also provide nonrepudiation. (Recall that
nonrepudiation means that, after signing a message, the signer cannot claim that he did not sign that message.)

As with the asymmetric encryption protocols, secure signature protocols build on top of the corresponding
primitives. Digital signature schemes consist of two algorithms: a signature generation algorithm and a signature
verification algorithm. Most applications that use digital signatures use digital signatures with an appendix. This
means that the digital signature schemes attach a signature to the end of the signed message and require the
signed message as input to the verification algorithm (some digital signature schemes reconstruct the original
message from the signature). The most common digital signature algorithms are DSA (the U.S. Government's
Digital Signature Algorithm), an elliptic curve version of DSA, and RSA-based signature schemes.
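
For illustration, here is a signature-with-appendix sketch using an RSA-based scheme (RSA-PSS) from the same assumed cryptography package; the message and variable names are ours.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    message = b"Louis agrees to the NitroSoft contract terms."
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # Signature generation: the signature is sent along with (appended to) the message.
    signature = private_key.sign(message, pss, hashes.SHA256())

    # Signature verification: requires the message, the signature, and the public key.
    try:
        private_key.public_key().verify(signature, message, pss, hashes.SHA256())
        print("signature verified")
    except InvalidSignature:
        print("signature rejected")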

Key Establishment

Key establishment is the process of using asymmetric techniques to agree on a shared symmetric key that is then
used to communicate securely. Although the previously discussed asymmetric encryption and digital signature
protocols can be used to communicate securely over an untrusted network, asymmetric cryptosystems are
typically much less efficient than symmetric cryptosystems. Consequently, almost all applications use a hybrid
approach—a combination of symmetric techniques (for speed) and asymmetric techniques (to establish secure
communications without any a priori shared secret).

There are two general classes of key establishment techniques: key transport protocols and key agreement
protocols. In a key transport protocol, a session key is created and sent to the recipient. For example, Louis could
create a session key, sign it, and then encrypt the key and the signature with NitroSoft's public key. Upon receipt,
NitroSoft would decrypt the session key and signature, verify the signature (to ensure that the key came from
Louis), and use the session key to communicate securely via a symmetric protocol.

In a key agreement protocol, both parties partake in the derivation of a shared session key. Many popular key
agreement protocols are based on the discrete logarithm and the elliptic curve discrete logarithm problems. Here
is a simple example:

a. Louis and NitroSoft publicly agree on a prime p and a generator g of Zp*.

b. Louis picks a non-negative integer a less than p – 1 and gives g^a mod p to NitroSoft.

c. NitroSoft also picks a non-negative integer b less than p – 1 and gives g^b mod p to Louis.

d. Louis computes (g^b)^a = g^ab and NitroSoft computes (g^a)^b = g^ab. The result, g^ab, becomes the shared secret.

Although this scheme is not perfect, it does present the basic ideas behind discrete logarithm-based key
agreement protocols.
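
A toy run of steps a through d, with a deliberately small prime so the numbers are easy to follow (real deployments use primes of 1,024 bits or more):

    p, g = 23, 5                    # step a: public prime p and generator g of Zp*

    a = 6                           # step b: Louis's secret exponent
    A = pow(g, a, p)                # Louis sends g^a mod p = 8

    b = 15                          # step c: NitroSoft's secret exponent
    B = pow(g, b, p)                # NitroSoft sends g^b mod p = 19

    # step d: each side raises the other's public value to its own secret exponent
    louis_secret = pow(B, a, p)     # (g^b)^a mod p
    nitrosoft_secret = pow(A, b, p) # (g^a)^b mod p
    assert louis_secret == nitrosoft_secret == 2   # the shared secret g^ab mod p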

There are many desirable properties for key establishment protocols. One such property is perfect forward
secrecy. This means that if an attacker compromises the private key of one of the individuals involved in a
communication, the attacker should not be able to learn any of the prior session keys. The preceding example
does not have perfect forward secrecy. If an attacker learns Louis's private value a, she can compute (g^b)^a and obtain the symmetric key. An authenticated key establishment protocol is a key establishment protocol in which
the parties can be assured that they performed the key establishment protocol with each other (and not an
attacker).

Certificates

A discussion of asymmetric protocols should not neglect one significant problem— how do two parties know
that they really have each other's public keys? The commonly accepted solution to this problem is to use
certificates. A certificate is an attempt to correlate a user's public key with her real-world identity. At a
minimum, a certificate contains a user's public key and identifying information about the user. The certificate is
also signed by a trusted third party. This trusted third party is called a certificate authority (CA). By verifying the
CA's signature on the user's certificate before communicating with her, you are somewhat assured that you are
communicating with the user and not an impostor. A public key infrastructure (PKI) is a system with a certificate
authority and a clearly defined way of obtaining, validating, and deleting certificates.

Common Problems
There tend to be two common problems with the use of cryptography in secure applications. First, some
applications developers mistakenly believe that simply incorporating strong cryptography into their applications
makes them secure. Second, some applications developers invent their own (proprietary) cryptographic
algorithms. This section addresses both these problems.

Cryptography by Itself
Designing a secure application is about making decisions, and these decisions often affect how the application
uses cryptography. If an application uses cryptography incorrectly (for example, using an encryption algorithm
when it should use a message authentication algorithm), it will probably be insecure. The design decisions of
applications developers also affect whether there will be holes in an application's design that allow an attacker to
attack the application itself (instead of the cryptography). An example of a design flaw would be an e-mail
encryption program that uses strong cryptography to encrypt a user's e-mail but stores the user's pass phrase
unencrypted on the user's disk. Another common mistake is to assume that an attacker will not reverse-engineer
an application to extract its embedded cryptographic keys or learn the details behind its proprietary cryptographic
protocol. An example of this is the copy protection schemes employed by game manufacturers. As soon as a new
game or protection scheme is released (and sometimes sooner), there is a crack, or some site has an unprotected
version available for download.

The point is that although cryptography is a powerful tool, the designer of an application must not consider
cryptography a silver bullet. Application security is more than just cryptography and must be designed in from
the beginning.

Proprietary Cryptographic Protocols


A common rule-of-thumb phrase in the security industry is Never roll your own cryptography (unless, of course,
you are a cryptographer, and even then, never use what you have designed until others have thoroughly evaluated
it). Although this now trite expression is laughed off, it is fundamentally true.
Companies invent their own proprietary algorithms for several reasons. Some believe that cryptography is not
that difficult or that they can invent revolutionary new protocols more secure than existing solutions. Others
believe that an attacker will have a more difficult time breaking the company's secret protocols than published,
well-known, academically accepted protocols.

Although there is some truth to both these beliefs, they can be quite dangerous. (Unfortunately, it is the end user
who is often affected and not the developers.) To address these beliefs, we would like developers to consider the
following.

Companies and developers who create their own cryptographic protocols seldom have seasoned cryptographers
review their designs. This is especially true when companies keep their protocols secret. Consequently, unless a
company employs its own world-class cryptographers, its proprietary protocols may not be as secure as it
believes. Likewise, although the underlying cryptographic primitive or protocol design may be secure, the
implementation may be flawed, negating any security or advantage in using the proprietary protocol.

A well-known mathematician/cryptographer named Kerckhoffs said something to the effect that if an algorithm is
secure against an attacker who does know the inner workings of that algorithm, that algorithm will be secure
against an attacker who does not know the inner workings of that algorithm. We further observe that one of the
best ways to understand the security of an algorithm is to have seasoned cryptographers try to break it (or try to
prove something about the security of that algorithm with respect to a trusted primitive). If, after a reasonable
effort, the world's best public cryptographers cannot break that algorithm, you can have confidence that the
algorithm is secure.

With this in mind, if a publicly accepted algorithm is secure against the world's best public cryptographers,
obviously, that algorithm should be secure against the casual attacker. Furthermore, the preceding observations
suggest that application developers should choose older, more analyzed algorithms over flashy new algorithms
that have yet to be proven. This is provided that the older algorithms are secure, given your situation and
intended use.

Despite the preceding advice (which is not being presented here for the first time by any means), the
development industry is still infused with insecure cryptographic algorithms. As you might expect, people
eventually get around to breaking these algorithms. The heavily publicized flaw with 802.11's Wired Equivalent
Privacy (WEP) protocol is an example of why the preceding advice should be heeded. Some of the flaws with
WEP are classic, and almost any experienced cryptographer could have identified and avoided them. If the
authors of the 802.11 Specification had properly involved cryptographers during the design of WEP, these issues
could have been identified and resolved before the specification was implemented and used commercially.

Common Misuses
In addition to creating their own protocols, one of the most common mistakes people make when designing
secure applications is not to use cryptographic protocols the way they were intended. This might be analogous to
not following the directions included with a house smoke detector or burglar alarm—the smoke detector will not
be of much use if the batteries are not replaced and the burglar alarm is not connected properly to the siren and
phone line. The same thing is true for cryptography— cryptographic protocols come with rules that must be
followed.

The following is a list of common misuses of cryptographic protocols. Although not exhaustive, it will help
wireless and application developers understand and avoid some of the most common pitfalls.

Key Generation

When we first introduced encryption, we said that an encryption protocol consists of two parts: an encryption
algorithm and a decryption algorithm. That is not completely true. It actually consists of three parts: an
encryption algorithm, a decryption algorithm, and a key generation algorithm. The same can be said for many
other protocols. For example, a message authentication code actually consists of a tagging algorithm, a
verification algorithm, and a key generation algorithm.

Although we did not mention the key generation algorithm earlier, the key generation algorithm is among the
most critical portions of a secure cryptographic protocol. To see this, recall that one way for an attacker to break
an encryption protocol is to find the decryption key. In the symmetric setting, if a k-bit encryption key was
chosen randomly, the attacker might have to guess 2^k keys before she stumbles on the right one. Thus, in the
symmetric setting, we argued that the strength of a secure encryption algorithm correlates roughly to the size of
that algorithm's encryption key.

Unfortunately, that is true only if the encryption key was generated correctly or randomly. What if the encryption
key was not generated randomly? That is, what if certain encryption keys are more probable than others? For
example, what if the encryption key depends on the user (perhaps as a function of the user's social security
number or birthday)? What if the encryption key consists of a number of zeros followed by a few random bits? In
both cases, an attacker would be able to guess the encryption key in far fewer than 2^k tries. This means that although an encryption algorithm with randomly generated keys might be secure (because an attacker can't possibly make 2^k guesses), the algorithm may not be secure in practice (because the user's application doesn't
properly generate random keys). Developers and users should be aware that if anything other than random
numbers is used, the security of the encryption is adversely affected.
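
The contrast is easy to show in code. The helper names below are ours; the point is simply that a key drawn from a small or guessable space is weak no matter what its nominal length is, whereas a key drawn from a cryptographic source uses the full 2^k space.

    import hashlib
    import secrets

    def good_key(bits: int = 128) -> bytes:
        # every one of the 2^128 possible values is equally likely
        return secrets.token_bytes(bits // 8)

    def bad_key_from_birthday(birthday: str) -> bytes:
        # only a few tens of thousands of plausible birthdays exist, so an
        # attacker can simply try them all, regardless of the 128-bit length
        return hashlib.sha256(birthday.encode()).digest()[:16]

    print(good_key().hex())
    print(bad_key_from_birthday("1975-04-01").hex())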

Randomization

Key generation algorithms are not the only algorithms that require random numbers; many cryptographic
protocols use random numbers to circumvent certain attacks. For example, the CBC and stateless CTR
encryption modes rely on random numbers to prevent a single plaintext from always encrypting to the same
ciphertext. To use these protocols properly, developers must ensure that applications correctly supply the
protocols with random numbers.

Any algorithm for generating random numbers is only a pseudo-random number generator (PRNG) because
random behavior cannot, by definition, be captured by an algorithm. Worse, many of the PRNGs provided as
library function calls are not random at all. They are quite predictable. Developers should ensure that, if a
random number is called for, the source of this number is known and its randomness is acceptable for the usage.
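
A quick demonstration of how predictable a general-purpose library PRNG can be: two generators seeded with the same value (a timestamp, say) produce identical output, so an attacker who can guess the seed can reproduce every key or nonce derived from it. Cryptographic sources such as Python's secrets module or os.urandom do not share this weakness.

    import random

    attacker = random.Random(1034567890)   # guessed seed, e.g. a Unix timestamp
    victim = random.Random(1034567890)     # the same seed used by a careless application

    victim_nonce = victim.getrandbits(128)
    attacker_guess = attacker.getrandbits(128)
    assert victim_nonce == attacker_guess  # completely predictable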

Key Management

As its name implies, key management is concerned with the way applications handle cryptographic keys. Key
management addresses issues such as key generation, key agreement, and key lifecycle. Each cryptographic
protocol has different requirements for the way its keys are handled.

There are some general (though often ignored) principles for key management. One principle is that the same
cryptographic key should not be used for multiple purposes. For example, you should not use the same key pair
for the RSA encryption primitives as you use for the RSA signature primitives. Similarly, the same symmetric
key should not be used for symmetric encryption and symmetric message authentication code. Multiple usages
may inadvertently leak information useful to an attacker.

A similar principle is that multiple parties should not share the same key. Although this principle may seem
intuitive (because the more parties that know a key, the more likely it is for an attacker to learn the key), many
modern applications (including most WEP installations) still distribute identical keys to multiple parties.

Another often ignored principle is that keys should be changed over time. Regularly changing keys limits an
attacker's ability to learn about the key by placing a time period on a key's usability. For example, if a key can be
learned by capturing a certain number of WEP packets, the key should be changed before sending that many
packets. With poorly implemented schemes, this period of time may be very short indeed, rendering the
encryption practically useless.

Keystream Reuse

When introducing stream ciphers, we emphasized that you should be extremely careful not to reuse a keystream.
Although most people are aware of this restriction, numerous applications still reuse a keystream. If an
application reuses a keystream, an attacker could learn information about the plaintexts encrypted with the reused
portions of the keystream. As a visible example of this problem in common applications, the 802.11 WEP
Specification does not properly prevent keystream reuse.
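
The danger is easy to see: XORing two ciphertexts that were produced with the same keystream cancels the keystream and leaves the XOR of the two plaintexts. The messages below are invented, and a random keystream stands in for a stream cipher's output, but the arithmetic is the same.

    import secrets

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    keystream = secrets.token_bytes(32)               # incorrectly reused for two messages
    p1 = b"the password is wep-is-broken".ljust(32)
    p2 = b"meet louis at the loading dock".ljust(32)
    c1, c2 = xor(p1, keystream), xor(p2, keystream)

    # The attacker never sees the keystream, yet it drops out entirely:
    assert xor(c1, c2) == xor(p1, p2)
    # Knowing (or guessing) one plaintext now reveals the other outright:
    assert xor(xor(c1, c2), p1) == p2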

Encryption versus Authentication


When most people think about cryptography, they think about encryption. Although there may be a historical basis for equating cryptography with encryption, this is no longer the case. Modern cryptography is concerned
with more than just encryption; it is concerned with other concepts as well (such as integrity and authentication).

This bias of regarding encryption as synonymous with cryptography causes many people to use encryption when
they should use some other cryptographic protocol. For example, many people make statements like this: "We'll
encrypt this message. If the recipient of the encrypted message can decrypt it, the recipient will know that the
message really came from us." Unfortunately, this is not necessarily correct.

The problem comes down to following a protocol's directions—the directions for encryption protocols say that
the protocols are tools to accomplish privacy. The directions for authentication protocols say that they are tools
to accomplish authenticity, and the directions for integrity protocols say that they are tools to accomplish
integrity. By misusing encryption to provide message integrity, application developers can unintentionally open
their applications to attack. This problem is exemplified in the 802.11 WEP Specification: Because the WEP
protocol does not include any formal message integrity code, an attacker can make controlled, precise, and
undetectable modifications to WEP-encrypted packets.
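
A small sketch of that kind of attack against a stream- or CTR-style cipher: flipping chosen bits of the ciphertext flips the same bits of the plaintext, and with no message integrity code the receiver has no way to notice. The message format and values are invented for the example.

    import secrets

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    keystream = secrets.token_bytes(16)
    plaintext = b"PAY LOUIS $0100 "
    ciphertext = xor(plaintext, keystream)

    # The attacker knows the message layout and XORs in the change she wants.
    delta = xor(b"PAY LOUIS $0100 ", b"PAY KATHY $9999 ")
    tampered = xor(ciphertext, delta)

    print(xor(tampered, keystream))   # b'PAY KATHY $9999 ' -- decrypts cleanly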

The Man-in-the-Middle Attack

When introducing the asymmetric cryptographic protocols, we mentioned the motivation for certificates:
Certificates enable a user to verify that her copy of someone's public key is legitimate (not a public key created
by an attacker). Unfortunately, many applications fail to verify properly the authenticity of other entities' public
keys. When this happens, an attacker can mount a man-in-the-middle attack. In the office complex case study,
Kathleen can pretend to be Louis to NitroSoft and NitroSoft to Louis. In this way, she can undetectably listen to
and modify the communications between Louis and NitroSoft.

Buffers and Information Leakage

One final area, often overlooked by developers, is the handling of the plaintext information after it has been
encrypted. If plaintext information is left in memory or on disk, it may be accessible to an attacker, either directly
through physical access to the user's machine or by leakage of remnant information in buffers used as temporary
storage during the encryption or decryption process. If these memory areas are not cleared after processing the
plaintext information, another process may be able to access this information and break the application's security
without ever looking at the extremely secure encryption protecting the information being sent over the network.
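
As a rough illustration of the idea, the fragment below overwrites a reusable plaintext buffer after use. In a high-level language such as Python this cannot be a guarantee (the runtime may have made other copies, and memory may be swapped to disk), but it demonstrates the habit the text describes; lower-level languages give the developer more direct control.

    plaintext = bytearray(b"account 42 PIN 9921")   # mutable buffer holding the secret

    # ... encrypt the contents of `plaintext` and transmit the ciphertext here ...

    for i in range(len(plaintext)):                 # overwrite the buffer in place
        plaintext[i] = 0
    assert bytes(plaintext) == b"\x00" * len(plaintext)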

Choices
The developer of a secure application is often faced with many important decisions. Not least among these is
which cryptographic algorithms the application should use. For example, which block cipher should the
application use, DES or AES? Which encryption mode, CBC or CTR? Which message authentication code,
CBC-MAC or HMAC? Which asymmetric algorithm, RSA, elliptic curve, NTRU, or something else? Although
prescribing a precise formula for picking a cryptographic algorithm (protocol or primitive) is impossible, this
section informs the developer of the available options so that she can make an educated decision.

When selecting a cryptographic algorithm, the developer must take several aspects of that algorithm into
consideration:

• The algorithm's intended purpose

• The algorithm's performance

• The algorithm's effectiveness at accomplishing its intended purpose

• External regulations (for example, export laws or existing patents)

We have already discussed the intended purpose of various algorithms and protocols in previous sections;
external regulations we leave to the more politically inclined. Therefore, in the following sections we will
provide some additional information on performance and effectiveness.
Performance
Different applications have different performance requirements. The presence of these various performance
requirements means that certain algorithms are desirable in different situations. For example, some algorithms
are incredibly fast in hardware but slow in software (and vice versa). Other algorithms are fast on certain
processors (for example, processors with integer multipliers) but slow on others. Some encryption algorithms can
change keys quickly but have slow encryption speeds (such algorithms might be suitable for high-speed
cryptographic routers that must simultaneously handle many secure channels).

Some algorithms are also faster in one direction than another. For example, the RSA signature generation
algorithm is slow, but the RSA signature verification algorithm is incredibly fast, as you can see in Table 6.2.
This table is taken from the article "Performance Comparisons of Public-Key Cryptosystems," by M. J. Wiener, in CryptoBytes, 4(1), 1998. The performance measurements are in milliseconds on a 200 MHz Pentium Pro.

In addition to speed, there are space considerations. Elliptic curve implementations can take up less space in
dedicated hardware and are less network-intensive (transmit fewer bits) than RSA. These observations mean that
it would be incorrect to argue that one scheme is faster than another. Applications developers must thoroughly
understand how their application will use cryptography before deciding on the algorithm best suited for the
application.

Effectiveness
The effectiveness or security of a cryptographic algorithm is a measure of that algorithm's resistance to attacks.
The more secure an algorithm is, the more effective it will be at preventing an attacker from breaking the
message. Unfortunately, quantifying an algorithm's security is a subjective process. There are several reasons for
this. First, effectiveness means different things to different organizations and applications. This is exemplified by
the fact that financial institution or governmental security requirements are usually much greater than those of
most individuals. Thus, different applications will likely have different definitions of security and effectiveness.
(For example, a government organization might require 2,048-bit RSA keys, whereas a normal user might find
1,024-bit RSA keys quite effective.)

Table 6.2. Performance Comparisons of Asymmetric Key Protocols

Operation                 1,024-Bit RSA    1,024-Bit DSA    168-Bit ECDSA

Signature generation      43               7                5

Signature verification    0.6              27               19

Key generation            1,110            7                7

Another problem with measuring the security or effectiveness of an algorithm is that an algorithm's effectiveness
typically decreases over time. For example, as the speed of hardware increases (and the price of hardware
decreases), the time (and money) necessary to run a single attack algorithm will decrease. If brute-forcing a 56-
bit DES key today takes two days, then, after the speed of computer hardware doubles, it should take only one
day, using equivalently priced hardware. Furthermore, new advances in cryptanalysis may yield more efficient
attacks. These factors must be considered in evaluating a potential cryptographic solution against the data
protection requirements: Some applications require that their information remain protected for years (such as
wills and contracts), whereas other applications require that their information remain protected for only a short
time (such as instructions for a negotiation occurring that afternoon).

Table 6.3 presents the suggested key length for a specific class of primitives. This table is taken from the article
"Selecting Cryptographic Key Sizes," by A.K. Lenstra and E.R. Verheul, in The Journal of Cryptology, 2001.
Recall that the key length of an algorithm is one measure of that algorithm's security. The primary assumption
underlying the data in Table 6.3 is that DES offered an appropriate level of security in 1982. We recommend
consulting the source of Table 6.3 for different baselines and the rate with which hardware speeds and
cryptanalytic attacks will improve. We want to state that Table 6.3 does not provide our suggested key lengths
for the differing algorithms. It is simply meant as a comparison between different algorithms. As previously
stated, the security required is dependent on the application and environment in which it must operate.

Table 6.3. Sample Minimal Acceptable Key Lengths (in Bits) per Year

Year    Symmetric Key Size    Hash Size    RSA and TDL Key Size    RSA and TDL Key Size*    Elliptic Curve Key Size

2000    70                    140          952                     704                      132

2002    72                    144          1,028                   768                      139

2004    73                    146          1,108                   832                      143

2006    75                    150          1,191                   896                      148

2008    76                    152          1,279                   960                      155

2010    78                    156          1,369                   1,056                    160

2020    86                    172          1,881                   1,472                    188

2030    93                    186          2,493                   2,016                    215

Decision Trade-Offs
Application developers are often unable to choose an algorithm (or key size) simply for its security. Rather,
application developers often must accept a compromise between security and performance. Unfortunately, what
that compromise should be is often unclear (especially considering that it is impossible to ascertain fully the
security of most cryptographic algorithms).

Of particular concern to most wireless developers is which asymmetric system to use (RSA, elliptic curves,
NTRU, XTR). This is an extremely difficult decision, and one which the cryptographic community appears to be
debating still. If the security predictions in Table 6.3 are accurate, the performance observations in the preceding
subsection suggest that certificate-based software applications (applications that do many certificate verifications)
should use RSA and that most other applications (non-certificate-based applications and hardware
implementations) should use elliptic curves. Unfortunately, not everyone agrees with the results in Table 6.3.

We have suggested that, when given the choice, you should choose older, more established cryptosystems over
newer and flashier cryptosystems. This is because cryptographers tend to understand the security properties of
older cryptosystems more than they do newer cryptosystems. This argument holds for the RSA versus elliptic
curves debate as well. Because the RSA cryptosystem is older than the elliptic curve cryptosystem, some
cryptographers believe that it would be dangerous to assume that elliptic curves will not succumb to future
cryptanalytic progress. That is, some cryptographers believe that clever researchers will someday find a flaw
with elliptic curve–based cryptosystems that allows an attacker to efficiently break any elliptic curve–based
cryptosystem. Other cryptographers strongly disagree. Although we argue again that predicting the future is
impossible, wireless application developers should be aware of this debate and remain abreast of developments
involving cryptosystems of interest.

Key Points
When selecting keys, make sure that they are generated randomly. The strength of the algorithm is dependent on
random keys. Do not assume that random number functions produce good random numbers. Do your homework
before selecting a pseudo-random number generator (PRNG). Perform proper key management. Do not do the
following:

• Use the same cryptographic key for multiple purposes.


• Share the same key with multiple parties.

• Use the same key for an extended time or past its key life.

• Reuse the keystream when using stream ciphers.

We close this chapter by reemphasizing the major points covered. Use cryptographic primitives and protocols for
their designed purpose, and ensure that you follow any specific directions or qualifiers. Unless you are a
cryptographer, do not "roll" your own cryptography. Encryption is not the same as authentication or integrity.
Cryptographic techniques may be used to accomplish all three, but they are distinct activities. Finally,
cryptography is a tool to assist in building secure applications and systems, not a security solution on its own.

Chapter 7. COTS
The last time somebody said, "I find I can write much better with a word processor," I replied,
"They used to say the same thing about drugs."

—Roy Blount Jr.

Commercial off-the-shelf (COTS) products present another trap into which we sometimes fall when looking for
security. COTS products offer a false sense of security in some cases. They should be used when necessary and
can offer a partial security solution, but they should be understood first and used with great care. This chapter
investigates some popular wireless industry COTS products and examines how they can fit into protecting a
wireless application or system.

COTS versus Custom Software


COTS software is commercial off-the-shelf software, such as Windows XP or Quicken 200X. COTS software
includes commercially produced and supported software products that enhance the productivity of electronic
computing devices. Generally, they are well tested, documented, and supported products that provide the
advertised functionality. It is assumed that they provide the necessary security and privacy features associated
with protecting the data that the application will process. This is usually not the case. In the rush to meet
marketing and delivery dates, security is the first to be sacrificed. At a recent trade show, one software vendor
was heard to say, "We think the market will be willing to forgo the security concerns to be able to utilize the
enhanced functionality." This may be the case, but you can bet that the marketing materials do not highlight the
fact that the product does not provide the users or their data with privacy or confidentiality.

We are not saying that all COTS products have this flaw. The point is that knowing what you are not getting is
just as important as knowing what you are getting. Do not assume that a product does something unless you are
explicitly told it does. You must verify what is and is not being done by products used with your system.

Custom Software
Custom software is proprietary software that is built internally, or contracted for, and usually integrates COTS
software into a business's computer system to accomplish a business objective. Custom software is commonly
used in linking a legacy inventory control system or a CRM system to a Web-enabled front end. It enables, for example, a traveling sales or marketing team to access current pricing information on a legacy system via a Web or custom interface.

Custom software is usually requested to meet a specific functional or utility deficiency among COTS products or
between COTS and legacy systems. Sometimes it is contracted to patch or fix security vulnerabilities or
weaknesses that arise when legacy systems originally accessible only via closed internal networks become
accessible to the outside world via Internet or wireless connections.

The potential for security weaknesses increases as components designed to work independently under specific operating assumptions are integrated into larger, more complex systems. Software programs or
components are usually tested for functionality, not security. As mentioned in earlier chapters, the exploitation of
a component is usually accomplished by getting the software to do something it was not intended to do.
Normally, from both a time and cost perspective, security testing is not performed unless it is specifically
required or the software is destined for use in a security-critical area. Again, in these situations, the initial
requirements often specify that this type of testing must be performed.

For example, when a software vulnerability was identified in a certain company, the developer was brought in,
and the vulnerability was described. The developer's response was, "We knew about the issue, but we could not
think of a situation in which users would encounter the circumstances that would allow that vulnerability to be
exploited. And at the time, if they did, it would not grant them access to anything that they already did not have
access to through some other means."

Here is how it became an issue. This software was integrated with other software and systems to provide
Internet-based access for the entire company. Systems and services assumed that this piece of software was
providing access control and authentication, so these systems blindly accepted requests coming from this
process, further assuming that they were authorized. A network-based attack against this piece of software
identified the vulnerability, and over time an exploitation was devised that allowed a network-based attack to
gain access to the company's entire set of backend servers. It was not totally the fault of the software, however.
Logs being generated had identified that someone was attempting to exploit the vulnerability, but the logs were
not routinely examined.

You can learn several lessons from this brief example:

• Software will not always be used in the environment for which it was originally developed.

• You should not rely on other systems or services to provide security for you, unless you have verified
that they are doing so in all cases.

• Logs are good as a security protection measure only if someone knowledgeable, who can identify
potential attacks, examines them on a routine basis.

We will now look at several technologies being used, or contemplated, to provide add-on COTS and custom
security solutions to networks, both wired and wireless.

Virtual Private Network (VPN)


A VPN is a network of two or more computers, or computer systems, linked together over the public network in a
manner that virtually creates a private network. Figure 7.1 shows how the network can be connected, with the
lines between networks indicating two VPNs: one between Computer A and the PDA and one between Computer
B and Server 2.

Figure 7.1. An example of VPNs


Figure 7.2 shows how the network is logically connected. When implemented correctly, these VPNs provide the
security of a direct private network connection, with the cost saving and scalability of using public Internet
connectivity. The key word in the preceding sentence is when. Just because you have a VPN system in place does
not mean that your communications and network are safe.

Figure 7.2. VPN logical connections

A private network used to mean a network with no external connectivity, only accessible by direct, controlled,
co-located connectivity, as depicted in Figure 7.3. To expand this type of network to multiple locations required
the use of dial-up or dedicated/leased lines between locations, and in security-sensitive deployments, the use of
some form of link encryption over these leased or dial-up lines. This model was expensive to maintain and not
very scalable (see Figure 7.4).

Figure 7.3. A private network with no external connections

Figure 7.4. A private network with external connections


To be able to utilize the cost savings and scalability of the Internet or public networks, security professionals
began to create VPNs via the combined use of encryption, authentication, and mechanisms for obfuscating
information about the private network topology from the public network. The benefit of this idea is that by
utilizing a properly implemented VPN, you have access to the entire private network from any location, as if you
were physically co-located and connected to the network.

The disadvantages are that the Internet cannot provide the level of security, bandwidth, versatility, and reliability
available on a private network. However, as we have discussed in other chapters, the increased utility of being
able to access the private network from outside and varying locations outweighs the limitations currently
imposed by the use of a VPN.

Three basic types of VPN products are on the market today: hardware-based, firewall-based, and software-based.

Hardware-Based VPNs
Most hardware-based VPN systems are encrypting routers. They are generally secure and easy to implement. Of
all VPN systems, they tend to provide the highest network throughput because they are functionally specific and
processor resources are concentrated on performing the routing and encryption tasks. Looking at Figure 7.5, you
will notice that it is essentially the same layout as Figure 7.4, except that it utilizes the public Internet for
between-location connectivity.

Figure 7.5. A hardware-based VPN


However, hardware-based VPNs are not as flexible as software-based systems, should changes be required
because of upgrades or modifications in the backend network. Certain complete hardware VPN packages offer
software-only clients for remote installation and incorporate access control features more traditionally managed
by firewalls or other perimeter security devices. As with many additional features, they come with security
concerns. That the VPN is remotely installable implies that it is remotely administrable, and this raises questions
about the authentication and security mechanisms in place on this device.

Firewall-Based VPNs
Firewall-based VPNs leverage the firewall's security mechanisms to provide VPN functionality (see Figure 7.6).
These VPNs use the firewall to restrict access to the internal network, perform address translation and
authentication, and provide real-time alarm and logging capability. Most commercial firewalls strip out
potentially vulnerable or unnecessary services (called hardening), increasing their security posture. Many
resources are available for ensuring the integrity and security of firewalls. Network administrators should utilize
these resources to ensure that a firewall is performing the tasks anticipated so that the additional security
provided by a VPN is not circumvented by some other vulnerability in the firewall.

Figure 7.6. A firewall-based VPN


The drawback is that VPN services place additional processing responsibility on the firewall, and if the firewall
is already heavily utilized, performance can be affected. Some vendors are offering hardware-based encryption
co-processors or accelerators to increase the efficiency of firewall-based VPNs.

Software-Based VPNs
Software-based VPNs are ideally suited for situations in which the client and server sides are not necessarily
controlled by the same administrative organization (see Figure 7.7). They are also beneficial when a variety of
network hardware, such as firewalls and routers, is implemented within the same organization. This is because
they provide VPN services above the hardware level and provide the greatest flexibility in how network traffic is
managed. Many software-based VPN products provide for tunneling (more on this in the next section) based on
address or protocol, whereas firewall and hardware-based VPN products tunnel all traffic.

Figure 7.7. A software-based VPN


Software-based VPN systems are generally harder to administrate than hardware-based or firewall-based VPN
solutions. Software-based solutions generally require familiarity with the host OS, the application, the network
architecture, and the security mechanisms being employed. With the preceding combination there is greater risk
that the software will not be implemented or configured properly, opening vulnerabilities that may be
exploitable. Software-based solutions may be vulnerable to weaknesses at lower network layers or in the
hardware itself. The best software security solution does no good if there is a way to bypass it altogether at the
hardware level. The implementation of individual components of the VPN solution may have vulnerabilities that expose
the entire system to risk, such as vulnerabilities in the way authentication is achieved or weakness in the session
key or encryption algorithm. Every component must function properly and securely for the overall system to be
reliable and secure.

The trend in the VPN market is to combine the best of the three VPN solutions into more useful, functional, and
flexible products. As this trend continues, the lines between these three basic VPN systems will blur and
eventually may disappear altogether. The proposed implementation of IPSec (discussed in a later section) is
likely to hasten the transition to a more integrated VPN solution.

Tunneling
Tunneling is the concept of wrapping non-TCP/IP-compliant protocols within a protocol that can transit the
public Internet. Two major tunneling protocols are prevalent today: PPTP and L2TP. A third is on the horizon,
IPSec, which we will soon discuss.

The Seven-Layer OSI Model


Before continuing our discussion on tunneling, it is worth reviewing some networking basics. Figure 7.8 shows
the standard seven-layer OSI (Open System Interconnection) network reference model, which we will briefly
describe.

Figure 7.8. The seven-layer OSI network reference model


Layer 1—The Physical Layer

The physical layer (L1) is primarily concerned with transmitting raw data bits over communications medium and
recovering the same raw data bits on the receiving side. This is where the determination is made of what
constitutes a 1 bit and what constitutes a 0 bit. Here are examples of the communication media that can be used:

• A physical wire carrying voltage, current, or frequency changes

• A light carrying frequency changes or pulses

• Sound or radio waves with varying frequencies

Layer 2—The Data Link Layer

The data link layer (L2) manages link setup, data exchange, and link termination. L2 is concerned with taking the
raw transmission and transforming it into what appears to be an error-free transmission to the network layer.
Recall that L1 merely takes bits it is given and sends them over the communications media; it has no concept of
structure or the meaning of the bits. L2 frames groups of bits by adding bit patterns to the start and end of a
group. This group is called a physical layer service data unit, but more commonly a frame. Specialized frames
report errors, acknowledgments, and the overhead to manage a session.

Layer 3—The Network Layer

The network layer (L3) controls the interconnectivity of network computers, nodes, switches, and routers. It
determines the characteristics of the computer, node, and router interface and how L3 datagrams or packets are
routed within the network. L3 ensures that all packets are correctly received at their destination and determines
the route the packets will take in traversing the network. This includes translating logical network addresses and
names into their physical equivalents.

Layer 4—The Transport Layer

The transport layer (L4) manages the flow of data between hosts across a network. This is accomplished by
splitting long data streams into smaller chunks that fit within the maximum packet size for the networking
medium being used. These chunks are then encapsulated with header and ending frames, which provide
sequencing and error detection/correction capabilities. L4 is a source-to-destination or end-to-end layer and is
responsible for ensuring successful transmissions, including retransmission if packets arrive with errors.

Layer 5—The Session Layer

The session layer (L5) is the user's interface into the network. L5 manages session setup, data exchanges, and
session termination. It provides synchronization services between tasks at each end of the session, allowing for
resumption of the session at the point where an error occurred rather than requiring the entire session to be
retransmitted. L5 also performs any overhead necessary to maintain the session during periods of inactivity.

Layer 6—The Presentation Layer

The presentation layer (L6) performs any data-format translation for the networked communications. It takes data
in the format the application understands and translates it into a generic format that can be transmitted over the
network. Data compression may also occur at this time. Received data is translated from the generic format to the
format the application understands.

Layer 7—The Application Layer

The application layer (L7) allows access to network services such as networked file transfer, messaging, and
remote procedure calls that support applications directly. This layer also controls general network access and
partitioning of tasks across the network and provides network error and status information for applications.

PPTP
The Point-to-Point-Tunneling Protocol (PPTP) is a networking technology that supports multiprotocol virtual
private networks. This allows your private network to utilize IP, IPX, and NetBEUI and provides access to a
wide variety of existing LAN infrastructures. PPTP is supported in nearly all current Microsoft products because Microsoft was one of the original members of the consortium defining the standard; implementations also exist for Linux and other platforms.

In an effort to make PPTP a complete VPN solution, Microsoft markets PPTP with other components to provide
the authentication and encryption required for a VPN. The authentication is provided in various ways, depending
on the platform. In Windows NT, this is done via the Remote Access Service (RAS), using the Password Authentication Protocol (PAP) or the Challenge Handshake Authentication Protocol (CHAP).

RAS utilizes a shared secret between the RAS client and the RAS server. This shared secret is in the form of a
user-supplied password at the client, which is then used to derive an MD4 hash. The stored password in the
Windows NT security database at the server is used to compute the same MD4 hash. Although this solves the
problem of key distribution, it leaves the system vulnerable to cryptographic attacks to identify the hash value. In
fact, in 1998, Counterpane Systems released a statement that five major security flaws existed with Microsoft's
implementation of PPTP and several other attacks were identified that would compromise the security of the
VPN. (See Chapter 6, "Cryptography," for more on cryptographic attacks.)

The important point to remember is that even though a standard or defined protocol is used, it is the
implementation that truly provides the security. If the implementation is flawed, the security profile is
compromised. Protocols or standards sometimes receive bad press when actually the particular implementation is
to blame. Even when the implementation is performed by smart, knowledgeable groups, there can still be
problems.

L2TP
The Layer Two Tunneling Protocol (L2TP), an extension to PPP, enables ISPs to operate VPNs. PPP defines
a standard for encapsulation of multiprotocol packets over L2 links, and its use in L2TP provides the flexibility
to carry any routed data protocol. L2TP merges the best features of two other tunneling protocols: PPTP from
Microsoft and L2F from Cisco Systems. As the name implies, the tunneling occurs at the data link layer (L2)
rather than at the network layer (L3).

L2TP is made up of two main components: the L2TP Access Concentrator (LAC) and the L2TP Network Server
(LNS). The LAC is the device that terminates a connection or link and provides a means of access to the physical
layer for communications. The LAC is also known as the network access server in Layer 2 Forwarding (L2F), a
predecessor to L2TP. The LNS is the device that terminates the PPP data stream. The LNS can also be used to
authenticate the data stream. It can have only a single LAN or WAN interface but can terminate calls arriving via
any LAC interface, such as async serial, ISDN, PPP over ATM, or PPP over Frame Relay. The LNS is
also known as the Home Gateway (HGW) in L2F terminology.

As with PPTP, L2TP requires the use of additional components to provide the encryption and user authentication
for VPN security. Likewise, L2TP is susceptible to the same vulnerabilities as PPTP. However, under L2TP the
tunneling is performed at L2, so this provides greater flexibility in what encryption protocols are used.

IPSec
IPSec is an attempt to bring together several security technologies into a complete solution to provide
confidentiality, integrity, and authenticity. After reading the preceding chapter, these three functions should
indicate to you that IPSec is an encryption-dependent solution. In fact, IPSec utilizes the following encryption
technologies to achieve its goals:

• The Diffie-Hellman key exchange is used to derive key material between peers on a public network.

• PKI (public key infrastructure) is used for signing the Diffie-Hellman exchanges to guarantee the
identity of the parties and avoid a man-in-the-middle attack.

• Data encryption algorithms such as Advanced Encryption Standard (AES) are used to compute
encrypted equivalents for data.

• Key hash algorithms such as HMAC are used in combination with traditional hash algorithms such as
MD5 or SHA to provide packet authentication.

• Digital certificates signed by a certificate authority are used for user identification.

To implement IPSec, a new set of headers must be added to IP datagrams. The new headers are placed after the
IP header and before the L4 protocol. Although IPSec goes a long way toward solving the current challenges
facing the Internet, it falls short in the realm of wireless networks, at least until wireless devices are built that are
compatible with IPSec.
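To illustrate two of the building blocks listed above, the following Python sketch performs a toy Diffie-Hellman exchange and then uses the agreed secret to authenticate a packet with an HMAC. The parameters are deliberately tiny and the hash choices are illustrative; IKE uses standardized groups of 1,024 bits and up, and IPSec typically uses HMAC-MD5 or HMAC-SHA1 truncated to 96 bits.

import hashlib, hmac, secrets

# Toy Diffie-Hellman parameters (far too small for real use).
p, g = 4294967291, 5
a = secrets.randbelow(p - 2) + 1      # peer A's private value
b = secrets.randbelow(p - 2) + 1      # peer B's private value
A, B = pow(g, a, p), pow(g, b, p)     # public values exchanged in the clear

shared_a, shared_b = pow(B, a, p), pow(A, b, p)
assert shared_a == shared_b           # both peers derive the same secret

# Derive an authentication key from the shared secret and HMAC a packet.
key = hashlib.sha256(shared_a.to_bytes(8, "big")).digest()
packet = b"encapsulated payload bytes"
tag = hmac.new(key, packet, hashlib.sha256).digest()

# The receiver recomputes the tag; any change to the packet breaks the match.
print(hmac.compare_digest(tag, hmac.new(key, packet, hashlib.sha256).digest()))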

SmartCards
SmartCards are small plastic cards about the size of a credit card. In some cases, they are credit cards, with an
embedded microchip that can be loaded with data and applications. These applications can be used for telephone
calling, electronic cash payments, storing personal medical history and data, verifying identity, tracking
purchases, and providing automatic discounts for volume or loyalty purchases.

The physical hardware for making the cards and the devices that can read these cards is currently produced by
three principal companies: Bull, Gemplus, and Schlumberger. The software that runs on these cards is based on a
restricted subset of Java referred to as JavaCard, which executes on a card-resident Java virtual machine. The card
OS provides authentication and authorization for loading new applets, manages the card's applets, and
maintains the card's integrity. The applications may be written by the card manufacturer or by another software
vendor contracted to develop the particular JavaCard applet.

SmartCards provide a mechanism by which data storage and processing can be performed in a secure
environment kept physically on the person on whose behalf the processing is being performed. SmartCards
allow personal information to be stored and provided to authorized services even when insecure or untrusted
hardware and networks are used as the transmission medium. This capability is accomplished by providing
authentication and encryption capability on the card itself, so that no personal information leaves the card bound
for unauthenticated entities or in unencrypted form. If you are purchasing or integrating an existing SmartCard implementation into
your wireless network or device, you are trusting that the SmartCard is functioning as advertised.
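As a minimal sketch of the idea that secrets stay on the card, here is a hypothetical Python simulation (not a real card API) of a challenge-response exchange in which the card only ever emits a value derived from its internal key:

import hmac, hashlib, os

class SimulatedCard:
    # Toy stand-in for a card: the secret never leaves this object.
    def __init__(self, secret):
        self._secret = secret

    def respond(self, challenge):
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

secret = os.urandom(32)              # shared with the back end at personalization time
card = SimulatedCard(secret)

# The terminal sends a fresh random challenge over an untrusted channel;
# a back-end verifier holding the same secret checks the response.
challenge = os.urandom(16)
response = card.respond(challenge)
expected = hmac.new(secret, challenge, hashlib.sha256).digest()
print(hmac.compare_digest(response, expected))  # True, yet the secret never crossed the wire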

By this point in the book, you should be asking yourself, how do we know that the OS and applets were
implemented properly? Aren't SmartCard developers subject to the same potential errors and omissions as OS or
application developers? How is it guaranteed that a malicious applet is not loaded onto the card? How do I know
that the encryption is done properly? How do I know that the keys are generated and stored in an appropriate
manner? What if someone steals my SmartCard—can she get my private information?

This is not a book on SmartCard security, so we will not go into detail on these issues. Typically, the cards that
reach consumers are free from errors and provide the services specified. However, we bring them up here to
reinforce the following points:

• Don't trust that others, even those whose primary purpose is security, are providing capability or
services, just because it seems logical that they are.

• Knowing what they are not providing is as important as knowing what they are.

• Just because someone or something should be doing something does not mean that they are.

Biometric Authentication
Biometric authentication is the science of authenticating someone by analyzing biological data, primarily human
bodily characteristics such as fingerprints, retinal and iris patterns, voice patterns, facial features, signature,
typing characteristics, and DNA. This form of authentication requires that the known result of the sampled
biometric characteristic be stored on a central server or on a SmartCard, which is presented with the sample for
comparison. Let us emphasize that we are not talking about biometric identification, which is a much more
complicated problem of taking a biometric sample and comparing it against a large database of samples to look
for a match. Biometric authentication means taking a sample, comparing it against a known target, and
producing a yes-or-no response.
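A hedged sketch of that yes-or-no comparison follows, assuming the biometric sample has already been reduced to a small feature vector; the feature values and threshold here are purely illustrative.

import math

def authenticate(sample, template, threshold=0.35):
    # Accept if the fresh sample is "close enough" to the enrolled template.
    # Real systems tune the threshold against false-accept/false-reject rates.
    if len(sample) != len(template):
        return False
    distance = math.sqrt(sum((s - t) ** 2 for s, t in zip(sample, template)))
    return distance <= threshold

enrolled = [0.12, 0.80, 0.45, 0.33]   # stored on a server or SmartCard
fresh = [0.15, 0.78, 0.47, 0.30]      # captured at the sensor just now
print(authenticate(fresh, enrolled))  # True or False, nothing more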

In considering a biometric authentication solution, you should examine several factors:

• Perceived ease of use

• Acceptable transaction time

• Contingency measures for errors

• Location of actual authentication

• Gathering, verification, and storage of initial information

• Compatibility and connectivity issues

Other usability issues surround the use of biometric authentication, from user willingness to be subjected to
sampling, based on health concerns (as in a laser-based retinal scan, for example), to users not wanting their
fingerprint or DNA information stored on a central server that may be exploited by the government or a
corporation for some other purpose. There are ways of potentially alleviating these privacy fears, such as
encrypting the information as the sample is made and storing the encrypted form so that the original information
cannot be retrieved. (Privacy issues are discussed in greater detail in Chapter 8, "Privacy.")

Potential security vulnerabilities also must be overcome, such as the following:

• The robustness of the user interface

• The security of the interface between the authentication device and the host system

• The security of the third-party transportation network

• The security of the authentication server and application

• The security of the host device


• The security of the database

• The integrity of the biometric device performance

The key is that all these COTS or proprietary add-on products are tools. Used with full knowledge of their
capabilities and vulnerabilities, these tools can enhance the security of a process, device, or system. Used alone,
each provides some benefit, but when skillfully combined with other tools, they create an overall system that
provides security greater than the sum of the individual parts. Knowing when and which tools to use to achieve
the required benefit without imposing undue restrictions or processes is where the work really begins. In other
words, good application or system design begins with the business process requirements, and this is what drives
the design. Incorporating security from the beginning is the key. We discuss this further in Chapter 12, "Define
and Design."

Chapter 8. Privacy
Part of the inhumanity of the computer is that, once it is completely programmed and working
smoothly, it is completely honest.

—Isaac Asimov

Privacy is a necessary component of any security discussion. Privacy and security must be considered separately,
as well as together. Threads of privacy are visible throughout our procedure of building security into a system or
product. The issues examined in this chapter are inherently privacy issues rather than, strictly speaking, security
issues.

The notion of privacy is protected by governmental policies and laws in certain cases. To be well versed in
wireless security, you must have a basic grasp of wireless privacy issues. In this chapter we take a look at issues
of online privacy, the differences between online and wireless privacy, current legislation, and the effect of
location-based marketing on wireless systems and consumers.

The Online Privacy Debate in the Wired World


To understand the industry and public debates about online privacy, it is essential to understand the opt-in versus
opt-out debate. This debate has just a few technical implications at present, but properly understanding privacy
from the business perspective is important. Privacy is an essential feature of any product or service—and a core
business objective. The burden of proving the active presence of privacy to the consumer is required for a
successful product. Understanding the legal issues (at the very least, at a high level) is necessary for ensuring that
your product meets federal requirements and does not violate consumer rights. At the heart of the current online
privacy debate is the tug-of-war between those in favor of opt-in policies and those in favor of opt-out policies.

Although telecommunications carriers are prohibited from revealing certain information, except to law
enforcement authorities upon proper request, they are permitted to sell certain personal information with
consumers' consent. The types of information that fall into each category are being hotly debated. These debates
will likely continue for the foreseeable future. Just how to obtain consumer consent is the heart of privacy
arguments. Should users have to opt out of having their information shared and sold, or should companies require
them to opt in?

Opt-out policies are the most prevalent form of privacy policies in most industries today. Numerous studies have
shown that opt-out notices fail miserably at the task of protecting consumer privacy. The data in question here is
customer proprietary network information (CPNI) or consumer-identifying information. Opt-out notices are
typically vague, incoherent, and intentionally hidden in verbose agreements. Under the Gramm-Leach-Bliley
Act, we have seen that opt-out schemes do not successfully protect CPNI or consumer-identifying information. If
users do not actively opt out, businesses share personal information, including addresses, phone numbers, e-mail
addresses, purchasing patterns, and even more confidential information, such as social security numbers.
Companies profit from users' failure to read fine-print legalese that comes in an envelope full of junk mail.

The Gramm-Leach-Bliley Act requires certain financial and insurance institutions to send notice of an
opportunity to opt out before disclosing personally identifiable information. The law requires these
institutions to compose the notices in a readable manner, yet most consumers would be shocked at how rarely
this requirement is met. The opt-out policy suggested in the Gramm-Leach-Bliley Act is so poorly constructed and
ineffective that the Federal Trade Commission is investigating the matter in a formal workshop.

The wireless carrier industry does not advocate these opt-out policies. It recognizes the burden placed on
consumers by privacy violations and does not want its medium, wireless communication, associated with
something so annoying. The wireless industry, in general, supports legislation that provides for opt-in policies
that are more protective of consumer privacy.

Opt-in policies are gaining ground in the online arena. These policies are designed to facilitate greater consumer
control over personal information. The opt-in policies are much more difficult for companies to abuse than opt-
out policies. Effective privacy policies should offer a range of choices. When planned correctly, opt-in policies
can be better sources of direct marketing than opt-out policies. The consumers who actively request marketing,
for instance, are genuinely interested, and companies can save money by targeting only interested parties with
mailings. Congress has mandated that the Federal Communications Commission (FCC) implement procedures
that protect consumer privacy when using cellular phones and other wireless devices.

Online privacy discussions cover issues besides opt-in versus opt-out privacy policies. Spam and government or
private surveillance also come into play. Unwanted messages, spam, are a pervasive problem in the wired world.
Automated technologies send e-mails to hundreds of thousands of unwilling recipients daily. Spam is taxing on
servers, annoying to consumers, and an abuse of an intended system. To thwart spam, privacy advocates
continually battle with Internet service providers (ISPs), e-mail providers, and Internet application providers,
with only moderate success. Spam is a privacy concern in that it is unsolicited. Some e-mail clients, ISPs, or
corporate servers prevent spam at various levels, but it is largely unavoidable. Every time an e-mail address is
used in registering for a Web site or mailing list or is published anywhere on the Internet, it can be picked up and
bombarded with unwanted or offensive e-mails.

Government and private surveillance of users' Web surfing habits is a subject of much debate as well. Consumers
should not assume that their Web surfing is private, but few of them know the extent to which this data is
warehoused and can be used to learn about them. Direct marketing associations learn a lot about which
marketing to push to a user by investigating data about where the user spends his time on the Internet.

In addition to tracking Web surfing habits, ISPs store e-mails in repositories for many reasons. In some cases, the
storage is for strictly legitimate purposes. If subpoenaed, ISPs are required to produce e-mails sent to or from a
given user. Using its system, called DCS1000, the government allegedly views only the names of the sender and
recipient, and perhaps the date and time the e-mail was sent, but not the content of the message itself.

Flesh-Eating Mammal Changes Its Ways


DCS1000 was initially an FBI project with the very unfortunate name of Carnivore. FBI
spokespeople say that dubbing the system DCS1000 was the result of an upgrade, not an attempt to
remove the negative associations that resulted from naming the e-mail tracking system after a flesh-
eating mammal.

DCS1000 is an FBI technology that aids in gathering information to solve crimes. It is a packet-based
communications interception system. DCS1000 comes under heavy criticism because it is not publicly available
and because there is no way to prove that it tracks e-mails only after a warrant has been obtained. Many view this
technology as an invasion of user privacy. It is a point of contention among all players in the online world.
Several privacy groups have already expressed concern that Carnivore may be used in a pervasive fashion to
intercept wireless e-mail. These groups believe that industry companies will not be able to provide the proper
privacy safeguards with Carnivore technology in place. The FBI asserts repeatedly that it will not use Carnivore
to monitor the content of e-mails.

Privacy in the Wireless World


What is different about privacy in the wireless world? The wireless world, in surfing the Internet or viewing e-
mail, has the same privacy concerns as the wired world. Wireless carriers, ISPs, phone manufacturers, and
application providers alike have access to personal information about users that they must decide how to manage.
The wireless industry has a combination of opt-in and opt-out policies for sharing phone numbers, calling
patterns, and so on. The problems in the wireless world are a little different.

At the heart of the privacy debate is this: Who should have more control—the user or the business? If users have
too little control, they will be slow to adopt technology because of an inherent distrust of businesses. On the other
hand, if users have too much control, business marketing opportunities will be limited, and business will not take
interest in wireless technologies. The middle ground we are searching for will bring successful applications and
will build trust and contentment in consumers.

Privacy policies are the first and perhaps most obvious issue at the heart of privacy discussions. These verbose,
obscure, fine-print legalese statements are difficult to read on paper or on a PC and even more difficult to read on
a limited display screen. This point may seem trite, but it bears mentioning. The industry should be very careful
not to abuse this situation. The wireless industry tends to be sensitive to issues of connection speed, content-
heavy transmissions, and compression of data. In the interest of maintaining a strong and happy customer base,
wireless carriers will continue to seek readable, shorter privacy policies so that their consumers can remain
informed, in control, and untaxed in time, battery cycles, and billing.

Another difference in wireless privacy is the surveillance issue. Yes, wireless users are concerned about
government and private corporation surveillance of their call history, PDA Web surfing activity, or wireless
laptop data exchange, but privacy concerns go far beyond this. Wireless devices transmit their locations with a
certain degree of precision when powered on. Knowing what someone does is certainly an invasion of privacy
but does not provide information as potentially dangerous as knowing where someone is. Knowing your location
could place you near the scene of a crime, late for work, speeding on a highway. It could enable you to find a lost
Alzheimer's patient, catch a teenager coming home past curfew, or get help when lost in a foreign city. There are
pros and cons to being able to pinpoint someone's location simply by her possession of a wireless device.

Because the cons range from being invasive to being dangerous, this privacy issue should be taken very
seriously.

California—Ahead of the Privacy Curve


The State of California affords more protection of its constituents' privacy than federal laws and laws
in most other states. In 1972, the state constitution was amended to include a California resident's
explicit, inalienable right to privacy. This right to privacy is subject to legal interpretation in many
areas (including communications) but puts California a big step ahead of its fellow states in the privacy movement.

The Players
The players in wireless privacy discussions are similar to those we have investigated throughout this text. Each
one has a different stake in the privacy game. Wireless carriers want to keep their customers happy and do not
care what their users are buying or viewing or to whom they are talking—as long as they are paying for service.
Wireless carriers tend to be sensitive to privacy concerns and do not want spam. The wireless industry backs
proposed privacy legislation with full force. It wants to be seen as a consumer advocate and is smart in doing so.
The wireless industry sees itself as being able to set itself apart from the wired industry by establishing a higher
standard of privacy protection for consumers.

Application providers, on the other hand, abuse the system if given the chance. If not explicitly prohibited from
using opt-out policies, application providers maximize their marketing possibilities by taking advantage of any
personal information they can gather. Application providers and wireless advertising companies such as the
Wireless Advertising Association (WAA) are generally opposed to regulations on how to manage consumer-
identifying information and want the industry to "regulate itself." We have seen how (un)successful that has
proved to be in the wired world. Application providers are not concerned with minimizing or maximizing time
billed for connection service. They want to sell more of their applications and services. They are not motivated to
protect consumer privacy because they hope to avoid establishing a high standard of privacy.

Privacy advocates tend to be technical experts, consumer advocates, and civil liberties enthusiasts. By keeping
Congress, the FTC, and the FCC aware of the dangers of lax privacy regulations, privacy advocates pave the way
for safer wireless telecommunications systems.
Related Privacy Legislation and Policy
No privacy discussion is complete without investigating legal policy issues governing the industry. Most privacy
discussions hinge on laws and their implications for all parties involved in the wireless circuit. Legislation is
continually proposed to Congress on both sides of wireless privacy issues. Some legislation is designed to protect
consumer privacy, and some is designed to protect the rights of application providers or advertisers to use and
manage personal data as they see fit.

The Communications Assistance for Law Enforcement Act (CALEA)


In 1968, a federal law was instituted to permit law enforcement to conduct wiretaps, pursuant to a court order or
other legal authorization, to eavesdrop on an individual's conversations. Because technology has changed since
the law's enactment, the law has been amended to attempt to keep pace with technology. Congress attempts to
maintain harmony among technological advances, law enforcement agencies, and consumer privacy rights
advocates.

The Communications Assistance for Law Enforcement Act (CALEA) was adopted by Congress in 1994.
CALEA was not designed to expand wiretapping power and use but to standardize its current implementations.
Under CALEA, all telephone companies were required to build into their systems mechanisms that allow the
government to intercept communications with relative ease, if necessary. Also subject to this law are wireless
service providers, local exchange providers, resellers, or anyone who offers wireless (or telecom) services for
hire to the public.

Some law enforcement agencies have used the law to attempt to broaden their justifiable use of wiretapping, but
expansion has been kept to a minimum for the most part. The overarching purpose of this law was to require
new technology to facilitate wiretapping and pen registers when necessary. The burden of keeping law
enforcement able to use the technology is placed on the wireless carriers, not on law enforcement itself. If every
time communications companies changed their technology, law enforcement had to learn it and make changes,
this would be an undue burden in the pursuit of justice. Instead, wireless carriers design facilities into the
technology to aid this effort.

Wireless carriers are largely frustrated with the requirements for implementation dictated by CALEA, even
several years after its enactment. They claim that they are caught between sacrificing the privacy rights of their
consumers and aiding law enforcement activities. They struggle to find a clearly defined set of boundaries inside
which they should operate.

E-911
What role does the FCC play in the privacy debate over issues concerning wireless devices? The FCC recognized
that it could use information on the whereabouts of thousands of people to help find them in cases of emergency.
This information is very valuable. It would be tough to argue that people would not want this information shared
with a fire department when they are caught in a fire. Perhaps the most well-known piece of wireless privacy
policy is E-911. E-911 rules require that cellular phone service providers maintain information about users'
locations and be able to pinpoint users within a certain range. The availability of this information for use in
emergency is a great advantage to anyone with a wireless device. The presence of this information also presents a
great risk. What else will service providers do with this information, and what does the FCC have to say about it?

The bill prohibits cellular carriers with access to location information from using it without the explicit consent of
individual cell phone users. Should the information be available at all times or only in case of emergencies? How
is it determined that someone is in an emergency? Can you disable transmitting your location? What is the
information used for besides emergency assistance? Although the technology will undoubtedly be put to valuable
uses, the idea that large corporations and the government know where you are every time you use a wireless
device is daunting. If you are an application developer and have appropriately planned for security, privacy must
be considered as well.

E-911 applies to wireless carriers but not specifically to application providers. Wireless industry advocates are
lobbying for the same restrictions and limitations to be placed explicitly on application providers and all other
parties with access to consumer information in the wireless realm. Most of the E-911 discussion concerns cell
phone usage, but there are crossovers into PDAs and Global Positioning System (GPS) arenas as well. Service
providers who provide Internet access or other wireless services to PDA users should also be responsible for
managing consumer-identifying information. GPS-equipped devices can continuously report location information about users,
another body of information that should be guarded carefully and disseminated only with user consent and
justified need for disclosure.

The E-911 rules required that by October 1, 2001, companies produce handsets equipped with location-
identifying technology or change their networks to allow for location determination by signal strength. This
deadline to complete a portion of the implementation was extended, and requirements and timelines will continue
to evolve. The final deadline for compliance still looms ahead, in December 2005. The rules, however, leave it
up to providers to figure out how to pay for providing location information to emergency services. (This could
encourage providers to sell this information to recoup some of their costs.)

E-911 specifies mandatory conditions for nationwide carriers. Each carrier proposed upgrades that will help it
approach compliance with the E-911 requirement that it be able to identify the location of 95 percent of its users.
For all carriers, their devices and networks must be capable of identifying a user's location within a certain
distance over a certain percentage of time. Some carriers chose to implement device-based solutions, and others
chose network-based solutions.

E-911 Location Accuracy Requirements


For device-based solutions:

• 50 meters for 67 percent of the time

• 150 meters for 95 percent of the time

For network-based solutions:

• 100 meters for 67 percent of the time

• 300 meters for 95 percent of the time
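To see what these figures mean in practice, here is a small Python sketch (our own illustration, not an FCC tool) that checks a batch of measured location errors against the device-based thresholds:

def meets_accuracy(errors_m, limits=((50, 0.67), (150, 0.95))):
    # errors_m: measured location errors in meters for a batch of test calls.
    # limits: (threshold_meters, required_fraction) pairs from the rules above.
    n = len(errors_m)
    for threshold, required in limits:
        fraction_within = sum(1 for e in errors_m if e <= threshold) / n
        if fraction_within < required:
            return False
    return True

samples = [12, 35, 48, 60, 75, 30, 20, 140, 49, 44]
print(meets_accuracy(samples))  # True: 70% within 50 m, 100% within 150 m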

The deadline of October 1, 2001, came and went without any carrier meeting the deadline. Extensions have been
granted, and eventually this capability will be status quo for all national carriers. With the new capabilities
implemented, emergency services will be able to pinpoint a cellular phone or wireless device user from anywhere in the
country without the use of a powerful locator tool such as a directional antenna.

E-911 will continue to affect wireless projects and applications and will, it is hoped, provide help in emergency
situations without inciting violations of consumer privacy.

The Wireless Communications and Public Safety Act of 1999


The gist of the Wireless Communications and Public Safety Act is twofold. First, it establishes an official
framework for guaranteeing universal access to 911 for all Americans with wireless phones. Second, it provides
privacy protection in a general sense for an individual's location information. Before this legislation, location
information was not included in consumer information protected under any privacy statutes. This legislation
marks an important Congressional opinion that wireless privacy is inherently different from other forms of
privacy because of the nature and amount of information potentially stored and revealed about users.

The law restricts telecom companies from selling location information about consumers without their consent. It
provides privacy protection explicitly for mobile wireless location information. The law is full of holes.
Companies are hard-pressed not to test the boundaries of this law. Policing the abuse of location-based
technology is difficult. If location information is used as a basis for decision making but is properly hidden as
such, it is difficult to detect and prevent. Enforcing this policy is left to astute consumers who recognize the
abuse of power that some wireless companies could exert. Companies and the government will press the
boundaries of the law. To remain in compliance with the law and future legislation, it is extremely important to
consider protecting location information about consumers by offering only opt-in programs for disclosing this
information for profit.
The Wireless Communications and Public Safety Act serves as an impetus for expediting the implementation of
E-911 regulations nationwide. This causes relief and angst alike. Another piece of legislation that forwards the
case for governmental collection of personal communications information is the U.S.A. Patriot Act of 2001.

The U.S.A. Patriot Act of 2001


How does September 11, 2001, affect wireless privacy legislation? There was a marked concern across the nation
that our intelligence-gathering and communications-interception capabilities were not up to par. Analysis of data
collected from wiretapping (or wireless tapping, as the case may be) is at a premium and comes with a high cost.
The desire to compensate for this terrible incident is driving legislation that will facilitate government collection
of a wide selection of all forms of communication.

Many privacy advocates are concerned that privacy will take a backseat to patriotism and efforts to stamp out
terrorism. The jury is still out. Certainly it is essential to provide law enforcement with the means necessary to
track down terrorist activity in any medium, but this must be done within a system of expeditious checks and
balances. In late October 2001, Congress passed the U.S.A. Patriot Act (Uniting and Strengthening America by
Providing Appropriate Tools Required to Intercept and Obstruct Terrorism) of 2001. The law is termed the Anti-
Terrorism Act in some drafts and is sometimes referred to informally by that name.

The U.S.A. Patriot Act of 2001, enacted in the wake of September 11, assigns new provisions to existing laws
and renders moot some points in others. The driving force behind the law is the need to ensure that officials have
procedures in place to facilitate the detection, prevention, and eradication of terrorism and that there are no
procedures that could slow down progress in this quest.

One feature of the law, which was signed by President George W. Bush, stipulates that law enforcement's use of
pen registers and wiretapping devices can include any communications medium, including the Internet, for the
purpose of helping detect or combat terrorism. The language of the law is person-specific, rather than device-
specific or phone line–specific. All communications of an individual can now be monitored if that person is
suspected of illegitimate activities. Although the expansion includes new media, it does not provide for the
interception of the communication's content (for example, the text of an e-mail, dollar amounts in financial
transactions, dialog in a wireless phone conversation). The discoverable information is limited to higher-level
information, such as dialing, routing, signaling, and addressing information.

The U.S.A. Patriot Act allows the FBI to use its DCS1000 technology for monitoring e-mail and other
communication, to avoid placing undue burden on wireless carriers to implement technical solutions in a costly
and rapid manner. This may worry staunch privacy supporters and will have to be monitored closely. If ISPs or
independent parties keep tabs on the operations, the DCS1000 system can be used effectively for fighting
terrorism but not for invading consumer privacy.

The U.S.A. Patriot Act may have implications for smaller operations than national ISPs, however. Applications
that serve any sort of communication or transaction function could be subject to governmental observation. It is
important to define clear privacy policies and explicitly detail for customers which information about them is
stored and for how long, who has access to it, and under what conditions it is disseminated.

Keeping Up with Wireless Legislation


Information about the most recent legislation relating to wireless technologies is available at
http://wireless.fcc.gov. This site typically includes wireless policies whether or not they directly
include the FCC.

Location-Based Marketing and Services and GPS


While the issues associated with E-911 are being ironed out, there is another category of location-based
information that is less defensible: E-411. Unofficially, E-411 is the use of location information about wireless
device users to tailor target marketing. The theory behind location-based marketing is that users can have instant
access to information about opportunities around them: movie theaters, stores, and restaurants, to name a few.
The marketing in this case might be in the form of a coupon to your favorite coffee bar that appears on your PDA
as you drive past a new location or a commercial jingle for a department store that plays on your cell phone when
you are in the parking lot outside a mall.

When you are on the wired Internet, you can order products from nearly anywhere at any time of the day or
night. Physical stores, however, have hours of operation and specific locations. If you are in an unfamiliar part of
town and are getting thirsty, coffee shops want to know. If you are on your way to a relative's house and need to
pick up a last-minute gift, stores want to know. In many cases, simply knowing a user's location can enable
companies to transmit meaningful information to the user. The concern you need to be aware of if you are
developing an application for such a company is that the user should be in control of this information. It should
be highly personalized, meaningful, transmitted only upon the user's initial consent, and turned off without
hassle.

Wide potential exists with GPS and location-based technologies. GPS is seen in automobiles and sometimes
integrated into wireless devices. Its primary function is to locate a user's latitude and longitude to provide maps,
directions, or other requested information based on location. A movie theater might give a cell phone company
the percentage of ticket sales that come from users who are directed to the theater while in the area. GPS and
location-based marketing will be met with high success if they are implemented respectfully.

Push marketing, marketing that is unsolicited and pushed down to a user's wireless device, does not respect a
user's privacy and should be used sparingly. Using a cell phone as an example, we note that making a phone call
is inherently different from location transmission because the phone call has to be initiated. The user must dial
and complete a call actively. Transmitting that user's location, however, is not initiated; it happens automatically
any time the phone is on.

This location transmission could be dangerous if it gets into the wrong hands. The fact that you went to a medical
clinic yesterday, for instance, should not be offered freely to any interested party. Medical information is
typically protected in many instances, and wireless location-based information should not preempt this
protection. If health insurance companies knew your history of doctor visits, they could increase your health care
premiums. Similarly, if an automobile insurance company knew that you typically travel to a section of town that
has a high crime rate, they could increase your insurance premiums.

The Fourth Amendment specifies that an unlawful search occurs when an individual's reasonable expectation of
privacy is violated. It specifies that an unlawful seizure occurs when an individual's property is taken with
unreasonable interference. In high-tech surveillance, scores of cases have been decided based on search and
seizure provisions. The courts have developed a substantial body of decisions on which we base wireless
surveillance legality decisions. With respect to GPS, however, courts have yet to make definitive rulings on the
constitutionality of law enforcement tracking.

Researchers in the commercial litigation department of Thelen, Reid and Priest, L.L.P., note that the United
States Court of Appeals for the Ninth Circuit addressed the installation of a GPS tracking device underneath a
suspect's car without his knowledge. The court held that there was no unconstitutional search on the part of the
law officers because the individual should not have a reasonable expectation of privacy. (His expectation of
safety, that the device not threaten his life or damage his car, was not violated either. Had it been, this would
have been a different case.) Furthermore, the court found that because his car, his property, was not out of his
control, it was also not a seizure violation. Cases such as this will continue to unfold as law enforcement
experiments with technology and the laws are tested and interpreted.

What is important to understand from a business perspective is that new technologies may be covered under
constantly changing bodies of case law and it is essential to stay current with laws related to technologies,
services, or products you may be developing. If you create a system that fits in a tight security model and your
business practices respect the privacy of your users today, this does not mean that legislation will always protect
your practices. GPS units in rental cars sometimes broadcast the user's location to the rental agency. The handling of
this data is precarious and should be treated with care.

Legislation governing the maintenance and dispersal of location information varies greatly across the globe. In
Ireland, for instance, two major wireless carriers maintain what they call customer locator records. These records
contain information about where a user is located, with an accuracy of about 10–15 yards. The location
information is available during the time the phone is powered on, not just when it is placing, receiving, or in the
middle of a call.
These companies believed that they were required by Irish statute to maintain the records and had been storing
them for more than six years. The Irish Council for Civil Liberties, shocked when it learned of this storage,
investigated the matter. It turns out that holding a locator record for more than a few months is against Irish law
and European Union (EU) regulations. The Irish Data Protection Commission is attempting to reconcile privacy
and data storage regulations.

The EU has strict data privacy laws, far more stringent than U.S. privacy legislation. Some confusion exists in
the EU about which data must be stored and which data must not be stored. This presents a problem for
businesses that strive to be in compliance but are at a loss for explicit guidelines under the law. At present, U.S.
and British organizations are lobbying the EU to relax its strict data privacy regulations in favor of allowing
companies to mine data collected on their wireless users.

Big Brother Catches Speed Demon


A patron rented a car from a rental agency in Connecticut during the summer of 2001. He skimmed
the rental legal agreement, signed the form, and departed for Virginia. A few days later he used his
ATM card and discovered that the rental car agency had deducted $450 from his account. It turned
out that this unlucky patron neglected to read a clause in his contract stating that his vehicle was
equipped with a GPS tracking device and that sustaining speeds above 80 miles per hour for more
than two minutes would result in a fine. The gentleman was fined for three such incidents. The rental
agency maintains that it used this tracking system so that it could track stolen or lost cars, not as a
money-making tactic. Is this big brother at work? Are our roads safer for this? Do you read your
entire rental car contract? Should you? Should you have to? Apparently, if you don't, it can cost you.

The Middle Ground Answer


The requirements for effective privacy protection are simple and concise. If implemented correctly, a few
alterations can produce a mutually beneficial result for consumers and businesses. The Cellular
Telecommunications Industry Association (CTIA) set forth guidelines for determining appropriate privacy
protection, based on other wired industry privacy standards. The association wants consumers to feel comfortable
and safe using wireless technology and believes that the onus for providing comfort and safety falls on all
industry players. The CTIA proposed four principles to the FCC and suggested that the standards be used for
managing how vendors, manufacturers, service providers, and carriers collect and use location-based and
personal information. The principles are as follows:

• Inform all customers about the collection and use of their information.

• Allow a meaningful opportunity for consumers to agree to the collection or use of the information
(before using it).

• Protect the security and integrity of any data collected, allow customers access to the data, and give
them the opportunity to refute and revise data about themselves.

• Keep privacy rules consistent across platforms, applications, technologies, and use cases so that users
are not unduly burdened when exercising their privacy options.

These principles should be applied to any system being designed. They have high-level and low-level technical
implications, coupled with a widespread effect on business. Consumers will demand privacy. Building it in from
the start saves time and money and gives your product or system an attractive market differentiator.

We are certainly not advocating against any sort of personalized marketing—we assert only that controlling the
delivery and receipt of this information should be left up to each individual. A recipe for success in using your
customers' personal information to provide additional recommendations should combine the principles just listed.
Personalization, consent, and control are of utmost importance in this day and age. As we mentioned when
discussing the need for a comprehensive, provable, and well-tailored security plan, customers will demand it. No
longer is there tolerance for security and privacy weaknesses in products or services. Consumers want a
guarantee they can trust before handing over their money. Who can blame them? Effective privacy practices and
policies enable consumers to maintain the control they need but do not obstruct businesses or marketing
effectiveness.

Progress in the Wired World


Although security in the wired world is becoming standardized and advanced, privacy is still a hot topic and up
for much consideration and change. The current progress is being made by privacy and technology industry
groups alike. Research on privacy tools, Internet filters, privacy requirements, and privacy legislation will be
ongoing. Exploitations of consumer information will rear their ugly heads if corporations do not espouse and
enforce strict, clear, feasible privacy policies. The wireless world has a chance to jump ahead and learn from the
mistakes of its wired counterparts. Privacy should be built in from the start, intertwined with security, and given
due attention.

Part IV: I-ADD


Chapter 9. Identify Targets and Roles
Chance favors the prepared mind.

—Louis Pasteur

Now we begin to apply our I-ADD security analysis process, described in Chapter 2, "Security Principles." As
you may recall, the I-ADD security analysis process consists of four phases:

• Identify targets and roles.

• Analyze known attacks, vulnerabilities, and theoretical attacks, generating mitigations and protections.

• Define a strategy for security, mindful of security/functionality/management trade-offs.

• Design security in from the start.

Identify Targets
The first step in the process is to identify the system's high-level functional blocks. In Chapter 2, we identified
six high-level functional blocks of a typical wireless system (see Figure 9.1). After the blocks are identified, an
examination of each is performed to identify the resource or information targets within it that should be
protected. After you break down the wireless system to its fundamental components and produce a list of targets,
you examine these targets and generate a list of associated roles.

Figure 9.1. A typical high-level wireless system

The Wireless Device


We begin our examination of each of these high-level blocks with the wireless device. At this highest level, the
only obvious target is the device itself. A statement of the target at this level is something like the following:

Wireless Device

The wireless device itself

Although this may seem obvious, it is provided here to introduce a methodical and consistent method for
identifying targets, or components of a system that need to be protected. This is as far as you can go at this level,
so you repeat the process at the next lower functional level (see Figure 9.2).

Figure 9.2. A wireless device broken down to the next functional level

There is no right or wrong way to determine how to break the functional blocks down to their next level.
Experience and trial and error yield the best breakdown for any given system. A method or approach that works
well for one system may not provide adequate results for another. Should you choose an alternative breakdown,
such as that shown in Figure 9.3, you may encounter repeated functional blocks at lower levels. You may have
functional blocks with certain branches that can no longer be broken down and other branches that continue for
several levels, as shown in Figure 9.4.

Figure 9.3. An alternative breakdown of a multifunction phone

Figure 9.4. A continued breakdown of a multifunction phone


This does not matter, as long as you examine all aspects of the functionality of the system component being
analyzed. If our approach is followed, you cannot help but cover all aspects of the system. Figure 9.5 shows a
possible breakdown of the multifunction phone depicted in Figure 9.4, but following the delineation started in
Figure 9.2.

Figure 9.5. An alternative breakdown of a multifunction phone

We could debate whether auto dial is an offline or online function (it is assumed to be an online function for this
example). The intent is not to break down an actual cell phone completely but to demonstrate that the same result
can be reached with differing approaches. An actual cell phone can have many additional features and
administrative functions, and the transceiver could be broken down to transmitter and receiver or to
administrative/overhead transmissions and payload transmissions, and so on. The important thing to notice is that
the same 10 branches of the breakdown tree are present in both Figure 9.4 and Figure 9.5. The ends of these
branches are microphone/speaker, keypad/display, usage monitor, settings, contacts, e-mail read, e-mail
compose, auto dial, speech, and transceiver. The choice of functional breakdown is left to your preference and
the type of application or device being analyzed.
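One lightweight way to keep such a breakdown honest is to record it as a data structure and walk it mechanically, so no branch is skipped when you later attach targets. The Python sketch below uses the leaf names from the phone example; the grouping under intermediate blocks is one possible arrangement, and everything else is illustrative.

phone = {
    "User Interface": {"Microphone/Speaker": {}, "Keypad/Display": {}},
    "Offline Functions": {
        "Usage Monitor": {}, "Settings": {}, "Contacts": {},
        "E-mail Read": {}, "E-mail Compose": {},
    },
    "Online Functions": {"Auto Dial": {}, "Speech": {}},
    "Transceiver": {},
}

def leaves(tree, path=()):
    # Walk the breakdown and yield every leaf block with its full path,
    # so each one can be examined for targets without being overlooked.
    for name, children in tree.items():
        if children:
            yield from leaves(children, path + (name,))
        else:
            yield " / ".join(path + (name,))

for block in leaves(phone):
    print(block)   # prints the same 10 leaf blocks regardless of grouping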

Now that we have shown a breakdown for a typical multifunction cell phone, let's assume that the wireless
device is a typical wireless PDA. This serves two purposes. First, you do not have to consider voice
communications. Second, you will not jump ahead because you already know how the next level will be broken
out. Figure 9.6 shows the wireless PDA broken down to the second functional level. Examining each functional
block, you repeat the earlier process of identifying targets or components to protect.

Figure 9.6. A wireless PDA showing the second functional level


The User Interface

Examining the user interface at this level, you consider the need to protect the display and keys from damage or
inadvertent input while the device is being transported (after all, the whole point of wireless is to enable
mobility). Now, you may be wondering, why are we concerned about damage to the display in a security book?
Recall that one of the security principles discussed in Chapter 2 is Integrity. We assert that integrity applies not
only to data but also to the system. Under this premise, the ability to access data on demand is a security concern.
Furthermore, if the screen becomes damaged, can you guarantee that the information you receive is accurate? It
is not outside the realm of possibility for a user to misinterpret a letter because the confirmation code received
contains an O instead of a U and the display is damaged where those pixels should be. Arguably, this would fall
under the Development and Operational principle of Functionality or Utility. This is certainly true, but also recall
that these principles are often related and interdependent. Your list now looks like this:

Wireless Device

The wireless device itself

User Interface

The physical interface

Access to the user interface

Offline Functions

In examining offline functions, several potential targets come to mind. Personal data— such as information in an
address book or calendar files (names, addresses, phone numbers, or public and private keys)—stored on the
PDA should be protected from unauthorized access. As m-commerce becomes more prevalent, PDAs will store
bank account, brokerage, and credit card information that must be protected. Corporate or other nonpersonal
information housed on the PDA should be protected. The list of data potentially stored on a PDA extends as far
as the imagination (and device engineers) will allow. The point is that no one other than those who are authorized
should have access to information stored on the PDA.

Online Functions

In examining online functions, the same offline concerns apply. The difference is that unauthorized access is
obtained as the information transits the air or the wired network. In addition to data, the user's activity and usage
patterns should not be available to unauthorized parties. This introduction of additional data to protect is not the
whole picture, though. The user's location and movements are also in need of protection, with the incorporation
of GPS technology into wireless devices. Finally, spoofing the user (the use of the device or a similar device by
an unauthorized user pretending to be an authorized user) to obtain service or data should be disallowed.

The Transceiver

The transceiver should be protected from tampering by someone who has gained unauthorized access to the
device. By way of example, the transceiver could be changed in such a way that it always accesses a different
service provider's transceiver or an attacker's transceiver. (Spoofing the device's service provider, vulnerabilities,
and attacks are discussed in greater detail in Chapter 10, "Analyze Attacks and Vulnerabilities.") The attacker
then communicates with the service provider, on the user's behalf, thereby giving the attacker the ability to
monitor and control the user's activities.

Now, if you are imagining the intricacies that must fall into place for this to occur, you may immediately think
that this is an awfully elaborate man-in-the-middle attack and not very likely to occur against the average
wireless user. Although we concur, keep in mind that the goal of this phase is to identify targets for
completeness, separate from any assessment of vulnerability or likelihood of realization. The task of prioritizing
and making those kinds of trade-offs occurs during the I-ADD define phase. To be aware of the full set of risks
associated with a given system, all possible attacks must be examined. Ruling out the least feasible ones is the
secondary and simpler part.

Each functional block at this level is then broken down to the next functional level. We will not do so here
because the discussion would become too dependent on the specifics of the PDA or application being used.
Further, many of the preceding issues would simply be repeated for each of the lower-level blocks, particularly
under the two Functions boxes. However, in analyzing a specific PDA or application for wireless use, this
process should continue to the same depth as the functional design process to ensure that security issues are
considered for these lower-level functional blocks as well.

Examining the targets list at this point yields the following:

Wireless Device

The wireless device itself

User Interface

The physical interface

Access to the user interface

Offline Functions

Personal data on the PDA

Corporate or third-party information

Online Functions

Personal data being sent

Corporate or third-party information being sent

User online activities, usage patterns

Location and movement

Access to network and online services

Transceiver

The transceiver itself

The Service Provider


The next functional block to examine is the transceiver of the service provider (refer to Figure 9.1). For the sake
of brevity, we use the term transceiver here, although the component we are referring to is the service provider
infrastructure, which provides wireless connectivity between the wireless device and the rest of the wired world.
At this level, the transceiver needs to be physically protected. Logically, it needs to protect its services from
unauthorized use. From the wireless side, the transceiver needs to ensure that users are authorized to use its
services. From the wired side, the transceiver—or more appropriately, the service provider—needs to ensure that
its services, and through them access to its wireless subscribers, are available only to authorized entities.

As with the wireless device, when this level is complete, you break it down to the next functional level (see
Figure 9.7).
Figure 9.7. A second-level breakdown of the transceiver

The Transceiver

For our purposes, we do not need to drill further beyond the higher-level targets. If a functional block is
identified, it should be listed and retained so that there will be no confusion about whether it was considered by
others or during a review at some point in the future.

This is an appropriate time to state something that may or may not be obvious. A common target across all
functional elements is physical protection. Having physical access to a device or resource makes an attacker's job
much easier. Hence, perimeter security fencing, the presence of armed guards, as well as locks and alarms on
buildings contribute greatly to overall security in wired systems. With wireless systems, this fundamental aspect
of security goes right out the window. Wireless systems eliminate the need for legitimate and illegitimate users to
have physical access to the network. Put another way, unless you encase the wireless system in an RF shielded
enclosure, an attacker is going to be able to identify the network, and no number of armed guards around the
tower is going to prevent her from attempting to access the system if that is her intent. This doesn't mean that you
should just pack up your bags and head home. Quite the contrary, you need to acknowledge this fact and design
systems that are secure in spite of this big plus in the attacker's column.

The Administrative Server

Two additional targets become apparent when examining this functional block. First is user-specific data, which
must be protected from unauthorized disclosure. Second is corporate proprietary data or resources, which must
be protected from unauthorized disclosure. We will not break the service provider down to additional functional
levels. For our purposes, this is sufficient. However, we do want to point out that the administrative server can be
broken down to several additional levels, depending on the service provider architecture. Potential lower-level
functional blocks would be authentication functions, billing functions, fraud detection functions, and
performance monitoring functions.

The Network Server

This functional block is likely to have corporate proprietary data or resources that must be protected from
unauthorized disclosure. As with the administrative server functional block, we will not break this functional
block down any further because it quickly becomes provider-specific.

This completes the service provider functional block, and the targets list now looks like this:

Wireless Device

The wireless device itself

User Interface

The physical interface

Access to the user interface

Offline Functions

Personal data on the PDA


Corporate or third-party information

Online Functions

Personal data being sent

Corporate or third-party information being sent

User online activities, usage patterns, location and movement

Access to network and online services

Transceiver

The transceiver itself

Transceiver (Service Provider)

The transceiver itself

The transceiver services

Access to its subscribers

Transceiver

Administrative Server

User-specific data

Corporate proprietary data and resources

Network Server

Corporate proprietary data and resources

The identify phase is then continued to the next functional block of Figure 9.1, the gateway. The gateway's role is
discussed in Chapter 1, "Wireless Technologies." Although the term gateway is most often associated with cellular
phones, its function of converting standard Web pages into the format used by wireless devices is common to many
wireless systems. These gateways can be co-located with the Web servers or with the wireless service providers.

The Gateway

Examining the high-level functional block, you can readily identify several targets. The gateway must be
physically protected from loss or theft. User-specific data must be protected from unauthorized disclosure, as
must user data passing through the gateway. Corporate proprietary data and resources must be protected from
unauthorized disclosure. Third-party data must be protected from unauthorized disclosure as it transits the
gateway. The integrity of the data processed by the gateway must be maintained.

The gateway can be broken down to additional functional levels, but we will not do so here. By now, the process
should be clear, so we do not want to belabor the point. Likewise, we will not break down the remaining high-
level functional blocks listed in Figure 9.1. The following is the complete target list:

Wireless Device

The wireless device itself

User Interface

The physical interface


Access to the user interface

Offline Functions

Personal data on the PDA

Corporate or third-party information

Online Functions

Personal data being sent

Corporate or third-party information being sent

User online activities, usage patterns, location and movement

Access to network and online services

Transceiver

The transceiver itself

Service Provider

The transceiver itself

The transceiver services

Access to its subscribers

Transceiver

Administrative Server

User-specific data

Corporate proprietary data and resources

Network Server

User data

Corporate proprietary data and resources

Gateway

The physical gateway

User-specific data

User data

Corporate proprietary data and resources

Third-party data transiting the gateway

Web Server

The physical Web server

User-specific data

User data on the Web server

Corporate proprietary data and resources on the Web server

Aggregate commercial data stored on the Web server

User or corporate data in transit

Backend System

The physical backend system

User-specific data on the backend system

User data on the backend system

Corporate proprietary data and resources on the backend system

Aggregate commercial data stored on the backend system

Identify Roles
The second step in the I-ADD process is to identify the roles associated with the system. Let's review what we
mean by roles. A role is simply an individual or group of individuals who takes part in either protecting or
exploiting a target. As we proceed through the process of identifying roles, this should become clear. At this
point, the easiest way to proceed is to go through the targets list and identify the roles associated with each target.
We will not explain these roles in detail here. As you read through the list, try to identify why each role is listed
where it is. We discuss the roles in more detail in the section "Vulnerabilities and Theoretical Attacks" in
Chapter 10.
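A simple way to work through this step, again purely as a bookkeeping suggestion rather than part of the method, is to keep a dictionary keyed by target with the roles recorded against it. The entries in the sketch below mirror the first few items of the mapping that follows.

    # Bookkeeping sketch for the role identification step: each target from the
    # targets list is keyed to the roles recorded against it. Entries mirror the
    # first items of the role-to-target mapping that follows.

    roles_by_target = {
        "Wireless Device / The wireless device itself":
            ["Device manufacturer", "User", "Malicious user"],
        "User Interface / The physical interface":
            ["Device manufacturer", "User", "Environment"],
        "User Interface / Access to the user interface":
            ["Device manufacturer", "App developer", "User", "Environment"],
    }


    def targets_for_role(role):
        """List every target a given role touches; handy when reviewing a single role."""
        return [target for target, roles in roles_by_target.items() if role in roles]


    print(targets_for_role("User"))

Being able to ask the question in both directions—which roles touch a target, and which targets a role touches—is what keeps the long list that follows manageable.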

Malicious Users
You will soon notice the ever-present malicious user. The term malicious is used liberally. What we are referring
to is an individual or group who has the knowledge, skills, or access to compromise a system's security.
Malicious user is a generic category encompassing a variety of roles that deserve additional discussion. A
malicious user can be any of the following.

Organized Crime (Financial Motivation)

These malicious users are capable, motivated, well organized, and well funded. They are intent on operations
such as cloning cell phones or other wireless devices and stealing money, goods, and services. Organized crime
is the most capable category of attackers. Their ability stems from having the resources available to obtain the
necessary hardware, software, and knowledge to mount sophisticated attacks quickly if the potential financial
benefits justify the effort.

Hackers (Nonfinancial Motivation)

These malicious users are also capable, motivated, and well organized and may be well funded. Although hacker
interest in wireless systems may initially be sparked by the financial or proprietary information the system
protects, their attacks are generally focused on achieving notoriety. Attacks that can be expected of hackers
include small-scale and wide-scale disruption of operations and the collection and release of sensitive
information.

Malicious Programmers (Financial or Brand Damage)

These malicious users vary in their technical ability and are usually highly motivated by personal greed,
grievance, or grudge. They are usually not well organized but may possess significant knowledge of the wireless
system and access to internal processes. Malicious programmers can originate from various sources: a
disgruntled employee at a wireless manufacturer; an application programming contractor; operations and support
personnel; a knowledgeable programmer who feels wronged by someone associated with the manufacture,
distribution, or management of a wireless system or device; a programmer who feels wronged by an individual or
a company using wireless systems or devices.

Also in this group we consider attackers with nonmalicious intent whose actions can nevertheless create security
issues, either inadvertently or because of an interest in improving the system's security. The information and
vulnerabilities produced by nonmalicious attackers can be capitalized on by malicious attackers if they are not
promptly addressed by the affected wireless component or system.

Academics and Security Researchers

These attackers are capable, motivated, well organized, and often well funded. Academics and security
researchers can analyze the security of a wireless component or system from an intellectual standpoint to
determine how the system is designed or whether and how potential vulnerabilities have been addressed. They
look at both the theoretical and practical implementation of the system, focusing primarily on issues in their area
of expertise for the purposes of advancing the field, or their standing in the field. Although this group does not
have malicious intent, malicious attackers can use their findings before mitigation or corrections are in place.
This group is more likely to inform the vendor when a vulnerability is detected, before publishing their results,
although this is not guaranteed.

Inexperienced Programmers and Designers

Although they do not fit most standard definitions of a malicious user, inexperienced programmers and designers
can inadvertently create security issues and are considered malicious for this analysis. These inexperienced
personnel are motivated to perform a specific task to support a wireless system, but they do not possess the skill
or experience necessary to execute the task properly. The mistakes and oversights made by these personnel affect
the operation of wireless components and can adversely affect the security of the wireless system. Other attackers
exploit the vulnerabilities generated by inexperienced personnel.

Mapping Roles to Targets


Wireless Device

The wireless device itself

Device manufacturer

User

Malicious user

User Interface

The physical interface

Device manufacturer

User

Environment

Access to the user interface

Device manufacturer

Application (app) developer

User

Environment

Offline Functions

Personal data on the PDA

Device manufacturer

Device support personnel

App developer

App support personnel

User

Malicious device support personnel

Malicious app developer

Malicious app support personnel

Malicious user

Corporate or third-party information

Device manufacturer

Device support personnel

App developer

App support personnel

User

Malicious device support personnel

Malicious app developer

Malicious app support personnel

Malicious user

Online Functions

Personal data being sent

Device manufacturer

Wireless service provider (WSP)

WSP operations, maintenance, and support personnel (OMS personnel)

App developer

App support personnel

User

Malicious WSP

Malicious device support personnel


Malicious WSP OMS personnel

Malicious app developer

Malicious app support personnel

Malicious user

Corporate or third-party information being sent

Device manufacturer

WSP

WSP OMS personnel

App developer

App support personnel

User

Malicious WSP

Malicious device support personnel

Malicious WSP OMS personnel

Malicious app developer

Malicious app support personnel

Malicious user

User online activities, usage patterns, location and movement

Device manufacturer

WSP

WSP OMS personnel

App developer

App support personnel

User

Malicious WSP

Malicious device support personnel

Malicious WSP OMS personnel

Malicious app developer

Malicious app support personnel

Malicious user

Access to network and online services


Device manufacturer

WSP

WSP OMS personnel

App developer

User

Malicious device support personnel

Malicious WSP OMS personnel

Malicious app developer

Malicious user

Transceiver

The transceiver itself

Device manufacturer

Device OMS personnel

User

Malicious device OMS personnel

Malicious user

Service Provider

The transceiver itself

WSP

WSP OMS personnel

Malicious OMS personnel

Malicious user

The transceiver services

WSP

WSP OMS personnel

Malicious OMS personnel

Malicious user

Access to its subscribers

WSP

WSP OMS personnel

Corporate/private servers

Corporate/private server OMS personnel

Content providers

App developer

App support personnel

User

Malicious WSP OMS personnel

Malicious corporate/private servers

Malicious corporate/private server OMS personnel

Malicious content providers

Malicious app developer

Malicious app support personnel

Malicious user

Transceiver

Administrative Server

User-specific data

WSP

WSP OMS personnel

App developer

App support personnel

Malicious WSP OMS personnel

Malicious app developer

Malicious app support personnel

Malicious user

Corporate proprietary data and resources

WSP

WSP OMS personnel

App developer

App support personnel

Malicious WSP OMS personnel

Malicious app developer

Malicious app support personnel


Malicious user

Network Server

User data

WSP

WSP OMS personnel

App developer

App support personnel

Malicious WSP OMS personnel

Malicious app developer

Malicious app support personnel

Malicious user

Corporate proprietary data and resources

WSP

WSP OMS personnel

App developer

App support personnel

Malicious WSP OMS personnel

Malicious app developer

Malicious app support personnel

Malicious user

Gateway

The physical gateway

Gateway manufacturer

OMS personnel

App developer

App support personnel

Malicious OMS personnel

Malicious app developer

Malicious app support personnel

Malicious user

User-specific data

Gateway manufacturer

OMS personnel

App developer

App support personnel

Malicious OMS personnel

Malicious app developer

Malicious app support personnel

Malicious user

User data

Gateway manufacturer

OMS personnel

App developer

App support personnel

Malicious OMS personnel

Malicious app developer

Malicious app support personnel

Malicious user

Corporate proprietary data and resources

Gateway manufacturer

OMS personnel

App developer

App support personnel

Malicious OMS personnel

Malicious app developer

Malicious app support personnel

Malicious user

Third-party data transiting the gateway

Gateway manufacturer

OMS personnel

App developer

App support personnel


Malicious OMS personnel

Malicious app developer

Malicious app support personnel

Malicious user

Web Server

The physical Web server

Web server manufacturer

Web server OMS personnel

Content providers

App developer

App support personnel

Malicious Web server OMS personnel

Malicious content providers

Malicious app developer

Malicious app support personnel

Malicious user

User-specific data

Web server manufacturer

Web server OMS personnel

Content providers

App developer

App support personnel

Malicious Web server OMS personnel

Malicious content providers

Malicious app developer

Malicious app support personnel

Malicious user

User data on the Web server

Web server manufacturer

Web server OMS personnel

Content providers

App developer

App support personnel

Malicious Web server OMS personnel

Malicious content providers

Malicious app developer

Malicious app support personnel

Malicious user

Corporate proprietary data and resources on the Web server

Web server manufacturer

Web server OMS personnel

Content providers

App developer

App support personnel

Malicious Web server OMS personnel

Malicious content providers

Malicious app developer

Malicious app support personnel

Malicious user

Aggregate commercial data stored on the Web server

Web server manufacturer

Web server OMS personnel

Content providers

App developer

App support personnel

Malicious Web server OMS personnel

Malicious content providers

Malicious app developer

Malicious app support personnel

Malicious user

User or corporate data in transit

Web server manufacturer


Web server OMS personnel

Content providers

App developer

App support personnel

User

Malicious Web server OMS personnel

Malicious content providers

Malicious app developer

Malicious app support personnel

Malicious user

Backend System

The physical backend system

Backend system manufacturer

Backend system OMS personnel

App developer

App support personnel

Malicious backend system OMS personnel

Malicious app developer

Malicious app support personnel

Malicious user

User-specific data on the backend system

Backend system manufacturer

Backend system OMS personnel

App developer

App support personnel

Malicious backend system OMS personnel

Malicious app developer

Malicious app support personnel

Malicious user

User data on the backend system

Backend system manufacturer


Backend system OMS personnel

App developer

App support personnel

Malicious backend system OMS personnel

Malicious app developer

Malicious app support personnel

Malicious user

Corporate proprietary data and resources on the backend system

Backend system manufacturer

Backend system OMS personnel

App developer

App support personnel

Malicious backend system OMS personnel

Malicious app developer

Malicious app support personnel

Malicious user

Aggregate commercial data stored on the backend system

Backend system manufacturer

Backend system OMS personnel

App developer

App support personnel

Malicious backend system OMS personnel

Malicious app developer

Malicious app support personnel

Malicious user

As you can see, this can quickly become a long list. Now that we have concluded the identification of the roles, it
is worth making an observation that will assist you in performing future role identification: in general, whenever
people are involved in protecting a target, they almost always also appear in the malicious list against that target.
We are not saying that the same people will be involved, but that the category of people or that group's level of
access can be used maliciously.
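This observation can even be applied mechanically when building the role lists: every human role recorded on the protecting side of a target is a candidate for a corresponding malicious role. The sketch below is only a reviewing aid; the role names come from the mapping above, and the nonhuman Environment role is excluded.

    # Sketch of the observation above: human roles that protect a target are also
    # candidate malicious roles against that target.

    NON_HUMAN_ROLES = {"Environment"}


    def with_malicious_counterparts(protecting_roles):
        """Return the protecting roles plus a 'Malicious ...' candidate for each human role.

        The counterparts are candidates for review, not automatic conclusions: the
        same individuals need not be involved, only the same category of access.
        """
        result = list(protecting_roles)
        for role in protecting_roles:
            if role not in NON_HUMAN_ROLES:
                result.append("Malicious " + role)
        return result


    print(with_malicious_counterparts(["Device manufacturer", "User", "Environment"]))
    # ['Device manufacturer', 'User', 'Environment',
    #  'Malicious Device manufacturer', 'Malicious User']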

This concludes the I-ADD identify phase. You break down the system into functional blocks and then examine
each block to determine which resources or data (targets) require protection at that level. The blocks are then
examined to see whether they should be further broken down to lower-level functional blocks, where the process
is repeated until you reach the lowest-level functional blocks practical for the type of analysis or design you are
conducting. After identifying the targets, you determine the roles that affect the targets. With the roles and targets
identified, you are ready to move to the I-ADD analyze phase.

Chapter 10. Analyze Attacks and Vulnerabilities


. . . man will occasionally stumble over the truth, but usually manages to pick himself up, walk
over or around it, and carry on.

—Winston S. Churchill

The second phase of the I-ADD security process is the analyze phase. During this phase you examine known
attacks, vulnerabilities, and theoretical attacks in order to generate protections and mitigations. These protections
and mitigations are methods or procedures used to inhibit an attacker's ability to exploit a vulnerability or
perform an attack. The protections and mitigations should be identified without consideration for other factors,
such as cost, limits to functionality, or time to implement. Trade-offs are evaluated and decisions are made
during the next I-ADD phase, the define phase.

Known Attacks
Identifying known attacks requires research of security-related Web sites, papers, and trade journals. Although
currently known attacks are few in number, relative to wired systems, they are likely to grow as wireless systems
become more prevalent and provide a richer target for the attacker community. The known attacks we cover here
are specific to the wireless portions of the system. The Web servers, backend servers, and gateways are all
subject to known attacks specific to their hardware platform, operating systems, and ancillary applications. The
importance of specifically examining known attacks separate from theoretical attacks is that known attacks are
likely to be attempted by an attacker when targeting a wireless system. Therefore, known attacks deserve a
higher priority when making trade-offs during the next I-ADD phase.

Device Theft
Device theft is just what it sounds like: the physical theft of the device by an attacker. Fortunately, theft is neither
new nor unique to wireless devices or systems, so the need to protect wireless devices and systems against
physical theft is intuitive to device and system manufacturers. Unfortunately, designing devices or systems that are
resistant to theft is very difficult.

Several mitigations can be employed to minimize the threat. We will not spend much time stating the obvious,
such as locking and alarming rooms that house equipment.

The Man in the Middle


The attacker, by inserting herself between the user and the server, accomplishes the well-known man-in-the-
middle network attack. This insertion is done by gaining access to the logical or physical path between the user
and the server, such as sitting at the user's or the server's access point to the network. Alternatively, the attacker
can spoof the user to the server and the server to the user. In either scenario, the attacker has complete access to
the communications between the user and the server.
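To make the amount of access a man in the middle enjoys concrete, the following sketch is a minimal TCP relay: once the attacker has arranged for the victim's traffic to reach her machine (by ARP cache poisoning, a rogue access point, or control of a legitimate access point, none of which is shown here), a few lines of code are enough to observe and forward everything in both directions. The listening port and upstream host are placeholders.

    # Minimal illustration of the man-in-the-middle vantage point: a TCP relay
    # that logs everything it forwards between the victim and the real server.
    # Getting the victim's traffic to reach this relay is a separate step that is
    # not shown; the addresses below are placeholders.

    import socket
    import threading

    LISTEN_ADDR = ("0.0.0.0", 8080)        # where the victim's traffic arrives
    UPSTREAM_ADDR = ("example.com", 80)    # the server the victim intended to reach


    def pump(src, dst, label):
        """Copy bytes from src to dst, logging each chunk -- the 'complete access'."""
        try:
            while True:
                data = src.recv(4096)
                if not data:
                    break
                print(f"[{label}] {len(data)} bytes: {data[:60]!r}")
                dst.sendall(data)
        except OSError:
            pass
        finally:
            dst.close()


    def handle(client):
        upstream = socket.create_connection(UPSTREAM_ADDR)
        threading.Thread(target=pump, args=(client, upstream, "client->server"), daemon=True).start()
        pump(upstream, client, "server->client")


    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as listener:
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(LISTEN_ADDR)
        listener.listen()
        while True:
            conn, _ = listener.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

Nothing stops such a relay from altering the data rather than merely logging it, which is the spoofing variant described above.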

War Driving
In the 1980s, malicious types began war dialing, calling phone numbers at random in an attempt to locate
unprotected modems and gain access to networks. The early 2000s version of war dialing is war driving: roaming
around with a laptop, wireless NIC, and an antenna, attempting to gain access to wireless networks. As we have
discussed, the vast majority of deployed wireless networks either do not use WEP or use WEP without RSA's Fast
Packet Keying fix, which (more or less) shores up WEP's security. With a $100–150 wireless NIC set in
promiscuous mode and a cheap parabolic grid antenna from Radio Shack, hackers have gained access to
thousands of wireless networks across the United States. In populated areas, war drivers have combined simple
GPS applications with the wireless NIC and antenna and have successfully mapped the locations of thousands of
wireless networks to which they can gain access. No esoteric software or hardware is required. A software
application called AirSnort can analyze intercepted WEP traffic and, after collecting enough data, recover the
WEP key protecting the wireless network.
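None of the tools involved is exotic. As a rough sketch of what the capture side of a war-driving tool does, the following uses the third-party scapy package to log the network name and access point address from 802.11 beacon frames. It assumes root privileges and a wireless interface already placed in monitor mode ("wlan0mon" is a placeholder name), and it performs no key recovery; that is the additional analysis a tool such as AirSnort layers on top of simple capture.

    # Passive 802.11 beacon logging in the spirit of a war-driving scanner.
    # Assumes the third-party scapy package, root privileges, and a wireless
    # interface already in monitor mode; "wlan0mon" is a placeholder name.

    from scapy.all import Dot11Beacon, Dot11Elt, sniff

    seen = set()


    def log_beacon(frame):
        if frame.haslayer(Dot11Beacon):
            bssid = frame.addr2                         # access point MAC address
            ssid_elt = frame.getlayer(Dot11Elt)         # first tagged element carries the SSID
            ssid = ssid_elt.info.decode(errors="replace") if ssid_elt else "<unknown>"
            if bssid not in seen:
                seen.add(bssid)
                print(f"Network '{ssid}' at {bssid}")


    sniff(iface="wlan0mon", prn=log_beacon, store=False)

Pairing each sighting with a GPS reading is all that is needed to produce the coverage maps described above.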

Denial of Service
Denial of service is a class of attacks that take many forms, from subtle to obvious. An obvious denial of service
attack against a wireless system would be to sever the coax cable on the tower between the transceiver and the
antenna. This definitely would deny service to anyone wanting to use that particular tower. A more subtle attack
would be to tie up the system with service requests or to spread a bogus e-mail such as "New and Destructive
Virus," explaining that you should e-mail everyone you know so that they can protect themselves. The desired
result is that the system becomes so bogged down with these e-mails that legitimate traffic cannot be
accommodated.

Another popular denial of service attack is the "Please help, my child is dying." An e-mail is sent saying that
someone, usually a hapless child, is suffering from a terrible affliction. The e-mail goes on to say that a
corporation has agreed to provide X amount for every e-mail it receives regarding this child, so please forward
this e-mail to everyone you know so that this child can be saved. The desired result is to overwhelm the
corporation's servers and cause them to crash.

The DoCoMo E-Mail Virus


As of the writing of this chapter, there have been two similar virus attacks against Japan's DoCoMo cellular
system. These attacks are viruses that can be downloaded into multifunction cellular phones. The viruses cause
the user's phone to automatically dial a number, such as the local emergency number, tying up both the cellular
and emergency systems. With little imagination, you can see how this type of activity can have far-reaching and
dire consequences.

Vulnerabilities and Theoretical Attacks


Identifying vulnerabilities is a difficult process because you are looking for what might occur and trying to
anticipate how an attacker could attempt to exploit the system. The process is a dual-mode analysis in which you
are examining potentially vulnerable areas while anticipating theoretical attacks. Based on the success or failure
of these theoretical attacks, the particular component or resource is identified as vulnerable. Recall that you are
not making any determination at this point about the practicality of an attack or the development trade-offs
necessary to protect or mitigate the vulnerability.

To examine vulnerabilities, you begin at the top of the targets list and place yourself in the malicious roles
identified earlier. You then create theoretical attacks to which these targets would be vulnerable. Experience and
knowledge of the system's inner workings are crucial if you are to have any expectation of identifying all its
potential vulnerabilities. If you are examining an existing system, this requirement may lead you to use the
developers to conduct the vulnerability analysis. This is acceptable as long as the team is evenly balanced with
people who were not involved in the development. The reason is that developers know what they were trying to
accomplish, and they may make assumptions about how the system functions or responds under certain
circumstances. Further, developers know how the system was intended to function, but most attacks attempt to
cause the system to function in a manner in which it was not intended.

Vulnerabilities of the Wireless Device


Similar to identifying targets, you begin at the highest levels and work your way down to the lower functional
levels of the system. In general, the lower functional levels require more detailed knowledge, both for you to
analyze and for an attacker to exploit. However, as with any generality, there are exceptions, particularly with
exploits. Once identified by someone with the necessary knowledge, even the lower functional levels can be
successfully exploited by others with less technical expertise. We discuss this in greater detail throughout the
remainder of the chapter, looking at specific examples. Suffice it to say that for this analysis, you must try to be as
thorough as possible to ensure that the system is fully protected. You begin by looking at the targets identified.

The Wireless Device Itself

The vulnerability of this particular target to loss or theft is not new to wireless. Loss or theft of personal items has
been a concern since our ancient ancestors first grasped the concept of personal property as they huddled around
fires in caves. The vulnerability of wireless devices is that they can be misplaced by users or taken by malicious
users.

User Interface

The user interface should be examined in its two parts: the physical interface and access to the user interface.
These two have different issues that should be acknowledged for completeness of your risk assessment.

The Physical Interface

The physical interface is vulnerable to environmental factors such as water, shock, and abrasion—for example,
dropping the device in a puddle or spilling coffee on the device, dropping it off a table, having it slip out of the
user's hands, having the device slide across a rough surface, and having someone sit on or drive over the device.

Access to the User Interface

The user interface is vulnerable to environmental factors that cause inadvertent input—for example, a cellular
phone in someone's purse being bumped and activated when an object inside the purse depresses the Send key.

Offline Functions

Personal Data on the PDA

Here is where things become more interesting. You examine each of the malicious roles separately to ensure that
you cover all the possible vulnerabilities. Again, this is not guaranteed. To ensure a system's security, you must
review the vulnerabilities in light of new known attacks, updated information on the system, or new theoretical
attacks.

Malicious Device Support Personnel

Personal data stored on the device is vulnerable to malicious device support personnel when the device is taken
in for upgrades, maintenance, or repair. These support personnel may have access to manufacturer bypass and
diagnostic codes, equipment, or utilities that give them access to personal data stored on the device.

Poor or inexperienced device support personnel may inadvertently leave the device in a security bypass or
diagnostic mode that leaves personal data vulnerable.

Malicious App Developer

Malicious application developers can create virus or Trojan Horse (a program that, in addition to providing an
overt, useful function, performs a covert and usually malicious activity) utilities or programs that allow access to
personal data on the PDA.

Poor or inexperienced application developers may not take appropriate security measures in their particular
application, such as failing to clear buffers or overwrite data elements, leaving personal data on the device
vulnerable.

Malicious App Support Personnel

Malicious application support personnel may dupe the user, via social engineering, into providing access, or
information necessary for access, to personal data under the guise of assisting with an application issue.
Alternatively, malicious app support personnel may enable debug or other diagnostic switches within the
software, disabling security mechanisms present in the device or software.

Poor or inexperienced app support personnel may inadvertently leave debug or diagnostic switches enabled
following a support activity, rendering the personal data vulnerable.
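Because the "switch left enabled" failure mode recurs throughout this analysis, it is worth noting, ahead of the mitigation discussion in Chapter 11, that software can check for it itself. The sketch below is one possible form of such a check; the flag names and the configuration format are invented for illustration and are not drawn from any particular device.

    # Illustrative startup check: refuse to run normally if diagnostic or
    # security-bypass switches were left enabled after a support session.
    # The flag names and the configuration file format are invented for this sketch.

    import json
    import sys

    DANGEROUS_FLAGS = ("debug_mode", "auth_bypass", "verbose_trace")


    def check_diagnostic_flags(config_path):
        with open(config_path) as f:
            config = json.load(f)
        left_on = [flag for flag in DANGEROUS_FLAGS if config.get(flag)]
        if left_on:
            print("Refusing to start: diagnostic switches still enabled: " + ", ".join(left_on))
            sys.exit(1)


    if __name__ == "__main__":
        check_diagnostic_flags("device_config.json")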

Malicious User

Personal data is vulnerable to a malicious user who has gained access to the device. Recall that malicious user is
a catchall term encompassing a variety of activities. Although this simple statement is adequate for describing the
vulnerability, the complexity of the role becomes important and should not be forgotten when generating
mitigations and protections or performing the security-functionality trade-offs. For example, a malicious user
may pose as a member of one of the legitimate functional roles and become the functional equivalent of one of
the malicious roles just discussed.

Corporate or Third-Party Information

From a vulnerability perspective, no distinction exists between corporate and third-party information and
personal data. There may be some distinction when it comes to the security-functionality trade-offs. For example,
a device manufacturer may be willing to limit some functionality to ensure the protection of the user's personal
data but may decide that the same trade-off for corporate data is unnecessary because its obligation ends with
the user.

Online Functions

Personal Data Being Sent

This target is personal data as it is in transit. You will notice that all the previous roles are present, with the
addition of a few others because of the data's increased exposure during transport.

Malicious Wireless Service Provider (WSP)

Your first thought may be, "How could a WSP be malicious?" In general, WSPs are not. They are in the business
of providing wireless services, so performing any untoward activity would be counterproductive. However,
consider the following example, based on the office complex scenario introduced in Chapter 1, "Wireless
Technologies."

Suppose that AdEx Inc., as a courtesy to its clients, offers wireless access through its network. NitroSoft is
visiting AdEx for a presentation of a proposed new marketing campaign. During breaks in the presentation, the
NitroSoft representative sends and receives e-mail via his wireless PDA. This information is related to the
campaign, including price limits and current bids from other representatives attending similar presentations
around the country. The connectivity is much appreciated by the NitroSoft representative because he can
discreetly communicate the current status to his NitroSoft co-workers to ensure that NitroSoft receives the best
marketing campaign for the money.

What the NitroSoft representative doesn't know is that someone on the AdEx IT staff is monitoring his
communications and relaying any pertinent information to AdEx's marketing staff so that they will be well
informed of his feelings about the presentation, any misgivings he may have, what NitroSoft's bottom line will be,
and possibly what the bids are from other marketing firms.

In this example, is AdEx just doing smart business? After all, AdEx owns the wireless connectivity hardware,
and by extension, everything it transports. Or is AdEx a malicious WSP? Unless AdEx had the NitroSoft
representative sign an agreement to access its wireless network and this agreement contained a waiver granting
AdEx access to anything transmitted over the network, we would vote for the latter. Therefore, personal data
transmitted by the device may be vulnerable to a malicious WSP.

Malicious Device Support Personnel

Personal data transmitted by the device can be made vulnerable by malicious device support personnel when the
device is taken in for upgrades, maintenance, or repair. These support personnel may have access to
manufacturer bypass and diagnostic codes, equipment, or utilities that allow them to bypass security features,
leaving personal data transmitted by the device vulnerable.

Poor or inexperienced device support personnel may inadvertently leave the device in a security bypass or
diagnostic mode that renders personal data vulnerable during transit.

Malicious WSP OMS Personnel

Personal data transmitted by the device is vulnerable to malicious WSP OMS personnel who have access to the
WSP transceiver and wireless network equipment.
Malicious App Developer

Malicious application developers may create virus or Trojan Horse utilities or programs that cause the
transmitted data to be vulnerable. An example would be an encryption utility containing nonunique or known
keys. To the user, the data appears encrypted, but it is readily accessible to unauthorized individuals who know
the key. Alternatively, an e-mail utility may send a blind copy of every message sent or received by the device to
a predefined address.

Poor or inexperienced application developers may not take appropriate security measures regarding their
particular application, rendering personal data vulnerable during transit.
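The known-key problem described above is easy to demonstrate. If every copy of an "encryption" utility ships with the same key baked in, anyone who extracts that key from any copy can read the traffic of every user. The sketch below uses a trivial XOR cipher only to stay self-contained; the flaw being illustrated is the shared hard-coded key, not the choice of cipher.

    # The point of the known-key example above: "encryption" with a key that is
    # identical in every copy of the application protects nothing from anyone who
    # has, or has extracted, that key. XOR is used only to keep the sketch
    # self-contained; the flaw is the shared hard-coded key, not the cipher choice.

    HARDCODED_KEY = b"same-key-in-every-copy"   # shipped inside every install


    def xor_cipher(data, key):
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


    # On the victim's device the data "looks encrypted"...
    ciphertext = xor_cipher(b"current bid: $400,000", HARDCODED_KEY)
    print(ciphertext)

    # ...but anyone who pulled the key out of any other copy of the app reads it directly.
    print(xor_cipher(ciphertext, HARDCODED_KEY).decode())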

Malicious App Support Personnel

Malicious application support personnel may dupe the user, via social engineering, into providing access, or
information necessary for access, to personal data under the guise of assisting with an application issue.
Alternatively, malicious app support personnel may enable debug or other diagnostic switches within the
software, disabling security mechanisms present in the device or software.

Poor or inexperienced app support personnel may inadvertently leave debug or diagnostic switches enabled at the
conclusion of a support activity, rendering the personal data vulnerable during transit.

Malicious User

Personal data is vulnerable to a malicious user who has access to, or has built a receiver that can monitor, the
transmission of the PDA and can reconstruct the data transmitted and received. Again, a malicious user can
assume any of the preceding malicious roles to gain access necessary to exploit a vulnerability.

Corporate or Third-Party Information Being Sent

As with offline functions, from a vulnerability perspective there is no distinction between corporate or third-party
information and personal data in transit.

User Online Activities, Usage Patterns, Location and Movement

This category can be considered a subset or equivalent to user personal data as far as vulnerabilities are
concerned. The difference lies in how this type of information can be protected, which we discuss in Chapter 12,
"Define and Design."

Access to Network and Online Services

As used here, access to network and online services means the use of the device or information on the device to
gain access to network and online services. This distinction separates it from similar activities occurring against
the service provider, which we will discuss shortly.

Malicious Device Support Personnel

User network and online services access credentials are vulnerable to device support personnel who have access
to the device for upgrade, maintenance, or repair purposes. Device support personnel may have access to
manufacturer bypass and diagnostic codes, equipment, or utilities that give them access to network and online
services access credentials on the device.

Malicious WSP OMS Personnel

User network and online services access credentials are vulnerable to WSP OMS personnel when this
information is received and processed by the WSP equipment. The user may also be coerced into providing
network or online access credentials to WSP OMS personnel.

Malicious App Developer


User network and online services access credentials are vulnerable to applications that can copy and store, or
forward, these credentials to the developer.

Malicious User

Access to network and online services is vulnerable to a malicious user. A malicious user may gain access to the
device and retrieve network and online services credentials to be used on another device or at a later time. A
malicious user may also monitor transmissions (as discussed under "Malicious User" for personal data being sent)
to obtain network and online services credentials. Again, a malicious user can assume any of the preceding
malicious roles to gain the access necessary to exploit a vulnerability.

Transceiver

The Transceiver Itself

Malicious Device OMS Personnel

The transceiver is vulnerable to manipulation or modification by malicious device OMS personnel.

Malicious User

The transceiver is vulnerable to manipulation or modification by a malicious user. For example, this may be done
to assist a man-in-the-middle attack.

Vulnerabilities of the Service Provider


The Transceiver Itself

When we use the term transceiver in regard to the service provider, we are considering a transceiver system
consisting of the antenna array, tower, coax, transceiver, and switching equipment.

Malicious OMS Personnel

The transceiver is vulnerable to manipulation or modification by malicious OMS personnel.

Malicious User

The transceiver is vulnerable to manipulation or modification by a malicious user. For example, this may be done
to deny service to areas or individuals at crucial times.

The Transceiver Services

Malicious OMS Personnel

The transceiver services are vulnerable to manipulation or modification by malicious OMS personnel—for
example, granting network access to unauthorized users by providing maintenance or diagnostic access
credentials to these unauthorized users.

Malicious User

The transceiver services are vulnerable to manipulation or misuse by a malicious user. For example, a malicious
user may obtain access credentials to utilize the service without paying for the privilege.

Access to Its Subscribers

Malicious WSP OMS Personnel

The service provider is vulnerable to WSP OMS personnel who can grant access to the network, and thereby its
subscribers, for spam or other unsolicited purposes.

Malicious Corporate/Private Servers


The service provider is vulnerable to malicious corporate or private servers that access the service provider to
deliver advertising, marketing, or other spam to the service provider's subscribers.

Malicious Corporate/Private Server OMS Personnel

The service provider is vulnerable to malicious corporate or private server OMS personnel who utilize authorized
servers to perform unauthorized access to subscribers. For example, service provider subscribers receive stock
quotes as part of their service plan. OMS personnel with access to the quote server that provides this service
could alter the server to deliver anything in addition to, or in place of, the stock quotes.

Malicious Content Providers

The service provider is vulnerable to malicious content providers who use the service provider resources to spam
or otherwise deliver their payload to the subscribers.

Malicious App Developer

The service provider is vulnerable to malicious app developers who include back doors or Trojan Horse utilities
or programs that the service provider uses. These app developers can then use the privileged access available to
their legitimate applications to obtain illegitimate access to the subscribers.

Malicious App Support Personnel

Service provider subscribers are vulnerable to malicious application support personnel who enable debug or
other diagnostic switches within the software, disabling security mechanisms that protect access to the
subscribers.

Poor or inexperienced app support personnel may inadvertently leave debug or diagnostic switches enabled at the
conclusion of a support activity, leaving access to the subscribers vulnerable.

Malicious User

The service provider is vulnerable to malicious users gaining network access to allow them access to the service
provider's subscribers, either by these malicious users' acting in one of the preceding roles or by exploiting a
vulnerability in the overall service provider's system.

Transceiver

Recall that there were no targets for the transceiver beyond those identified for the higher-level functional block.

Administrative Server

By administrative server, we are referring to the billing, maintenance, and support systems associated with
keeping the wireless infrastructure functional.

User-Specific Data

User-specific data is information, such as credit card numbers, addresses, finances, and call and access log
information, that resides on the administrative server.

Malicious WSP OMS Personnel

User-specific data resident on the administrative server is vulnerable to malicious WSP OMS personnel who
exploit their system access to gain access to user-specific data.

Malicious App Developer

User-specific data resident on the administrative server is vulnerable to malicious app developers who include
back doors or Trojan Horse utilities or programs that the service provider uses. These app developers then use the
privileged access available to their legitimate applications to obtain illegitimate access to user-specific data.
Malicious App Support Personnel

User-specific data is vulnerable to malicious application support personnel who enable debug or other diagnostic
switches within the administrative server software that disable security mechanisms.

Poor or inexperienced app support personnel may inadvertently leave debug or diagnostic switches enabled at the
conclusion of a support activity, leaving the user-specific data vulnerable on the administrative server.

Malicious User

User-specific data resident on the administrative server is vulnerable to malicious users' gaining access to the
service provider's network and thereby accessing user-specific data. The service provider's network access may
be obtained by these malicious users' acting in one of the preceding roles or exploiting a vulnerability in the
overall service provider's system.

Corporate Proprietary Data and Resources

Corporate proprietary data and resources refer to information resident on the administrative server that provides
network details, fraud detection scheme information, and the like.

Malicious WSP OMS Personnel

Corporate proprietary data and resources resident on the administrative server are vulnerable to malicious WSP
OMS personnel who exploit their system access to gain access to corporate proprietary data and resources.

Malicious App Developer

Corporate proprietary data and resources resident on the administrative server are vulnerable to malicious app
developers who include back doors or Trojan Horse utilities or programs that the service provider uses. These
app developers can then use the privileged access available to their legitimate applications to obtain illegitimate
access to corporate proprietary data and resources.

Malicious App Support Personnel

Corporate proprietary data and resources are vulnerable to malicious application support personnel who enable
debug or other diagnostic switches within the software that disable security mechanisms present in the
administrative server.

Poor or inexperienced app support personnel may inadvertently leave debug or diagnostic switches enabled at the
conclusion of a support activity, leaving corporate proprietary data and resources vulnerable on the administrative
server.

Malicious User

Corporate proprietary data and resources resident on the administrative server are vulnerable to malicious users
gaining access to the service provider's network, and thereby access to corporate proprietary data and resources.
The service provider's network access may be obtained by these malicious users' acting in one of the preceding
roles or exploiting a vulnerability in the overall service provider's system.

Network Server

User-Specific Data

User-specific data is information such as credit card numbers, addresses, and data such as e-mail and Web traffic
that transits the network server.

Malicious WSP OMS Personnel

User-specific data transiting the network server is vulnerable to malicious WSP OMS personnel who have access
to the network server.
Malicious App Developer

Malicious application developers can create virus or Trojan Horse utilities or programs that cause the transit data
to be vulnerable. An example would be a network routing utility containing code that routes a copy of the transit
data to the app developer.

Poor or inexperienced application developers may not take appropriate security measures regarding their
particular application, rendering user data vulnerable during transit.

Malicious App Support Personnel

User-specific data is vulnerable to malicious application support personnel who enable debug or other diagnostic
switches within the software that disable security mechanisms present in the network server.

Poor or inexperienced app support personnel may inadvertently leave debug or diagnostic switches enabled at the
conclusion of a support activity, leaving the user data vulnerable during transit of the network server.

Malicious User

User-specific data is vulnerable to a malicious user who has access to, or has assumed one of the preceding roles
to get access to, the network server.

Corporate Proprietary Data and Resources

Much the same as for the administrative server, corporate proprietary data and resources refer to information
resident on the network server. We are referring to the system that connects the service provider's transceivers to
the remainder of the wired world.

Malicious WSP OMS Personnel

Corporate proprietary data and resources resident on the network server are vulnerable to malicious WSP OMS
personnel who exploit their system access to gain access to corporate proprietary data and resources.

Malicious App Developer

Corporate proprietary data and resources resident on the network server are vulnerable to malicious app
developers who include back doors or Trojan Horse utilities or programs that the service provider uses. These
app developers can then use the privileged access available to their legitimate applications to obtain illegitimate
access to corporate proprietary data and resources.

Malicious App Support Personnel

Corporate proprietary data and resources are vulnerable to malicious application support personnel who enable
debug or other diagnostic switches within the software that disable security mechanisms present in the network
server.

Poor or inexperienced app support personnel may inadvertently leave debug or diagnostic switches enabled at the
conclusion of a support activity, leaving corporate proprietary data and resources vulnerable on the network
server.

Malicious User

Corporate proprietary data and resources resident on the network server are vulnerable to malicious users
gaining access to the service provider's network, and thereby access to corporate proprietary data and resources.
The service provider's network access can be obtained by these malicious users' acting in one of the preceding
roles or exploiting a vulnerability in the overall service provider's system.

Vulnerabilities of the Gateway


The gateway is functionally not much more than a server that performs processing to convert Web traffic to a
form compatible with the wireless device. You will notice that the vulnerabilities listed mirror those for the
administrative and network servers. The Web server and backend server also have similar vulnerabilities.
Therefore, we will not cover the vulnerabilities for the Web server and backend server. Further, linking those
servers to a wireless system rather than to a totally wired one introduces no additional vulnerabilities, with the
exception that an attacker no longer needs physical access to the network.

The Physical Gateway

Malicious OMS Personnel

The gateway is vulnerable to manipulation or modification by malicious OMS personnel.

Malicious App Developer

The gateway is vulnerable to malicious app developers who include back doors or Trojan Horse utilities or
programs that the gateway uses. These app developers can then use the privileged access available to their
legitimate applications to obtain illegitimate access to gateway services.

Malicious App Support Personnel

The gateway is vulnerable to malicious application support personnel who enable debug or other diagnostic
switches within the software that disable security mechanisms present in the gateway.

Poor or inexperienced app support personnel may inadvertently leave debug or diagnostic switches enabled at the
conclusion of a support activity, leaving the gateway vulnerable.

Malicious User

The gateway is vulnerable to manipulation or modification by a malicious user who has assumed one of the
preceding roles or has otherwise gained access to the gateway.

User-Specific Data

Malicious OMS Personnel

User-specific data transiting or resident on the gateway is vulnerable to malicious OMS personnel who have
access to the gateway.

Malicious App Developer

Malicious application developers can create virus or Trojan Horse utilities or programs that cause the user-
specific data to be vulnerable.

Poor or inexperienced application developers may not take appropriate security measures regarding their
particular application, rendering user-specific data vulnerable during transit or storage on the gateway.

Malicious App Support Personnel

User-specific data is vulnerable to malicious application support personnel who enable debug or other diagnostic
switches within the gateway software that disable security mechanisms.

Poor or inexperienced app support personnel may inadvertently leave debug or diagnostic switches enabled at the
conclusion of a support activity, rendering the user-specific data vulnerable during transit or storage on the
gateway.

Malicious User

User-specific data is vulnerable to a malicious user who has access to, or has assumed one of the preceding roles
to get access to, the gateway.
User Data

Malicious OMS Personnel

User data transiting the gateway is vulnerable to malicious OMS personnel who have access to the gateway.

Malicious App Developer

Malicious application developers can create virus or Trojan Horse utilities or programs that cause the user data to
be vulnerable.

Poor or inexperienced application developers may not take appropriate security measures regarding their
particular application, rendering user data vulnerable during transit of the gateway.

Malicious App Support Personnel

User data is vulnerable to malicious application support personnel who enable debug or other diagnostic switches
within the gateway software that disable security mechanisms.

Poor or inexperienced app support personnel may inadvertently leave debug or diagnostic switches enabled at the
conclusion of a support activity, rendering the user data vulnerable during transit of the gateway.

Malicious User

User data is vulnerable to a malicious user who has access to, or has assumed one of the preceding roles to get
access to, the gateway.

Corporate Proprietary Data and Resources

Malicious OMS Personnel

Corporate proprietary data and resources on the gateway are vulnerable to malicious OMS personnel who have
access to the gateway.

Malicious App Developer

Malicious application developers can create virus or Trojan Horse utilities or programs that cause the corporate
proprietary data and resources to be vulnerable.

Poor or inexperienced application developers may not take appropriate security measures regarding their
particular application, leaving corporate proprietary data and resources vulnerable on the gateway.

Malicious App Support Personnel

Corporate proprietary data and resources are vulnerable to malicious application support personnel who enable
debug or other diagnostic switches within the gateway software that disable security mechanisms.

Poor or inexperienced app support personnel may inadvertently leave debug or diagnostic switches enabled at the
conclusion of a support activity, rendering the corporate proprietary data and resources accessible from the
gateway vulnerable.

Malicious User

Corporate proprietary data and resources are vulnerable to a malicious user who has access to, or has assumed
one of the preceding roles to get access to, the gateway.

Third-Party Data Transiting the Gateway

Malicious OMS Personnel


Third-party data transiting or resident on the gateway is vulnerable to malicious OMS personnel who have access
to the gateway.

Malicious App Developer

Malicious application developers can create virus or Trojan Horse utilities or programs that cause third-party data
to be vulnerable.

Poor or inexperienced application developers may not take appropriate security measures regarding their
particular application, rendering third-party data vulnerable during transit or storage on the gateway.

Malicious App Support Personnel

Third-party data is vulnerable to malicious application support personnel who enable debug or other diagnostic
switches within the gateway software that disable security mechanisms.

Poor or inexperienced app support personnel may inadvertently leave debug or diagnostic switches enabled at the
conclusion of a support activity, rendering third-party data vulnerable during transit or storage on the gateway.

Malicious User

Third-party data is vulnerable to a malicious user who has access to, or has assumed one of the preceding roles to
get access to, the gateway.

Vulnerabilities of the Web Server and the Backend Server


The Web server and backend server have vulnerabilities nearly identical to those identified for the gateway.
Because we are concentrating on the wireless aspects of security, we will not explicitly go through the exercise
of listing the vulnerabilities of these two functional blocks. Keep in mind that although the vulnerabilities may be
identical, the protections or mitigations chosen can differ considerably because of the analysis of likelihood and
the functionality trade-offs considered.

It should be clear that when you have identified the targets and roles, stating the vulnerabilities becomes simple.
It should also be obvious how these vulnerability statements can easily be modified to become requirement
statements. For example, "Personal data on the PDA is vulnerable to a malicious user who has gained access to
the device" becomes the requirement "Personal data on the PDA must be protected from a malicious user who
gains access to the device."

Chapter 11. Analyze Mitigations and Protections


Victory goes to the player who makes the next-to-last mistake.

—Chessmaster Savielly Grigorievitch Tartakower (1887–1956)

Now that you have established a knowledge base, it is time to continue the I-ADD process. In review, you divide
the system into its functional blocks. You then examine each of these blocks to determine the potential targets of
interest to an attacker. These blocks are then broken down to their functional subcomponents, and the process is
repeated until the lowest functional blocks are identified and analyzed.

Next, you examine the targets and identify the roles that have influence over the targets. You identify known
attacks against similar systems and determine your system's vulnerabilities, based on examining the targets and
roles. As discussed in Chapter 2, "Security Principles," the goal of this analysis phase is to develop an
understanding of which items deserve additional resources to protect, which items are "nice to have," and which
can be placed on an "acknowledged with no action necessary" list.

You begin the process of generating mitigations and protections by examining each vulnerability and
determining the appropriate solution for protecting or mitigating the risk this vulnerability imposes on the
system. The key here is the word appropriate; there can be many ways of mitigating the risk introduced by a
vulnerability. This is where a security expert can save a lot of time and resources in helping to identify what is an
appropriate solution and what is not. Should you decide to perform this analysis on your own, you must be
cautious that solutions identified to mitigate one risk do not generate other vulnerabilities. Repeating the I-ADD
process on the system with the mitigations in place is one way to accomplish this.
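One low-technology way to keep this step honest is to record each vulnerability with its candidate mitigations and a note on whether any mitigation might itself introduce new exposure, which is the cue to re-run I-ADD. The sketch below is only a suggested record format; the sample entry paraphrases the device-loss discussion later in this chapter.

    # Bookkeeping sketch: one record per vulnerability, with candidate mitigations
    # and a flag prompting a re-run of I-ADD if a mitigation could itself introduce
    # new exposure. The entry paraphrases the device-loss discussion in this chapter.

    mitigation_log = [
        {
            "target": "Wireless Device / the device itself",
            "vulnerability": "Device can be misplaced by the user",
            "candidate_mitigations": [
                "Wearable device or wearable interface",
                "Paired proximity sensors that alarm on separation",
                "Keep data on a server and treat the device as a cheap interface",
            ],
            "may_introduce_new_exposure": True,   # e.g., the server now becomes a richer target
        },
    ]

    for entry in mitigation_log:
        if entry["may_introduce_new_exposure"]:
            print("Re-run I-ADD with mitigations in place for: " + entry["vulnerability"])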
We shall now examine each vulnerability previously identified and discuss possible ways of mitigating or
protecting it from attack. Because this is a hypothetical system and we have identified quite a few vulnerabilities,
we will cover aspects with wider applicability in more detail. Other vulnerabilities will be covered only briefly.
Again, the purpose is to give you guidance on how to go about performing this type of analysis on your own
particular system or component. Keep in mind that having multilevel protection adds significantly to a
system's overall security. Further, you are not necessarily making implementation or functional trade-offs at this
time. However, it does little good to list solutions that are impractical, such as "Assign two heavily armed guards
to protect the device from theft." In general, attackers look for areas where their efforts will have the greatest
payoff, areas with only a single protection mechanism to bypass or with known vulnerabilities.

Protecting the Wireless Device


Two approaches can be taken to mitigate the risk of loss or theft, depending on your concern. Are you concerned
about someone gaining access to the information or services provided by the device? Are you concerned that you
may not have access to the information or resources if you misplace or lose the device? Let's examine loss and
theft separately.

Limiting the Vulnerability to Loss


The device could be made smaller so that it is easier to keep on one's person. However, a smaller device is easier
to misplace. A wearable device is easier to keep track of and less likely to be misplaced than a device not worn
by the user.

The device could have a holster or carrying case. For example, belt holsters are common with cell phones and
PDAs. These holsters are fine for many men, but women often do not want to wear them and instead carry the
device in a purse, backpack, or bag.

Extending the preceding case, a wearable interface is better than a wearable device. The interface would be
smaller, less intrusive, and possibly more widely accepted by both male and female users. An example would be
a heads-up PDA display of the screen on the user's glasses or sunglasses; the input portion could be worn like a
watch. The main device could be kept in a purse or worn on a holster. When the device is moved outside
communications range of the remote interface devices, an alarm would sound to alert the user.

A second suggestion is a voice-activated phone. Only the microphone and the earpiece would be worn. The
microphone could look like a lapel pin. The earpiece could look like earrings or be small enough to fit within the
ear canal, similar to digital hearing aids. Because the phone would be voice activated, there would be no need for a keypad.

The device could be equipped with some type of proximity sensors. These sensors would come in pairs, one
sensor worn like a ring or bracelet and the other within the device. If the two became separated beyond the range
of the sensors, one or both would sound an alarm to notify the user of the separation and the device's location.

The device could be made to be the access mechanism for applications and data stored on a server somewhere.
The device itself would simply be the interface and, as such, would be inexpensive. Its loss would be
inconsequential.

Limiting the Vulnerability to Theft


The protections described for loss apply as well to theft. However, with the exception of the last protection, they
are aimed at notifying the user of the theft and not at deterring the theft. In general, theft occurs when the thief
perceives value in having the device—from selling or using the device or by using the information contained on
the device. With this in mind, you should consider the following protections or mitigations.

The device could be made inexpensive and readily available. Charge for service rather than for the device. If the
device's value drops below a certain threshold, taking it will not be worth a thief's effort.

The device could be configured as tiny wearable components that would be difficult for a thief to obtain.

The device could require an external authentication mechanism, such as a SmartCard or proximity device, to
enable it. For the device to be useful, a thief would have to take the device and the enabling token. The enabling
device, being smaller, could be better protected on one's person when not in use.

The device could be made so that it is useful only to the owner. Employing a form of biometric authentication to
access the device would do this. One scenario would be to have the device personalized when it is purchased.
This would be a one-time, nonreversible activity that would link the device and user; only the user could access
the device. Biometric authentication, such as a fingerprint, a retinal scan, a voiceprint, or even DNA analysis,
would eliminate the threat of random theft. Other issues could be associated with this solution, such as user
acceptance and processing power limitations.

Protecting the Physical Interface


Protecting physical access to the device is instrumental to your system's security. Restricting access to something
that is mobile rather than stationary is tricky. To minimize potential damage caused by unauthorized physical access, attention
should be paid to developing code governing the user interface and all data stored on the device. By designing
protection mechanisms inside the device, you can prepare for worst-case scenarios of unauthorized and malicious
physical access.

Protecting Access to the User Interface


We will not cover this vulnerability in great detail because it is being addressed by the manufacturers. They are
making strides to mitigate this risk by providing protective cases, flip-up covers, and impervious membranes, by
locking the keyboard/keypad, and by ruggedizing cases.

Protecting Personal Data on the PDA


We will examine the protection of personal data by looking at the various roles identified, similar to the approach
we took in identifying the vulnerabilities. We also restate each vulnerability so that you do not have to refer back
to the previous chapter. This approach need not be taken with every vulnerability, but it helps ensure
completeness.

Malicious Device Support Personnel

Personal data stored on the device can be vulnerable to malicious device support personnel when the device is
taken in for upgrades, maintenance, or repair. These support personnel may have access to manufacturer bypass
and diagnostic codes, equipment, or utilities that allow them access to personal data stored on the device.

Poor or inexperienced device support personnel may inadvertently leave the device in a security bypass or
diagnostic mode, leaving personal data vulnerable.

• Have all maintenance and support activities performed by maintenance teams with rotating members,
rather than by individuals. This limits the opportunity for support personnel with malicious intent to
exploit their privileged access. By rotating team members, alliances that encourage malicious activity
are less likely to form.

• An alternative to teams would be video monitoring of work areas. Security or managerial personnel
monitor support activities to ensure the device's integrity.

• Institute a checklist and an oversight procedure for processing devices that are in for support, to ensure
that all security bypass or diagnostic modes have been properly reset to operational settings. This
prevents poor or inexperienced personnel from inadvertently leaving the device vulnerable.

• Make the personal data inaccessible even if someone does have privileged access. This can be
accomplished by storing all personal data on the device in encrypted form. As long as the encryption is
cryptographically sound, it will be extremely difficult for someone to obtain useful personal
information from the device. (A minimal sketch of such encrypted storage appears at the end of this list.)

• Following the same approach of making the personal data inaccessible, store all personal data on a
removable device such as a SmartCard. The SmartCard provides authentication before allowing access
to the personal data. Also, the SmartCard can directly communicate the personal information (such as a
credit card number) with the application requiring the information.

• Price the device so that obtaining a new one is more cost-effective than repairing the old unit.
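
To make the encrypted-storage suggestion concrete, the following is a minimal sketch, assuming a Python environment with the third-party cryptography package. The key handling, record contents, and file name are purely illustrative and are not part of the case study; on a real device the key would be derived from user authentication or held in hardware rather than generated alongside the data.

# Minimal sketch: encrypt personal data before writing it to device storage.
# Assumes the "cryptography" package; names and paths are illustrative only.
from cryptography.fernet import Fernet

def create_storage_key() -> bytes:
    # In practice the key would be derived from user authentication
    # (PIN, passphrase, or biometric), not stored alongside the data.
    return Fernet.generate_key()

def store_personal_data(key: bytes, record: bytes, path: str) -> None:
    token = Fernet(key).encrypt(record)        # authenticated encryption
    with open(path, "wb") as f:
        f.write(token)

def load_personal_data(key: bytes, path: str) -> bytes:
    with open(path, "rb") as f:
        return Fernet(key).decrypt(f.read())   # raises InvalidToken if tampered with

key = create_storage_key()
store_personal_data(key, b"name=J. Doe;card=0000000000000000", "contacts.enc")
print(load_personal_data(key, "contacts.enc"))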

Malicious App Developer

Malicious application developers can create viruses or Trojan Horse utilities or programs that provide access to
personal data on the PDA.

Poor or inexperienced application developers may not take appropriate security measures regarding their
particular application (such as not clearing buffers and overwriting data elements), leaving personal data stored
on the device vulnerable.

• Institute a certification program, enforced by the OS, allowing only digitally signed code from
authorized certifiers to be loaded on the device. This would require that the software be examined by
the device manufacturer or an independent third party to validate that the software is secure and reliable
and functions as advertised. This certification would also digitally sign the code to ensure the code's
(and certificate's) authenticity and integrity. (A sketch of such a signature check appears at the end of this list.)

• Implement a trusted OS on the device that establishes virtual environments for programs. The program
believes that it has complete and direct access to the device's resources, but the OS continually monitors
and processes the requests on the program's behalf. In this way, should the program attempt to do
something untoward, the OS can simply return an error or otherwise keep the activity from occurring.

• Have the device perform a hardware resident integrity check of the device and the OS to ensure that the
device's integrity is intact before initializing the system. (This is a result of the preceding protection and
can be implied from the term trusted OS, but we specifically choose to list this separately because it has
other uses.)
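
The certification and code-signing protection described in the first bullet can also be sketched in a few lines, again assuming the Python cryptography package; the in-memory list of authorized certifiers and the application image are hypothetical stand-ins for what a real OS loader and certificate store would provide.

# Sketch: an OS loader that accepts only code signed by an authorized certifier.
# Assumes the "cryptography" package; the certifier registry is hypothetical.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The certifier signs the application image once, at certification time.
certifier_key = Ed25519PrivateKey.generate()
app_image = b"...certified application bytes..."
signature = certifier_key.sign(app_image)

# The device ships with the certifier's public key and checks every load.
authorized_certifiers = [certifier_key.public_key()]

def load_application(image: bytes, sig: bytes) -> bool:
    for certifier in authorized_certifiers:
        try:
            certifier.verify(sig, image)   # raises InvalidSignature on mismatch
            return True                    # valid signature: allow the load
        except InvalidSignature:
            continue
    return False                           # unsigned or tampered code is rejected

assert load_application(app_image, signature)
assert not load_application(app_image + b"trojan", signature)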

Malicious App Support Personnel

Malicious application support personnel may coerce the user via social engineering to provide access, or
information necessary for access, to personal data under the guise of assisting with an application issue.
Alternatively, malicious app support personnel may enable debug or other diagnostic switches within the
software that disable security mechanisms present in the device or software.

Poor or inexperienced app support personnel may inadvertently leave debug or diagnostic switches enabled
following a support activity, leaving the personal data vulnerable.

Interestingly enough, and logically, the protections applicable to this role are a combination of the protections
for the malicious device support personnel and malicious app developer.

• Have all maintenance and support activities performed by maintenance teams with rotating members,
rather than by individuals. This limits the opportunity for support personnel with malicious intent to
exploit their privileged access.

• Institute a checklist and an oversight procedure for processing devices that are in for support, to ensure
that all security bypass or diagnostic modes have been properly reset to operational settings.

• Store all personal data on the device in encrypted form. As long as the encryption is cryptographically
sound, it will be extremely difficult for someone to obtain useful personal information from the device.

• Store all personal data on a removable device, such as a SmartCard. The SmartCard provides
authentication before allowing access to the personal data.
• Institute a certification program, enforced by the OS, allowing only digitally signed code from
authorized certifiers to be loaded on the device. This would require that the software be examined by
the device manufacturer or an independent third party to validate that the software is secure and reliable
and functions as advertised. This certification would also digitally sign the code to ensure the code's
(and certificate's) authenticity and integrity.

• Implement a trusted OS on the device that establishes virtual environments for programs. The program
believes that it has complete and direct access to the device's resources, but the OS continually monitors
and processes the requests on the program's behalf. In this way, should the program attempt to do
something untoward, the OS can simply return an error or otherwise keep the activity from occurring.
(A toy sketch of such mediation follows this list.)

• Have the device perform a hardware resident integrity check of the device and the OS to ensure that the
device's integrity is intact before initializing the system.
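
The trusted OS bullet above describes mediation of resource requests; the following toy sketch shows the idea in miniature. The policy table and resource names are invented for illustration and say nothing about any particular PDA operating system.

# Toy sketch of trusted-OS-style mediation: programs request resources, but a
# monitor decides what is actually granted. Policy and names are invented.
class ResourceMonitor:
    def __init__(self, policy: dict):
        self.policy = policy                      # e.g. {"contacts": "deny"}

    def open_resource(self, program: str, resource: str) -> str:
        if self.policy.get(resource, "deny") == "deny":
            # The program simply sees an error; the access never occurs.
            raise PermissionError(f"{program}: access to {resource} refused")
        return f"handle:{resource}"

monitor = ResourceMonitor({"contacts": "deny", "display": "allow"})
print(monitor.open_resource("mail_client", "display"))
try:
    monitor.open_resource("mail_client", "contacts")
except PermissionError as err:
    print(err)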

Malicious User

Personal data is vulnerable to a malicious user who has gained access to the device. Recall that malicious user
is a catchall term encompassing a variety of activities. Although this simple statement is adequate for describing
the vulnerability, the complexity of the role may become important when generating mitigations and protections
or performing the security-functionality trade-offs, and it should not be forgotten. For example, a malicious user
may pose as a member of one of the legitimate functional roles and become the functional equivalent of one of
the preceding malicious roles.

Because the malicious user may pose as a member of one of the legitimate functional roles and become the
functional equivalent of any of the preceding malicious roles, each of the preceding protections applies
here as well. Therefore, we list only protections not previously covered for the other roles:

• Shield the device from physical and nonphysical technical attacks against memory, data emanation,
power analysis, and the like. Most of these issues are beyond the scope of this book, so suffice it to say
that if these types of issues are of concern in your particular application or architecture, you have to
become involved with the manufacturer to determine the susceptibility of a given component or device
to such attacks.

• Make the device hardware tamper-proof by not allowing the case to be opened without destroying or
clearing the memory.

• Have the device perform a hardware resident integrity check of the device and the OS to ensure that the
device's integrity is intact before initializing the system.
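
As a rough illustration of the hardware resident integrity check, the following sketch hashes the OS image and compares it against a reference value recorded at manufacture time. The file name and the idea of keeping the reference digest in ordinary storage are simplifications; on a real device the reference and the checking code would live in protected hardware.

# Sketch: boot-time integrity check of the OS image against a stored reference.
# File names are illustrative; the reference digest would be hardware-protected.
import hashlib
import sys

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def boot(os_image: str, reference_digest: str) -> None:
    if sha256_of(os_image) != reference_digest:
        # Refuse to initialize rather than run modified code.
        sys.exit("Integrity check failed; system halted.")
    print("Integrity verified; initializing system.")

with open("os_image.bin", "wb") as f:         # stand-in for the real OS image
    f.write(b"...kernel bytes...")
reference_digest = sha256_of("os_image.bin")  # recorded at manufacture time
boot("os_image.bin", reference_digest)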

Protecting Corporate or Third-Party Information


From a vulnerability perspective, no distinction exists between corporate or third-party information and
personal data. There may be some distinction when it comes to the security-functionality trade-offs.

The protections likewise make no distinction between corporate or third-party information and personal data.

Protecting Personal Data Being Sent by the Wireless Device

This target is the personal data mentioned in the preceding section, but here the target is the data as it is in
transit. You will notice that all the preceding roles are present, with the addition of a few others due to the
increased exposure of the data during transport.

Malicious Wireless Service Provider (WSP)

Recall the office complex case study example in Chapter 9, "Identify Targets and Roles," in which a company
provides gratis access to a client, only to monitor the client's activities.

• Encrypt the data to be transmitted so that only the desired recipient can decrypt it.
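
As a rough sketch of what "only the desired recipient can decrypt it" means in practice, the following hybrid scheme encrypts each message under a fresh session key and wraps that key with the recipient's public key. It assumes the Python cryptography package, and it ignores how the device is provisioned with the recipient's public key, which in a real system is the hard part.

# Sketch: encrypt transmitted data so only the intended recipient can decrypt it.
# Assumes the "cryptography" package; key distribution is out of scope here.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# The recipient (for example, the corporate server) owns the private key;
# the wireless device is provisioned with only the public half.
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt_for_recipient(plaintext: bytes):
    session_key = Fernet.generate_key()                  # fresh key per message
    wrapped_key = recipient_public.encrypt(session_key, oaep)
    return wrapped_key, Fernet(session_key).encrypt(plaintext)

def decrypt_as_recipient(wrapped_key: bytes, ciphertext: bytes) -> bytes:
    session_key = recipient_private.decrypt(wrapped_key, oaep)
    return Fernet(session_key).decrypt(ciphertext)

wrapped, ciphertext = encrypt_for_recipient(b"personal data in transit")
assert decrypt_as_recipient(wrapped, ciphertext) == b"personal data in transit"
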
Malicious Device Support Personnel

Personal data transmitted by the device may be made vulnerable by malicious device support personnel when the
device is taken in for upgrades, maintenance, or repair. These support personnel may have access to
manufacturer bypass and diagnostic codes, equipment, or utilities that allow them to intentionally bypass
security features, leaving personal data transmitted by the device vulnerable.

Poor or inexperienced device support personnel may inadvertently leave the device in a security bypass or
diagnostic mode, making personal data vulnerable during transit.

• Have all maintenance and support activities performed by maintenance teams with rotating members,
rather than by individuals. This limits the opportunity for support personnel with malicious intent to
exploit their privileged access.

• Institute a checklist and an oversight procedure for processing devices that are in for support, to ensure
that all security bypass or diagnostic modes have been properly reset to operational settings.

• Encrypt the data to be transmitted so that only the desired recipient can decrypt it.

• Have the device perform a hardware resident integrity check of the device and the OS to ensure that the
device's integrity is intact before initializing the system. This ensures that critical procedures such as the
encryption applications have not been disabled or tampered with.

• Price the device so that obtaining a new one is more cost-effective than repairing the old unit.

Malicious WSP OMS Personnel

Personal data transmitted by the device is vulnerable to malicious WSP OMS personnel who have access to the
WSP transceiver and wireless network equipment.

• Have all maintenance and support activities performed by maintenance teams with rotating members,
rather than by individuals. This limits the opportunity for support personnel with malicious intent to
exploit their privileged access.

• Encrypt the data to be transmitted so that only the desired recipient can decrypt it.

Malicious App Developer

Malicious application developers may create viruses or Trojan Horse utilities or programs that cause the
transmitted data to be vulnerable. An example would be an encryption utility containing nonunique or known
keys. To the user, the data appears encrypted, but it is readily accessible to unauthorized individuals who know
the key. Alternatively, an e-mail utility may send a blind copy of every message sent or received by the device to
a predefined address.

Poor or inexperienced application developers may not take appropriate security measures regarding their
particular application, leaving personal data vulnerable during transit.

• Institute a certification program, enforced by the OS, allowing only digitally signed code from
authorized certifiers to be loaded on the device. This requires that the software be examined by the
device manufacturer or an independent third party to validate that the software is secure and reliable
and functions as advertised. This certification would also digitally sign the code to ensure the code's
(and certificate's) authenticity and integrity.

• Implement a trusted OS on the device that establishes virtual environments for programs. The program
believes that it has complete and direct access to the device's resources, but the OS continually monitors
and processes the requests on the program's behalf. In this way, should the program attempt to do
something untoward, the OS can simply return an error or otherwise keep the activity from occurring.
• Have the device perform a hardware resident integrity check of the device, OS, and critical software to
ensure that the device's integrity is intact before initializing the system.

• Store all personal data on the device in encrypted form.

• Store all personal data on a removable device such as a SmartCard. The SmartCard provides
authentication before allowing access to the personal data. Have the SmartCard perform the
communication activity itself so that the device is merely a conduit.

• Encrypt the data to be transmitted so that only the desired recipient can decrypt it.

Malicious App Support Personnel

Malicious application support personnel may coerce the user via social engineering to provide access, or
information necessary for access, to personal data under the guise of assisting with an application issue.
Alternatively, malicious app support personnel may enable debug or other diagnostic switches within the
software that disable security mechanisms present in the device or software.

Poor or inexperienced app support personnel may inadvertently leave debug or diagnostic switches enabled at
the conclusion of a support activity, leaving the personal data vulnerable during transit.

• Have all maintenance and support activities performed by maintenance teams with rotating members,
rather than by individuals. This limits the opportunity for support personnel with malicious intent to
exploit their privileged access.

• Institute a checklist and an oversight procedure for processing devices that are in for support, to ensure
that all security bypass or diagnostic modes have been properly reset to operational settings.

• Store all personal data on the device in encrypted form.

• Store all personal data on a removable device such as a SmartCard. The SmartCard provides
authentication before allowing access to the personal data. Have the SmartCard perform the
communication activity itself so that the device is merely a conduit.

• Institute a certification program, enforced by the OS, allowing only digitally signed code from
authorized certifiers to be loaded on the device. This would require that the software be examined by
the device manufacturer or an independent third party to validate that the software is secure and reliable
and functions as advertised. This certification would also digitally sign the code to ensure the code's
(and certificate's) authenticity and integrity.

• Implement a trusted OS on the device that establishes virtual environments for programs. The program
believes that it has complete and direct access to the device's resources, but the OS continually monitors
and processes the requests on the program's behalf. In this way, should the program attempt to do
something untoward, the OS can simply return an error or otherwise keep the activity from occurring.

• Have the device perform a hardware resident integrity check of the device and the OS to ensure that the
device's integrity is intact before initializing the system.

• Encrypt the data to be transmitted so that only the desired recipient can decrypt it.

Malicious User

Personal data is vulnerable to a malicious user who has access to, or has built a receiver that monitors, the
transmission of the PDA and can reconstruct the data transmitted and received. Again, a malicious user may
assume any of the preceding malicious roles to gain access necessary to exploit a vulnerability.

All the protections for the preceding roles apply here.

Protecting Corporate or Third-Party Information Being Sent


As with offline functions, from a vulnerability perspective there is no distinction between corporate or third-party
information and personal data in transit.

The protections likewise make no distinction between corporate or third-party information and personal data being
sent.

Protecting User Online Activities, Usage Patterns, Location, and Movement

This category can be considered a subset of or equivalent to user personal data as far as vulnerabilities are
concerned.

User online activities, usage patterns, location, and movement can be treated as personal data while this
information is stored on the device. In this situation, the protections for personal data on the device apply. Now,
consider protecting this information during transit—a different problem altogether. The difficulty is that the
wired Internet was not originally designed with security and privacy in mind. In fact, quite the opposite: it was
designed as an open and available architecture for freely sharing ideas and data. Only recently, with the desire to
commercialize the Internet, has the need for security and privacy become important. The requirement is to
protect user information traveling on an open and accessible infrastructure.

• Encryption is the first thing that comes to mind. One solution is that you do not transmit in unencrypted
form any information that specifically identifies the user. This would likely require the cooperation of
application developers. Applications would have to be capable of accepting user-specific information in
encrypted form. This is not as easy as it sounds. The use of encryption protects the information
communicated but not the fact that the communication is occurring. To protect the user's privacy, an
unauthorized party should not be able to determine that the user communicated with the server.

• Have the WSP act as a proxy server for all activity so that malicious individuals see only that a wireless
user is involved. The WSP performs the routing of packets to the true user. This places a lot of
additional processing burden on the WSP, and although it would solve the dilemma of providing
privacy, it is unlikely that WSPs will provide this service unless consumers begin to refuse to accept
service without privacy. It is the classic attitude of "We can forgo the security because consumers will demand
the functionality and will give up security to get it."

Protecting Access to Network and Online Services


As used here, access to network and online services means the use of the device or information on the device to
gain access to network and online services. This distinction separates it from similar activities occurring against
the service provider, which we will discuss shortly.

Malicious Device Support Personnel

User network and online services access credentials are vulnerable to device support personnel who have access
to the device for upgrade, maintenance, or repair purposes. Device support personnel may have access to
manufacturer bypass and diagnostic codes, equipment, or utilities that give them access to network and online
service access credentials on the device.

• Have all maintenance and support activities performed by maintenance teams with rotating members,
rather than by individuals. This limits the opportunity for support personnel with malicious intent to
exploit their privileged access.

• Utilize video monitoring of work areas so that security or managerial personnel can monitor support
activities to ensure the device's integrity.

• Encrypt the credentials stored on the device, and transmit the credentials and access code in encrypted
form so that only the desired recipient can decrypt the data.

• The network or online services could require some form of biometric or SmartCard authentication so
that the information on the device itself is insufficient to gain access to the resources.
• Price the device so that obtaining a new one is more cost-effective than repairing the old unit.

Malicious WSP OMS Personnel

User network and online services access credentials are vulnerable to WSP OMS personnel when this
information is received and processed by the WSP equipment. The user may also be coerced into providing
network or online access credentials to WSP OMS personnel.

• Have all maintenance and support activities performed by maintenance teams with rotating members,
rather than by individuals. This limits the opportunity for support personnel with malicious intent to
exploit their privileged access.

• Implement access control and logging systems to monitor OMS personnel access to sensitive
equipment and areas.

Note

Logging is beneficial only if someone examines the logs. To derive the greatest benefit
from logs, they should be examined by an automated process that detects anomalies or alert
conditions and sends notification to the appropriate authority. (A toy example of such
automated review appears at the end of this list.)

• Utilize video monitoring of critical areas so that security or managerial personnel can monitor support
activities to ensure the system's integrity.

• Encrypt the credentials stored on the device, and transmit the credentials and access code in encrypted
form so that only the desired recipient can decrypt the data.

• The network or online services could require a form of biometric or SmartCard authentication so that
obtaining the information on the device itself is insufficient to gain access to the resources.
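
The note above calls for automated review of logs; the following toy sketch flags repeated failed logins and notifies a security contact. The log format, threshold, and notification mechanism are invented for illustration only.

# Toy sketch of automated log review: flag bursts of failed OMS logins.
# The log format, threshold, and notification hook are invented for illustration.
from collections import Counter

FAILED_LOGIN_THRESHOLD = 3

def review_access_log(lines):
    failures = Counter()
    for line in lines:
        # Hypothetical format: "<timestamp> <user> <action> <result>"
        parts = line.split()
        if len(parts) >= 4 and parts[2] == "LOGIN" and parts[3] == "FAILED":
            failures[parts[1]] += 1
    return [user for user, count in failures.items()
            if count >= FAILED_LOGIN_THRESHOLD]

def notify_security(users):
    for user in users:    # stand-in for paging or e-mailing the proper authority
        print(f"ALERT: repeated failed logins for {user}")

log = ["2002-06-01T02:10 omstech1 LOGIN FAILED",
       "2002-06-01T02:11 omstech1 LOGIN FAILED",
       "2002-06-01T02:12 omstech1 LOGIN FAILED",
       "2002-06-01T02:14 omstech2 LOGIN OK"]
notify_security(review_access_log(log))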

Malicious App Developer

User network and online services access credentials are vulnerable to applications that can copy and store, or
forward, these credentials to the developer.

• Encrypt the credentials stored on the device, and transmit the credentials and access code in encrypted
form so that only the desired recipient can decrypt the data.

• The network or online services could require a form of biometric or SmartCard authentication so that
obtaining the information on the device itself is insufficient to gain access to the resources.

Malicious User

Access to network and online services is vulnerable to a malicious user. A malicious user may gain access to
the device and retrieve network and online services credentials to be used on another device or at a later time. A
malicious user may monitor transmissions (as discussed in the "Malicious User" section under "Protecting
Personal Data Being Sent by the Wireless Device") to obtain network and online services credentials. Again, a
malicious user may assume any of the preceding malicious roles to gain access necessary to exploit a
vulnerability.

All the protections for the preceding roles apply here.

Protecting the Transceiver


Protecting the Transceiver Itself
Malicious Device OMS Personnel

The transceiver is vulnerable to manipulation or modification by malicious device OMS personnel.

• Have all maintenance and support activities performed by maintenance teams with rotating members,
rather than by individuals. This limits the opportunity for support personnel with malicious intent to
exploit their privileged access.

• Have all maintenance and support activities reviewed by a Quality Assurance/Security team where the
transceiver is tested and inspected.

• Utilize video monitoring of work areas so that security or managerial personnel can monitor support
activities to ensure the device's integrity.

• Make the transceiver a nonserviceable, tamper-proof component that is replaced as a unit if it fails.

• Price the device so that obtaining a new one is more cost-effective than repairing the old unit.

Malicious User

The transceiver is vulnerable to manipulation or modification by a malicious user. For example, the transceiver
can be manipulated to assist a man-in-the-middle attack.

• Make the transceiver a nonserviceable, tamper-proof component that is replaced as a unit if it fails.

Protecting Vulnerabilities of the Service Provider


Protecting the Transceiver Itself

When we use the term transceiver in regard to the service provider, we are considering the transceiver system as
consisting of the antenna array, tower, coax, transceiver, and switching equipment.

Malicious WSP OMS Personnel

The transceiver is vulnerable to manipulation or modification by malicious WSP OMS personnel.

• Have all maintenance and support activities performed by maintenance teams with rotating members,
rather than by individuals. This limits the opportunity for support personnel with malicious intent to
exploit their privileged access.

• Implement access control and logging systems to monitor OMS personnel access to sensitive
equipment and areas.

• Have all maintenance and support activities reviewed by a Quality Assurance/Security team where the
transceiver is tested and inspected.

• Utilize video monitoring of work areas so that security or managerial personnel can monitor support
activities to ensure the transceiver's integrity.

Malicious User

The transceiver is vulnerable to manipulation or modification by a malicious user. For example, this may be
done to deny service to areas or individuals at crucial times.

• Implement access control and logging systems to monitor OMS personnel access to sensitive
equipment and areas.

• Utilize video monitoring of work areas so that security or managerial personnel can monitor support
activities to ensure the transceiver's integrity.
Protecting the Transceiver Services
Malicious WSP OMS Personnel

The transceiver services are vulnerable to manipulation or modification by malicious WSP OMS personnel, for
example, granting access to the network to unauthorized users by providing them with maintenance or diagnostic
access credentials.

• Implement access control and logging systems to monitor OMS personnel access to network and
sensitive areas.

• Require the use of biometric SmartCard authentication or another physical access token, in addition to
any maintenance or diagnostic access credentials.

Malicious User

The transceiver services are vulnerable to manipulation or misuse by a malicious user. For example, a malicious
user may obtain access credentials to utilize the service without paying for the privilege.

• Implement access control and logging systems to monitor OMS personnel access to sensitive
equipment and areas.

• Utilize video monitoring of work areas so that security or managerial personnel can monitor support
activities to ensure the transceiver's integrity.

• Require the use of biometric SmartCard authentication or another physical access token, in addition to
any maintenance or diagnostic access credentials.

Protecting Access to Its Subscribers


Malicious WSP OMS Personnel

The service provider is vulnerable to WSP OMS personnel who grant access to the network, and thereby to its
subscribers, for spam or other unsolicited purposes.

• Have all maintenance and support activities performed by maintenance teams with rotating members,
rather than by individuals. This limits the opportunity for support personnel with malicious intent to
exploit their privileged access.

• Implement access control and logging systems to monitor OMS personnel access to sensitive
equipment and areas.

• Have all maintenance and support activities reviewed by a Quality Assurance/Security team where the
transceiver is tested and inspected.

• Utilize video monitoring of work areas so that security or managerial personnel can monitor support
activities to ensure the integrity of the service provider's system.

Malicious Corporate and Private Servers

The service provider is vulnerable to malicious corporate or private servers that access the service provider to
deliver advertising, marketing, or other spam to the service provider's subscribers.

• Do not allow subscriber information to become available to outside servers.

• Maintain subscriber information on a separate server, and require authentication for processes or
entities requesting access to this server.

• Store subscriber information in encrypted form.


• Establish a firewall/proxy server so that details of the network are not available to external entities.

• Do not allow messages that are not addressed to a specific subscriber to be processed by the system.

• Implement access control and logging systems to monitor access to equipment and resources.

Malicious Corporate and Private Server OMS Personnel

The service provider is vulnerable to malicious corporate or private server OMS personnel who utilize
authorized servers to allow unauthorized access to subscribers. For example, a service provider's subscribers
receive stock quotes as part of their service plan. OMS personnel with access to the quote server that provides
this service can alter the server to deliver anything in addition to, or in place of, the stock quotes.

• Implement access control and logging systems to monitor access to equipment and resources.
Prohibiting this type of abuse is problematic because the attacker is taking advantage of an authorized
capability. The best the service provider can do is to have logs in place that identify this activity and
report it to the server's security or administrative personnel. Alternatively, the service provider can deny
any further access by that particular server or company.

Malicious Content Providers

The service provider is vulnerable to malicious content providers who use the service provider's resources to
spam or otherwise deliver their payload to the subscribers.

• Do not allow subscriber information to become available to outside servers.

- Maintain subscriber information on a separate server, and require authentication for processes or
entities requesting access to this server.

- Store subscriber information in encrypted form.

- Establish a firewall/proxy server so that details of the network are not available to external entities.

• Do not allow messages that are not addressed to a specific subscriber to be processed by the system.

• Implement access control and logging systems to monitor access to equipment and resources.

Malicious App Developer

The service provider is vulnerable to malicious app developers who include back doors or Trojan Horse utilities
or programs that the service provider uses. These app developers can then use the privileged access available to
their legitimate applications to obtain illegitimate access to the subscribers.

• Institute a certification program, enforced by the OS, allowing only digitally signed code from
authorized certifiers to be loaded on the service provider's systems. This would require that the software
be examined by the service provider or an independent third party to validate that the software is secure
and reliable and functions as advertised. This certification would also digitally sign the code to ensure
the code's (and certificate's) authenticity and integrity.

• Implement a trusted OS on the service provider's systems that establishes virtual environments for
programs. The program believes that it has complete and direct access to the service provider's
resources, but the OS continually monitors and processes the requests on the program's behalf. In this
way, should the program attempt to do something untoward, the OS can simply return an error or
otherwise keep the activity from occurring.

• Have the information systems perform a hardware resident integrity check of the system, OS, and
critical software to ensure that the system's integrity is intact before initializing the system.

• Store all subscriber data on the system in encrypted form.


• Require authentication before allowing access to subscriber data. Have the access and activity on the
system logged.

Malicious App Support Personnel

Service provider subscribers are vulnerable to malicious application support personnel who enable debug or
other diagnostic switches within the software that disable security mechanisms present to protect access to the
service provider's subscribers.

Poor or inexperienced app support personnel may inadvertently leave debug or diagnostic switches enabled at
the conclusion of a support activity, leaving corporate proprietary data and resources vulnerable on the network
server.

• Have all maintenance and support activities performed by maintenance teams with rotating members,
rather than by individuals. This limits the opportunity for support personnel with malicious intent to
exploit their privileged access.

• Institute a checklist and an oversight procedure for app support activities to ensure that all security
bypass or diagnostic modes have been properly reset to operational settings.

• Store all subscriber data on the system in encrypted form.

• Institute a certification program, enforced by the OS, allowing only digitally signed code from
authorized certifiers to be loaded on the system. This would require that the software be examined by
the service provider or an independent third party to validate that the software is secure and reliable and
functions as advertised. This certification would also digitally sign the code to ensure the code's (and
certificate's) authenticity and integrity.

• Implement a trusted OS on the service provider's systems that establishes virtual environments for
programs. The program believes that it has complete and direct access to the service provider's
resources, but the OS continually monitors and processes the requests on the program's behalf. In this
way, should the program attempt to do something untoward, the OS can simply return an error or
otherwise keep the activity from occurring.

• Have the information systems perform a hardware resident integrity check of the system, OS, and
critical software to ensure that the system's integrity is intact before initializing the system.

• Implement access control and logging systems to monitor access to equipment and resources.

Malicious User

The service provider is vulnerable to malicious users' gaining access to its network to allow them access to the
service provider's subscribers, either by acting in one of the preceding roles or by exploiting a vulnerability in
the service provider's overall system.

All the protections for the preceding roles apply here.

• Continually monitor bug and vulnerability reports of software and information systems in use to ensure
that new vulnerabilities and exploits are properly mitigated in a timely fashion.

• Periodically perform security risk analysis of the system to ensure that something has not been
overlooked or some change or update to one part of the system has not left another part vulnerable to
exploitation.

Protecting the Transceiver


Recall that there are no additional targets for the transceiver beyond those identified for the higher-level
functional block. Likewise, there would likely not be any additional protections or mitigations to identify.
Protecting the Administrative Server

By administrative server, we are referring to the billing, maintenance, and support systems associated with
keeping the wireless infrastructure functional.

Protecting User-Specific Data


User-specific data is information such as credit card numbers, addresses, financial details, and call and access
log information that resides on the administrative server.

Malicious WSP OMS Personnel

User-specific data resident on the administrative server is vulnerable to malicious WSP OMS personnel who
exploit their system access to gain access to user-specific data.

• Have all maintenance and support activities performed by maintenance teams with rotating members,
rather than by individuals. This limits the opportunity for support personnel with malicious intent to
exploit their privileged access.

• Implement access control and logging systems to monitor OMS personnel access to sensitive
equipment and areas.

• Have all maintenance and support activities reviewed by a Quality Assurance/Security team. This
should include logs of system and information access associated with the support activity.

• Utilize video monitoring of work areas so that security or managerial personnel can monitor support
activities to ensure the system's integrity.

Malicious App Developer

User-specific data resident on the administrative server is vulnerable to malicious app developers who include
back doors or Trojan Horse utilities or programs that the service provider uses. These app developers can then
use the privileged access available to their legitimate applications to obtain illegitimate access to user-specific
data.

• Institute a certification program, enforced by the OS, allowing only digitally signed code from
authorized certifiers to be loaded on the system. This would require that the software be examined by
the service provider or an independent third party to validate that the software is secure and reliable and
functions as advertised. This certification would also digitally sign the code to ensure the code's (and
certificate's) authenticity and integrity.

• Implement a trusted OS on the administrative server that establishes virtual environments for programs.
The program believes that it has complete and direct access to the administrative server's resources, but
the OS continually monitors and processes the requests on the program's behalf. In this way, should the
program attempt to do something untoward, the OS can simply return an error or otherwise keep the
activity from occurring.

• Have the administrative server perform a hardware resident integrity check of the system, OS, and
critical software to ensure that the system's integrity is intact before initializing the system.

• Store all user-specific data on the system in encrypted form.

• Require authentication before allowing access to user-specific data.

• Have the access and activity on the system logged.

• Require the use of a physical token as part of the authentication process.

Malicious App Support Personnel


User-specific data is vulnerable to malicious application support personnel who enable debug or other
diagnostic switches within the administrative server software that disable security mechanisms.

Poor or inexperienced app support personnel may inadvertently leave debug or diagnostic switches enabled at
the conclusion of a support activity, leaving the user-specific data vulnerable on the administrative server.

• Have all maintenance and support activities performed by maintenance teams with rotating members,
rather than by individuals. This limits the opportunity for support personnel with malicious intent to
exploit their privileged access.

• Institute a checklist and an oversight procedure for processing app support activities to ensure that all
security bypass or diagnostic modes have been properly reset to operational settings.

• Store all user-specific data on the system in encrypted form.

• Institute a certification program, enforced by the OS, allowing only digitally signed code from
authorized certifiers to be loaded on the system. This would require that the software be examined by
the service provider or an independent third party to validate that the software is secure and reliable and
functions as advertised. This certification would also digitally sign the code to ensure the code's (and
certificate's) authenticity and integrity.

• Implement a trusted OS on the service provider's systems that establishes virtual environments for
programs. The program believes that it has complete and direct access to the service provider's
resources, but the OS continually monitors and processes the requests on the program's behalf. In this
way, should the program attempt to do something untoward, the OS can simply return an error or
otherwise keep the activity from occurring.

• Have the information systems perform a hardware resident integrity check of the system, OS, and
critical software to ensure that the system's integrity is intact before initializing the system.

• Implement access control and logging systems to monitor access to equipment and resources.

Malicious User

User-specific data resident on the administrative server is vulnerable to malicious users' gaining access to the
service provider's network and thereby access to user-specific data. The service provider's network access can be
obtained by these malicious users' acting in one of the preceding roles or exploiting a vulnerability in the service
provider's overall system.

All the protections for the preceding roles apply here.

• Continually monitor bug and vulnerability reports of software and information systems in use to ensure
that new vulnerabilities and exploits are properly mitigated in a timely fashion.

• Periodically perform security risk analysis of the system to ensure that something has not been
overlooked or some change or update to one part of the system has not left another part vulnerable to
exploitation.

Protecting Corporate Proprietary Data and Resources

Corporate proprietary data and resources refers to information resident on the administrative server, such as
network details, fraud detection scheme information, and the like.

Malicious WSP OMS Personnel

Corporate proprietary data and resources resident on the administrative server are vulnerable to malicious WSP
OMS personnel who exploit their system access to gain access to corporate proprietary data and resources.
• Have all maintenance and support activities performed by maintenance teams with rotating members,
rather than by individuals. This limits the opportunity for support personnel with malicious intent to
exploit their privileged access.

• Implement access control and logging systems to monitor OMS personnel access to sensitive
equipment and areas.

• Have all maintenance and support activities reviewed by a Quality Assurance/Security team. This
should include logs of system and information access associated with the support activity.

• Utilize video monitoring of work areas so that security or managerial personnel can monitor support
activities to ensure the system's integrity.

• Store all corporate proprietary data on the system in encrypted form.

Malicious App Developer

Corporate proprietary data and resources resident on the administrative server are vulnerable to malicious app
developers who include back doors or Trojan Horse utilities or programs that the service provider uses. These
app developers can then use the privileged access available to their legitimate applications to obtain illegitimate
access to corporate proprietary data and resources.

• Institute a certification program, enforced by the OS, allowing only digitally signed code from
authorized certifiers to be loaded on the system. This would require that the software be examined by
the service provider or an independent third party to validate that the software is secure and reliable and
functions as advertised. This certification would also digitally sign the code to ensure the code's (and
certificate's) authenticity and integrity.

• Implement a trusted OS on the administrative server that establishes virtual environments for programs.
The program believes that it has complete and direct access to the administrative server's resources, but
the OS continually monitors and processes the requests on the program's behalf. In this way, should the
program attempt to do something untoward, the OS can simply return an error or otherwise keep the
activity from occurring.

• Have the administrative server perform a hardware resident integrity check of the system, OS, and
critical software to ensure that the system's integrity is intact before initializing the system.

• Store all corporate proprietary data on the system in encrypted form.

• Require authentication before allowing access to corporate proprietary data.

• Have the access and activity on the system logged.

• Require the use of a physical token as part of the authentication process.

Malicious App Support Personnel

Corporate proprietary data and resources are vulnerable to malicious application support personnel who enable
debug or other diagnostic switches within the software that disable security mechanisms present in the network
server.

Poor or inexperienced app support personnel may inadvertently leave debug or diagnostic switches enabled at
the conclusion of a support activity, leaving corporate proprietary data and resources vulnerable on the network
server.

• Have all maintenance and support activities performed by maintenance teams with rotating members,
rather than by individuals. This limits the opportunity for support personnel with malicious intent to
exploit their privileged access.
• Institute a checklist and an oversight procedure for processing app support activities to ensure that all
security bypass or diagnostic modes have been properly reset to operational settings.

• Store all corporate proprietary data on the system in encrypted form.

• Institute a certification program, enforced by the OS, allowing only digitally signed code from
authorized certifiers to be loaded on the system. This would require that the software be examined by
the service provider or an independent third party to validate that the software is secure and reliable and
functions as advertised. This certification would also digitally sign the code to ensure the code's (and
certificate's) authenticity and integrity.

• Implement a trusted OS on the service provider's systems that establishes virtual environments for
programs. The program believes that it has complete and direct access to the service provider's
resources, but the OS continually monitors and processes the requests on the program's behalf. In this
way, should the program attempt to do something untoward, the OS can simply return an error or
otherwise keep the activity from occurring.

• Have the information systems perform a hardware resident integrity check of the system, OS, and
critical software to ensure that the system's integrity is intact before initializing the system.

• Implement access control and logging systems to monitor access to equipment and resources.

• Require authentication before allowing access to corporate proprietary data.

• Have the access and activity on the system logged.

• Require the use of a physical token as part of the authentication process.

Malicious User

Corporate proprietary data and resources resident on the administrative server are vulnerable to malicious
users' gaining access to the service provider's network and thereby access to corporate proprietary data and
resources. The service provider's network access can be obtained by these malicious users' acting in one of the
preceding roles or exploiting a vulnerability in the service provider's overall system.

All the protections for the preceding roles apply here.

• Continually monitor bug and vulnerability reports of software and information systems in use to ensure
that new vulnerabilities and exploits are properly mitigated in a timely fashion.

• Periodically perform security risk analysis of the system to ensure that something has not been
overlooked or some change or update to one part of the system has not left another part vulnerable to
exploitation.

Protecting the Network Server


Protecting User-Specific Data

User-specific data is information such as credit card numbers and addresses, as well as data such as e-mail and
Web traffic, that transits the network server.

Malicious WSP OMS Personnel

User data transiting the network server is vulnerable to malicious WSP OMS personnel who have access to the
network server.

The protections here are the same as the protections employed for the administrative server.

Note
The only additional concern to consider for this role, as well as the following roles, is the potential
for attacks or access via the network to which the network server is connected. Network-based
attacks are not unique to wireless systems and are well publicized. Plenty of available resources
cover this area of security, so we will not cover it in any detail here.

Malicious App Developer

Malicious application developers can create viruses or Trojan Horse utilities or programs that cause the transit
data to be vulnerable. An example would be a network routing utility containing code that routes a copy of the
transit data to the app developer.

Poor or inexperienced application developers may not take appropriate security measures regarding their
particular application, leaving user data vulnerable during transit.

The protections here are the same as the protections for the administrative server.

Malicious App Support Personnel

User data is vulnerable to malicious application support personnel who can enable debug or other diagnostic
switches within the software that disable security mechanisms present in the network server.

Poor or inexperienced app support personnel may inadvertently leave debug or diagnostic switches enabled at
the conclusion of a support activity, leaving the user data vulnerable during transit of the network server.

The protections here are the same as the protections for the administrative server.

Malicious User

User data is vulnerable to a malicious user who has access, or has assumed one of the preceding roles to get
access, to the network server.

The protections here are the same as the protections for the administrative server.

Protecting Corporate Proprietary Data and Resources


Much the same as for the administrative server, corporate proprietary data and resources refers to information
resident on the network server. We are referring to the system that connects the service provider's transceivers to
the remainder of the wired world.

Malicious WSP OMS Personnel

Corporate proprietary data and resources resident on the network server are vulnerable to malicious WSP OMS
personnel who exploit their system access to gain access to corporate proprietary data and resources.

The protections here are the same as the protections for the administrative server.

Malicious App Developer

Corporate proprietary data and resources resident on the network server are vulnerable to malicious app
developers who include back doors or Trojan Horse utilities or programs that the service provider uses. These
app developers can then use the privileged access available to their legitimate applications to obtain illegitimate
access to corporate proprietary data and resources.

The protections here are the same as the protections for the administrative server.

Malicious App Support Personnel


Corporate proprietary data and resources are vulnerable to malicious application support personnel who enable
debug or other diagnostic switches within the software that disable security mechanisms present in the network
server.

Poor or inexperienced app support personnel may inadvertently leave debug or diagnostic switches enabled at
the conclusion of a support activity, leaving corporate proprietary data and resources vulnerable on the network
server.

The protections here are the same as the protections for the administrative server.

Malicious User

Corporate proprietary data and resources resident on the network server are vulnerable to malicious
users' gaining access to the service provider's network, and thereby access to corporate proprietary data and
resources. The service provider's network access can be obtained by these malicious users' acting in one of the
preceding roles or exploiting a vulnerability in the service provider's overall system.

The protections here are the same as the protections for the administrative server.

Protecting Vulnerabilities of the Gateway


As stated in Chapter 10, the gateway is functionally not much more than a server that performs some processing
to convert Web traffic to a form compatible with the wireless device. You will notice that vulnerabilities and
protections listed for the network server mirror those for the administrative server. Likewise, the vulnerabilities
and protections for the gateway, Web server, and backend server are similar to those for the administrative
server. Therefore, we will not specifically cover the protections for the gateway, Web server, and backend server.
In performing an analysis of an actual system, you would want to call them out specifically so that you can
perform the define phase, where the trade-offs between security, functionality, and managerial properties are
decided.

Prioritizing
Unless you have unlimited resources to apply to the system, you are probably wondering how to prioritize the
protections identified. We have done a lot of work so far, and you have a lot of data that must be examined
during the next I-ADD define phase, where you make the decisions about which to implement and which to
forego. To assist with prioritizing the protections, we recommend creating a vulnerability/protection covering
matrix. For those not familiar with covering matrices, what we are referring to is basically a chart. For a large or
complicated system, this could be a huge chart.

Along the vertical side (rows), you list the vulnerabilities, and along the horizontal side (columns), you list the
protections. There are two ways to fill in the chart, depending on what you are trying to accomplish. You can
simply put an X or some other symbol to indicate that the protection works against a given vulnerability. An
alternative is to provide a numerical value, on a scale that makes sense for your particular system, indicating how
well the protection works against the given vulnerability. We prefer the latter in most cases because it enables
you to identify or prioritize the protections based on utility and not simply on the minimum protections needed to
cover all the vulnerabilities.

We want to stress that filling the matrix with numerical values is not a trivial task. It requires security expertise to
assess the utility of a protection mechanism accurately against a particular vulnerability. Performing this activity
without this expertise could provide a false or inaccurate assessment, which would cause your system to be
under-protected and vulnerable. Although this may be acceptable, assuming that risk should be a conscious decision. This can be accomplished only by having complete and accurate information on which to base these decisions.

As an example, let us create a portion of the vulnerability/protection covering matrix for our sample generic
system. We will look at the vulnerabilities of the wireless device itself to loss and theft. We will examine only
these two aspects so that the chart doesn't get too big. This should be sufficient to demonstrate how the matrix is
created.

You generate a matrix with the vulnerabilities of the physical device along the rows and the protections along the
top. You then fill in the matrix with values indicating how well the protection works against the vulnerability.
We will use a scale of 1–5, in which 5 indicates the most protection and 1 the least. Again, how you determine
the scale and the differentiators between a 4 and 5, for example, is dependent on the system, your level of
expertise, and how you intend to use the information. The scale and differentiators should be identified and
agreed on ahead of time if this is going to be a team effort. Because this is a nonspecific system, you use the
numbers as a relative weighting against the other protections. Table 11.1 depicts what your vulnerability/
protection covering matrix would look like.

You can now see how certain protections provide utility against more than one vulnerability. Also note that not
all protections have utility against a given vulnerability, particularly as you expand the matrix to encompass the
entire system. One additional aspect (which isn't captured here but could be) is to indicate that a listed protection may actually make a vulnerability easier to exploit. For example, a smaller device may make it easier to steal, so
perhaps there should be a -2 in that column.

Table 11.1. A Vulnerability/Protection Covering Matrix

Vulnerability | Smaller Device | Carrying Case or Holster | Wearable Interface | Proximity Sensor | Lower Device Cost | External Authentication Token | Biometric Authorization
Loss          | 1              | 2                        | 4                  | 4                | 5                 |                               |
Theft         | 1              |                          | 3                  | 2                | 5                 | 4                             | 5

Looking only at the chart, we see that making the device inexpensive clearly is the best single choice to address
both these issues. Logically, this makes sense but may not be an acceptable trade-off when weighed against the
managerial factors. However, that analysis is done during the next phase. This table should strictly capture the
utility of the protections without regard to implementation or other factors. During the next phase, these numbers
can be modified in another matrix, which captures the utility considering these other factors. Then an overall
matrix containing a weighted number combining all the previous charts can be made. Again, there is flexibility
here to use this tool as it best fits the needs of the analysis being performed.
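
To make the bookkeeping concrete, here is a minimal sketch (in Python, purely illustrative; the scores are those shown in Table 11.1, and the managerial weights are invented for the example) of how a covering matrix can be represented and how protections can be ranked by total utility, with an optional weighted pass for the define phase.

    # A minimal, illustrative covering matrix: rows are vulnerabilities,
    # columns are protections, and each cell holds a utility score from 1 to 5.
    # None means the protection has no utility against that vulnerability.
    protections = [
        "Smaller Device", "Carrying Case or Holster", "Wearable Interface",
        "Proximity Sensor", "Lower Device Cost",
        "External Authentication Token", "Biometric Authorization",
    ]
    matrix = {
        "Loss":  [1, 2,    4, 4, 5, None, None],
        "Theft": [1, None, 3, 2, 5, 4,    5],
    }

    def total_utility(col):
        """Sum a protection's utility across every vulnerability it covers."""
        return sum(row[col] or 0 for row in matrix.values())

    # Rank protections by raw utility (the analyze-phase view).
    for col in sorted(range(len(protections)), key=total_utility, reverse=True):
        print(f"{protections[col]:30s} {total_utility(col)}")

    # During the define phase, the same scores can be re-weighted by managerial
    # factors (cost, usability, schedule); the weights below are hypothetical.
    weights = {"Biometric Authorization": 0.5, "Lower Device Cost": 1.2}
    def weighted_utility(col):
        return weights.get(protections[col], 1.0) * total_utility(col)

The same structure extends naturally to negative scores for protections that make a vulnerability easier to exploit, such as the -2 suggested above for a smaller device against theft.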

One final point we want to emphasize is that although picking a single protection for each vulnerability may be
tempting, it is not the best approach from a security perspective. The use of a single protection mechanism may
present only a simple obstacle for the would-be attacker to overcome and, therefore, may well be worth the attacker's effort to exploit. The use of multiple protections tends to increase the overall security tremendously because
compromising a single protection does not necessarily leave the system vulnerable.

Additionally, the use of one protection can enhance or be enhanced by another protection. For example,
implementing a wearable device provides some protection against theft, as well as loss, and also allows biometric authentication to be performed more readily, which has great utility against theft. Redundancy of protections
against a vulnerability should be attempted when feasible, particularly if the individual protections do not have
very high utility.

Building Trust—Application Security


But that can't happen to us because it's always been a matter of trust.

—Billy Joel, "A Matter of Trust"

In closing this chapter, we get around to the core component over which developers have control: application
security. The number of protections you place in an application is dependent on the level of trust you place in the
rest of the system components' having done the right thing. Before going any further, let us share a colorful
phrase often used by one of our co-workers that summarizes one way to view this: "If you didn't build it, it's
crap!"
Granted, this may be extreme in most situations. It does, however, set the stage for the key to application
security, which, when it comes down to it, is the entire purpose of this chapter. This key is, application
developers should do what they can to ensure the integrity, security, and utility of the data or resources their
applications provide, utilize, or transport. Further, developers should not rely on others for their security and
should do what they can in this regard.

To put it more generically, those responsible for each component of a system should do what is in their power to
provide protections for resources or potential targets for malicious attackers. If it is necessary for you to rely on
another component or resource to provide this protection for you, do not assume that this has been done. Verify
that what you expect to be present and functional actually is. Anticipate failures or compromises in these
protections, and have redundant or supporting mechanisms in place to protect critical information or components.
If the protections required are outside the scope of your portion of a system, you should monitor or log activity
and issue a warning when something potentially unscrupulous is detected.
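
As a purely illustrative sketch of this "verify, then detect" posture, the Python fragment below refuses to trust that some other component turned on transport encryption: it checks the negotiated TLS version itself before sending and logs a warning when the expected protection is absent or fails. The host, port, and payload are placeholders, not details of any system described here.

    import logging
    import socket
    import ssl

    log = logging.getLogger("appsec")

    def send_record(host: str, port: int, payload: bytes) -> None:
        """Send data only over a transport we have verified ourselves."""
        context = ssl.create_default_context()  # certificate verification on by default
        try:
            with socket.create_connection((host, port), timeout=10) as raw:
                with context.wrap_socket(raw, server_hostname=host) as tls:
                    # Do not assume the channel is adequate; confirm it.
                    if tls.version() in (None, "TLSv1", "TLSv1.1"):
                        log.warning("weak or missing TLS (%s); refusing to send", tls.version())
                        return
                    tls.sendall(payload)
        except (ssl.SSLError, OSError) as exc:
            # The protection we relied on failed: detect it and raise the alarm.
            log.warning("secure connection to %s:%d failed: %s", host, port, exc)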

In closing, we leave you with another quote, which verges on being rather "spy versus spy" in nature. You have
followed the I-ADD process and have done all that is within your power to protect the resources of concern.
There may still be areas where the system is vulnerable, either because of the security/functionality trade-offs
(more on this in Chapter 12, "Define and Design") or because you intentionally presented or left a vulnerability
unmitigated. However, this seeming vulnerability is heavily monitored as an early warning system for potential
malicious activity. The point is that sometimes knowing when you have been compromised can be as important as protecting against the compromise. If you can't protect, detect.

Chapter 12. Define and Design


In theory, there is no difference between theory and practice. But, in practice, there is.

—Jan L.A. van de Snepscheut

Now, if you have just finished reading Chapter 11, "Analyze Mitigations and Protections," you might be feeling
overwhelmed. The ideal situation for securing the components of a wireless system was presented to you. If we had been in a room with a whiteboard, we would have mapped out a best-case scenario for protection. You should feel assured that you have investigated the intricacies of each component of the
system. You have investigated the networks, technologies, development languages, and devices. We have also
outlined the steps necessary for you to analyze the risks in a given system and integrate those risks with the
knowledge of a system. At this point, having completed the I-ADD identify and analyze phases, you should feel
confident that you know the system you are investigating.

In your own wireless system, whether at home or in an office, perhaps some years after this book is printed, you
will need to do your own investigating into the details of each component of your system. Perhaps the mobile
device you use to connect to a wireless LAN in your home will be nothing like those described in Chapter 4,
"Devices." You should consult whitepapers, technical documents, SDKs, and the like, to determine the specific
security issues associated with that device. Perhaps the WAP debate will be over. WAP may be a distant memory
of the early part of this decade, and Bluetooth may have never gained enough market share to be considered a contender for a wireless cable replacement technology. Perhaps the IEEE will be revising its 802.11t standard, having gone through ten iterations since 802.11b. Whether any or all of this is true, you have learned the
process necessary to secure a wireless, or any information processing, system.

To recap, you must know the system inside and out to protect it. You must understand its capabilities and its
hindrances. You must have a firm grasp of security concepts to know what you are working towards. With this
knowledge, you sit down, as demonstrated in Chapter 11, and map out the perfect security solution for your
system. Then you will take a step back and realize one fatal flaw—you do not have a budget large enough to
afford the solutions you have proffered. Also, you will have very angry users if you develop applications and technologies airtight enough to protect even life-critical information. Why? Because the price for this protection is severely restricted functionality. Unless you work for an organization that protects national security,
the "best" (and, mind you, we use that term loosely) security solution in theory is not necessarily the best one in
practice. Certain trade-offs need to be made. This trade-off process is much more difficult than developing
security solutions in a budget-free and constraint-free environment. That environment is not the real world,
though.

The challenge during these last two phases (define and design) is choosing from the available options to create a
tailored, specific, feasible solution that fits within budget and time constraints and offers the right amounts of
security to the appropriate portions of your system. Fortunately, you have already gathered all the tools necessary
for completing this seemingly insurmountable task. The identify and analyze phases are both time- and resource-
intensive. The define and design phases are straightforward. In smaller systems, the two processes can be
completed simultaneously.

To make good decisions with respect to security/functionality trade-offs, you need priorities. These have already
been decided. In the identify phase you determine the data to be protected and the roles of persons who have
influence over targets (we compiled an extensive and exhaustive list). In Chapter 11, you use these items in the
analyze phase to develop general mitigations and protections.

To give meaning to these mitigations and to prioritize during the define and design phases, you will revisit the
case studies from the first chapter. Only when examined in the context of an actual system is it possible to
identify the appropriate strategy for developing a security solution. For the sake of demonstrating this security/
functionality trade-off process, we will introduce specifics into each case study in order to develop an appropriate
solution. In Chapter 11 we suggest the use of vulnerability/protection matrices. These should be used in security
planning. In the interest of space, we are not going to present comprehensive, exhaustive charts here (the process of filling in those charts is outlined in Chapter 11). Instead, we will provide an analysis, based on the results of those charts, of the points that are important to understanding the ramifications of each wireless system in the case studies.

The design phase is not an assessment phase like the previous three phases (identify, analyze, and define). It is
more a philosophy or methodology that allows the incorporation of security measures into a system efficiently
and cost-effectively. If the time is taken to perform the process on a system during the early stages, the security
measures are much easier to design in to the system, rather than try to patch them into a system already under
development or developed. Knowing the risks you are assuming is as important as knowing what protections are
in place.

The first step in the I-ADD define phase of making security/functionality trade-offs is to eliminate target categories that are beyond your control. In all four of the case studies, the transceiver is beyond control. Unless you are the device manufacturer, you cannot implement the mitigations identified for transceivers. Instead, you have to evaluate the risks you incur when you cannot mitigate. In the case studies, the risks may be of
relatively low importance. The transceiver is left vulnerable to malicious parties causing the following:

• Modification and manipulation

• Denial of service at crucial times

• Access to unauthorized users (who may not pay, may use the service to spam, and so on)

• Transmission of inaccurate information

These risks are relevant but unavoidable at this time. Because they are beyond your control, mechanisms to
detect each should be determined during the design phase. An appropriate plan must be devised that would go
into effect should any of the problems listed come to pass. The risks involved with vulnerabilities at the gateway
fall in the same category.

The rest of the targets discussed in the mitigations and protections bear some relevance in each case study. Now
your job is to determine which are important enough to mitigate and at what cost.

The Case Studies Revisited


Without further ado, let us look at the first case study, the hospital (refer to Chapter 1, "Wireless Technologies").

The Hospital
In the hospital scenario, much is at stake. You are concerned with protecting a lot of information (now required
by federal law—see Chapter 8, "Privacy," for information on privacy and legislation), some of which is critical to
Reggie's staying alive. In this case study, you are concerned with avoiding, first and foremost, loss of life, then
financial loss and loss of privacy, and last, scheduling conflicts. Several pieces of data that need to be protected
in transit, during storage on servers, and in presentation and storage on the device could be critical to Reggie's
life and health:

• Medical records and history

• Diagnoses

• Recommendation for surgery

• Prescription information

Also, certain data that needs to be protected in transit, during storage on servers, and in presentation and storage
on Reggie's device could, if compromised, cause him to incur financial loss:

• Credit card information

• Voice approval for credit card authorization

• Insurance information

Other pieces of data that need to be protected in transit, during storage on servers, and in presentation and storage
on Reggie's device could, if compromised, result in his loss of privacy:

• Medical records and history (including all related events of the day)

• Insurance information

• Credit card information

Finally, certain data that should be protected in transit, during storage on servers, and in presentation and storage
on his device could, if compromised, result in an inconvenience (namely, a scheduling or timing conflict):

• Surgery scheduling

• Appointment scheduling for follow-up

These groups of data fit in different places in the mitigation and protection schema. Five of the identified protection categories are most significant in this case study. In making security/functionality trade-offs, the most
relevant should be considered first. For the hospital, the following should be considered:

• Protecting the wireless device

• Protecting personal data on the PDA

• Protecting corporate or third-party information

• Protecting corporate proprietary data and resources

• Protecting the network server

The next step is to integrate the data priorities with the protection categories. We will go through the identified
protection categories in the bulleted list and apply our priorities in choosing mitigations. All mitigations are
chosen from the information in Chapter 11.

Protecting the Wireless Device

Although device manufacturers play a certain role in this protection category, it is up to Reggie and Anne to take
necessary precautions. Reggie risks losing his own personal data, which, although potentially dangerous, affects
only him. Anne, on the other hand, puts many other people at risk by losing her device, so she must go to greater
lengths to protect hers. Reggie could simply employ a holster so that his device is wearable, but Anne could be
required by the hospital to use an external authentication mechanism to access the device to ensure that, in the
event of loss, no one else can access the information available to the device. (See the protection/vulnerability
matrix at the end of Chapter 11 for how we arrived at this conclusion.)

Protecting Personal Data on the PDA

In this protection category and case study, we will assume that the application on Anne's PDA that allows her to
view medical records and the like is a proprietary application. Taking a look at the mitigations suggested in this
category, we can be astute architects of this system and have the foresight to implement a certification program.
Enforced by the OS on all PDAs used by doctors in this hospital, this would allow only digitally signed code
from authorized certifiers to be loaded on the device. The software could then be validated to be secure, reliable,
and functioning as intended. The authenticity and integrity of the code could be maintained, and we could
mitigate the risk of a malicious application developer's compromising Reggie's surgery by inserting dangerous
code into Anne's PDA.
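
What such a certification program boils down to at load time is verifying a digital signature over the application image against an authorized certifier's public key. The sketch below is a minimal illustration using the third-party Python cryptography package; the signing format, key distribution, and the OS hook that enforces the check are assumptions, since the case study does not specify them.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def is_certified(app_image: bytes, signature: bytes, certifier_pem: bytes) -> bool:
        """Return True only if app_image carries a valid signature from the certifier."""
        public_key = serialization.load_pem_public_key(certifier_pem)  # assumes an RSA certifier key
        try:
            public_key.verify(
                signature,
                app_image,
                padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                            salt_length=padding.PSS.MAX_LENGTH),
                hashes.SHA256(),
            )
            return True
        except InvalidSignature:
            return False

    # A loader enforcing the policy would refuse anything that fails the check:
    # if not is_certified(image, sig, certifier_public_key_pem):
    #     raise PermissionError("unsigned or tampered application rejected")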

In this hospital's budget, security is important but limited. The hospital believes that this code certification
solution is enough to mitigate the application risk. It does not want to invest in a trusted OS for each device, nor in the time and money involved in implementing a hardware-resident integrity check of the device and OS. The
hospital will, however, provide funding to purchase an additional encryption program for the PDAs so that
malicious application support personnel cannot compromise stored data. There is a drawback to this added
encryption program, however. Anne's battery lasts only for about 24 hours. Sometimes Anne is required to work
36-hour shifts, and on these shifts she must operate without her Compaq iPAQ PDA for 2 hours while the battery
is recharging. She is not pleased with this solution because she needs constant access to her information, so she is
pressing the hospital wireless security team for a better solution.

Reggie's funds are far less than those of the hospital. He spends most of his money on co-pays these days and cannot afford to purchase an extra encryption program for his PDA. Instead, he relies on himself to keep the
device out of the wrong hands.

Protecting Corporate or Third-Party Information

In the hospital, device and application support personnel are one and the same. The encryption application on the
device has no back doors and offers reasonable protection against these individuals' possible attempts to
commandeer a device or its data. You do not have control over the WSP personnel in this case. The risks left
open there are somewhat mitigated by the extra encryption. The mitigations used in the preceding section to
protect information apply here as well.

Protecting Corporate Proprietary Data and Resources and the Network Server

Data on the administrative and network servers that needs to be protected is subject to malicious WSP personnel,
application developers, application support personnel, and users. You do not have control over the WSP
personnel, and risks there are significant. Any persons with access to the hospital's servers can access and
manipulate patient data, records, X rays, scheduling, and prescriptions. This is extremely dangerous.

To protect the server, we will choose mitigations from the lists developed (in Chapter 11). Sensitive data will be
stored in an encrypted form, authentication will be required before allowing access to sensitive data, access and
activity on the system will be logged, and use of a physical token will be required as part of the authentication
process. These will mitigate risks posed by all noted potentially malicious parties. The servers themselves are protected outside the realm of the wireless system; they are secured with common wired-technology techniques. In addition to these mitigations, it is important to secure the wireless link administration interfaces that connect to the servers with the same protection afforded Internet or other untrusted traffic.
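
A toy sketch of these server-side mitigations working together is shown below (Python, illustrative only; the record layout, the key handling, and the authentication check are assumptions): sensitive records are encrypted at rest, a successful authentication is required before decryption, and every access attempt is logged.

    import logging
    from cryptography.fernet import Fernet

    log = logging.getLogger("records")

    class RecordStore:
        """Toy store: encrypt at rest, authenticate before access, log every attempt."""

        def __init__(self, key: bytes):
            self._fernet = Fernet(key)   # in practice the key lives in an HSM or key server
            self._rows = {}              # patient id -> ciphertext

        def put(self, patient_id: str, record: bytes) -> None:
            self._rows[patient_id] = self._fernet.encrypt(record)

        def get(self, patient_id: str, user: str, authenticated: bool) -> bytes:
            log.info("access attempt: user=%s patient=%s", user, patient_id)
            if not authenticated:        # e.g. password plus physical token, checked upstream
                log.warning("denied unauthenticated access by %s", user)
                raise PermissionError("authentication required")
            return self._fernet.decrypt(self._rows[patient_id])

    # store = RecordStore(Fernet.generate_key())
    # store.put("reggie", b"diagnosis: ...")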

Considerations

Several mitigation techniques were selected, bearing in mind that some data is to be protected at greater cost than
other data. The servers and devices in this case study contain highly sensitive data that requires integrity and
privacy to a paranoid degree. The hospital does not go to the nth degree in securing its systems. Not every
offered mitigation technique was employed, but an appropriate set was determined. Anne still wants a better solution than having to recharge her device in the middle of her shift. This can be addressed in a variety of ways: engineering an encryption solution in a less battery-intensive development language such as J2ME, employing an Elliptic Curve Cryptography algorithm that is less battery-intensive, purchasing longer-lasting batteries for her device, or perhaps even switching to a less resource-intensive device, such as a Handspring Visor Edge running the Palm OS, which has an expansion capability for additional memory as well as encryption.
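
The appeal of Elliptic Curve Cryptography on a battery-constrained device is that it reaches comparable security with much smaller keys, and therefore generally cheaper computation and transmission, than RSA at an equivalent strength. A minimal illustration of an ECDH key agreement using the Python cryptography package follows; the curve and key-derivation parameters are assumptions made for the example, not choices taken from the case study.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each side generates a small (256-bit) elliptic-curve key pair...
    device_key = ec.generate_private_key(ec.SECP256R1())
    server_key = ec.generate_private_key(ec.SECP256R1())

    # ...and both derive the same shared secret from the other side's public key.
    device_secret = device_key.exchange(ec.ECDH(), server_key.public_key())
    server_secret = server_key.exchange(ec.ECDH(), device_key.public_key())
    assert device_secret == server_secret

    # Stretch the raw secret into a symmetric session key for bulk encryption.
    session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                       salt=None, info=b"pda-session").derive(device_secret)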

Reggie, by the way, hangs on tightly to his device and recovers easily from his surgery, updating Anne
periodically with status reports that allow him to minimize his trips to the hospital for follow-up visits. Because
the hospital appropriately protected his credit card information (and was fortunate not to have malicious insiders
compromise the system), he has not seen any unauthorized transactions on his statements and was billed
correctly for his co-pay.

Using Wireless Devices in a Medical Environment


Another aspect to our hospital case study is the use of wireless devices in a hospital. We do not
investigate the networks and communication in great detail, but implementing a wireless solution in a
medical environment is tricky. Wireless devices can cause conflicts, malfunctions, or other problems
with heart monitors and other critical medical technologies. Wireless communication can be
implemented in a hospital only with strict adherence to restrictions and very close communication
among medical technologists and wireless technologists to develop explicit and exhaustive plans for
use or prohibition of different types of wireless technology throughout the hospital system.

The Office Complex


AdEx and NitroSoft are embarking on a presentation. In this office complex case study, you do not have any data
that is critical to someone's life or health. No data can cause direct financial loss. You do have time-critical
messages and proprietary sensitive data. After examining a hospital situation, this seems almost pedestrian. It is
business-critical, however, and businesses invest good money in wireless solutions that must work.

To close this deal, Kathleen needed to deliver the best presentation possible. Without instant communication
from her team members, she would not have been able to schedule the lunch meeting and subsequent
presentation properly. Losing or damaging the acquisition information Louis sent her to incorporate into the
presentation would have been potentially damaging to their relationship and a potential deal-killer. If Louis saw a similar pitch from three other advertising firms and Kathleen was able to adapt the most quickly and make the most efficient use of Louis's time, he would choose to do business with AdEx every time. The data that needs to be
protected here are the presentation slides, the information sent by Louis, and the contents (and transmission
expediency) of the e-mail exchanges.

The data in this case study fits in different places in the mitigation and protection schema. Two of the identified
protection categories are most significant in this case study. In making security/functionality trade-offs, we will
consider the most relevant first. For this second case study, the following should be considered:

• Protecting the wireless device

• Protecting corporate or third-party information

This may seem troubling—only two of the protection categories? For this isolated example, that's exactly right.
The users are not accessing sensitive information on corporate servers, and they should not be storing personal
information on devices used for work. The devices are used to send and receive information in this case.

Protecting the Wireless Device

In this case study, no extraordinary means need be taken to secure the physical devices. Kathleen, Louis, and
their respective co-workers are conscientious and enable passwords to secure their devices.

Protecting Corporate or Third-Party Information


The presentation and e-mail messages exchanged between the players in this case study should be protected in
transmission and on the device. Only designated recipients should receive e-mails, and the time stamps on those
e-mails are critical. Device support personnel, WSP personnel, application developers, and application support
personnel are beyond your control because this is not a proprietary application or device.

The wireless solutions used by Kathleen and the AdEx staff are Pocket PCs. They make use of the slimmed-
down office products available for their devices, and their company has not engineered any solutions on top of
the devices. The Pocket PCs are used out of the box and have no encryption capabilities. AdEx is operating on a
limited budget, and even the few mobile devices it has consented to procure are stretching its funds.

The appropriate mitigation for this situation is institution of a checklist and an oversight procedure. This
technique is typically applied for application support personnel to ensure that any security bypass settings or
diagnostic modes have been properly reset to operation settings. In this case, however, it is used to ensure that
the settings on each device are compliant with the company security standards. What Kathleen and her team can do to prevent others from viewing their presentation is to protect the individual document with a password.

The data is at risk during transmission because no encryption is employed. The group could switch to a
BlackBerry solution so that encryption is native to the device. However, they would not be able to view
presentations or other office documents on their PDAs. The trade-off they are willing to make is that, although
their transmitted data is vulnerable to capture by anyone sniffing the wireless network, they are afforded some
protection by password-protecting the document. They accept this risk in return for being able to integrate office
documents quickly and easily between their desktops and Pocket PCs.

A key point here is that the users, and not just the system engineers, must understand and accept these limitations. When trade-offs are made that affect how a system operates, or when there are areas where users may make assumptions about how the system operates, users must be informed of the risks they are assuming and the appropriate actions they should take to mitigate those risks.

The NitroSoft group does use BlackBerry devices, so they cannot view attachments but are able to forward them
to the AdEx folks. Their communication is secured only between the redirector on each user's desktop and the
BlackBerry device. When they forward the message, the message and its attachment can be viewed in the clear.

Considerations

In an office scenario, financial motives, incentives, and risks are often a driving force in decision-making. When
other features, such as reliability, speed, profit, or commercial appeal, are at stake, security is often relegated to
the lowest priority. Sometimes this is acceptable, and sometimes it isn't. Very few development projects finish
ahead of schedule and under budget, so finding ways to recover time and money is important. For NitroSoft,
implementing all the security solutions we suggested may or may not be financially sensible. Performing a return
on investment (ROI) analysis might be a good idea here. Because you are not concerned with people's lives, you
do not have the same urgency as in a hospital scenario. You do want to ensure profitability and sound business
practices, though, so you do not want to let security fall by the wayside. Examining the financial benefits of
building security into your environment gives you information essential to making good business decisions.

The University Campus


The university case study opens the door to interesting questions of data security. The data to be protected may
not be apparent at first glance, but the biggest risk is cheating. Steve, Brian, and Jessie's laptops are subject to
college students' attempts to gather information about assignments, projects, and tests while online. During Brian
and Jessie's NetMeetings, they are targets for streams of attacks. Additionally, if students could access Steve's
grading spreadsheet, they could give themselves better grades and not worry about cheating on subsequent tests.
All the data to be protected here is of an academic and integrity-oriented nature. Furthermore, Steve, Brian, and
Jessie's personal data needs to be protected so that, unbothered, they can continue with their own research.

The devices are the easy part of this case study. They are all laptops with wireless NICs that operate via 802.11b.
The data on these laptops can be encrypted, but the transmission cannot be adequately protected. If you recall from Chapter 6, "Cryptography," the WEP algorithm that encrypts wireless traffic is easily broken. Unfortunately, as of yet, not much can be done short of the university's implementing a VPN solution.

The data in this case study fits in different places in the mitigation and protection schema. Three of the identified protection categories are most significant in this third case study. In making security/functionality trade-offs, you
first consider the most relevant. For the university case study, the following should be considered:

• Protecting corporate or third-party information

• Protecting user online activities, usage patterns, location, and movement

• Protecting corporate proprietary data and resources

Protecting Corporate or Third-Party Information

In this case, the corporate information and third-party information are separate. The corporate information
includes files such as assignments and future tests; the third-party information includes student files stored on
Steve, Brian, or Jessie's laptops for grading and evaluation purposes. In this case study, as in the previous two,
you are not afforded access to the WSP personnel. Also beyond your control are the application developers and
support personnel—all software used in this case study is commercial and not tailored to the university setting.

The mitigation techniques used are unique here (that is, thinking outside the box) because the devices have far
more processing power, memory, and space than PDAs. These laptops can be equipped with encryption software
that the TAs can use to encrypt, for free, the data stored on their laptops. Simply encrypting the data negates compromises based on intercepting the wireless traffic: students in the class could capture files but would not be able to decrypt them without knowledge of the owner's password and possession of the owner's private key.
Alternatively, duplicate copies could be stored on the network or elsewhere so that comparisons could be made,
or some other logging activities could be implemented so that unwanted activity could be detected.

Protecting User Online Activities, Usage Patterns, Location, and Movement

Students who know the TAs' whereabouts and habits can make them easier targets for attempted attacks. The mitigation techniques that could be employed (as noted in Chapter 11) would fall victim to the familiar situation in which a security solution gets tossed by the wayside because functionality would be lost. The TAs would, presumably, trust their students to a certain degree and demand functionality instead of the cumbersome processes necessary to obfuscate their data transmission and location information. This is a risk they assume that
could result in students' cheating or altering grades.

Considerations

This case study uses solutions that are not present in the list of protections for their given vulnerabilities. We
intentionally placed this here to drive home one of the points of this book: Everything has to be tailored.
Sometimes, when following the process prescribed here, you will have to repeat steps, complete them out of
order, or disregard previous research in favor of a new idea. As long as the process is documented and proper justification exists for implementing a new solution, its inclusion warrants investigation.

Solutions introduced at this stage of the game are acceptable, but an auxiliary process should be undertaken if
this is the case. When a solution is introduced at a late stage, the security/functionality trade-off process should
be put on hold for a brief moment. During this hiatus, the new solution should be put to the same rigorous tests
and justification process as the previously developed solutions. The new solution must meet the same strict
standards set forth and must accomplish a viable goal.

If, at this stage, you are finding yourself inventing more new solutions than using already developed ones,
something went wrong earlier in the process, before the security/functionality trade-offs. Perhaps, when investigating devices or technologies during the research phases, something was missed—perhaps during the identification of the roles and targets or later at the mitigation development phase. Extra time built into a planning schedule is a nice buffer
for this kind of obstacle. Nowhere in our instructions to you about devising good, solid security solutions do we
say that steps cannot be repeated or revisited. To the contrary, we state that this is an iterative process, so steps
can be repeated at any time necessary, while keeping in mind final goals and objectives.

The Home
The last case study in our set is one that differs for every family's implementation. In this case, Doug, the father, uses the wireless home network for business purposes. He needs to protect his information both in transit and in
storage so that he can maintain client confidentiality. Emily uses the Internet via her wireless laptop to conduct
research for a law firm. The information she views is considered sensitive. Anyone tracking her Web surfing
habits or pages accessed could learn information about the cases she researches, which could be used against the
firm's clients. She needs to protect her activity, as well as her data and the transmission of that data to the firm's
corporate network. The children's systems introduce some extra vulnerability, but nothing beyond the vulnerabilities already introduced by the parents' use of the wireless system.

Protection of the physical device is not important in this scenario. The devices are assumed to be safe because
they are not left unattended outside the home. Protecting network and administrative servers is also not critical to this case study. For the home case study, the following should be considered:

• Protecting corporate or third-party information

• Protecting user online activities, usage patterns, location, and movement

• Protecting access to network and online services

Protecting Corporate or Third-Party Information

To protect business information on Doug's laptop and legal case–related information on Emily's, the two encrypt
the data on their laptops. Furthermore, they offload any sensitive data before getting the systems serviced.

Protecting User Online Activities, Usage Patterns, Location, and Movement

Emily's law office is concerned enough with protecting her activity online that it is willing to negotiate with a
local ISP to provide a VPN. The office realizes that several of its staff will benefit, so this is worth investing in.
Emily will also encrypt her traffic by using WEP encryption, recognizing that this offers only a thin layer of
protection. The combination of these two provides adequate security for her purposes.

Protecting Access to Network and Online Services

A lesser concern in this case study, but a concern nonetheless, is protecting access to the family's wireless
network. The access point the family is using can be accessed from as far as 150 feet away. Their neighbors
could access the wireless network in Doug and Emily's house from their own back porch. Also, someone driving
by could access their network. They configure the access point to accept traffic only from the MAC addresses of
the cards in each of the authorized laptops and desktops. In a corporate environment this protection would not be
sufficient because it can be circumvented. Doug and Emily are sure that their neighbors will not bother. They
cannot be sure that they are protected from someone driving by but are not concerned about the risk.
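
MAC filtering is nothing more than an allow-list check on the address a client reports, which is exactly why it can be circumvented: the address is self-asserted and easily forged. The toy check below (hypothetical; real access points implement this in firmware or a configuration file, and the addresses shown are made up) illustrates both the mechanism and its weakness.

    # Hypothetical allow list as an access point might apply it.
    ALLOWED_MACS = {
        "00:02:2d:aa:bb:cc",   # Doug's laptop (example address)
        "00:02:2d:dd:ee:ff",   # Emily's laptop (example address)
    }

    def accept_association(client_mac: str) -> bool:
        """Accept traffic only from known cards.

        Weak on its own: the MAC is whatever the client claims, so anyone who
        sniffs an allowed address can spoof it and associate anyway.
        """
        return client_mac.lower() in ALLOWED_MACS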

In this case study, you do not have control of device support personnel, ISP personnel, or application developers.
The family does employ passwords (which they change once a month) to access online and network services.
This provides an adequate level of authentication for their situation.

Considerations

Of all four case studies, this could arguably require the least stringent security. In a typical family network,
however, privacy should be considered for more reasons than protecting your private data. As of yet, sales and
marketing have not been exploiting home networks to target potential customers. That is not to say that this is not
around the corner. By protecting your family's information and Internet habits, you can reduce future privacy
risks. The methods employed in a home network should be commensurate with your own needs. Unfortunately,
this often means relying on potentially amateur advice from a local technical support company. If you are a home user, the best way to determine the right amount of security is to learn about the risks and evaluate the differences between personal and corporate solutions.

Case Studies Conclusion


These case studies provide real-world examples of putting security assessment and planning techniques into
action. The solutions in each one were determined by following the comprehensive process outlined in the preceding
chapters of this book. One factor not discussed here (mostly because it instills fear into the heart of every security
architect) is the human factor.

Different groups of people will inevitably arrive at different conclusions about appropriate security for identical
systems. By following this process, however, the delta between two groups should be minimal. By identifying
fundamental information, each group can lay out the whole system and analyze it piecemeal before making
decisions. By establishing justifications for each security recommendation, the two groups' justifications and research can be compared to analyze discrepancies, and the results obtained by the two groups can then be compiled and merged into one security solution agreeable to all.

Just the Beginning


Although this is the functional end of our teaching you how to design and implement a comprehensive security
solution, it should be the beginning of your security process. After reading this book, you have the tools
necessary to begin investigating security solutions of your own. This book cannot possibly provide the right
answers for every security need. You have learned that security solutions cannot come out of a box in a neat little
package. They need to be specifically tailored to each individual situation and should be bolstered by copious
amounts of in-depth research. To effectively protect something that needs protection, you must first understand
the technologies, devices, networks, and languages, as well as the risks, business needs, and users. A good
security solution should take a long time to construct. It should be reviewed, and reviewed again. The first step in
designing a robust solution is to know the system you are working with. Only then can you protect it with the
force it deserves.

Wireless security is not fundamentally different from any other type of security. The intricacies vary, yes, but the
process is the same. What we have shown in this book is that many issues associated with wireless systems need
to be understood to implement tried and true security plans effectively. The security must be woven through a
system and not tacked on as an afterthought. As the wireless industry grows and changes, it would behoove all
players involved to make security a top priority. This will make users happy, increase revenue, and lead to secure
systems that are leaps and bounds ahead of their wired predecessors.

Afterword: The Future of Wireless Security


This book is just the beginning. Wireless security will evolve continually to fit the needs of advancing technology. As devices change, the requirements for protecting them will, too. As infrastructure changes, the
requirements for implementing comprehensive security plans will follow suit. As the public's attitude towards
wireless security and privacy changes, designers and developers will introduce new market differentiators.

Will wireless devices, as we know them today, exist ten years from now? Not a chance. If you compare the first
computer you saw with the one you own now, you can appreciate what the difference will be. The process we
taught you in this text, however, will make your security solutions robust and tailored in such a way as to apply
to third-generation and fourth-generation devices. Maybe laptops will connect seamlessly and securely to
wireless networks across the globe without user intervention. Maybe cryptographic solutions will advance to the
point that they can encrypt wireless communication to the same degree as wired without introducing latency into
a system. Then again, maybe none of this will happen.

What is more important than playing guessing games about the next best-selling wireless technology is to build security solutions that can be expanded if necessary. A strong security plan should have hooks built in for future modifications. If you follow the process taught in this book and build a solution into your system from the beginning, you won't have to start from scratch every time a component of the system changes.

Our research will continue to evolve as well. Security professionals and wireless professionals alike will continue
to investigate new ways to make the technology better, faster, smaller, cheaper, and more secure. Each
subsequent version of WEP will be investigated. New flaws will be found, and new standards created. Research
will be focused in each arena we discussed here. Everything from the virtual machines, to the browsers,
networks, technologies, and devices used in wireless systems should be systematically torn apart from time to
time and reinvestigated. The bulk of our research will focus on the software realm. Keeping abreast of changes in
your systems and addressing them via application security are crucial because application security is the realm
over which you have the most control. To protect your business assets, confidential information, consumer
interests, and proprietary code, you need to architect solutions that are creative yet concrete.

We expect increasing standardization in the wireless world. As this happens, architecting security solutions will
become less tedious. As the industry becomes more streamlined, there will be room for security standardization
and widespread acceptance. The nature of wireless security is global. Technologies across continents will likely
merge and shape one another's growth. Wireless security will change to meet the global needs of anyone using a
laptop with a wireless NIC, a PDA, or a cell phone. The wireless market will be redefined, and wireless security
research will take a different shape. The same security principles will apply, but we will continue to have new
and exciting obstacles to scale. We wish you the best of luck in your endeavors.
