Published by: Human Rights Alert, NGO on Aug 30, 2010
Copyright: Attribution Non-commercial

Availability:

Read on Scribd mobile: iPhone, iPad and Android.
download as PDF, TXT or read online from Scribd
See more
See less

05/24/2012

pdf

text

original

 
On the Difficulty of Validating Voting Machine Software with Software
Ryan Gardner
ryan@cs.jhu.edu
Sujata Garera
sgarera@cs.jhu.edu
Aviel D. Rubin
rubin@jhu.edu
Abstract
We studied the notion of human verification of software-based attestation, which we base on the Pioneer framework. We demonstrate that the current state of the art in software-based attestation is not sufficiently robust to provide humanly verifiable voting machine integrity in practice. We design and implement a self-attesting machine based on Pioneer and modify, and in some cases, correct the Pioneer code to make it functional and more secure. We then implement it into the GRUB bootloader, along with several other modifications, to produce a voting machine that authenticates and loads both the Diebold AccuVote-TS voting software as well as its underlying operating system. Finally, we implement an attack on the system that indicates that it is currently impractical for use and argue that as technology advances, the attack will likely become more effective.
1 Introduction
The Florida 2000 debacle in the United States resulted in passage of the Help America Vote Act, which provided billions of dollars in funding to the states to invest in electronic voting systems. As a result, the use of the Direct Recording Electronic (DRE) became widespread. These software-based systems have come under fire from security experts and activists claiming, among other things, that there is no way to verify that the machines do not contain malicious code. This is a valid concern. In this paper, we address a subset of this problem.

We examine the possibility of a mechanism by which a poll worker, on election day, could validate that the software in a voting machine is the software that was produced by the vendor, without modification. Although
This work was supported by NSF grant CNS-0524252. This paper originally appeared in the USENIX/ACCURATE Electronic Voting Technology Workshop, August 2007.
Johns Hopkins University, Baltimore, Maryland
we developed the idea independently, the notion of human verifiable attestation was first recently introduced by Franklin, Luk, Seshadri, and Perrig [12,13]. Our contribution highlights the difficulties in achieving voting software integrity and demonstrates that it is extremely unlikely that current software attestation techniques can provide security for electronic voting.

Validating that software has not changed is not the same as ensuring that it does not contain malicious code. Although making strong guarantees about the security or quality of the software itself may, in many ways, present more challenges than ensuring authenticity, our work does not make any claims in this regard. Other research is developing improvements in this area [6,26,36].

The current solution to the problem of validating that correct, unchanged software is on a DRE is inadequate. While voting machine vendors are encouraged to submit hashes of their DREs' code to the National Software Reference Library (NSRL) [2], there is unfortunately no process in place for verifying these hashes.

Even if there were a method to check the hash of the executing binary on the voting machines in the precincts, it would require complete trust in the hash verification function. Thus, the approach reduces back to the unsolved problem of ensuring that software is authentic. Potential solutions using trusted hardware, such as a processor with Intel Lagrande [20] or AMD Pacifica [4] technology and a Trusted Platform Module (TPM) [33], are similarly problematic. Although they could be used to provide securely signed hashes of the software, one would still require some completely trusted mechanism to verify these signatures.

The goal of our work is to examine the possibility of a solution whereby a human poll worker could validate that the software running on a voting machine has not been modified, without the assistance of any computing device. We describe our implementation of a voting machine verification system designed to allow for the attestation and verification of its software at boot time. Our
framework employs the Diebold voting machine software that was found on the Internet in 2003 [17].

The core of our system is Pioneer [27], a software-based tool for verifying the integrity of code execution. Pioneer provides a mechanism for validating that correct code is running based on some of the performance characteristics of the computer. In particular, it relies on the fact that certain operations would require a noticeable increase in computation time if code were modified. We attempt to build on this concept by extending the Pioneer primitive so that the time difference between a run with legitimate software and one with modified software is easily discernible by a poll worker.

We implement a version of the best known attack on our system, and show that to achieve the desired human verifiability, Pioneer must run uninterrupted for roughly 31 minutes on each machine before every election and the operation must be timed to within several seconds. Every voting machine in the precinct needs to undergo this procedure. We believe that our research demonstrates that this solution is not practical.

Our data and analysis differ considerably from those presented in [27]. We provide evidence that technological advances in processor architecture reduce the security of current software-based attestation techniques. In particular, the increased parallelization of execution makes it difficult to achieve uniqueness of a run-time optimal implementation of a function.
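The timing check at the heart of this design can be made concrete with a small model: an attestation is accepted only when both the checksum value and the time taken to produce it are verified. The sketch below is an illustrative Python model, not the paper's implementation; the function and parameter names are hypothetical, and in the actual scheme the elapsed time is judged by a human with a clock rather than by software.

```python
import time

def verify_attestation(compute_checksum, challenge, expected_checksum,
                       time_threshold_s):
    """Accept only if the checksum matches AND it was produced within
    the time threshold. The security argument is that any dishonest
    computation of the same checksum must exceed the threshold."""
    start = time.monotonic()
    checksum = compute_checksum(challenge)
    elapsed = time.monotonic() - start
    return checksum == expected_checksum and elapsed <= time_threshold_s
```

The entire argument rests on the size of the gap: if the fastest dishonest computation is only milliseconds slower than the honest one, no human observer can distinguish them, which is why the authors inflate the iteration count until the gap reaches seconds.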
2 Threat Model
Every system designed to provide security features needs to be evaluated within a threat model. As such, our work does not address all of the security problems in electronic voting. Rather, we are concerned with the issue of determining whether the software running on a voting machine on election day is the same software that was produced by the manufacturer. In our design, we make several assumptions about the capabilities of the adversary and point out that in the real world, an adversary is likely to be more powerful.

We assume that an adversary will not replace any of the hardware components in the voting system or their firmware. We limit attacks to modifications in the software. Such modifications could occur, for example, if an insider at the manufacturer changes the code after the hash values are computed and before the system is released. The software is also vulnerable to changes when the machines are in storage. It is common practice prior to elections to store voting machines overnight in the churches, synagogues and schools where the election will take place. In many instances, there are multiple people with physical access to the machines, and it is our experience (based on working at the polls) that the tamper seals on these machines are not very effective against a determined and resourceful adversary.

Step  Poll Worker               Voting Machine
a     Enter Challenge
b     Start Timer
c                               Compute Self Checksum
d                               Report Checksum
e     Stop Timer
f                               Hash Voting Software
g                               Report Hash
h     Verify Time Difference
i     Verify Checksum
j     Verify Hash of Software

Figure 1: Poll worker election day procedure

We assume that an adversary may be able to modify and control all of the DRE software, including the BIOS, bootloader, operating system, and voting application. In our specific implementation, we also assume that an attacker does not modify the BIOS. But this assumption is based on our specific implementation and is not a design constraint.
3 Our Approach
In this section, we describe a high-level overview of our framework, without delving much into the implementation details, which are covered in a later section. One of the primary goals of this architecture is to provide human verifiable attestation [12,13].
3.1 Poll Worker Procedures
In our approach, each election day begins with a poll worker verifying that each voting machine produces a correct checksum, given a challenge, and that its software matches an authentic hash. To that end, the poll worker must be provided in advance with a trusted copy of the correct hash value. Furthermore, she must be able to produce a random, fixed-length challenge for each machine that cannot be anticipated by an adversary and for which she can verify the response. One way we envision that these values could be available to the poll worker is to provide a card with values that are hidden under a gray coating that could be scratched off by the poll worker when each machine is set up.

The hash value could be printed on the card in visible fashion, and the random challenge and corresponding response would be kept secret with the scratch-off in the same way numbers on lottery tickets are protected. To further assure randomness and unpredictability, perhaps the poll worker could pick such a card at random out of a box full of cards. There are many ways that a random challenge with a valid response could be chosen by a poll worker, but all of them require that the poll worker
be given the values in advance and that the values be kept secret.

At the start of an election day, the poll worker would follow the procedures outlined in Figure 1. As illustrated, the poll worker enters a random challenge using a key pad.¹ In response to this challenge, the voting machine displays a checksum value and a hash value as depicted in Steps d and g. The poll worker verifies that the checksum is identical to the expected response, based on the value on the scratch-off card. The poll worker also verifies that the time taken by the voting machine to compute the checksum is below a preset threshold² and that the hash of the software is correct. If all verification steps succeed, the machine's software would be considered authentic.
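The election-day procedure above can be sketched as a single check. The Python model below mirrors the Figure 1 steps (a-j); the `machine` object and its method names are invented for illustration, since the actual interaction happens through the DRE's keypad and display, with the poll worker doing the timing and comparisons by hand.

```python
import time

def run_election_day_check(machine, challenge, expected_checksum,
                           expected_hash, time_threshold_s):
    """Hypothetical model of the Figure 1 procedure, steps a-j."""
    machine.enter_challenge(challenge)          # (a) enter challenge
    start = time.monotonic()                    # (b) start timer
    checksum = machine.report_checksum()        # (c-d) machine computes and reports checksum
    elapsed = time.monotonic() - start          # (e) stop timer
    sw_hash = machine.report_hash()             # (f-g) machine hashes software, reports hash
    return (elapsed <= time_threshold_s         # (h) verify time difference
            and checksum == expected_checksum   # (i) verify checksum
            and sw_hash == expected_hash)       # (j) verify hash of software
```

Note that the timing window (steps b-e) deliberately brackets only the self-checksum, not the subsequent hashing, since only the checksum's security argument depends on execution time.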
3.2 The Attestation Process
We now explain the process by which our system securely attests itself. The attestation occurs in two distinct phases:

Self-Checksumming. Shortly after the machine is powered on, the bootloader computes a checksum over all its running code, and reports the result to the user. If the response is verified to be correct, this dynamically establishes a root of trust in the bootloader.

Hashing. The now trusted bootloader computes a standard cryptographic hash over the disk image about to be booted. It then reports this hash to the user, who can verify it and thus validate that the software being booted is authentic.

Figure 2: Proposed architecture (the exchange between the poll worker's display and the bootloader on the voting machine: 1) Challenge, 2) Self Checksum, 3) Checksum, 4) Compute Hash, 5) Hash)

The attestation process is depicted in Figure 2, and we explain it in more detail below. Since the process of hashing is straightforward, we limit our discussion to the self-checksum and the attestation process as a whole.

To compute the self-checksum, we leverage Pioneer [27]. Pioneer is the work of Seshadri, Luk, Shi, Perrig, van Doorn, and Khosla. It is a challenge-response based tool originally designed for enforcing untampered code execution and produces checksums intended for verification by remote computers. Although we refer the reader to [27] for its details, we briefly describe Pioneer in the context of our work.

Pioneer can simply be thought of as an optimally implemented function over a challenge and the state of a specific range of memory that contains its code. It reads from pseudo-random addresses within the specified memory range and incorporates their values into its output (the checksum) and the pseudo-randomness itself. Notice that in this process, Pioneer reads and attests its own code.

The general idea behind Pioneer is that some authority will run the authentic code of interest (in our case, the honest bootloader), and can provide the correct function outputs corresponding to the "good" state of memory. These outputs can then be verified against the outputs of executions on other machines, and in this way, it operates like a standard checksum.

The critical property of Pioneer that distinguishes it from a regular checksum is its optimality. It was designed with the intention of preventing any code from computing the Pioneer function faster than it is currently implemented, i.e. when it is computing a legitimate checksum. Thus, an adversary may be able to write malicious code other than Pioneer that gives the same checksums for the same challenges, but that code should not run as fast as Pioneer. The verification process then makes use of this fact by requiring the verification of the Pioneer execution time as well as the correctness of its checksum. It is in this way that Pioneer can dynamically establish a root of trust in itself.

The major difference between the original Pioneer [27] and our use of Pioneer is that we designed our version to be human verifiable, an idea that was first explored independently by Franklin, Luk, Seshadri, and Perrig [12,13]. The original Pioneer provides a means of ensuring code execution on remote machines and is thus designed to run very quickly. As a result, the malicious and honest Pioneer execution times were distinguishable only by machines. In our work, we drastically increase the number of checksum iterations executed by Pioneer to force the best known attack version to run approximately 3 seconds slower than the legitimate one. With the aid of a clock, a human challenging our modified Pioneer can independently verify its integrity. We describe
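As a rough model of the primitive described in this section, the sketch below walks pseudo-random addresses of an attested memory region and folds each value, its address, and the evolving pseudo-random state into a running checksum. This is a toy in Python with an invented update rule; the real Pioneer is a hand-optimized assembly loop whose security rests precisely on its run-time optimality, which no high-level model can capture.

```python
def toy_self_checksum(memory: bytes, challenge: int, iterations: int) -> int:
    """Toy Pioneer-style checksum: pseudo-random reads over `memory`,
    mixing each value, its address, and the PRG state into the result.
    The linear congruential generator below stands in for Pioneer's
    pseudo-random address generation; it is not Pioneer's actual PRG."""
    state = challenge & 0xFFFFFFFF
    checksum = 0
    for _ in range(iterations):
        state = (state * 1103515245 + 12345) & 0xFFFFFFFF   # LCG step
        addr = state % len(memory)                          # pseudo-random address
        checksum ^= (memory[addr] + addr) & 0xFFFFFFFF      # fold in value and address
        checksum = ((checksum << 1) | (checksum >> 31)) & 0xFFFFFFFF  # 32-bit rotate to mix
        checksum = (checksum + state) & 0xFFFFFFFF          # fold in PRG state
    return checksum
```

In the paper's setting the checksummed region contains the checksum code itself, so any in-place modification changes the reads; an attacker who instead redirects the reads to a clean copy of the code must pay extra work per access, which is exactly the time difference the human verifier is asked to observe.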
