The Purchase, Installation and Use of

Hewlett-Packard Computer Hardware
within the City of Oakland and Port
of Oakland Joint Domain Awareness Center
Violates “Oakland Nuclear Free Zone Act”
January 21, 2014
by Joshua Smith
occupyoakland@riseup.net
TABLE OF CONTENTS (EXHIBITS)
EXHIBIT A (01-03) [City of ] “Oakland Nuclear Free Zone Act” (1992)
EXHIBIT B (01-02) Hewlett-Packard / Atomic Weapons Establishment (UK)
EXHIBIT C (01-04) Hewlett-Packard / Los Alamos National Laboratory (US)
EXHIBIT D (01-02) County of Marin, CA: Peace Conversion Commission
Nuclear Weapons Contractors
EXHIBIT E (01-18) DAC-Specific HP Equipment Examples & Invoices
On November 19, 2013, the City Council of Oakland, California broke ties with U.S. Defense and Intelligence contractor Science Applications International Corporation (SAIC), which had been contracted as the Phase 1 vendor for the City of Oakland and Port of Oakland Joint Domain Awareness Center (DAC). On that date the City Council publicly and openly recognized that SAIC was indeed a corporation involved in systems and components directly related to Nuclear Weapons. The City instructed that the DAC Phase 2 contract be reopened and supplemental RFPs be sent to a recent and existing pool of interested vendors.

The grounds for SAIC's dismissal are founded in adherence to the 1992 [City of ] "Oakland Nuclear Free Zone Act" (aka NFZO). The City has now established adherence to this municipal ordinance and set a legal precedent. Not only was DAC Phase 1 vendor SAIC in violation of the NFZO as a service vendor; SAIC also coordinated and facilitated the installation of computer hardware from Hewlett-Packard (HP), another corporation involved in systems and components directly related to Nuclear Weapons. The purchase, installation and use of HP computer hardware [now existing within the DAC] is in violation of the NFZO, and that hardware must be removed immediately. This report presents the evidence establishing that HP is involved in Nuclear Weapons work and provides a detailed record of the specific HP hardware.
EXHIBIT A (01) – 1992 "Oakland Nuclear Free Zone Act"
http://www2.oaklandnet.com/oakca1/groups/contracting/documents/webcontent/oak042285.pdf
Page 01 of 10
Page 02 of 10
EXHIBIT A (02) – 1992 "Oakland Nuclear Free Zone Act"
http://www2.oaklandnet.com/oakca1/groups/contracting/documents/webcontent/oak042285.pdf
EXHIBIT A (03) – 1992 "Oakland Nuclear Free Zone Act"
WIKIPEDIA / NUCLEAR-FREE ZONE
On November 8, 1988 the city of Oakland, California passed "Measure T" with 57% of the
vote, making that city a nuclear free zone. Under Ordinance No. 11062 CMS then passed on
December 6, 1988, the city is restricted from doing business with "any entity knowingly
engaged in nuclear weapons work and any of its agents, subsidiaries or affiliates which are engaged in nuclear weapons work." The measure was invalidated in federal court, on the grounds that it interfered with the Federal Government's constitutional authority over national defense and atomic energy. The issue being that Oakland is a major port and, like Berkeley and Davis, has major freeway and train arteries running through it. In 1992, the Oakland City Council unanimously reinstated modified elements of the older ordinance, reportedly bringing
the total number of Nuclear Free Zones in the United States at that time to 188, with a total
population of over 17 million in 27 states.
http://en.wikipedia.org/wiki/Nuclear-free_zone
Pertinent follow up issues:
Page 09 of 10 (partial)
http://www2.oaklandnet.com/oakca1/groups/contracting/documents/webcontent/oak042285.pdf
EXHIBIT B (01) – Hewlett-Packard (HP) Nuclear Weapons
AWE Selects HP to Help Transform its Technology Infrastructure
UK defence contractor signs $66 million infrastructure services agreement
LONDON, UK, June 10, 2010 – HP Enterprise Services today announced that UK-based
AWE plc has signed a 10-year, $66 million services agreement that will enable the Atomic
Weapons Establishment (AWE) to enhance user productivity and service levels.
AWE makes and maintains warheads for the UK’s nuclear deterrent. It has done so for more
than 50 years, serving the country safely and securely.
"It is important for our staf to have access to the best technology and services to complete their
jobs with speed and efciency," said David Maitland, chief information ofcer for AWE. "Te
HP team consistently delivers quality services with a collaborative and innovative approach,
which will help us achieve best value for money."
With this agreement, HP will provide workplace services to manage and support all AWE
end users across its sites. Workplace services include asset management, license management
and procurement of computing devices. These services, together with service desk and site support services, will be delivered to over 7,000 end-users including employees, integrated personnel and task-based contractors. Due to the stringent security measures, the services are delivered by an HP team based on site. In addition, the contract includes an ongoing PC refresh
programme.
The agreement renews and extends an existing workplace services contract by deploying many of HP's software innovations to deliver consistently against AWE's demanding service level requirements. During the next phase of the contract many of HP's Business Technology Optimization software products will be installed as part of a comprehensive end-to-end service management programme. This is designed to reduce outage times and provide proactive service management and therefore improve overall service availability and the overall IT experience for the end users.
"It is critical that AWE has an environment that enables fast and secure access to its systems
and data and the ability for employees to collaborate at all levels," said Craig Wilson, vice
president, HP Enterprise Services, UK & Ireland. "We will continue to apply proven processes
using our deep industry and technology knowledge to deliver services that will allow AWE to
meet its cost and customer service goals."
EXHIBIT B (02) – Hewlett-Packard (HP) Nuclear Weapons
About AWE
AWE has played a crucial role in the defence of the United Kingdom for over 50 years. It
provides warheads for Trident, a submarine-launched ballistic missile system and the country’s
nuclear deterrent. AWE’s work covers the entire lifecycle of nuclear warheads – from initial
research and design, through to development, manufacture and maintenance, and finally
decommissioning and disposal. Visit www.awe.co.uk for more information.
About HP
HP creates new possibilities for technology to have a meaningful impact on people, businesses,
governments and society. The world's largest technology company, HP brings together a
portfolio that spans printing, personal computing, software, services and IT infrastructure to
solve customer problems. More information about HP (NYSE: HPQ) is available at
http://www.hp.com/.
http://www8.hp.com/uk/en/hp-news/press-release.html?id=535795
Atomic Weapons Establishment Website (http://www.awe.co.uk)
AWE has been central to the defence of the United Kingdom for more than 60 years.
We provide and maintain the warheads for the country’s nuclear deterrent, Trident.
http://www.awe.co.uk/aboutus/what_we_do_27815.html
Atomic Weapons Establishment Outsources Tech Transformation
The Atomic Weapons Establishment (AWE) is outsourcing technology services to HP Enterprise Services for another 10 years.
The defence contractor will spend $66m with HP over the next decade to improve its IT infrastructure and related services. It already had an agreement with HP and the renewal will see the manufacturer and maintainer of nuclear warheads receive hardware, software and services from HP.
HP will provide workplace services to manage and support all 7,000 AWE users across its sites. These services will include asset management, licence management and procurement of computing devices.
The contract, which also includes a PC refresh programme, will see HP software rolled out.
http://www.computerweekly.com/news/1280092989/Atomic-Weapons-Establishment-outsources-technology-transformation
EXHIBIT C (01) – Hewlett-Packard (HP) Nuclear Weapons
Los Alamos National Laboratory (or LANL; previously known at various times as Project Y, Los Alamos Laboratory, and Los Alamos Scientific Laboratory) is one of two laboratories in the United States where classified work towards the design of nuclear weapons is undertaken. The other, since 1952, is Lawrence Livermore National Laboratory. LANL is a United States Department of Energy (DOE) national laboratory, managed and operated by Los Alamos National Security (LANS), located in Los Alamos, New Mexico. The laboratory is one of the largest science and technology institutions in the world. It conducts multidisciplinary research in fields such as national security, space exploration, renewable energy, medicine, nanotechnology, and supercomputing.
http://en.wikipedia.org/wiki/Los_Alamos_National_Laboratory
Hewlett-Packard / Los Alamos National Laboratory

Every year for the past 17 years, the director of Los Alamos National Laboratory has had a legally required task: write a letter—a personal assessment of Los Alamos–designed warheads and bombs in the U.S. nuclear stockpile. This letter is sent to the secretaries of Energy and Defense and to the Nuclear Weapons Council. Through them the letter goes to the president of the United States.
The technical basis for the director's assessment comes from the Laboratory's ongoing execution of the nation's Stockpile Stewardship Program; Los Alamos' mission is to study its portion of the aging stockpile, find any problems, and address them. And for the past 17 years, the director's letter has said, in effect, that any problems that have arisen in Los Alamos weapons are being addressed and resolved without the need for full-scale underground nuclear testing.
When it comes to the Laboratory's work on the annual assessment, the director's letter is just the tip of the iceberg. The director composes the letter with the expert advice of the Laboratory's nuclear weapons experts, who, in turn, depend on the results from another year's worth of intense scientific investigation and analysis done across the 36 square miles of Laboratory property.
One key component of all that work, the one that the director and the Laboratory's experts depend on to an ever-increasing degree, is the Laboratory's supercomputers. In the absence of real-world testing, supercomputers provide the only viable alternative for assessing the safety, reliability, and performance of the stockpile: virtual-world simulations.
I, Iceberg
Hollywood movies such as the Matrix series or I, Robot typically portray supercomputers as massive, room-filling machines that churn out answers to the most complex questions—all by themselves. In fact, like the director's Annual Assessment Letter, supercomputers are themselves the tip of an iceberg.
Without people, a supercomputer
would be no more than a humble jumble
of wires, bits, and boxes.
Although these rows of huge machines are the most visible
component of supercomputing, they are but one leg of
today’s supercomputing environment, which has three main
components. The first leg is the supercomputers, which are the processors that run the simulations. The triad also includes a huge, separate system for storing simulation data (and other data). This leg is composed of racks of shelves containing thousands of data-storage disks sealed inside temperature- and humidity-controlled automated libraries. Remotely controlled robotic "librarians" are sent to retrieve the desired disks or return them to the shelves after they are played on the libraries' disk readers. The third leg consists of the many non-supercomputers at the national security laboratories. The users of these computers request their data, over specially designed networks, from the robotic librarians so they can visualize and analyze the simulations from afar.
The most important assets in the Laboratory's supercomputing environment are the people—designing, building, programming, and maintaining the computers that have become such a critical part of national security science. (Photo: Los Alamos)
The Los Alamos supercomputers are supported by a grand infrastructure of equipment used to cool the supercomputers and to feed them the enormous amounts of electrical power they need. They also need vast amounts of experimental data as input for the simulation codes they run, along with the simulation codes themselves (also called programs, or applications), tailored to run efficiently on the supercomputers. In addition, system software is necessary to execute the codes, manage the flow of work, and store and analyze data.
People are the most vital component of any supercomputer's supporting infrastructure. It takes hundreds of computer scientists, engineers, and support staff to design, build, maintain, and operate a supercomputer and all the system software and codes it takes to do valuable science. Without such people, a supercomputer would be no more than a humble jumble of wires, bits, and boxes.
The computer room's vast floor space
is 43,500 square feet, essentially an acre—
90 percent of a football field.
Supercomputers That Fill a Stadium
At Los Alamos, supercomputers, and the immense amount of machinery that backs them up, are in the Nicholas C. Metropolis Center for Modeling and Simulation, known pragmatically as the Strategic Computing Center (SCC). Roadrunner, the world's first petaflop computer, joined other supercomputers in the SCC's computer room in 2008. It is a big machine, containing 57 miles of fiber-optic cables and weighing a half-million pounds. It covers over 6,000 square feet of floor space, 1,200 square feet more than a football field's end zone. But that represents only a portion of the computer room's vast floor space, which is 43,500 square feet, essentially an acre—90 percent of a football field (minus the end zones). (Roadrunner has finished its work for the Laboratory and is currently being shut down.)
What is really amazing, however, lies beneath the supercomputer room floor. A trip straight down reveals more vast spaces crowded with machinery that users never see.
The computer room is the SCC's second floor, but that one floor is actually two, separated by almost four feet. That 4-foot space hosts the miles of bundled network cables, electrical power lines inside large-diameter conduit, and other subfloor equipment the supercomputers rely on. The double floor provides enough room for engineers and maintenance staff, decked out like spelunkers in hardhats and headlamps, to build and manage these subfloor systems.
Below this double floor, on the building's first floor, is another acre-size room, a half-acre of which holds row upon row of cabin-size air-conditioning units. These cool the air and then blow it upwards into the computing room, where it draws the heat of the hard-working computers. The now-warmed air then rises to the third floor (basically an acre of empty space), whereupon it is drawn back down, at the rate of 2.5 million cubic feet per minute, to the first floor by the air coolers so the cooling cycle can begin again.
An additional half-acre of floor space stretches beyond the cooling room and holds the SCC's electric power infrastructure, the machines that collectively keep the supercomputers running. There are rows of towering power distribution units (PDUs), containing transformers and circuit breakers, and for backup power, rotary uninterruptible power supply (RUPS) generators. Each RUPS uses motor generator technology. Electricity fed into the RUPS is used to build kinetic energy in a 9-foot-diameter flywheel that, in turn, generates electricity.
Supercomputers are cooled by
chilled air circulating at the rate
of 2.5 million cubic feet per minute.
That bit of extra electricity evens out the flow of power to the supercomputers in the case of a power surge from, for example, a lightning strike, a common occurrence in summertime Los Alamos. In the case of a power outage, there is enough kinetic energy built up in the flywheel to provide 8–12 seconds of electricity to the supercomputers. Those few seconds are long enough for data about the current state of a running calculation to be written to memory, reducing the loss of valuable data.
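As a sense of scale (a rough estimate under stated assumptions, not a figure from the article): riding a machine drawing on the order of 3 MW, roughly Cielo's stated power, through a 10-second outage requires the flywheels to hold about

\[
E = P \times t \approx 3\ \text{MW} \times 10\ \text{s} = 30\ \text{MJ}
\]

of kinetic energy, which helps explain why flywheels 9 feet in diameter are used.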
The PDUs transform the incoming high-voltage electric power feed into lower voltage and distribute it to the supercomputers according to each machine's particular voltage needs, for example, 220 volts for Roadrunner and 480 volts for the supercomputer named Cielo.
The Metropolis Center, also called the Strategic Computing Center, is the home of Los Alamos' supercomputers and the vast infrastructure that supports them. (Photo: Los Alamos)
The Guardians
Because the Laboratory's supercomputers work on national security problems 24 hours a day, 365 days a year, they require dedicated overseers who stay onsite and collectively keep the same exhausting schedule. The members of the SCC's operations staff, a team of 22, are the experts who keep things running and make sure anything that goes wrong gets fixed, right away.
The guardians are expected to be
able to fix both hardware and software
problems in about an hour.
Divided into three shifts, the members of the operations staff tend monitoring equipment and keep watch from inside the Operations Center, a high, windowed nerve center that overlooks the computer room. The staff's tasks are many and varied, as they are charged not only with overseeing the computer hardware and software but also, for example, with keeping tabs on the cooling system. The computer room's environment must stay cool enough to prevent damage to the valuable computers; too much heat is a major threat.
These dedicated guardians are expected to be able to fix both hardware and software problems in about an hour. For software problems requiring additional support, a team of 30 software administrators, also stationed onsite, backs them up. If a software problem occurs outside regular business hours, the administrators can be called in and must report to the SCC within two hours.
The Laboratory’s Luna supercomputer can be accessed by all three national security labs (Los Alamos, Livermore, and Sandia), making it a “trilab” machine.
Roadrunner and Cielo are also trilab machines. (Photo: Los Alamos)

Evolution to Revolution
Looking for all the world like row upon row of large gym lockers, a supercomputer is visibly very different from a personal computer (PC). But the real difference is in the work supercomputers do and the way they do it.
Today's supercomputers are collections of tens of thousands of processors housed in "racks," cabinets holding the processors and supporting equipment. The large number of processors is needed because supercomputers run immense calculations that no PC could do. The calculations are divided into smaller portions that the processors work on concurrently. This is parallel computing or, actually, for a supercomputer, massively parallel computing.
A new supercomputer for Los Alamos can take years to create. The process begins with an intense collaboration between commercial computer companies, like IBM, Cray, Hewlett-Packard, etc., and Los Alamos' computer experts, who have extensive experience both operating and designing supercomputers. Los Alamos computer personnel involve themselves in the creation of each new Laboratory supercomputer from the generation of the first ideas to the machine's delivery . . . and after its delivery. Once it is built and delivered, before it is put to work, a supercomputer is disassembled, inspected, and reassembled to ensure that it can handle classified data securely and can be fixed and maintained by Laboratory staff.
Once it is built and delivered,
a supercomputer is disassembled,
inspected, and reassembled to ensure
that it can handle classified data securely.
As a practical and economic necessity, each new Los Alamos supercomputer takes advantage of commercial technological advances. And in the 21st century, beginning with Roadrunner, technology from the private sector is being evolved in innovative ways that are, in effect, a reinvention of how a supercomputer is built. Roadrunner, for example, used video game technology originally conceived for the Sony PlayStation 3, and with that technology, it became the world's first hybrid supercomputer, with an architecture that linked two different types of processors to share computational functions. This particular evolutionary step in supercomputer architecture let Roadrunner surge onto the global stage as the world's first petaflop computer.
Architectures are still evolving, so the next generation of
machines will be radically new, even revolutionary, as will
Trinity, Los Alamos’ next supercomputer, projected to arrive
in 2015–2016. On Trinity, Laboratory designers and their
industry partners will be trying out numerous innovations
that will directly affect supercomputing's future. So Trinity
will be unlike any other computer Los Alamos researchers
have used. And by the way, it will be 40 to 50 times faster
than Roadrunner.
The exact form Trinity will take is still being decided, as design discussions are still underway, but whatever the final design is, it will be a means to an end. The form each new supercomputer takes is dictated by what the Laboratory needs the machine to do. In general, that always means it must answer more questions, answer new kinds of questions about new and bigger problems, compute more data, and compute more data faster.
Los Alamos' specific need, however, is focused on the stockpiled nuclear weapons and the continuous analysis of them. Laboratory supercomputers are already simulating the detonation of nuclear weapons, but Trinity and the computers that will succeed it at the Laboratory will need to simulate more and more of the entire weapon (button-to-boom) and in the finest-possible detail. Design efforts for Trinity will be aimed at that goal, and a great deal of effort will go into creating the many new and complex subsystems that the computer will need.
Saving Checkpoints Is the Name of the Game
At the system level, some design requirements remain the same from supercomputer to supercomputer, even when the next one is as fundamentally different as Trinity will be. For example, while a PC serves one user at a time, Laboratory supercomputers must serve many users simultaneously—users from the Laboratory's various divisions and from
the other national security labs far beyond Los Alamos.
Te computer they use must be designed not only to
accommodate that multitude of users but also to provide
ultra-secure access for the protection of classifed data.
Every Los Alamos supercomputer must also be designed to
enable an operator to quickly and easily identify and locate
which component within the computer’s 6,000 square feet
(or more) of equipment needs repair. And repairs will always
be needed because of the ever-increasing size and speed of
supercomputers. As these machines get larger and faster, they
naturally become more and more subject to breakdown.
Think about this: If a PC crashes once a year and a supercomputer is equal to at least 10,000 PCs, one might expect to see 11 failures per hour on a supercomputer. Consider what such a failure rate could mean for an extensive computation. At Los Alamos, a nuclear weapon simulation can take weeks or even months to be completed, and those weeks and months are already costly in terms of computer time filled and electrical power used. In addition, successful simulations require a large collaborative effort between, for example, the weapons scientists, computer designers, computer code developers, and members of the supercomputer operations team. A breakdown equals time and money lost.
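The scaling behind that kind of estimate is simple (the arithmetic below is illustrative, with the PC-equivalent count as the free parameter; it is not from the Los Alamos article):

\[
\text{failures per hour} \approx \frac{N_{\text{PC}} \times 1\ \text{crash per year}}{8760\ \text{hours per year}},
\qquad \frac{10{,}000}{8760} \approx 1.1,
\qquad \frac{100{,}000}{8760} \approx 11.
\]

A machine equivalent to tens of thousands of PCs can therefore be expected to fail roughly every hour, and one equivalent to a hundred thousand PCs roughly eleven times an hour, which is why long-running simulations must plan for interruptions.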
With downtime being a supercomputing inevitability, it
is commonplace to mitigate the loss by “checkpointing,”
which is like hitting “Save.” At predetermined times—say,
every four hours—the calculation is paused and the results
of the computation up to that point (the “checkpoint”) are
downloaded to memory. Returning the simulation to the
closest checkpoint allows a simulation (or other type of
calculation) to be restarted after a crash with the least amount
of data loss.
Unfortunately, the compute time lost even to checkpointing is becoming dearer as supercomputers grow larger and therefore more prone to periodic crashes, so Trinity's designers are working on new checkpointing methods and systems that will maintain a higher level of computational productivity. Los Alamos is working closely with industry to develop this kind of defensive capability.
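The checkpoint-and-restart pattern described above is simple to illustrate. The following is a minimal sketch in Python; the file name, the four-hour interval, and the step function are assumptions drawn from the description here, not Los Alamos code:

    import pickle
    import time

    CHECKPOINT_FILE = "simulation.ckpt"   # hypothetical path, for illustration only
    CHECKPOINT_INTERVAL = 4 * 60 * 60     # "say, every four hours," in seconds

    def save_checkpoint(state, path=CHECKPOINT_FILE):
        """Pause and write the results computed so far (the checkpoint)."""
        with open(path, "wb") as f:
            pickle.dump(state, f)

    def load_checkpoint(path=CHECKPOINT_FILE):
        """Return the most recent checkpoint, or a fresh state if none exists."""
        try:
            with open(path, "rb") as f:
                return pickle.load(f)
        except FileNotFoundError:
            return {"step": 0, "data": None}

    def run_simulation(total_steps, step_fn):
        """Restart from the closest checkpoint, then advance, saving periodically."""
        state = load_checkpoint()
        last_save = time.time()
        for step in range(state["step"], total_steps):
            state = step_fn(state)            # one unit of the long calculation
            state["step"] = step + 1
            if time.time() - last_save >= CHECKPOINT_INTERVAL:
                save_checkpoint(state)        # limits loss to work done since the last save
                last_save = time.time()
        save_checkpoint(state)
        return state

After a crash, calling run_simulation again resumes from the last saved checkpoint, so at most one checkpoint interval of work is lost.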
An Itch That Needs Scratching
PCs are all fundamentally the same, similarly designed to do the same tasks. Users can just go out and buy the software they need for their brand of PC. But supercomputers are different. Designed and built to fill a specific need, each one scratches a hard-to-reach itch. At Los Alamos, the special need is scientific computing and simulation, and a supercomputer's users need specially written codes for each project.
Who develops the advanced codes used on Los Alamos supercomputers—the codes for weapon simulation or for general science research? Those highly specialized programs are created in-house, and for many years, the Laboratory's successive supercomputers have had enough in common that existing codes adapted well to them. Trinity's architecture and performance characteristics, however, will presage a complete upheaval. The codes will need to be overhauled, not just adapted: more of a "build it from scratch" compared with just an updating.
The developers are already busy making codes "Trinity friendly" and doing so without having anywhere near the variety and amount of resources the giant commercial computer companies have available. For this work, developers depend on partnering with a range of Laboratory scientists, who provide the unique algorithms for solving the basic physics equations governing how the dynamics of a complex system play out over time. This is true whether the system being studied is the climate or a nuclear explosion. The nature of the scientists' algorithms and the new data generated as a system changes with time determine how the code developers design and build a code to make efficient use of the supercomputer and its data storage and networking connections. In this age of "big data," building programs that efficiently generate unbelievably massive datasets on a supercomputer and make them useful has become a grand challenge. (See the article "Big Data, Fast Data—Prepping for Exascale" in this issue.)
A Titanic Achievement
Designing, building, operating, and maintaining a supercomputer are completely different experiences than working with Word or Excel on a PC at home or at the office. That is true today and will be true, in spades, tomorrow. Computer architectures continue to evolve, leading to the upcoming Trinity and eventually to machines still unimagined.
The Laboratory's supercomputers cannot exist without massively complex and expensive infrastructures, which are often unacknowledged and unappreciated, and without the effort and creative thinking of hundreds of scientists, engineers, and technicians. Working together, they meet the challenge of providing the most-advanced supercomputing environments in the world and then use them to perform the national security science that makes the director's Annual Assessment Letter possible.
It is hard work, and it is certainly worth it.
~ Clay Dillingham
Using supercomputers, scientists can interact with simulations of everything from nuclear detonations to protein synthesis or the birth of galaxies. These simulations can boggle the mind—and at the same time provide clarity. Scientists wear special glasses to view simulations in 3D at extremely high resolution. They can even manipulate the simulations, as the viewer shown here is doing. (Photo: Los Alamos)
Floor Space
The SCC is a 300,000-square-foot building.
The vast floor of the supercomputing room
is 43,500 square feet, almost an acre in size.
The Guardians
The Strategic Computing Center (SCC) operations staff
oversees the Laboratory’s supercomputers
24 hours a day, 7 days a week, 365 days a year, in
8-hour shifts, from the Operations Center.
These experts keep supercomputers, like Cielo
(shown outside the windows) running at their best.

Electric Power
The amount and cost of electric power required
to run a supercomputer are staggering.
Today, a megawatt (MW) of power costs $1 million per year. Roadrunner uses 2 MW. Cielo, the Laboratory's newest supercomputer, is a 3-MW machine. Trinity will be a 12-MW machine.
The combined supercomputing facilities at Los Alamos use $17 million of electricity per year.
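At the quoted rate of roughly $1 million per megawatt-year, those loads translate directly into annual cost (a rough conversion, not a figure from the source):

\[
12\ \text{MW} \times \$1\ \text{million per MW-year} \approx \$12\ \text{million per year for Trinity alone},
\]

and $17 million per year for the combined facilities corresponds to a total draw of about 17 MW.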
Using all that electric power means that supercomputers generate lots of heat. If not kept cool, a supercomputer will overheat, causing processors to fail and the machine to need costly, time-consuming repairs.
Managers at the SCC inspect the double floor beneath the acre-size supercomputing room. Several of the giant air-cooling units are visible in the foreground and behind the managers.
An electrician, wearing personal protective gear, works
on a 480-volt breaker inside a power distribution unit.
To capture the dust and dirt that might otherwise blow into the supercomputers, the 84 giant air-coolers use 1,848 air filters. It takes two staff members an entire month to change the filters.
Air-Cooling
Beneath the acre-size supercomputing room in the SCC is a 1.5-acre floor that houses 84 giant
40-ton air-cooling units. Together, these units can move 2.5 million cubic feet of chilled air
per minute through the supercomputing room above.
The air-cooling units use water, cooled by evaporation, to chill the air before it is blown upward
to circulate around the supercomputers.
The air, now heated by cooling the supercomputers, is drawn back down to the lower foor and
back into the air-cooling units. This process transfers the heat from the air to the water, which
is then recooled by evaporation.
The high winds blowing beneath
the supercomputer room are
generated by the massive
air-cooling units.
Water for Cooling
The amount of water required to cool the air that, in turn, cools a supercomputer
is also staggering. The SCC uses 45,200,000 gallons of water per year to cool its
supercomputers. This amount of water costs approximately $120,000 per year.
By the end of the decade, as supercomputers become more powerful and require
more cooling, the SCC is predicted to double its water use to 100,000,000 gallons.
The SCC has five evaporative cooling towers. These towers evaporate water to
dissipate the heat absorbed by the water in the air-cooling units.
There is room to add an additional cooling tower as the supercomputing needs
of the Stockpile Stewardship Program increase.
THE TIP OF THE ICEBERG
A new supercomputer for Los Alamos can take years to create. The process begins with an intense collaboration between commercial computer companies, like IBM, Cray, Hewlett-Packard, etc., and Los Alamos' computer experts, who have extensive experience both operating and designing supercomputers. Los Alamos computer personnel involve themselves in the creation of each new Laboratory supercomputer from the generation of the first ideas to the machine's delivery . . . and after its delivery. Once it is built and delivered, before it is put to work, a supercomputer is disassembled, inspected, and reassembled to ensure that it can handle classified data securely and can be fixed and maintained by Laboratory staff.
Los Alamos' specific need, however, is focused on the stockpiled nuclear weapons and the continuous analysis of them. Laboratory supercomputers are already simulating the detonation of nuclear weapons, but Trinity and the computers that will succeed it at the Laboratory will need to simulate more and more of the entire weapon (button-to-boom) and in the finest-possible detail. Design efforts for Trinity will be aimed at that goal, and a great deal of effort will go into creating the many new and complex subsystems that the computer will need.
Every year for the past 17 years, the director of Los Alamos National Laboratory has had a legally required task: write a letter—a personal assessment of Los Alamos–designed warheads and bombs in the U.S. nuclear stockpile. This letter is sent to the secretaries of Energy and Defense and to the Nuclear Weapons Council. Through them the letter goes to the president of the United States.
http://www.lanl.gov/newsroom/publications/national-security-science/2013-april/_assets/docs/under-supercomputer.pdf
EXHIBIT C (03) – Hewlett-Packard (HP) Nuclear Weapons
TRANSFORMING DATA INTO KNOWLEDGE Super Computing 2002
The ASCI Q System: 30 TeraOPS Capability at Los Alamos National Laboratory
The Q supercomputing system at Los
Alamos National Laboratory (LANL)
is the most recent component of the
Advanced Simulation and Computing
(ASCI) program, a collaboration
between the U. S. Department of
Energy's National Nuclear Security
Administration and the Sandia,
Lawrence Livermore, and Los Alamos national
laboratories. ASCI's mission is to create and use
leading-edge capabilities in simulation and com-
putational modeling. In an era without nuclear
testing, these computational goals are vital for
maintaining the safety and reliability of the
nation's aging nuclear stockpile.
ASCI Q Hardware
The Q system, when complete, will include
3 segments, each providing 10 TeraOPS capability. The three segments will be able to operate
independently or as a single system. One-third
of the final system has been available to users
for classified ASCI codes since August 2002
(with a smaller initial system available since
April). This portion of the system, known as
QA, comprises 1024 AlphaServer ES45 SMPs
from Hewlett Packard (HP), each with 4 Alpha
21264 EV-68 processors. Each of these 4,096 CPUs
has 1.25-GHz capability, creating an aggregate
10 TeraOPS.
An identical segment, QB, is now being tested with
unclassified scientific runs, but will soon be available
for secure computing. Los Alamos has an option to
purchase the third 10 TeraOPS system from HP.
The final Q system will provide 30 TeraOPS
capability:
• 3072 AlphaServer ES45s from Hewlett
Packard (formerly Compaq)
• 12,288 EV-68 1.25-GHz CPUs with 16-MB cache
• 33 Terabytes (TB) memory
• Gigabit fiber-channel disk drives providing
664 TB of global storage
• Dual controller accessible 72 GB drives arranged in 1536 5+1 RAID5 storage arrays, interconnected through fiber-channel switches to 384 file server nodes.
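A rough check of the storage figures (my arithmetic, not from the flyer): each 5+1 RAID5 array holds six 72-GB drives, so

\[
1536 \times 6 \times 72\ \text{GB} \approx 663{,}600\ \text{GB} \approx 664\ \text{TB},
\]

matching the 664 TB of global storage listed above; since one drive's worth of each array is used for parity, the usable capacity is correspondingly smaller.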
Figure 1 : The first sections of the ASCI Q 30-TeraOPS supercomputer being
installed at Los Alamos National Laboratory are now up and running.
On QA, the Linpack benchmark ran at 7.727
TeraOPS. This is 75.48% of the 10.24-TeraOPS
theoretical peak of the system.
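The quoted theoretical peak follows from the processor count, the clock rate, and the two floating-point operations per cycle assumed for the Alpha EV-68 (this derivation is mine, not the flyer's):

\[
4096 \times 1.25\ \text{GHz} \times 2\ \tfrac{\text{flops}}{\text{cycle}} = 10.24\ \text{TeraOPS},
\qquad \frac{7.727}{10.24} \approx 75.5\%.
\]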
Page 1 of 2: http://www.sandia.gov/supercomp/sc2002/flyers/ASCI_Q_rev.pdf
EXHIBIT C (04) – Hewlett-Packard (HP) Nuclear Weapons
The Network - Tying together 3072 SMPs
Very integral to Q is the Quadrics (QSW) dual-
rail switch interconnect, which uses a fat-tree
configuration. The final switch system will
include 6144 QSW PCI adapters and six 1024-way
QSW federated switches, providing high bandwidth (250 Mbytes/s/rail) and low latency (~5 us). The Quadrics network enables high-performance file serving within the segments. A 6th level Quadrics network will connect the 3 segments.
Performance on QA
Even at one-third of the final capability, performance on ASCI Q is impressive. Several ASCI codes have scaled to the full 4096 processors, and many applications have experienced significant performance increases (5-8 times faster) over previous ASCI systems. LANL will run its December 2002 ASCI Milepost calculation on the QA segment.
Supporting Q - Facilities
Q is housed in the new 303,000 sq ft Nicholas C.
Metropolis Center for Modeling and Simulation.
The Metropolis Center includes a 43,500 sq ft
unobstructed computer room and office space for
about 300 staff. In addition, it has the facilities to
support air or water cooling of computers and
7.1 MW of power, expandable to 30 MW. The
final 30T Q system will occupy 20,000 sq ft and
use about 3 MW power. The final system will
comprise about 900 cabinets for the 3072
AlphaServer ES45 SMPs and related
peripherals. Cable trays 1.8 miles in
length will hold about 204 miles of
cable under the floor.
Supporting Q - Staff
A team of about 50 Los Alamos and HP employees supports the Q system. The work of this team involves extensive systems integration, tying together system management, networking, security, distributed resource management, data storage, applications support, development of parallel tools, user consultation, documentation, problem tracking, usage monitoring, operations, and facilities management. In addition to the Q system segments described here, this team manages several other Q-like clusters, providing additional resources to users.
For more information about ASCI Q, contact:
John Morrison, CCN Division Leader (jfm@lanl.gov
or 505-667-6164), James Peery, LANL Deputy
Associate Director for ASCI (jspeery@lanl.gov or
505-667-0940), Manuel Vigil, Q Project Leader
(mbv@lanl.gov or 505-665-1960), Ray Miller,
Q Project team member (rdm@lanl.gov or
505-665-3222), or Cheryl Wampler, CCN-7 Group
Leader (clw@lanl.gov or 505-667-0147)
ASCI Q is a DOE/NNSA/ASCI/HP (formerly Compaq)
Partnership, operated by the Computing, Communications
and Networking (CCN) Division at Los Alamos National
Laboratory. http://www.lanl.gov/asci
LALP-02-0230
The new Nicholas C. Metropolis Center for Modeling and Simulation houses Q,
the ASCI 30-TeraOPS supercomputer at Los Alamos National Laboratory.
Page 2 of 2: http://www.sandia.gov/supercomp/sc2002/flyers/ASCI_Q_rev.pdf
EXHIBIT D (01) – County of Marin
Peace Conversion Commission: Nuclear Weapons Contractors
http://www.marincounty.org/depts/bs/boards-and-commissions/commissions/peaceconversion
EXHIBIT D (02) – County of Marin
Peace Conversion Commission: Nuclear Weapons Contractors
Marin County Peace Conversion Commission
c/o The Marin County Board of Supervisors
Telephone number for Commission Chair, Jon Oldfather: (415) 377-3931
Email for Commission Chair: jon.oldfather@gmail.com
Please Email a copy to PCC Secretary: Ann Gregory @ don.ann.gregory@juno.com



Date _____________
From: (Name of person and name of department)
To: Marin County Peace Conversion Commission
Subject: Request for Override
We request an override for the following product or service: (Description of product or service):
_______________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________

Name of Entity which provides the needed product or service, but which is on the Marin County List of
Nuclear Weapons Contractors:

______________________________________________________________________________________

The reason that the entity named is the only practical source for the product or service is as follows:
_______________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
______________________________________________________________________________________
_________________

Thank you for your attention.
Sincerely,



http://www.marincounty.org/depts/bs/boards-and-commissions/commissions/~/media/Files/Departments/BS/Boards%20and%20Commission/Commissions/OverrideRequestLetter.ashx
EXHIBIT E (01) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES
Big possibilities. Compact form factor.
With its innovative design, the HP Z620 Workstation gives you a near silent computing solution in a form factor that's a perfect fit for space-constrained environments. And for easy servicing and upgrades, it features a completely tool-less chassis with integrated handles and a tool-free power supply.
The performance you demand.
Get massive system performance with a small footprint. The HP Z620 features the next evolution in processor technology and system architecture, setting the standard for versatility with support for a single Intel® Xeon® processor E5-1600 or dual Intel® Xeon® processor E5-2600 series.1,2,3,4 Now with up to 16 cores, the HP Z620 powerhouse supports a full range of processors, to help you get more done every minute.
Bring your ideas to life faster.
The HP Z620 is designed to support next generation PCI Express Gen3 graphics technology that doubles the bandwidth in and out of the card. The HP Z620 offers a huge variety of professional graphics from NVIDIA and AMD - from Pro 2D to Extreme 3D. And with an 800W 90% efficient power supply and support for up to 8 displays, the HP Z620 gives you the freedom of doing and seeing more.
Modify your machine.
Customize the HP Z620 Workstation the way you want to with a variety of expansion options, including USB 3.0 for blazing fast speeds and up to 12 memory slots capable of supporting 96GB of the latest generation of DDR3 memory. With 3 internal drive bays and 2 external bays, choose from a variety of storage types including SATA 7.2K/10K, SAS 10K/15K and SSD.
Data sheet
HP Z620 Workstation
Versatility redefined, still compact.
HP recommends Windows.
http://h10010.www1.hp.com/wwpc/pscmisc/vac/us/product_pdfs/Z620_datasheet-highres.pdf
(PDF Page 01)
DAC – HP STOREEASY (DATA STORAGE) EXAMPLE ITEMS
EXHIBIT E (02) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES
December 2012
Family data sheet
HP StoreEasy
Storage
Efficient, secure, highly available file and application storage
HP StoreEasy Storage Family StoreEasy 1630 – 42TB
StoreEasy 12LFF Disk Enclosure StoreEasy 18TB LFF Drive Bundle
EXHIBIT E (03) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES
http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf
(PDF Page 14)
EXHIBIT E (04) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES
http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf
(PDF Page 16)
EXHIBIT E (05) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES
http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf
(PDF Page 41)
EXHIBIT E (06) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES
http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf
(PDF Page 42)
EXHIBIT E (07) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES
http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf
(PDF Page 43)
EXHIBIT E (08) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES
http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf
(PDF Page 44)
EXHIBIT E (09) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES
http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf
(PDF Page 45)
EXHIBIT E (10) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES
http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf
(PDF Page 46)
EXHIBIT E (11) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES
http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf
(PDF Page 47)
EXHIBIT E (12) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES
http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf
(PDF Page 48)
EXHIBIT E (13) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES
http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf
(PDF Page 49)
EXHIBIT E (14) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES
http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf
(PDF Page 50)
EXHIBIT E (15) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES
http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf
(PDF Page 51)
EXHIBIT E (16) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES
http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf
(PDF Page 52)
EXHIBIT E (17) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES
http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf
(PDF Page 53)
EXHIBIT E (18) – EOC / DOMAIN AWARENESS CENTER / JUNE SAIC INVOICES
http://info.publicintelligence.net/Oakland-DAC-Invoices-4.pdf
(PDF Page 54)