Emma Bohse
ENGL 138T
20 April 2021
Figure 1
AI in Everyday Life
Robots that can clean a messy room may seem like a thing of the future, but so did self-
driving cars and automated personal assistants, like Amazon Alexa or Google Nest. However,
these technologies all became possible, or may soon become possible, through
artificial intelligence (AI) and machine learning. AI is intelligence that machines show, while
machine learning is the “training” these AI systems go through to complete a particular task or
learn a set of data.1 Systems built through machine learning and AI are more intertwined with
people’s lives than many might expect. When scrolling through a TikTok “for-you-page” or
Netflix’s movie and show recommendations, one does not even give a second thought to the
perfectly tailored content. However, these everyday technologies are AI at work, learning what
users like and making predictions based on this to show users other things they may enjoy.
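The recommendation idea described above can be sketched in a few lines. This toy example is not from the brief; the viewing histories and names are hypothetical, and real systems use far more sophisticated models, but the core idea is the same: suggest items enjoyed by users with similar histories.

```python
# Illustrative sketch (not from the brief): the bare idea behind a "for you"
# feed -- recommend items enjoyed by the user with the most similar history.

def recommend(user, histories):
    """Suggest unseen items from the most similar other user's history."""
    seen = histories[user]
    # Similarity here is just the count of shared items (a deliberate toy metric).
    best = max((u for u in histories if u != user),
               key=lambda u: len(histories[u] & seen))
    return histories[best] - seen

# Hypothetical viewing histories.
histories = {
    "ana": {"space_doc", "cat_clips", "cooking"},
    "ben": {"space_doc", "cat_clips", "drone_racing"},
    "cam": {"knitting", "opera"},
}
print(recommend("ana", histories))   # prints {'drone_racing'}
```

Because "ben" shares the most items with "ana", the system predicts she will enjoy what he watched that she has not.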
Besides self-driving cars and social media algorithms, AI has many applications throughout the intelligence community, from aiding in controlling drones to protecting the nation’s cyberspaces.2
the intelligence community.2 However, as AI advances and the capabilities of systems grow, the
risks of implementing them increase. With 49% of defense organizations already taking steps to implement these technologies, artificial intelligence is taking center stage in intelligence communities across the
world.3 However, because AI is such a recent technology, there are limited regulations to ensure
it is implemented safely and reliably; this means that while there is the potential for AI to
successfully aid in defending the nation, there is the possibility for malfunctions and problems
that put people and the nation at risk. AI has powerful applications throughout the intelligence
community, but without increased regulations and research, these benefits become potentially
deadly risks, driving the need for government policies to regulate the use of AI and fund research
and education.
Figure 2: How far along defense organizations are in implementing AI for various
purposes.
As the use of AI increases, the systems being developed are becoming more advanced.
While this leads to powerful AI throughout the intelligence community, it also makes it more
difficult to ensure that these systems are implemented safely. AI is intelligence shown by
machines, but it still lacks human morals or logic, leading to three main AI “accidents” or
potential malfunctions.4
Failure of Robustness
The first is a failure of robustness where the AI system receives inputs that are
unexpected or that it has not been trained to handle.5 For example, in 2018, researchers came out
with an AI system that could detect skin cancer more accurately than dermatologists.6 However,
the system was not trained on images of people of color, so many went undiagnosed.7 Related to the intelligence community, if a system decided that a glare was an enemy missile because it was not trained to distinguish between the two, it might accidentally launch a missile, leading to retaliation.8 Failure of robustness also makes a system easy to hack, because the machine does not know enough to flag unusual or malicious inputs.9
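A failure of robustness can be shown with a toy classifier. This sketch is not from the brief; the sensor readings and labels are invented, but it captures the problem: a system trained only on known categories forces every input, however unfamiliar, into one of them instead of flagging it.

```python
# Illustrative sketch (not from the brief): a toy nearest-centroid classifier
# trained on two kinds of sensor readings. Given an input unlike anything in
# its training data, it still returns a confident-looking label instead of
# flagging the input as unfamiliar -- a miniature failure of robustness.

def centroid(points):
    """Average of a list of (x, y) readings."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(reading, centroids):
    """Return the label of the nearest centroid -- no notion of 'unknown'."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda label: dist2(reading, centroids[label]))

# Hypothetical training data: benign glints vs. aircraft radar signatures.
training = {
    "glare":    [(1.0, 0.9), (1.2, 1.1), (0.8, 1.0)],
    "aircraft": [(5.0, 5.2), (4.8, 5.1), (5.1, 4.9)],
}
centroids = {label: centroid(pts) for label, pts in training.items()}

# An out-of-distribution reading (say, a weather balloon) the system was
# never trained on -- it is forced into one of the two known labels anyway.
print(classify((3.0, 9.0), centroids))   # prints "aircraft"
```

Nothing in the code is wrong in a programming sense; the danger is in what was left out of training, which is exactly why testing for missing coverage matters.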
Failure of Specification
The next potential malfunction is a failure of specification where the machine does not
interpret the algorithm as the programmers intended; this can lead to unexpected behaviors and
side effects.10 For example, in 2017, during wildfires in California people were evacuating using
traffic apps like Waze.11 However, these apps were focused on alleviating traffic, so they sent
cars directly into the wildfires.12 Now, imagine an AI system designed to control a drone that gathers intel: the drone might add unnecessary information to “complete its task,” because a system will cut corners to finish a task if not properly programmed.13
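The Waze example can be reduced to a few lines of code. This sketch is not from the brief; the road network and travel times are invented, but it shows how an objective that omits safety produces exactly the behavior described: the planner optimizes what it was told to optimize, nothing more.

```python
# Illustrative sketch (not from the brief): a route planner whose objective
# is only "minimize travel time." Because safety was never part of the
# specification, the planner happily routes drivers through a hazard zone.
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm; returns the lowest-travel-time path as a list."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, step in graph.get(node, {}).items():
            heapq.heappush(queue, (cost + step, nxt, path + [nxt]))
    return None

# Hypothetical road network: travel times in minutes. The road through
# "fire_zone" is fastest precisely because everyone has evacuated it.
roads = {
    "home":      {"fire_zone": 5, "detour": 20},
    "fire_zone": {"safe_town": 5},
    "detour":    {"safe_town": 10},
}
print(shortest_path(roads, "home", "safe_town"))
# prints ['home', 'fire_zone', 'safe_town'] -- straight through the fire.

# A corrected specification must encode safety explicitly, for example by
# removing (or heavily penalizing) hazardous roads, not just optimizing time.
safe_roads = {n: {m: t for m, t in e.items() if m != "fire_zone"}
              for n, e in roads.items()}
print(shortest_path(safe_roads, "home", "safe_town"))
# prints ['home', 'detour', 'safe_town']
```

The algorithm is correct both times; only the specification changed, which is the point of this failure mode.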
Failure of Assurance
The last potential malfunction relates to failure of assurance where the AI system cannot
be properly monitored or controlled during operation; this typically happens when people do not
understand what the AI systems are doing or have the time to recognize and fix mistakes before a
malfunction occurs.14 For example, in 2018 and 2019, Boeing experienced two plane crashes after a new system malfunctioned and read sensors incorrectly; based on the incorrect data it received, the system kicked in to “stabilize” planes that did not need stabilizing, causing the pilots to lose control.15
control.15 Something like this tragic incident could happen again if the intelligence community
begins to rely on auto-piloted or self-controlled cars, planes, drones, etc. These three main AI
“accidents” and malfunctions are just the tip of the iceberg without proper system safety measures.
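One assurance safeguard the Boeing example suggests can be sketched simply. This is not Boeing's actual logic and is not from the brief; the sensor names and thresholds are hypothetical. The idea is that an automated correction should never fire on a single sensor's say-so: disagreeing redundant sensors hand control back to the human.

```python
# Illustrative sketch (not from the brief): a cross-check between redundant
# sensors before any automated correction is allowed to act. Hypothetical
# angle-of-attack readings in degrees; thresholds are invented.

def stabilizer_command(sensor_a, sensor_b, max_disagreement=2.0):
    """Return an action only when redundant sensors roughly agree."""
    if abs(sensor_a - sensor_b) > max_disagreement:
        return "alert_pilot"          # sensors conflict: defer to the human
    angle = (sensor_a + sensor_b) / 2
    return "push_nose_down" if angle > 15.0 else "hold"

print(stabilizer_command(16.0, 15.5))   # prints "push_nose_down"
print(stabilizer_command(20.0, 3.0))    # prints "alert_pilot"
```

A rule this simple keeps a human in the loop exactly when the system's picture of the world is least trustworthy, which is what failure of assurance is about.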
An Increasing Risk
As AI systems spread, the risk of these accidents and malfunctions increases. Competition to utilize AI and develop more sophisticated systems is heating up, which
may lead to cutting corners or not testing systems thoroughly.16 Systems are also becoming
increasingly complex and fast; this makes it harder for people to find problems, fix problems,
and intervene when something does go wrong.17 Those who operate AI systems may be
untrained or rely too heavily on the machine’s intelligence as well, leading to concerns over the
operator’s abilities to intervene when necessary.18 Finally, older AI systems are being used in the
development of newer systems, which is a security concern. If one AI system is used throughout
multiple programs, it can be targeted and hacked to bring down a large network of AI systems.19
Overall, as AI systems increase in complexity and implementation, there is a need for more
concrete policies to regulate their use, the testing they go through, and the research and education behind them.
Current AI Policies
Currently, there are limited policies in place addressing the use and implementation of
AI. In 1993, the National Science and Technology Council was created by President Clinton; this
council was established to create policies regarding the use of technology.20 Since their
foundation, the National Science and Technology Council has facilitated research,
communicated with the private sector, and strengthened education about technology.21 To
address these goals, the National Science and Technology Council funneled money into research
about the applications of science and technology.22 They also partnered with universities around
the country and established research labs and facilities.23 Each of these research partners had to
follow and apply certain rules and principles, such as ensuring no harm to the environment or to the health and safety of the nation.24 AI is now beginning a similar process. Under the
National Science and Technology Council, there is the Select Committee on Artificial
Intelligence. The goals of this committee include leading research and development of AI, leading the implementation of trustworthy AI, preparing the current and incoming workforce to use AI, and coordinating AI efforts across federal agencies.25 While these are strong steps forward, there are only two executive orders and a memorandum that address the use of AI.
Maintaining Leadership in AI
In 2019, Executive Order 13859, Maintaining American Leadership in Artificial Intelligence, was passed. It outlines how the United States must drive technological
breakthroughs, create technical standards, educate the current and future workforce, foster public
trust, and promote an environment that supports AI.26 The executive order also pushes funding
for the research and development of AI technologies, along with government collaboration with
private sectors.27 Finally, it scratches the surface of what data AI systems can use for training,
attempting to ensure the privacy and protection of civil liberties for United States citizens.28
The Office of Management and Budget also released a memorandum in 2020 guiding the regulation of AI applications. While a memorandum is not law, it lays the foundations of possible laws and sets up a
framework that suggests how AI should be used and regulated. It pushes the principles of public
trust and participation in AI, integrity, risk management, benefits and costs, flexibility, fairness,
transparency, safety and security, and coordination when implementing and regulating AI.29
However, the memorandum also aims to aid in reducing barriers to developing and implementing
AI through access to more federal data, communication with the public, and the creation of technical standards.30 In December 2020, Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, was passed, outlining a
few regulations on the use of AI. This executive order included the idea that AI should be used
when appropriate, that the Office of Management and Budget should come out with more
regulations, and that those who use AI should follow specific principles.31 These principles include respecting the Nation’s values, designing for purpose and performance, and being accountable for tests and malfunctions.32 While these regulations are a step in the right direction,
they are vague and leave much up to interpretation. Executive Order 13960, Promoting the Use
of Trustworthy Artificial Intelligence in the Federal Government, also does not apply to AI used in defense or national security systems.33
Education and research are a huge part of developing AI technology and systems. While
the Select Committee on Artificial Intelligence touches on increased funding for research and
ensuring that the nation’s current and future workforce is prepared to handle AI, there are no
concrete steps about how this will be done. This brief suggests that there should be an outline of
how research funding will be used and an increase in the research budget, along with a push
towards partnering with universities and labs to educate the nation on the use and
implementation of AI.
In its national science and technology goals, the National Science and Technology Council outlined how funding would increase for different forms of research surrounding technology use in the federal government, including technology-related research and development.34 In total, the budget was increased by $83.3 billion, allowing
the government to increase research and development to address their goals of maintaining
leadership in STEM and harnessing the power of technology.35 They also outlined how the
government will partner with universities to pursue research and educational opportunities,
helping to ensure the future work force is trained to handle technology.36 The National Science
and Technology Council also supported initiatives to strengthen federal government laboratories
and ensure they could utilize newer technology, preparing the current work force.37 This has
allowed the National Science and Technology Council to take advantage of emerging technology.38 With all the good these policies have done for technology, similar policies could have major impacts on AI. The Select Committee on Artificial Intelligence, for example, could focus the AI budget on different sectors, allowing researchers and developers to build stronger, more reliable, safer AI systems. Without the resources to properly develop and
implement AI systems, corners might be cut. The Select Committee could also follow the
Council’s lead in partnering with universities and the federal government’s labs to ensure that
people know how to work with AI systems and develop safe, reliable programs. If people fully
understand how to use AI through increased research and educational opportunities, many of the
AI accidents and malfunctions can be avoided, especially problems with assurance, where people do not understand the scope of a system and therefore cannot react to stop or fix a problem efficiently. Increased research and education will also allow the United States to lead in AI
development and implementation in the private and public sectors, as well as in the intelligence
community, which is a major goal of the executive orders and memorandum the Select Committee on Artificial Intelligence was created to address.
Along with increased research and education, it is imperative that AI systems go through
testing that ensures they are safe and reliable. However, there are no policies that specifically
regulate how much testing systems must go through or what tests they should pass. This brief suggests requiring thorough testing of AI systems to ensure safety and reliability, as well as checking these systems every year to see if
they are still performing properly. While every AI system has different purposes, there should
still be a threshold they must pass and check-ups to ensure the systems are still working properly
down the line. The National Science and Technology Council does not have these types of regulations on
technology, but because AI systems are not necessarily operated by humans, it is imperative that
they are safe and reliable in situations without human interference, especially when used by
intelligence communities for national security purposes. Also, the one executive order that lays
out principles for AI systems does not apply to defense or national security systems, so
regulations are needed in that sector, as well as increased regulations all around.39
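The yearly check-ups this brief proposes could be as mechanically simple as the sketch below. It is not from the brief, and the detector, evaluation cases, and threshold are all hypothetical; the point is only that "still performing properly" can be made concrete: rerun a fixed evaluation suite and flag any system whose accuracy drifts below an approved bar.

```python
# Illustrative sketch (not from the brief): a recurring "check-up" that
# reruns a fixed evaluation suite and flags any system whose accuracy
# drifts below an approved threshold.

def check_up(system, eval_suite, threshold=0.95):
    """Return (passed, accuracy) for a system against labeled eval cases."""
    correct = sum(1 for inputs, expected in eval_suite
                  if system(inputs) == expected)
    accuracy = correct / len(eval_suite)
    return accuracy >= threshold, accuracy

# Hypothetical system under review: flags sensor readings above a cutoff.
def detector(reading):
    return "alert" if reading > 10 else "clear"

eval_suite = [(5, "clear"), (8, "clear"), (12, "alert"), (15, "alert")]
passed, accuracy = check_up(detector, eval_suite)
print(passed, accuracy)   # prints: True 1.0
```

The hard policy questions are what belongs in the evaluation suite and where the threshold sits for each kind of system, which is exactly where a regulating body would come in.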
The Food and Drug Administration (FDA) approves food, drugs, medical products, animal products, vaccines, cosmetics, tobacco products,
and more.40 Every food or drug is slightly different, so ensuring their safety requires flexible but
strong standards. The FDA approves foods and drugs through an analysis of what it brings to the
table, assessment of benefits and risks through data, and development of strategies that manage
potential risks.41 These policies have allowed the FDA to approve over 20,000 safe prescription
drugs and control over $2.7 trillion worth of products ranging from food to medical supplies.42
People eat and use FDA-approved products all the time, showing that its methods work. While it
seems like the FDA and AI have little in common, the FDA is able to research and ensure the
safety of a multitude of products; this applies to AI, as every system is different but needs to be regulated and proven safe.
An agency, like the FDA, could go through a similar process with AI systems to ensure
this happens. Through research, testing, and setting standards, like the FDA, AI systems can be
regulated. Completing tests and check-ups is imperative to avoiding AI “accidents” and malfunctions, specifically failures of specification and robustness, especially if these systems are being
used by intelligence communities that are working towards securing and defending the nation. If
a system is tested properly and thoroughly, failures of robustness will not be as much of a risk
because tests can reveal the information a system might be missing. It will also aid in reducing
failure of specification because with testing, it will be clear if a system is acting the way it was
intended. Research shows that using a structured identification approach to find critical risks, instituting robust guidelines, and refining guidelines depending on the nature of a risk all aid in reducing the risks of implementing AI systems; these could all be responsibilities of a new agency or administration.43
This new agency or administration can also help decide if something should be controlled
by AI or not. If their research suggests that there are risks in having certain systems be controlled
completely by AI, then the administration or agency can suggest more human interference. They
can also work to develop standards about what does and does not fall under the scope of AI,
much like the FDA decides what products are safe. This administration or agency could help answer questions such as whether the nation should rely on AI to deploy missiles, whether humans should always be involved in the final decision, and how much AI can help in these scenarios. Overall, much
like the FDA, AI needs some sort of group that continues to implement standards as its use grows.
Each of these policy options aims to reduce the risks of implementing AI systems. While
this has the potential to hinder some research and applications of AI, the benefit of ensuring safe
and reliable systems is more important. If the nation continues with the few, vague policies and
suggestions it has in place, there could be a disastrous mishap with an AI system that puts the
nation and innocent people at risk. Also, every policy that has been suggested has been done
before for a different application, so the resources to implement them are available. By bringing
awareness to AI accidents and potential malfunctions, these policies can gain the support they
need to continue. There have already been more “minor” incidents that have affected human life,
like self-driving car accidents and misdiagnosed cancer patients.44 So, do not wait for a
nationwide disaster to be a wake-up call. Implementing more regulations and policies to support
the safe growth of AI allows the nation to be a leading developer of AI, while ensuring its
citizens are protected. The intelligence community can reap major benefits from AI, if used
properly and safely, so what is there truly to lose?45 AI can be a powerful ally to the nation’s
defense or a major hindrance. It is up to policy makers to regulate its use, so the intelligence
community can use AI’s strength to defend the nation. As systems become more powerful, the risks grow with them.
Figure 4: Where defense leaders see AI helping them secure the nation.
A Call to Action
Many do not even know what AI is or what it is fully capable of, so creating policies to regulate AI needs to become a front-page issue. The public can contribute to
increased education by seeking out resources to raise their awareness of AI and its flaws and
benefits, along with sharing this information to begin an open conversation about the
implementation of AI. At the end of the day, these policies are in the hands of lawmakers and
politicians, so supporting officials who understand the urgency of addressing AI and speaking up
about the issue to government officials will have the biggest impact. AI systems have the power
to do so much good for the nation, but these rewards cannot be reaped without addressing the
concerns that come along with them. Through increased funding and education, along with a
federal body that regulates and approves AI systems, the intelligence community and private
sectors can use AI to its full benefit safely, efficiently, and reliably.
End Notes
Figure 1: Samsung Research, “Artificial Intelligence,” https://research.samsung.com/artificial-intelligence.
1. Tim G. J. Rudner and Helen Toner, “Key Concepts in AI Safety: An Overview” (CSET, 2021),
https://cset.georgetown.edu/wp-content/uploads/CSET-Key-Concepts-in-AI-Safety-An-
Overview.pdf.
2. Daniel Chenok and others, “Deploying AI in defense organizations” (New York: IBM, 2021),
https://www.ibm.com/downloads/cas/EJBREOMX.
3. Ibid.
Figure 2: Daniel Chenok and others, “Deploying AI in defense organizations” (New York: IBM, 2021),
https://www.ibm.com/downloads/cas/EJBREOMX.
4. Tim G. J. Rudner and Helen Toner, “Key Concepts in AI Safety: An Overview” (CSET, 2021),
https://cset.georgetown.edu/wp-content/uploads/CSET-Key-Concepts-in-AI-Safety-An-
Overview.pdf.
5. Zachary Arnold and Helen Toner, “AI Accidents: An Emerging Threat” (CSET, 2021),
https://cset.georgetown.edu/wp-content/uploads/CSET-AI-Accidents-An-Emerging-Threat.pdf.
6. Haenssle et al., “Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists” (Annals of Oncology, 2018).
7. Zachary Arnold and Helen Toner, “AI Accidents: An Emerging Threat” (CSET, 2021),
https://cset.georgetown.edu/wp-content/uploads/CSET-AI-Accidents-An-Emerging-Threat.pdf.
8. Ibid.
9. Ibid.
10. Tim G. J. Rudner and Helen Toner, “Key Concepts in AI Safety: An Overview” (CSET, 2021),
https://cset.georgetown.edu/wp-content/uploads/CSET-Key-Concepts-in-AI-Safety-An-
Overview.pdf.
11. Jefferson Graham and Brett Molina, “Waze sent commuters towards California wildfires, drivers say” (USA Today, 2017),
fires-navigation-apps-like-waze-sent-commuters-into-flames-drivers/930904001/.
12. Ibid.
13. Dario Amodei et al., “Concrete Problems in AI Safety” (arXiv, 2016), https://arxiv.org/pdf/1606.06565.pdf.
14. Zachary Arnold and Helen Toner, “AI Accidents: An Emerging Threat” (CSET, 2021),
https://cset.georgetown.edu/wp-content/uploads/CSET-AI-Accidents-An-Emerging-Threat.pdf.
15. Ibid.
16. Ibid.
17. Ibid.
18. Ibid.
19. Ibid.
Figure 3: McKinsey & Company, “The State of AI in 2020” (McKinsey Analytics, 2020),
https://www.mckinsey.com/Business-Functions/McKinsey-Analytics/Our-Insights/Global-survey-The-
state-of-AI-in-2020.
20. Administration of William J. Clinton, “Executive Order 12882- President’s Committee of Advisors on Science and Technology” (1993),
https://www.govinfo.gov/content/pkg/WCPD-1993-11-29/pdf/WCPD-1993-11-29-Pg2450.pdf.
21. White House and Congress, “U.S. national science and technology goals” (Washington D.C.:
22. Ibid.
23. Ibid.
24. Ibid.
25. National Artificial Intelligence Initiative Office, “Legislation and Executive Orders” (Washington
26. Donald Trump, “Executive Order 13859- Maintaining American Leadership in Artificial Intelligence” (2019),
https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-
leadership-in-artificial-intelligence.
27. Ibid.
28. Ibid.
29. Office of Management and Budget, “Memorandum M-21-06: Guidance for Regulation of Artificial Intelligence Applications” (2020), content/uploads/2020/11/M-21-06.pdf.
30. Ibid.
31. Donald Trump, “Executive Order 13960- Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government” (2020), https://www.federalregister.gov/documents/2020/12/08/2020-27065/promoting-the-use-of-trustworthy-artificial-intelligence-in-the-federal-government.
32. Ibid.
33. Ibid.
34. White House and Congress, “U.S. national science and technology goals” (Washington D.C.:
35. Ibid.
36. Ibid.
37. Ibid.
38. Office of Science and Technology Policy, “NSTC Documents & Reports” (Executive Office of the President),
https://obamawhitehouse.archives.gov/administration/eop/ostp/nstc/docsreports.
39. Donald Trump, “Executive Order 13960- Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government” (2020),
https://www.federalregister.gov/documents/2020/12/08/2020-27065/promoting-the-use-of-
trustworthy-artificial-intelligence-in-the-federal-government.
40. https://www.fda.gov/media/154548/download.
41. information/fda-rules-and-regulations.
42. https://www.fda.gov/media/154548/download.
43. Benjamin Cheatham et al. “Confronting the risks of artificial intelligence” (McKinsey Quarterly,
2019), https://www.cognitivescale.com/wp-content/uploads/2019/06/Confronting_AI_risks_-
_McKinsey.pdf.
44. AIID, “Artificial Intelligence Incident Database” (AIID, 2022),
https://incidentdatabase.ai/?lang=en.
45. Thanh Cong Truong et al., “Artificial Intelligence in the Cyber Domain: Offense and Defense”
Figure 4: Daniel Chenok and others, “Deploying AI in defense organizations” (New York: IBM, 2021),
https://www.ibm.com/downloads/cas/EJBREOMX.