
This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.

HADDAL et al.
SAND2018-9787C

AUTONOMOUS SYSTEMS, ARTIFICIAL INTELLIGENCE AND SAFEGUARDS

R. HADDAL, N. HAYDEN
Sandia National Laboratories (SNL)
Albuquerque, New Mexico, USA
Email: rhaddal@sandia.gov, nkhayde@sandia.gov

S. FRAZAR
Pacific Northwest National Laboratory (PNNL)
Seattle, Washington, USA
Email: sarah.frazar@pnnl.gov

Abstract

This study explores the mission space and key safeguards challenges confronting the International Atomic Energy Agency (IAEA) today and how the status quo may be impacted by autonomous and artificial intelligence (AI) technologies, e.g. technologies that learn, examine, and take action (autonomy), or methods that utilize machine learning (ML) and intelligence exhibited by machines (AI). Principal issues include the operational value of these systems to safeguards, the risks and challenges of deployment (e.g., trustworthiness, security), and the likelihood of adoption in the near to medium term (2-10 years). The study establishes a set of criteria to identify autonomous and AI methods that could impact IAEA safeguards verification activities in the next decade. Criteria are informed by specific safeguards outcomes the IAEA wants to achieve, e.g., efficiency, maintaining continuity of knowledge (CoK) on nuclear materials, or identifying anomalies in large amounts of data. The study develops and assesses an inventory of technological methods based on these criteria. The framework for evaluating the methods that could help address safeguards challenges consists of five elements: 1) identification of principal safeguards verification challenges; 2) development of criteria that the identified methods would need to address; 3) development of an inventory of methods that could address the safeguards challenges; 4) safeguards use cases; and 5) technical evaluation and analysis of the selected systems. While the study considers the broader subject of autonomy, it should be noted that most of the methods identified in the inventory consist primarily of AI and its underlying ML capabilities, which could enable autonomous systems in the future. Use cases identifying scenarios in which the selected technologies could be deployed inform the potential application space and serve as the foundation for analyzing impact. Finally, an evaluation of two specific methods assesses how they might benefit or challenge IAEA safeguards activities.

1. INTRODUCTION

International nuclear safeguards are under constant pressure to be more effective and efficient to cope with an expansion of global nuclear fuel cycle activities, increasing nuclear proliferation threats, and a constrained budget for the IAEA. Emerging technologies in areas such as autonomy and artificial intelligence (AI) are being explored to help address these challenges. According to the U.S. Department of Defense (DoD), autonomy results from delegation of a decision to an authorized entity or system capable of taking action within specific boundaries. AI is broadly characterized as "intelligence exhibited by machines" [1], where intelligence is "that quality that enables an entity to function appropriately and with foresight in its environment" [2]. AI technologies are rooted in the behavioural sciences and neurosciences combined with computer science sub-disciplines such as Computer Vision, Natural Language Processing (NLP), Robotics (including Human-Robot Interactions), Search and Planning, and Social Media Analysis [1]. While today's AI is narrow, enabling machines to equal or exceed human intelligence for a specific task, the goal of the Artificial General or Super Intelligence of the future is to enable machines to meet or exceed the full range of human performance across any task. As a result, there is growing competition globally in the development of AI across all domains of human endeavour, including security, economic, and social welfare [3].
AI design and development includes small, individualized applications, such as smartphones, as well as the broad field of large-scale, federated systems capable of making intelligent decisions as a collective, such as that required for autonomous vehicles. Machine Learning (ML) is a foundational basis for AI that seeks to provide knowledge to computers through data, observations, and interaction with the world, thereby allowing computers to

Sandia National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned
subsidiary of Honeywell International Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525.
IAEA-CN-267

correctly generalize to new settings [4]. Recent advancements in hardware that enable rapid, on-board processing of huge data sets have spurred the development of special architectures for ML algorithms, such as those required for Deep Neural Networks (DNNs). These architectures are composed of multiple levels of non-linear operations, such as in neural networks with many hidden layers [4]. AI technologies provide the underlying capability for autonomous systems. While autonomy is enabled by AI, not all uses of AI are autonomous. For example, many AI capabilities are used to augment human decision making rather than replace it. To be autonomous, a system must have the capability to independently compose and select among different courses of action to accomplish goals based on its knowledge and understanding of the world, itself, and the situation [5]. An autonomous system must be capable of learning from previous experience and taking action based on what it has learned. Autonomy is enabled by the integration of fundamental capabilities: sense (observe and orient), decide, act, and communicate [6]. Unlike automation, which uses various control systems for operating equipment such as machinery, and whose functioning cannot accommodate ambiguity or adjust to uncertainties, autonomy includes a decision-making component that leverages computational intelligence and learning algorithms, or AI, to better adapt to unanticipated and changing situations [7]. Recent commercial advances in autonomy result from the convergence of technological breakthroughs in the fields of AI (such as facial recognition and deep neural networks) and computing hardware (such as graphical processing units (GPUs)), combined with the unprecedented availability of labelled data, to enable real-time or near real-time analysis and decision-making by human-machine systems.
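As a toy illustration of the ML principle described above (a model fit to labelled observations that then generalizes to inputs it has never seen), the following sketch implements a k-nearest-neighbours classifier, one of the methods listed later in the inventory. The data, labels, and function name are invented for illustration; this is not an IAEA tool.

```python
# Hypothetical toy example: a k-nearest-neighbour classifier that learns from
# labelled 2-D observations and generalizes to new, unseen points.

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    train: list of ((x, y), label) tuples; query: an (x, y) point.
    """
    # Sort training points by squared Euclidean distance to the query.
    by_dist = sorted(train, key=lambda p: (p[0][0] - query[0]) ** 2 +
                                          (p[0][1] - query[1]) ** 2)
    votes = [label for _, label in by_dist[:k]]
    return max(set(votes), key=votes.count)

# Two well-separated clusters of labelled observations.
train = [((0.0, 0.1), "normal"), ((0.2, 0.0), "normal"), ((0.1, 0.2), "normal"),
         ((5.0, 5.1), "anomalous"), ((5.2, 5.0), "anomalous"), ((5.1, 5.2), "anomalous")]

print(knn_predict(train, (0.3, 0.3)))  # a new point near the first cluster
print(knn_predict(train, (4.8, 5.3)))  # a new point near the second cluster
```

The key point is that the model was never shown the two query points; it labels them correctly because it has generalized from the training data.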
Self-driving vehicles are examples of autonomous systems that combine voice recognition technology with vision analytics and other environmental and biological sensors, data from the internet of things, and learning algorithms to assess a situation, decide how to respond, and take controlled action. Integrated technologies monitor, track, and map the behaviour of drivers and passengers in context to verify identity, assess levels of awareness and stress, alert the human operator to dangers of which he/she may be unaware, and control the vehicle in response to voice commands. To be successful, the system design must ensure car operators are attentive and capable, and that decision-making uses all relevant environmental information [8]. Recent studies have found that the most successful autonomous systems are those that team with humans, not replace them. In the field of international nuclear safeguards, which involves the technical measures to verify that states are fulfilling their obligations under the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) not to divert nuclear material for weapons purposes, adoption of autonomous systems will require that such teaming occur effectively and efficiently.
Autonomous and AI methods can provide benefits for international nuclear safeguards through improved scope and performance while reducing costs and manpower. Given the appropriate level of testing (e.g., vulnerability assessments) and R&D, autonomous and AI methods show significant promise to increase efficiency, in both cost and time, for processing large amounts of data that a human analyst simply does not have the time to review. For example, given the limitations of human abilities to rapidly process the vast amounts of data generated from a single nuclear facility under safeguards, autonomous or AI methods could increase efficiency by rapidly (in near-real time) finding trends or analysing patterns to identify anomalies that a human information analyst may not otherwise see. If adopted, such systems could more reliably support timely identification of undeclared activities, while reducing the burden on inspectors by pinpointing redundancies. Nonetheless, risks and challenges to implementation, such as security, trustworthiness and transparency, must also be considered. This study explores the potential safeguards operating environments and how autonomous and AI methods might be applied.

2. INTERNATIONAL NUCLEAR SAFEGUARDS: STATE OF AFFAIRS AND CHALLENGES

2.1 State of Affairs

Safeguards involve large amounts of heterogeneous data, dynamic data sets, highly complex operations, and
new types of nuclear facilities which require significant investment of resources to analyse effectively and efficiently.
Incorporation of autonomous and AI methods might help ease the burden on inspectors and increase the efficiency of
information analysis. The recent IAEA workshop on "Emerging Technologies" in February 2017 noted that as the demand on safeguards rises, the budget of the IAEA does not, and that exploiting new technologies is the only way to


close the gap between demand and resources [9]. The workshop also acknowledged that emerging technologies in the area of AI, an enabling capability for autonomy, could have a significant impact on IAEA activities. AI and autonomy
could have both positive and negative effects on systems, processes, and procedures relevant to international nuclear
safeguards. For instance, AI could significantly improve safeguards efficiency by focusing on value-added tasks and
reducing unnecessarily repetitive ones [9]. At the same time, these technologies may introduce new sources of
uncertainty and reduced transparency. Potential positive and negative impacts of these systems are not well
characterized or understood. Before autonomous or AI methods are deployed for international nuclear safeguards, it
is prudent to assess risks and challenges, compared to potential benefits. In particular, will deployed systems be more
or less robust, reliable, secure and trusted than a human doing the same job? This study seeks to better understand the
realities of adoption, and potential impacts on international nuclear safeguards.

2.2 Inspections and Information Analysis

Under comprehensive safeguards agreements (CSA), the IAEA has the right and obligation to verify the
correctness and completeness of the State's declarations of nuclear material required to be safeguarded. Inspections
are critical to achieving the primary safeguards objectives, including: (1) detecting diversion of declared nuclear material at declared facilities; (2) detecting misuse of declared facilities; and (3) detecting undeclared nuclear material or activities in the State as a whole [10]. These are non-trivial obligations requiring a tremendous amount of time and resources that could potentially benefit from autonomous or AI methods to improve effectiveness and efficiency.
According to the IAEA, over 180 countries have safeguards agreements in force, in which several hundred facilities and material balance areas (MBAs) contain nuclear material under safeguards [8]. Hundreds of thousands of significant quantities (SQs) of nuclear material and hundreds of tons of heavy water worldwide also require verification through thousands of inspections annually. In 2015, inspectors spent over 13,000 calendar days in the field conducting safeguards activities [11]. Given the extensive time and effort required to implement safeguards, the continuous growth in volume and format of safeguards data, and the limited annual budget of the IAEA, safeguards inspectors and analysts need to leverage innovative technologies, such as autonomy and AI, where appropriate.
The analysis of safeguards-relevant information is an essential part of evaluating a State's nuclear activities
and drawing safeguards conclusions. In doing so, the IAEA analyses the consistency of State declarations, and
compares them with the results of Agency verification activities and other relevant information. The Agency draws
on an increasing amount of information from verification activities performed at Headquarters and in the field,
including results from non-destructive assay (NDA), destructive assay (DA), environmental sample analysis, remotely monitored equipment, commercial satellite imagery, open sources, and trade information. Much of this information is collected from inspection activities, then relayed to information analysts at IAEA Headquarters. Information analysis considers input from the above activities as well as data reported on the performance of Member State laboratories and measurement systems. In terms of material balance reporting, DA and NDA reports covering uranium (U), plutonium (Pu) and input solution samples are produced annually. The Agency also acquires commercial satellite images to support safeguards verification activities and produces imagery analysis reports. Environmental sampling is used to identify the potential presence of undeclared nuclear material or activities, while remote monitoring is used to collect data from containment and surveillance (C/S) and measurement systems and securely transmit it off-site via communications networks. Open source information analysis draws on hundreds of millions of safeguards-relevant open source items to produce hundreds of summaries to support the State evaluation process.

2.3 Principal Challenges

Given the expanding mission space and pressure to implement safeguards in as many States as possible, the IAEA is faced with a number of challenges. For example, the increase in new types of facilities and next generation reactors, as well as an increase in the number of facilities under safeguards, creates a challenging environment for both inspectors and information analysts. Other challenges include the global expansion of nuclear trade in equipment, materials and know-how [12]; increasing data flows from information collection systems; the need to protect safeguards information and transmit secure, authentic communications [12]; an increased amount of material under


safeguards; the need for efficient and effective technology acceptance and adoption [12]; increases in environmental samples, sample processing and data management; more countries bringing into force the Additional Protocol (AP); more States with Broader Conclusions; and constrained IAEA resources (e.g., a zero-real-growth budget and workforce issues) [12]. These challenges require the IAEA Department of Safeguards to rethink how to carry out its safeguards activities more effectively and efficiently, while strategically investing resources in areas that make the greatest difference in preventing nuclear proliferation [9]. For example, with over 200,000 SQs of nuclear material requiring 13,000+ days in the field for verification, combined with a zero-real-growth budget, autonomy and AI might help manage large amounts of heterogeneous data and reduce the amount of time needed for onsite inspections and information analysis. In an effort to explore how autonomous or AI methods might address safeguards challenges, this exploratory study developed a framework to analyse the safeguards challenges and criteria for autonomous or AI methods that could help address those challenges. The framework and criteria were applied to use cases, and technical evaluations were conducted of the selected methods to determine their risks and challenges of deployment and likelihood of adoption in the near to medium term (2-10 years).

3. FRAMEWORK FOR ANALYSIS

The framework for identifying and evaluating autonomous and AI methods to address safeguards challenges consists of five elements: 1) identification of principal safeguards verification challenges; 2) development of criteria that the identified methods would need to address; 3) development of an inventory of autonomous or AI methods that could address the safeguards challenges; 4) development of safeguards use cases; and 5) technical evaluation and analysis of the selected systems. For element 3 ('inventory'), four dimensions were considered: 1) levels of autonomy or AI (e.g., the amount of human input or control in the system); 2) the operating environment (e.g., inspections, information analysis, sample collection); 3) technology functions (e.g., sensing, reasoning and learning, planning and controlling, acting and communicating); and 4) system technologies (e.g., systems for knowledge representation, storage, and retrieval; software for learning about and adapting to the environment; information collection for spatial/scene recognition and environmental sensing). In total, 12 verification activities and 12 safeguards challenges were identified based on reporting from the IAEA [12]. For example, verifying that declared materials remain in peaceful uses through the performance of design information verifications (DIVs) is made more difficult by the increase in new types of nuclear facilities and next generation reactors, the need to transmit secure communications, and the global expansion of nuclear trade in equipment, materials, and know-how [12]. Performance criteria necessary for the adoption of autonomous and AI methods were developed, followed by an inventory of potential AI methods that could be applied [5][6]. Note that while the original goal was to identify and analyse the application of autonomy, most of the methods identified were AI supported by ML, which provide the enabling capabilities of autonomous systems. Results informed the development of use cases to explore potential applications of the methods to real-world scenarios.

3.1 Criteria to Address Principal Safeguards Challenges

Upon identifying the 12 safeguards challenges, a team of safeguards, autonomy, AI and ML subject matter experts (SMEs) developed performance criteria for adopting autonomous or AI methods to successfully address the identified challenges. For example, if a method were leveraged to support the analysis of increasing data flows from safeguards data collection systems, it would need to meet at least some of the following criteria: 1) maintain sufficient and resilient bandwidth for large data flows; 2) reliably identify and explain significant anomalies without violating safeguards agreements (e.g., intellectual property (IP), legal constraints, data authentication); 3) detect meaningful patterns across time and space; 4) increase efficiency while reducing costs; and/or 5) help verify normal operations. Some of these criteria, such as reliably identifying and explaining significant anomalies without violating safeguards agreements and detecting meaningful patterns, could also be applied to safeguards challenges such as detecting undeclared activities. In another example, improving efficiencies in the Physical Inventory Verification (PIV) process would need an autonomous or AI method that could help 1) reduce time in the field without reducing the quality of safeguards inspections, 2) verify operator declarations, and 3) detect anomalies. The process of defining criteria


for the autonomous or AI method informed the development of an inventory of possible methods that could be applied to real-world safeguards scenarios.

3.2 Inventory

Once criteria were established to define the functions an autonomous or AI method would need to perform to improve or positively impact the safeguards challenges, an inventory of methods was developed. While the inventory was informed by the criteria, other factors were taken into account, including: the amount of human input or control a system might have; the safeguards operating environment (e.g., inspections, information analysis); technology functions (e.g., sensing, reasoning and learning, planning and controlling, acting and communicating); and system technologies, such as systems for knowledge representation, storage and retrieval, software for learning about and adapting to the environment, or information collection for spatial/scene recognition and environmental sensing. Other factors that were taken into account included the class of the task (e.g., detecting and classifying observations against a fixed baseline, detecting and classifying dynamic patterns), data type (e.g., imagery, text-based electronic records, physical samples), data source (e.g., onsite inspection observations, blueprints, satellite imagery, facility declarations), ground truth (e.g., onsite inspections, reputable open source reporting), operating and environmental conditions (e.g., indoor/outdoor, variable temperature and humidity, variable light), and inspection time. The resulting inventory consists of 14 AI-based methods that could potentially be applied to a variety of safeguards challenges. These are: Support Vector Machines (SVM) [13], Convolutional Networks [14], Bayesian inference [15], Cluster analysis/clustering [16], Image segmentation [17], Decision Trees [18], Bayesian classifiers [19], the k-nearest neighbours algorithm [20], Deep Neural Networks (DNNs) [21], Random Forests [22], Hidden Markov Models (HMM) [23], Natural Language Processing [24], Graphical Models [25], and Anomaly Detection [26]. An autonomous robotics method for geological repository safeguards was also identified; at the time of publication, however, its technical evaluation was not yet complete. These methods were matrixed against specific safeguards challenges, such as overburdened inspectors dealing with increases in new types of facilities and growing material inventories, or oversubscribed analysts managing large data flows. Two methods were selected to explore their application to the safeguards operating environment: 1) an unsupervised machine learning (ML) One-Class Support Vector Machine (OCSVM) for analysis of large amounts of unattended monitoring data; and 2) a Convolutional Neural Network (CNN) to support PIV through image recognition and item counting at a fuel fabrication facility (Table 1).

TABLE 1. Inventory of AI/ML Methods for Potential Application to Safeguards Challenges


# METHOD: DESCRIPTION
1. Support Vector Machine (SVM): Machine learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training examples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier. SVMs support image and pattern recognition based on sets of training data. [10]
2. Convolutional Neural Networks (CNN): A class of deep, feed-forward (information moves in only one direction, as opposed to a cycle) artificial neural networks (computing systems that learn tasks based on examples, e.g. image recognition) that has been successfully applied to analyzing visual imagery. It has applications in image and video recognition, recommender systems and natural language processing. [11]

The SVM was selected because of its ability to conduct pattern recognition and anomaly detection using text-based data. The OCSVM, in particular, is useful because it is an unsupervised ML method. That is, it does not require labelled data to be trained to recognize patterns, whereas supervised ML methods need large amounts of labelled data, requiring intense time and resource investments. CNNs were selected because of their ability to do pattern recognition with imagery. Given the large amounts of imagery being produced for safeguards purposes, only a very small fraction of which could be processed by a human analyst, CNNs could introduce significant efficiencies.
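A minimal sketch of the supervised SVM behaviour summarized in Table 1 (a non-probabilistic binary classifier trained on labelled examples) might look like the following. It assumes scikit-learn is available; the feature vectors and labels are synthetic, purely for illustration.

```python
# Sketch of a supervised SVM: given labelled training examples from two
# categories, the fitted model assigns new examples to one category or the
# other. The data here are invented; this is not a safeguards data set.
from sklearn.svm import SVC

# Toy training set: two categories, separable along the first feature.
X_train = [[0.0, 0.2], [0.3, 0.1], [0.2, 0.4],   # category 0
           [3.0, 3.1], [3.2, 2.9], [2.8, 3.3]]   # category 1
y_train = [0, 0, 0, 1, 1, 1]

clf = SVC(kernel="linear")   # non-probabilistic binary linear classifier
clf.fit(X_train, y_train)

# The trained model generalizes to examples it has not seen.
print(clf.predict([[0.1, 0.3], [3.1, 3.0]]))
```

By contrast, the one-class variant (OCSVM) discussed above is trained on a single category only, which is what removes the need for labelled anomaly data.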


3.4 Use Cases

The purpose of the use cases is to clarify if and how some of the identified AI and ML methods could be applied to real-world safeguards challenges. The two scenarios developed are relevant to safeguards inspections and information analysis. Use case 1A, "One-Class Support Vector Machine (OCSVM) for Data Analysis and Anomaly Detection," focuses on the detection of off-normal activity at a reprocessing plant through the application of an unsupervised ML method, OCSVM, using unattended monitoring data. Use case 1B, "Convolutional Neural Networks (CNNs) for Physical Inventory Verification (PIV)," focuses on leveraging CNNs for image recognition and item counting to support PIVs.

3.4.1 OCSVM for Process Monitoring

In this use case, an OCSVM is used to help reduce the number of person-days of inspection (PDIs) in the field and detect anomalies by leveraging unattended monitoring data at a reprocessing plant, such as the data produced by the Solution Measurement and Monitoring System (SMMS) used at the Rokkasho Reprocessing Plant in Japan. Because the SMMS provides highly accurate solution level measurements for the determination and verification of solution volumes and densities in all major process vessels, it provides continuous monitoring of solution flows, which gives added assurance to the verification of inventory changes. All of these data are used to train the OCSVM. The SMMS-1, which is installed on the 12 most important process vessels, is owned and controlled jointly by the State Regulatory Authority and the IAEA, and uses high-accuracy electromanometers that can attain a measurement uncertainty of ±0.05%. Once the OCSVM algorithm is trained to recognize normal activity, it runs automatically and can indicate when off-normal activity has occurred by observing something it does not recognize from training and communicating when a threshold has been reached, which could indicate a possible anomaly. In this scenario, the OCSVM is running on SMMS data, as well as sampling data from laboratory analysis, which considers the overall material balance. The IAEA inspector at Headquarters is alerted by the OCSVM that an off-normal activity has occurred in MBA 2, where Pu and U are chemically separated. The inspector determines that the data need to be reviewed to isolate the problem. The question the inspector is trying to address is whether there has been any off-normal activity, such as protracted diversion of Pu or U, or whether some other anomaly is present. Thus, the OCSVM has supported the reduction of PDIs by 1) isolating a problem that could have taken days or weeks to identify, and 2) detecting an anomaly in the unattended monitoring data.
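Under the assumptions of this use case, the core OCSVM workflow (train on a baseline of normal unattended-monitoring readings, then flag readings outside the learned envelope) could be sketched as follows using scikit-learn's OneClassSVM. The readings below are simulated standardized deviations from declared operating values, not actual SMMS data.

```python
# Hedged sketch: an OCSVM learns what "normal" readings look like, then
# classifies new readings as consistent with training (+1) or off-normal (-1).
import random
from sklearn.svm import OneClassSVM

random.seed(0)
# Baseline of normal readings, expressed as standardized deviations of
# (solution level, density) from their declared operating values.
normal = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]

ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
ocsvm.fit(normal)   # learns the envelope of normal activity; no labels needed

# predict() returns +1 for in-family readings and -1 for possible anomalies.
print(ocsvm.predict([[0.0, 0.0],     # reading near the declared baseline
                     [6.0, 6.0]]))   # reading far outside the baseline
```

The `nu` parameter bounds the fraction of training points treated as outliers, which in practice would be tuned during the sensitivity analysis described in section 3.5.1.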

3.4.2 CNN for PIV

In this use case, a CNN is used to support a physical inventory verification (PIV) at a fuel fabrication plant. An IAEA inspector is reviewing multiple sources of data, including material measurement reports, facility design documents provided by the operator, and video surveillance, including slow-frame video from the Next Generation Surveillance System (NGSS). The inspector wants to focus specifically on the fuel rod assembly hall over the past year to verify that the number of assemblies built is the same as the number that have been declared. The purpose is to confirm that the fuel fabricator produced exactly the number of assemblies that were reported to the IAEA in support of nuclear material accountancy. However, counting each individual fuel assembly is time consuming. To help reduce the time needed to complete a counting task, the CNN is trained to recognize a standard pressurized water reactor (PWR) fuel assembly, which typically stands between four and five meters high and approximately 20 cm across [26], using thousands of open source and IAEA archived images. The IAEA inspector takes the available slow-frame video surveillance data and uploads it to the trained CNN software program. Once filtering is complete, the CNN software identifies the fuel assemblies, clusters or classifies them, and calculates an item count. The CNN analysis determines that the number of fuel assemblies in the assembly hall matches the number of assemblies declared by the plant operator, thereby reducing or eliminating the need for an inspector to physically count each assembly.
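The image-recognition step in this use case rests on convolutional feature extraction. The following numpy-only sketch shows the core CNN operation (a 2-D convolution followed by a ReLU nonlinearity) detecting a vertical edge in a toy image; a deployed system would stack many such learned filters inside a deep-learning framework, and the image and filter here are invented for illustration.

```python
# Illustrative sketch of the building block behind CNN image recognition:
# slide a small filter over an image, producing a feature map that responds
# strongly where the filter's pattern (here, a vertical edge) occurs.
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)   # nonlinearity applied after each convolution

# A tiny "image" with one bright vertical edge, and a filter tuned to it.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_kernel = np.array([[-1.0, 1.0],
                        [-1.0, 1.0]])

feature_map = relu(conv2d(image, edge_kernel))
print(feature_map)   # responds along the column where the edge occurs
```

In a full CNN the filter weights are not hand-designed as above but learned from labelled training images, which is why the quality of the training set matters so much in the evaluation that follows.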


3.5 Technical Evaluations

The two use cases discussed in sections 3.4.1 and 3.4.2 were evaluated by a number of technical experts, including computer scientists and SMEs with expertise in ML and international nuclear safeguards. The technical evaluations covered benefits, risks and challenges, and the timeframe for expected deployment.

3.5.1 OCSVM Technical Evaluation

The technical evaluation of the first use case, OCSVM for data analysis and anomaly detection, identified a number of benefits for safeguards. For example, OCSVMs are capable of handling unlabelled heterogeneous data sets, such as those acquired from unattended monitoring systems for safeguards, and so offer a more flexible approach to identifying potential anomalies than supervised ML methods, which require labelled training data. They also allow users to make more general assumptions about the data being fed into the algorithm. Because OCSVMs are unsupervised and do not require labelled data (labelling is a hugely time-consuming task prone to human error), successfully identifying an anomaly or isolating an off-normal condition could save time and resources for data analysts and inspectors. Finally, assuming historical data from events that could help train the OCSVM algorithm to flag off-normal events or processes are available, a proof-of-concept could be easily demonstrated to help build transparency and trust, and thereby support deployment in the field. Technical evaluators estimated the timeframe for deployment in the field at approximately five years, to test and validate the approach through a sensitivity analysis, but it could take longer depending on IAEA acceptance of the capability.
Risks and challenges identified by the technical evaluation of the OCSVM method focused primarily on the training data. For example, should an adversary gain access to the data used to train the OCSVM, the data could be corrupted or manipulated in a way that might impact the method's ability to produce reliable, trustworthy results. In addition, technical evaluators found that the small amounts of training data needed for an OCSVM could make the method vulnerable to false alarms. To address this challenge, it was recommended that a robust feasibility study or proof-of-concept be completed prior to deployment.

3.5.2 CNN Technical Evaluation

The technical evaluation of the second use case, CNNs for PIV, identified benefits to safeguards such as reducing the time in the field needed to conduct redundant tasks like counting. Technical evaluators also noted that, due to a CNN's ability to recognize patterns and identify objects, it could provide a more nuanced picture of what is happening in a nuclear facility than a human might otherwise observe with the naked eye. Finally, depending on how well the CNN is trained, it could detect anomalies or off-normal conditions. The timeframe estimated for deployment was approximately five years, including testing and validation of the approach. However, similar to the OCSVM, the process could take longer depending on IAEA acceptance of the method for safeguards purposes.
Risks and challenges associated with the CNN also involved the training data. Depending on the quality of this
data, a CNN could produce false alarms. For example, if the CNN is not properly trained to recognize what is
'normal' and 'off-normal', it could see something different about an assembly, e.g. exterior colour, a sticker, or an
imperfection, and identify the assembly as off-normal or fail to count it at all. If an adversary were to learn how changes
to the training data affect the CNN's ability to produce reliable results, they could potentially learn how
to defeat it. Additionally, CNN training imagery could be mislabelled by a human, which would impair the network's
ability to recognize and count assemblies or other items. This could result in a false alarm about something innocuous,
requiring physical re-verification and thereby reducing efficiency. Finally, explaining how a CNN makes its
decisions remains an open area of research. Without verified testing, this could lead to questions about trustworthiness
and operator or IAEA acceptance.
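The pattern-recognition behaviour discussed above rests on the convolution operation at the core of a CNN. The minimal numpy sketch below (an illustrative fragment, not the evaluated PIV system) slides a hand-crafted vertical-edge filter over a tiny synthetic image; in a trained CNN such filter weights are learned from labelled imagery, which is why label quality bears so directly on what the network ends up recognizing.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2-D convolution (strictly, cross-correlation, as in most
    deep-learning frameworks): at each position, multiply the kernel
    elementwise against the image patch beneath it and sum."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# Tiny synthetic image: dark left half, bright right half -> one vertical edge.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# Hand-crafted vertical-edge detector; a CNN would learn weights like these.
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])

response = convolve2d(image, kernel)
# The response is nonzero only in the output column spanning the edge
# (column 2 of the 5x5 output) and zero over the uniform regions.
print(response)
```

A real PIV application would stack many such learned filters with nonlinearities and pooling, but the failure mode is visible even here: a filter tuned to the wrong feature, or trained on mislabelled examples, responds to stickers and colour variations rather than the structure of interest.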

4. CONCLUSION


Autonomous systems, AI and ML provide important opportunities to improve the effectiveness and efficiency
of IAEA safeguards. Beyond their potential to reduce the time and resources required for safeguards implementation
activities, they could help glean important insights, such as recognizing patterns or anomalies that might not otherwise
have been observed by human inspectors and analysts. Though these capabilities could yield significant
improvements to safeguards, they also present non-trivial challenges and risks to the operating environment. Most
notably, the ways in which these systems are trained to learn, decide and act must be carefully understood in order to
overcome barriers to deployment. This will require robust testing and evaluation to increase trust, transparency
and the likelihood of adoption for IAEA safeguards.

ACKNOWLEDGEMENTS

This work was funded by the U.S. Department of Energy, National Nuclear Security Administration's
(DOE/NNSA) Office of International Nuclear Safeguards under the Concepts and Approaches subprogram. Sandia
National Laboratories is a multimission laboratory managed and operated by National Technology & Engineering
Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for DOE/NNSA under contract
DE-NA0003525. SAND2018-8193 C

