Dylan A. Herman*¹, Cody Googin*², Xiaoyuan Liu*³, Alexey Galda⁴, Ilya Safro³, Yue Sun¹, Marco Pistoia¹, and Yuri Alexeev⁵

¹ JPMorgan Chase Bank, N.A., New York, NY, USA — {dylan.a.herman,yue.sun,marco.pistoia}@jpmorgan.com
² University of Chicago, Chicago, IL, USA — codygoogin@uchicago.edu
³ University of Delaware, Newark, DE, USA — {joeyxliu,isafro}@udel.edu
⁴ Menten AI, Palo Alto, CA, USA — alexey.galda@menten.ai
⁵ Argonne National Laboratory, Lemont, IL, USA — yuri@anl.gov

arXiv:2201.02773v2 [quant-ph] 13 Jan 2022

* These authors contributed equally to this work.
Contents

1 Introduction
6 Quantum Optimization
  6.1 Combinatorial Optimization
    6.1.1 Quantum Annealing
    6.1.2 Quantum Approximate Optimization Algorithm
    6.1.3 Variational Quantum Eigensolver
    6.1.4 Variational Quantum Imaginary Time Evolution (VarQITE)
    6.1.5 Optimization by Quantum Unstructured Search
  6.2 Convex Optimization
  6.3 Large-Scale Optimization
  6.4 Financial Applications
    6.4.1 Portfolio Optimization
      Combinatorial Formulations
      Convex Formulations
    6.4.2 Swap Netting
    6.4.3 Optimal Arbitrage
    6.4.4 Identifying Creditworthiness
    6.4.5 Financial Crashes
1 Introduction
Quantum computation relies on a fundamentally different means of processing and storing information than
today’s classical computers. The reason is that the information does not obey the laws of classical mechanics but
those of quantum mechanics. Usually, quantum-mechanical effects become apparent only at very small scales,
when quantum systems are properly isolated from the surrounding environments. These conditions, however,
make the realization of a quantum computer a challenging task. Even so, according to a McKinsey & Co.
report [233], finance is estimated to be the first industry sector to benefit from quantum computing (Section 2),
largely because of the potential for many financial use cases to be formulated as problems that can be solved
by quantum algorithms suitable for near-term quantum computers. This is important because current quantum
computers are small-scale and noisy, yet the hope is that we can still find use cases for them. In addition, a variety
of quantum algorithms will be more applicable when large-scale, robust quantum computers become available,
which will significantly speed up computations used in finance.
The diversity in the potential hardware platforms or physical realizations of quantum computation is com-
pletely unlike any of the classical computational devices available today. There exist proposals based on vastly
different physical systems: superconductors, trapped ions, neutral atoms, photons, and others (Section 3.3), yet
no clear winner has emerged. A large number of companies are competing to be the first to develop a quan-
tum computer capable of running applications usable in a production environment. In theory, any computation
that runs on a quantum computer may also be executed on a classical computer—the benefit that quantum
computing may bring is the potential reduction in time and memory space with which the computational tasks
are performed [253], which in turn may lead to unprecedented scalability and accuracy of the computations. In
addition, any classical algorithm can be modified (in a potentially nontrivial way) such that it can be executed on
a quantum computer, but without any speedup. Obtaining quantum speedup requires developing new algorithms
that specifically take advantage of quantum-mechanical properties. Thus, classical and quantum computers will
need to work together. Moreover, in order to solve a real-world problem, a classical computer should be able to
efficiently obtain the necessary data from a quantum computer.
This promised efficiency of quantum computers enables certain computations that are otherwise infeasible
for current classical computers to complete in any reasonable amount of time (Section 3). In general, however,
the speedup for each task can vary greatly or may even be currently unknown (Section 4). While these speedups,
if found, can have a tremendous impact in practice, they are typically difficult to obtain. And even if they are
discovered, the underlying quantum hardware must be powerful enough to minimize errors or an overhead that
counteracts the algorithmic speedup. We do not know exactly when such robust hardware will exist. Thus, the
goal of quantum computing research is to develop quantum algorithms (Section 4) that solve useful problems
faster and to build robust hardware platforms to run them on. The industry needs to understand the problems
that can best benefit from quantum computing and the extent of these benefits, in order to make full use of its
revolutionary power when production-grade quantum devices are available.
To this end, we offer a comprehensive overview of the applicability of quantum computing to finance. Ad-
ditionally, we provide insight into the nature of quantum computation, focusing particularly on specific financial
problems for which quantum computing can provide potential speedups compared with classical computing. The
sections in this article can be grouped into two parts. The first part contains introductory material: Section 2
introduces computationally intensive financial problems in the areas of numerical risk modeling (Section 2.2.1),
optimization (Section 2.2.2), and machine learning (Section 2.2.3) that potentially benefit from quantum comput-
ing, whereas Sections 3 and 4 introduce the core concepts of quantum computation and quantum algorithms. The
second part—the main focus of the article—reviews research performed by the scientific community to develop
quantum-enhanced versions of three major problem-solving techniques used in finance: Monte Carlo integration
(Section 5), optimization (Section 6), and machine learning (Section 7). The connections between financial prob-
lems and these problem-solving techniques are outlined in Table 1. Finally, Section 8 covers experiments on
current quantum hardware that have been performed by researchers to solve financial use cases.
2.1 Macroeconomic Challenges for Financial Institutions
The financial services industry is expected to reach a market cap of $28.53 trillion by 2025, with a compound
annual growth rate of 6% [275]. North America contributes 27% to this market and is second only to Europe.
While this industry is often considered incredibly stable and insulated from change, shifting government regu-
lations, new technologies, and customer expectations present challenges that require the industry to adapt. We
discuss here three key macroeconomic financial trends that can be impacted by quantum computing: keeping up
with regulations, addressing customer expectations and big data requirements, and ensuring data security.
problems lend themselves to being solved on near-term quantum computers. Since many financial problems take stochastic variables as inputs, they are often relatively tolerant of imprecision in the solutions compared with problems in chemistry and physics. In Table 1 we present an overview of some
problem categories, along with some example financial use cases, that potentially can benefit from the quantum
algorithms discussed in this survey. The remainder of this section discusses these problems in more detail.
Table 1: Financial Use Cases with Corresponding Classical and Quantum Solutions
with regulations. Because of the Basel III regulations mentioned earlier, financial institutions need to calculate
and meet requirements for their VaR, CVaR, and other metrics. However, the current classical approach of Monte
Carlo methods [141] is computationally intensive. Similar to the pricing problem mentioned above, a quantum
approach could also benefit this financial problem. In Section 5.1.2 we will explore the application of quantum algorithms to the Monte Carlo approach for computing risk metrics such as VaR and CVaR.
2.2.2 Optimization
Optimization problems are ubiquitous in almost all quantitative disciplines, from computer science and engi-
neering to economics and finance. In finance, one of the most common optimization problems is portfolio
optimization. Portfolio optimization is the process of selecting the best set of assets and their quantities, from
a pool of assets being considered, according to some predefined objective. The objective can vary depending on
the investor’s preference regarding financial risk and expected return. Modern portfolio theory [228] focuses on
the trade-offs between risk and return to produce what is known as an efficient portfolio, which maximizes the
expected return given a certain amount of risk. This trade-off relationship is represented by a curve known as the
efficient frontier. The expected return and risk of a portfolio can often be modeled respectively by looking at the
mean and variance of static portfolio returns. The problem setup for portfolio optimization can be formulated
as constrained utility maximization [228].
In portfolio optimization [106], each asset class, such as stocks, bonds, futures, and options, is assigned a
weight. Additionally, all assets within the class are allocated in the portfolio depending on their respective risks,
returns, time to maturity, and liquidity. The variables controlling how much the portfolio should invest in an
asset can be continuous or discrete, and the overall problem can contain variables of both types. Continuous
variables are suitable for representing the proportion of the portfolio invested in the asset positions when non-
discrete allocations are allowed. Discrete variables are used in situations where assets can be purchased or sold
only in fixed amounts. The signs of these variables can also be used to indicate long or short positions. When
risk is accounted for, the problem is typically a quadratic program, usually with constraints. Depending on the
formulation, this problem can be solved with techniques for convex [228] or mixed-integer programming [36, 227].
Speeding up the execution and improving the quality of portfolio optimization are particularly important when
the number of variables is large. This subject is explored in depth in Section 6.4.1.
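As a concrete classical baseline for the continuous, risk-aware formulation above, the two-asset mean-variance problem has a simple closed form. The sketch below uses hypothetical covariance numbers (not taken from the survey) to compute the minimum-variance weights under a full-investment constraint:

```python
# Minimal mean-variance sketch with hypothetical numbers (illustrative only).
# For two assets with variances var1, var2 and covariance cov, minimizing the
# portfolio variance w1^2*var1 + w2^2*var2 + 2*w1*w2*cov subject to w1 + w2 = 1
# gives the closed-form weight w1 = (var2 - cov) / (var1 + var2 - 2*cov).
var1, var2, cov = 0.04, 0.09, 0.01   # hypothetical annualized values

w1 = (var2 - cov) / (var1 + var2 - 2 * cov)
w2 = 1 - w1
port_var = var1 * w1**2 + var2 * w2**2 + 2 * cov * w1 * w2
# The optimal mix has lower variance than holding either asset alone.
```

Real portfolio problems add expected-return objectives, cardinality constraints, and transaction costs, which is what pushes the discrete formulations of Section 6.4.1 into mixed-integer (and QUBO-style) territory.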
Optimization may also be used for feature selection in statistical or machine learning models. One use case
in finance that could potentially benefit from this type of optimization is credit scoring. Credit scoring employs
statistical analysis to determine the creditworthiness of an individual. This process is run by lenders and financial
institutions. Credit score is influenced by payment history, accounts owed, length of credit history, new credit,
and credit mix. This information can be gathered from historical business data, public filings, and payment
collections. Currently, credit scores are often predicted by using classical regression models or machine learning
algorithms [102, 329]. In order to get more accurate results, it is helpful for the machine learning algorithms to
train on more data; however, this process is lengthy. Using optimization methods, quantum approaches can find
critical features to determine a person’s creditworthiness. Credit scoring methods using quantum algorithms for
feature selection will be discussed in Section 6.4.4.
Other financial use cases that rely on optimization techniques include hedging and swap netting (Section
6.4.2), identification of arbitrage opportunities (Section 6.4.3), and financial crash prediction (Section 6.4.5).
These problems will be addressed as specific use cases in Section 6.
3 Quantum Computing Concepts
Quantum computing is an emerging and promising field of science. Global venture capital funds, public and
private companies, and governments have invested millions of dollars in this technology. Quantum computing
is driven by the need to solve computationally hard problems efficiently and accurately. For decades, the advancement of classical computing power has remarkably followed the well-known Moore's law. Initially
introduced by Gordon Moore in 1965, Moore’s law forecasts that the number of transistors in a classical computer
chip will double every two years, resulting in lower prices for manufacturers and consumers. However, transistors
are already at the size at which quantum mechanical effects become apparent and problematic [205]. These effects
will only become more difficult to manage as classical chips decrease in size. Thus, it is imperative to investigate
quantum computing, which harnesses these effects to perform classically infeasible computations.
Google has demonstrated an enormous computational speedup using quantum computing for the task of
simulating the output of pseudo-random quantum circuits [25]. Specifically, Google claimed that what takes
its quantum device, Sycamore, 200 seconds to accomplish would take Summit, one of the most powerful su-
percomputers, approximately 10,000 years. Thus, quantum supremacy has been achieved. As defined by John
Preskill [268], quantum supremacy is the promise that certain computational (not necessarily useful) tasks can
be executed exponentially faster on a quantum processor than on a classical one. The decisive demonstration
of quantum devices to solve more useful problems faster and/or more accurately than classical computers, such
as large-scale financial problems (Section 2.2), is called quantum advantage. Quantum advantage is more elusive
and, arguably, has not been demonstrated yet. The main challenge is that existing quantum hardware does not
yet seem capable of running algorithms on large enough problem instances. Current quantum hardware (Section
3.3) is in the noisy intermediate-scale quantum (NISQ) technology era [269], meaning that the current quantum
devices are underpowered and suffer from multiple issues. The fault-tolerant era refers to a, yet unknown, period
in the future in which large-scale quantum devices that are robust against errors will be present. In the next two
subsections we briefly discuss quantum information (Section 3.1) and models for quantum computation (Section
3.2). In the last subsection (Section 3.3) we give an overview of the current state of quantum hardware and the
challenges that must be overcome.
¹ Generally, quantum states live in a projective Hilbert space, which can be infinite-dimensional [294].
measurement is equal to ⟨ψ|A|ψ⟩, where ⟨ψ|, called a "bra," is the Hermitian adjoint. In physics, the quantum Hamiltonian is the observable for a system's energy. A ground state of a quantum Hamiltonian is a state vector in an eigenspace associated with the smallest eigenvalue and thus has the lowest energy. Transformations between states are restricted to unitary operators and (non-unitary) measurements. The dynamics of a closed quantum system follow the Schrödinger equation, iħ|ψ̇(t)⟩ = H|ψ(t)⟩, where ħ is the reduced Planck constant and H is the system's quantum Hamiltonian [253, 294]. The unitary operator e^(−iHΔ/ħ), which transforms a solution |ψ(t)⟩ to the Schrödinger equation at one point in time t into another one at a different point in time t + Δ, is called the time-evolution operator or propagator. There can also be a time-dependent Hamiltonian, H(t), where the operator H changes over time. However, the propagator is computed using the product integral [112] to take into account that H(tᵢ) and H(tⱼ) may not commute for all tᵢ, tⱼ in the specified evolution time interval [294]. The simulation of time evolution on a quantum computer is called Hamiltonian simulation. For quantum algorithms, the Hamiltonian can be an arbitrary observable with no connection to a physical system, and ħ := 1 is often assumed. Three important observables on a single qubit are the Pauli operators:² X := |+⟩⟨+| − |−⟩⟨−|, Y := |i⟩⟨i| − |−i⟩⟨−i|, and Z := |0⟩⟨0| − |1⟩⟨1|, where |±⟩ := (|0⟩ ± |1⟩)/√2 and |±i⟩ := (|0⟩ ± i|1⟩)/√2, and |ψ⟩⟨ψ| denotes the vector outer product. For n qubits, the set {|k⟩ | k ∈ {0, 1}ⁿ} forms the computational basis. Measuring in the computational basis probabilistically projects the quantum state onto a computational basis state. A measurement in the context of quantum computation, without further clarification, usually refers to a measurement in the computational basis.
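The outer-product definitions of the Pauli operators can be checked directly with small complex matrices; a minimal pure-Python sketch (no quantum library assumed):

```python
import math

# Pure-Python check of the outer-product definitions of X, Y, and Z.
# Convention: |0> = (1, 0) and |1> = (0, 1) as complex column vectors.
s = 1 / math.sqrt(2)
ket0, ket1 = [1 + 0j, 0j], [0j, 1 + 0j]
plus, minus = [s, s], [s, -s]                  # |+>, |->
plus_i, minus_i = [s, s * 1j], [s, -s * 1j]    # |i>, |-i>

def outer(u, v):
    """|u><v| as a 2x2 matrix; the bra contributes a complex conjugate."""
    return [[ui * vj.conjugate() for vj in v] for ui in u]

def msub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

X = msub(outer(plus, plus), outer(minus, minus))          # [[0, 1], [1, 0]]
Y = msub(outer(plus_i, plus_i), outer(minus_i, minus_i))  # [[0, -i], [i, 0]]
Z = msub(outer(ket0, ket0), outer(ket1, ket1))            # [[1, 0], [0, -1]]
```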
A positive semi-definite operator ρ on the quantum state space with Tr(ρ) = 1 is called a density operator, where Tr(·) is the trace operator. The classical probabilistic mixture of pure states |ψᵢ⟩ (in other words, state vectors) with probabilities pᵢ has density operator ρ := Σᵢ pᵢ|ψᵢ⟩⟨ψᵢ| and is called a mixed state. In general, a density operator ρ represents a mixed state if Tr(ρ²) < 1 and a pure state if Tr(ρ²) = 1. Fidelity is one of the distance metrics for quantum states ρ and σ, defined as F(ρ, σ) = Tr(√(ρ^(1/2) σ ρ^(1/2))) [253]. The fidelity of an operation refers to the computed fidelity between the quantum state that resulted after the operation was performed and the expected state. Decoherence is the loss of information about a quantum state to its environment, due to interactions with it. Decoherence constitutes an important source of error for near-term quantum computers.
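The purity test Tr(ρ²) is easy to see on a single qubit; a small pure-Python sketch with illustrative states:

```python
# Purity check Tr(rho^2) for one qubit, sketched in pure Python.
def outer(u, v):
    return [[ui * vj.conjugate() for vj in v] for ui in u]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

ket0, ket1 = [1 + 0j, 0j], [0j, 1 + 0j]

rho_pure = outer(ket0, ket0)   # |0><0|, a pure state
# Equal classical mixture of |0> and |1>: rho = 0.5|0><0| + 0.5|1><1|
rho_mixed = [[0.5 * (p + q) for p, q in zip(rp, rq)]
             for rp, rq in zip(outer(ket0, ket0), outer(ket1, ket1))]

purity = lambda rho: trace(matmul(rho, rho)).real
# purity(rho_pure) = 1 (pure); purity(rho_mixed) = 0.5 < 1 (mixed)
```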
For further details, the works of Nielsen and Chuang [253] and Kitaev et al. [199] are the standard references
on the subject of quantum computation. The former covers a variety of topics from quantum information theory,
while the latter focuses on quantum computational complexity theory.
² The Pauli operators are also sometimes denoted σx, σy, and σz, respectively.
³ There are other models of quantum computation that are not based on qubits, such as n-level quantum systems, for n > 2, called qudits [331], and continuous-variable, infinite-dimensional systems [64]. This article discusses only quantum models and algorithms based on qubits.
circuit model is analogous to the Boolean circuit model of classical computation [199]. An important difference
with quantum computation is that because of unitarity all gates are invertible. A shot is a single execution of a
quantum circuit followed by measurements. Multiple shots are required if the goal is to obtain statistics.
This model can be utilized to realize universal quantum computation by considering circuits with n qubits
built using any type of single-qubit gate and only one type of two-qubit gate [253]. Any n-qubit unitary can be
approximated in this manner to arbitrarily small error. The two-qubit gates can be used to mediate entanglement.
A discrete set of gates that can be used for universal quantum computation is called a basis gate set. The basis
gate set realized by a quantum device is called its native gate set. The native gate set typically consists of a finite
number of (parameterized and non-parameterized) single-qubit gates and one two-qubit entangling operation.
An arbitrary single-qubit gate can be approximately decomposed into the native single-qubit gates.
Some common single-qubit gates are the Hadamard gate (H), the Phase (S) and π/8-Phase (T) gates, the Pauli gates (X, Y, Z), and the rotation gates generated by Pauli operators (Rx(θ), Ry(θ), Rz(θ)). Examples of two-qubit
gates are the controlled NOT gate denoted CX or CNOT and the SWAP gate. The H, S, and CX gates generate,
under the operation of composition, the Clifford group, which is the normalizer of the group of Pauli gates [253].
The time complexity of an algorithm is measured in terms of the depth of the circuit implementing it; when the circuit is run on hardware, depth is counted in native gates. The majority of commercially available quantum
computers implement the quantum circuit model.
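The standard relationships among these gates, such as T² = S and S² = Z, follow directly from their usual diagonal matrix forms and can be checked numerically; a small pure-Python sketch:

```python
import cmath, math

# Numerical check of standard gate identities: T^2 = S and S^2 = Z,
# using the conventional diagonal matrix forms of these gates.
def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S = [[1, 0], [0, 1j]]                           # Phase gate
T = [[1, 0], [0, cmath.exp(1j * math.pi / 4)]]  # pi/8-Phase gate
Z = [[1, 0], [0, -1]]                           # Pauli Z

TT = matmul2(T, T)   # equals S: diag(1, e^{i*pi/2}) = diag(1, i)
SS = matmul2(S, S)   # equals Z: diag(1, i^2) = diag(1, -1)
```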
enough but require a large number of logical qubits with extremely high-fidelity quantum gates. In this work
we cover both types of algorithms. One should keep in mind that arguably none of the discussed algorithms
implemented on NISQ devices provide a decisive advantage over classical algorithms and computers yet.
3.3.1 Noise
Qubits lose their desired quantum state or decohere (Section 3.1) over time. The decoherence times of each of
the qubits are important attributes of a quantum device. Various mechanisms of qubit decoherence exist, such
as quantum amplitude and phase relaxation processes and depolarizing quantum noise [253]. One potentially
serious challenge for superconducting systems is cosmic rays [326]. While single-qubit decoherence is relatively
well understood, the multiqubit decoherence processes, generally called “crosstalk,” pose more serious challenges
[290]. Moreover, two-qubit operations have error rates an order of magnitude higher than those of single-qubit operations.
This makes it difficult to entangle a large number of qubits without errors. Various error-mitigation techniques
[62, 333] have been developed for the near term. In the long term, quantum error correction using logical
operations, briefly mentioned in Section 3.2, will be necessary [130, 198]. However, this requires multiple orders
of magnitude more physical qubits than available on today’s NISQ systems. Thus, algorithms executed on current
hardware must contend with the presence of noise and are typically limited to low-depth circuits. In addition,
current quantum error correction techniques theoretically combat local errors [130, 198, 290]. The robustness of
these techniques to nonlocal errors is still being investigated [131].
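As a toy model of the decoherence discussed above (an assumed single-qubit depolarizing channel, not a statement about any specific device), the loss of purity can be written in closed form:

```python
# Toy single-qubit depolarizing channel: with probability p the state is
# replaced by the maximally mixed state I/2. Starting from |0><0|, the
# resulting density matrix is diagonal, so its purity has a closed form.
def purity_after_depolarizing(p: float) -> float:
    a = 1 - p / 2          # population of |0> in rho' = (1-p)|0><0| + p*I/2
    b = p / 2              # population of |1>
    return a * a + b * b   # Tr(rho'^2)

# Purity falls from 1 (no noise) toward 0.5 (fully mixed) as p grows.
```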
3.3.2 Connectivity
Quantum circuits need to be optimally mapped to the topology of a quantum device in order to minimize errors
and total runtime. With current quantum hardware, qubits can interact only with neighboring subsets of other
qubits. Existing superconducting and trapped-ion processors have a fixed topology that defines the allowed
connectivity. However, trapped-ion devices can change the ordering of the ions in the trap to allow originally non-neighboring qubits to interact [264]. Since this approach utilizes a different degree of freedom from that used
to store the quantum information for computation, technically the device can be viewed as having an all-to-all
connected topology. For existing superconducting processors, the positioning of the qubits cannot change. As a
result, two-qubit quantum operations between remote qubits have to be mediated by a chain of additional two-
qubit SWAP gates via the qubits connecting them. Moreover, the two-qubit gate error of current NISQ devices
is high. Therefore, quantum circuit optimizers and quantum hardware must be developed with these limitations
in mind. Connectivity is also a problem with current quantum annealers, which is discussed in Section 6.1.1.
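The routing overhead described above can be illustrated with a toy model (an assumed linear-chain topology, not any particular device):

```python
# Toy routing-cost illustration on an assumed linear chain 0-1-2-...-n:
# to apply a two-qubit gate between qubits i and j, their states must first
# be brought adjacent, which takes |i - j| - 1 SWAPs in one direction.
def swaps_needed(i: int, j: int) -> int:
    return max(abs(i - j) - 1, 0)

# Adjacent qubits need no SWAPs; distant qubits pay a cost linear in distance,
# and each SWAP is itself built from several noisy two-qubit gates.
```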
Computational complexity, the efficiency in time or space with which computational problems can be solved, is expressed using the standard asymptotic "Big-O" notation [95]: the set O(f(n)) for an upper bound on complexity (worst case) and the set Ω(g(n)) for a lower bound on complexity (best case). Also, h(n) ∈ Θ(f(n)) if and only if h ∈ O(f(n)) and h ∈ Ω(f(n)).⁴ Computational problems, which are represented as functions, are
divided into complexity classes. The most common classes are defined based on decision problems, which can
be represented as Boolean functions. P denotes the complexity class of decision problems that can be solved in
polynomial time (i.e., the time required is upper bounded, in “Big-O” terms, by a polynomial in the input size)
by a deterministic Turing machine (TM), in other words, a traditional classical computer [199, 253]. An amount of
time or space required for computation that is upper bounded by a polynomial is called asymptotically efficient,
whereas, for example, upper bounded by an exponential function is not. However, asymptotic efficiency does not
necessarily imply efficiency in practice; the coefficients or order of the polynomial can be very large. NP denotes
the class of decision problems for which proofs of correctness, called witnesses, exist that can be verified in polynomial time on a deterministic TM [199].
A problem is hard for a complexity class if any problem in that class can be reduced in polynomial time to
it, and it is complete if it is both hard and contained in the class [23]. Problems in P can be solved efficiently on
a classical or quantum device. While NP contains P, it also contains many problems not known to be efficiently
solvable on either a quantum or classical device. #P is a set of counting problems. More specifically, it is the class
of functions that count the number of witnesses, verifiable in polynomial time on a deterministic TM, for NP problems [23]. BQP, which contains P, denotes the class of decision problems solvable in polynomial time
on a quantum computer with bounded probability of error [253]. A common way to represent the complexity
of quantum algorithms is by using query complexity [15]. Roughly speaking, the problem setup provides the
algorithm access to functions that the algorithm considers as “black boxes.” These are represented as unitary
operators called quantum oracles. The asymptotic complexity is computed based on the number of calls, or
queries, to the oracles.
The goal of quantum algorithms research is to develop quantum algorithms that provide computational
speedups, a reduction in computational complexity. However, it is common to find algorithms with improved
efficiency in practice without any proven reduction in complexity. These algorithms typically utilize heuristics.
As mentioned in Section 3.3, a majority of algorithms for NISQ fall into this category. When one can theoretically
show a reduction in asymptotic complexity, the algorithm has a provable speedup for the problem. In the
discussions that follow, we emphasize the category into which each algorithm currently falls.
⁴ Let ∗ signify O, Ω, or Θ. It is common practice to abuse notation and use the symbol = instead of ∈ to signify that g(n) is in ∗(f(n)) [95]. Similarly, it is common to interchange the phrases "the complexity is ∗(f(n))" and "the complexity is in ∗(f(n))."
say, by adding a phase shift of π). Also, assume there is a unitary U and |Φ⟩ := U|ψ⟩ for some input state |ψ⟩. In addition, if |Φ⟩ has nonzero overlap with the marked state |φ⟩, namely, ⟨Φ|φ⟩ ≠ 0, then without loss of generality |Φ⟩ = sin(θₐ)|φ⟩ + cos(θₐ)|φ⊥⟩, where ⟨φ|φ⊥⟩ = 0 and sin(θₐ) = ⟨Φ|φ⟩. For example, in the case of Grover's algorithm, |Φ⟩ is a uniform superposition over states corresponding to the elements of X, and |φ⟩ is the marked state. QAA will amplify the amplitude sin(θₐ), and as a consequence the probability sin²(θₐ) of measuring |φ⟩ (using an observable with |φ⟩ as an eigenstate), to Ω(1)⁵ using O(1/sin(θₐ)) queries to U and Oφ. This is done utilizing a series of interleaved reflection operators involving U, |ψ⟩, and Oφ [60].
This algorithm can be understood as a quantum analogue of classical probabilistic boosting techniques and achieves a quadratic speedup [60]. Classically, O(1/sin²(θₐ)) samples would be required to measure |φ⟩ with Ω(1) probability. QAA is an indispensable technique for quantum computation due to the inherent randomness of quantum mechanics. Improvements to the algorithm have been made to handle issues that occur when it is not possible to perform the required reflection about |ψ⟩ [42], when the number of queries to make (which requires knowing sin(θₐ)) is not known [55, 151], and when Oφ is imperfect with bounded error [176]. The last two improvements are also useful for Grover's algorithm (Section 4.1). A version also exists for quantum algorithms (i.e., a unitary U) that can be decomposed into multiple steps with different stopping times [14].
⁵ Ω(1) probability means the probability that the event occurs is at least some constant. This is used to signify that the desired output of the algorithm occurs with a probability that is independent of the variables in the algorithm's time complexity or query complexity. In addition, this implies that an asymptotically efficient number of repetitions (classical probabilistic boosting) can be used to increase the probability of success close to 1 (e.g., see Section 4.3).
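The QAA query count can be checked numerically in the standard rotation picture, in which k rounds take the success amplitude from sin(θₐ) to sin((2k+1)θₐ). The sketch below assumes a single marked item in a search space of size N = 1024 (an illustrative choice):

```python
import math

# Amplitude amplification in the rotation picture (illustrative sketch):
# k rounds rotate the success amplitude from sin(theta_a) to sin((2k+1)*theta_a).
N = 1024                                   # assumed search-space size, one marked item
theta_a = math.asin(math.sqrt(1 / N))      # sin(theta_a) = 1/sqrt(N)
k = round(math.pi / (4 * theta_a) - 0.5)   # near-optimal rounds, O(1/sin(theta_a))

p_before = math.sin(theta_a) ** 2                  # 1/N without amplification
p_after = math.sin((2 * k + 1) * theta_a) ** 2     # close to 1 after k rounds
```

Here roughly 25 rounds suffice, versus the order of N classical samples the paragraph above describes, which is the quadratic speedup.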
Suzuki–Trotter methods [41, 87] were used for sparse Hamiltonian simulation. However, a variety of more efficient techniques have been developed since the paper's publication [43, 223]. These potentially reduce the original HHL's quadratic dependence on sparsity. HHL's query complexity, which is independent of the complexity of Hamiltonian simulation, is O(κ²/ε) [97].
Alternative quantum linear systems solvers also exist that have better dependence on some of the parameters, such as almost linear dependence on the condition number from Ambainis [14] and polylogarithmic dependence on 1/ε for precision from Childs, Kothari, and Somma (CKS) [86]. In addition, Wossnig et al. [340] utilized the quantum singular value estimation (QSVE) algorithm of Kerenidis and Prakash [186] to implement a QLS solver for dense matrices. This algorithm has O(√N polylog(N)) dependence on N and hence offers no exponential speedup. However, it still obtains a quadratic speedup over HHL for dense matrices. Following this, Kerenidis and Prakash generalized the QSVE-based linear systems solver to handle both sparse and dense matrices and introduced a technique for spectral norm estimation [187]. In addition, the quantum singular value transform (QSVT) framework provides methods for QLS that have the improved dependencies on κ and 1/ε mentioned above [71, 147]. The QSVT framework also provides algorithms for a variety of other linear algebraic routines, such as implementing singular-value-threshold projectors [136, 186] and matrix-vector multiplication [71]. As an alternative to QLS based on the QSVT, there is a discrete-time adiabatic (Section 3.2.2) approach [115] to the QLSP by Costa et al. [97] that has optimal [158] query complexity: linear in κ and logarithmic in 1/ε. Since QLS algorithms invert Hermitian matrices, they can also be used to compute the Moore–Penrose pseudoinverse of an arbitrary matrix. This requires computing the Hermitian dilation of the matrix and filtering out singular values that are near zero [136, 158].
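The Hermitian dilation mentioned above is a standard construction; a small pure-Python sketch, with an illustrative input matrix:

```python
# Hermitian dilation of an arbitrary (possibly non-square) matrix A:
# H = [[0, A], [A^dagger, 0]] is Hermitian, and its eigenvalues come in
# pairs +/- sigma for each singular value sigma of A.
def hermitian_dilation(A):
    m, n = len(A), len(A[0])
    H = [[0j] * (m + n) for _ in range(m + n)]
    for i in range(m):
        for j in range(n):
            H[i][m + j] = A[i][j]                 # upper-right block: A
            H[m + j][i] = A[i][j].conjugate()     # lower-left block: A^dagger
    return H

A = [[1 + 2j, 0j], [3j, 4 + 0j]]                  # illustrative input
H = hermitian_dilation(A)
# H equals its own conjugate transpose by construction.
```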
While solving the QLSP does not provide classical access to the solution vector, this can still be done
by utilizing methods for vector-state tomography [188]. Applying tomography would no longer allow for an
exponential speedup in the system size N . However, polynomial speedups using tomography for some linear-
algebra-based algorithms are still possible (Section 6.2). In addition, without classical access to the solution,
certain statistics, such as the expectation of an observable with respect to the solution state, can still be obtained
without losing the exponential speedup in N. Providing quantum access to A and b is a difficult problem that can,
in theory, be dealt with by using quantum models for data access. Two main models of data access for quantum
linear algebra routines exist [71]: the sparse-data access model and the quantum-accessible data structure. The
sparse-data access model provides efficient quantum access to the nonzero elements of a sparse matrix. The
quantum-accessible data structure, suitable for fixed-sized, non-sparse inputs, is a classical data structure stored
in a quantum memory (e.g., qRAM) and provides efficient quantum queries to its elements [71].
Lastly, for problems that involve only low-rank matrices, classical Monte Carlo methods for numerical linear
algebra (e.g., the FKV algorithm [132]) can be used. These techniques have been used to produce classical
algorithms that have an asymptotic exponential speedup in dimension for various problems involving low-rank
linear algebra (e.g., some machine learning problems) [80, 313]. As of this writing, these classical dequantized
algorithms have impractical polynomial dependence on other parameters, making the quantum algorithms still
useful. In addition, since these results do not apply to sparse linear systems, there is still the potential for
provable speedups for problems involving sparse, high-rank matrices.
4.7 Variational Quantum Algorithms
Variational quantum algorithms (VQAs) [69], also known as classical/quantum hybrid algorithms, are a class of
algorithms, typically for gate-based devices, that modify the parameters of a quantum (unitary) procedure based
on classical information obtained from running the procedure on a quantum device. This information is typically
in the form of a cost function dependent on the expectations of observables with respect to the state that the
quantum procedure produces. In general, the training of quantum variational algorithms is NP-hard [47]. In
addition, these methods are heuristic in nature. The prototypical quantum-circuit-based variational algorithm is
called the variational quantum eigensolver (VQE) [263] and is used to compute the minimum eigenvalue of an
observable (e.g., the ground-state energy of a physically realizable Hamiltonian). However, it is also applicable
to optimization problems and quantum linear systems [61, 173]. VQE utilizes one of the quantum variational
principles based on the Rayleigh quotient of a Hermitian operator (Section 6.1.3). The quantum procedure that
provides the state, called the ansatz, is typically a parameterized quantum circuit (PQC) built from Pauli rotations
and two-qubit gates (Section 3.2.1). For VQAs in general, finding the optimal set of parameters is a non-convex
optimization problem [175]. A variation of this algorithm, specific to combinatorial optimization problems,
known as the quantum approximate optimization algorithm (QAOA) [127], utilizes the alternating operator
ansatz [155] and applies only to diagonal observables. There are also variational algorithms for approximating
certain dynamics, such as real-time and imaginary-time quantum evolution that utilize variational principles
for dynamics [344] (e.g., McLachlan’s principle [231]). Furthermore, variational algorithms utilizing PQCs and
various cost functions have been applied to machine learning tasks (Section 7.6) [236]. VQAs are widely seen as
one of the leading potential applications of near-term quantum computing.
⁶ Somma et al. [305] produced a quantum algorithm with provable speedup, in terms of the spectral gap, for
simulated annealing using quantum walks (Section 4.6).
The QAE algorithm (Section 4.4) provides a quadratic speedup for Monte Carlo integration [238].⁷ With
QAE, using the derivative pricing example, the error in the expectation computation decays as $O(1/N_q)$, where
$N_q$ is the number of queries made to a quantum oracle that computes $g(X_1, \ldots, X_n)$. This is in contrast to
the complexity in terms of the number of samples of $g(X_1, \ldots, X_n)$ mentioned earlier for classical Monte Carlo
integration. Thus, if samples are considered as classical queries, QAE requires quadratically fewer queries than
classical MCI to achieve the same desired error.
The general procedure for Monte Carlo integration utilizing QAE is outlined below [70]. Let Ω be the set
of potential sample paths ω of a stochastic process distributed according to p(ω), and let $f : \Omega \to A$ be a
real-valued function on Ω, where $A \subset \mathbb{R}$ is bounded. The task is to compute E[f] [238], which can be achieved
with QAE in three steps as follows.

1. Construct a unitary operator $P_l$ to load a discretized and truncated version of p(ω). The probability value
p(ω) translates to the amplitude of the quantum state $|\omega\rangle$ representing the discrete sample path ω. In
mathematical form, it is
$$P_l |0\rangle = \sum_{\omega \in \Omega} \sqrt{p(\omega)}\, |\omega\rangle. \quad (1)$$

2. Convert f into a normalized function $\tilde{f} : \Omega \to [0, 1]$, and construct a unitary $P_f$ that computes $\tilde{f}(\omega)$ and
loads the value onto the amplitude of $|\omega\rangle$.⁸ The resultant state after applying $P_l$ and $P_f$ is
$$P_f P_l |0\rangle = \sum_{\omega \in \Omega} \sqrt{\big(1 - \tilde{f}(\omega)\big)\,p(\omega)}\, |\omega\rangle |0\rangle + \sqrt{\tilde{f}(\omega)\,p(\omega)}\, |\omega\rangle |1\rangle. \quad (2)$$

3. Using the notation from Section 4.4, perform quantum amplitude estimation with $U = P_f P_l$ and an
oracle, $O_\phi$, that marks states whose last qubit is $|1\rangle$. The result of QAE will be an approximation
to $\sum_{\omega \in \Omega} \tilde{f}(\omega) p(\omega) = E[\tilde{f}]$. This value can be estimated to a desired error $\epsilon$ utilizing $O(1/\epsilon)$ evaluations of U
and its inverse [238]. Then scale $E[\tilde{f}]$ back to the original bounded range, A, of f to obtain E[f].
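The three steps above can be sketched numerically. The following statevector toy, with an illustrative four-path distribution that is not from the paper, builds the state in Equation (2) and shows that the probability of measuring the last qubit as $|1\rangle$ equals $E[\tilde{f}]$, the quantity QAE estimates:

```python
import numpy as np

# Toy setup: 4 sample paths with probabilities p(w) and normalized payoffs
# f~(w). All values here are illustrative.
p = np.array([0.1, 0.2, 0.3, 0.4])         # discretized, truncated p(w)
f_tilde = np.array([0.0, 0.25, 0.5, 1.0])  # f~ : Omega -> [0, 1]

# Step 1: P_l |0> = sum_w sqrt(p(w)) |w>
state_pl = np.sqrt(p)

# Step 2: P_f attaches an ancilla qubit; ordering is |w>|a>, with the
# ancilla as the least-significant qubit.
state = np.zeros(2 * len(p))
state[0::2] = np.sqrt((1 - f_tilde) * p)   # |w>|0> amplitudes
state[1::2] = np.sqrt(f_tilde * p)         # |w>|1> amplitudes

# Step 3 (what QAE estimates): the probability that the ancilla is |1>
# equals E[f~] = sum_w f~(w) p(w).
a = np.sum(state[1::2] ** 2)
print(round(a, 3))  # 0.6 = E[f~] for this toy distribution
```

QAE would estimate the amplitude `a` with $O(1/\epsilon)$ oracle calls rather than the $O(1/\epsilon^2)$ samples classical MCI needs.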
Options. The definition of an option was mentioned in Section 2.2.1. The goal of option pricing is to determine
the current fair value of an option given the sources of uncertainty about the future movements of the underlying
asset price and other market variables. In order to numerically estimate the fair value, a large number of samples
of the market variables are generated, upon which Monte Carlo integration is applied to compute the expected
value of a payoff function [56].
Stamatopoulos et al. [307] constructed quantum procedures for the uncertainty loading Pl and the payoff
function Pf for European options, which are path independent. Pl loads a distribution over the computational
basis states representing the price at maturity. Pf approximately computes the rectified linear function, char-
acteristic of the payoff for European options. Quantum MCI has also been applied to path-dependent options
(Section 2.2.1) such as barrier and Asian options [274, 307], and multiasset options [307]. For these options,
prices for different assets or at different points on a sample path are represented by separate quantum registers,
and the payoff operator Pf is applied to all of these registers. Pricing of more complicated derivatives with QAE
can be found in the literature. For example, Chakrabarti et al. performed an in-depth resource estimation and
error analysis for QAE applied to the pricing of autocallable options and target accrual redemption forwards [70].
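As a point of reference for what quantum MCI accelerates, here is the classical Monte Carlo estimate of a European call's discounted expected payoff under geometric Brownian motion; all parameter values are illustrative:

```python
import numpy as np

# Classical Monte Carlo pricing of a European call under geometric Brownian
# motion -- the expectation E[max(S_T - K, 0)] that quantum MCI accelerates.
# Parameter values are illustrative.
rng = np.random.default_rng(7)
s0, k, r, sigma, t = 100.0, 100.0, 0.05, 0.2, 1.0
n = 200_000

z = rng.standard_normal(n)
s_t = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
payoff = np.maximum(s_t - k, 0.0)          # rectified linear payoff
price = np.exp(-r * t) * payoff.mean()     # discounted expectation

print(round(price, 2))  # close to the Black-Scholes value of ~10.45
```

The error of this estimate decays as $O(1/\sqrt{n})$ in the number of samples; QAE with the loading operators described above achieves $O(1/N_q)$ in the number of oracle queries.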
⁷ To disambiguate, there are classical computational techniques called quantum Monte Carlo methods used in
physics to classically simulate quantum systems. These are not the topic of this discussion [68].
⁸ In general, quantum arithmetic methods [156] can be used to load $\arcsin\sqrt{\tilde{f}(\omega)}$ into a register and perform
controlled rotations to load $\sqrt{\tilde{f}(\omega)}$ onto the amplitudes [70].
Collateralized Debt Obligations. A CDO is a derivative that is backed by a pool of loans and other
assets, which serve as collateral if the loans default. The tool is used to free up capital and shift risk. A typical
CDO pool is divided into three tranches: equity, mezzanine, and senior. The equity tranche is the first to bear
loss; the mezzanine tranche investors bear loss if the loss is greater than the first attachment point; and the
senior tranche investors lose money if the loss is greater than the second attachment point. Although the senior
tranche is protected from loss, other default events can cause the CDO to collapse, such as events that caused
the 2008 financial crisis [314].
Conditional independence models [211] are often used for estimating the chances of parties defaulting on the
credit in the CDO pool. In such models, given the systemic risk Z, the default risks X1 , . . . , Xn of the assets
involved are independent. Z is then used as a latent variable to introduce correlations between the default risks.
The distributions of the default risks and the systemic risk are usually either Gaussian [211] or normal-inverse
Gaussian [29].
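A minimal simulation of the one-factor Gaussian conditional independence model, with illustrative default probabilities and correlation, showing that defaults are independent given Z and that the marginal default rates are recovered:

```python
import numpy as np
from statistics import NormalDist

# One-factor Gaussian conditional independence model for defaults:
# asset k defaults when sqrt(rho)*Z + sqrt(1-rho)*eps_k < Phi^{-1}(p_k).
# Given the systemic factor Z, defaults are independent.
# The probabilities and correlation below are illustrative.
nd = NormalDist()
p = np.array([0.01, 0.02, 0.05])                  # marginal default probabilities
thresh = np.array([nd.inv_cdf(pk) for pk in p])   # default thresholds Phi^{-1}(p_k)
rho = 0.3                                         # correlation to systemic risk Z

rng = np.random.default_rng(0)
n = 100_000
z = rng.standard_normal((n, 1))                   # systemic risk Z
eps = rng.standard_normal((n, len(p)))            # idiosyncratic risks
defaults = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps < thresh

# The simulated marginal default rates recover p.
print(np.round(defaults.mean(axis=0), 3))
```

In the quantum setting, the rotations implementing $P_l$ load exactly these Z-conditional default probabilities onto qubit amplitudes.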
Tang et al. [314] presented quantum procedures for both the Pl and Pf operators used for quantum Monte
Carlo integration. The goal was to utilize QAE to estimate the expected loss of a given tranche. Pl is a
composition of rotations to load the uncorrelated probabilities of default (i.e., Bernoulli random variables). Then
a procedure is used for loading the distribution of Z, and controlled rotations are used to apply the correlations.
Pf computes the loss under a given realization of the variables X1 , . . . , Xn and uses a quantum comparator to
determine whether the loss falls within the specified tranche range. QAE is then used, instead of classical MCI,
to compute the expectation of the total loss given that it falls within the tranche range.
Value at Risk. As first mentioned in Section 2.2.1, VaR and CVaR are common metrics used for risk
analysis. Mathematically, VaR at confidence $\alpha \in [0, 1]$ is defined as $\mathrm{VaR}_\alpha[X] = \inf_{x \ge 0}\{x \mid \mathbb{P}[X \le x] \ge \alpha\}$
and is usually computed by using Monte Carlo integration [169]. CVaR is defined as $\mathrm{CVaR}_\alpha[X] = \mathbb{E}[X \mid 0 \le X \le \mathrm{VaR}_\alpha[X]]$
[338]; α is generally around 99.9% in the finance industry. QAE can be used to evaluate $\mathrm{VaR}_\alpha$ and $\mathrm{CVaR}_\alpha$
faster than with classical MCI. The procedure to do this, by Woerner and Egger [119, 338], is as follows.
Similar to the CDO case above (Section 5.1.1), the risk can be modeled by using a Gaussian conditional
independence model. Quantum algorithms for computing VaRα and CVaRα via QAE can utilize the same
realization of Pl as for the CDO case [338]. Note that P[X ≤ x] = E[h(X)], where h(X) = 1 for X ≤ x and
0 otherwise. Thus Pf implements a quantum comparator similar to both of the previous applications discussed
for derivatives. A bisection search can be used to identify the value of x such that P[X ≤ x] ≥ α using multiple
iterations of QAE. CVaRα uses a comparator to check against the obtained value for VaRα when computing the
conditional expectation of X.
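The bisection search can be sketched classically by replacing the QAE estimate of $\mathbb{P}[X \le x]$ with a sample average; the loss distribution and confidence level below are illustrative:

```python
import numpy as np

# Bisection search for VaR_alpha: find the smallest x with P[X <= x] >= alpha.
# Classically, P[X <= x] = E[h(X)] is estimated from samples; in the quantum
# algorithm QAE estimates it via a comparator circuit. Illustrative values.
rng = np.random.default_rng(1)
losses = rng.exponential(scale=10.0, size=100_000)  # nonnegative loss X
alpha = 0.999

lo, hi = 0.0, losses.max()
for _ in range(50):                       # bisection to high precision
    mid = 0.5 * (lo + hi)
    if np.mean(losses <= mid) >= alpha:   # E[h(X)] with h = 1{X <= mid}
        hi = mid
    else:
        lo = mid
var = hi

# CVaR_alpha = E[X | 0 <= X <= VaR_alpha], per the definition above
cvar = losses[losses <= var].mean()
print(round(var, 1), round(cvar, 1))
```

Each probability evaluation inside the loop is what a QAE iteration would supply with quadratically fewer oracle queries.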
Economic Capital Requirement. Economic capital requirement (ECR) summarizes the amount of
capital required to remain solvent at a given confidence level and time horizon [119]. Egger et al. [119] presented
a quantum method using quantum MCI to compute ECR. Consider a portfolio of K assets with $L_1, \ldots, L_K$
denoting the losses associated with each asset. The expected value of the total loss, L, is $\mathbb{E}[L] = \sum_{m=1}^{K} \mathbb{E}[L_m]$.
The losses can be modeled by using the same Gaussian conditional independence model discussed above. ECR
at confidence level α is defined as the difference between the value at risk and the expected loss, $\mathrm{ECR}_\alpha[L] = \mathrm{VaR}_\alpha[L] - \mathbb{E}[L]$ [119].
In practice, the arithmetic required for quantum MCI leads to circuits with large T gate (Section 3.2.1) or Toffoli
gate depths⁹ [70]. Furthermore, the loading of arbitrary uncertainty distributions requires circuits that are
exponentially deep in the number of qubits [266]. The Grover–Rudolph algorithm [149], a proposed efficient
distribution-loading strategy for quantum MCI, has been shown to be insufficient to achieve quantum speedup
[163]. In addition, the resources for the arithmetic need to be tuned carefully so as not to lose the quadratic
speedup. If fault tolerance is taken into account, advances in quantum error correction are required in order to
realize quantum advantage [26].
Despite these challenges, some progress has been made to alleviate some of the difficulties in implementing
quantum MCI. For distribution loading, variational methods (Section 4.7) have been developed to load stochastic
models [347], such as normal distributions [70]. More specifically, models based on geometric Brownian motions
can be implemented by utilizing procedures for loading standard normal distributions, followed by quantum
arithmetic to modify the distribution’s parameters [70, 156]. Moreover, the number of required oracle calls to
the distribution loading and payoff procedures used by QAE could be reduced at the cost of a smaller speedup
[139, 140]. The difficulty in implementing QPE in the near term required for QAE can be mitigated by using
the variants mentioned in Section 4.4. Potential methods for near-term implementations of payoff functions have
been proposed by Herbert [164], who considered certain functions that can be approximated by truncated Fourier
series. Also worth noting are non-unitary methods for efficient state preparation [142, 271]. However, these
methods are usually incompatible with QAE because of the non-invertibility of the operations involved. The
methods would be more useful in a broader context where the inverse of the state preparation operation is not
required; this topic is out of the scope of this section.
6 Quantum Optimization
Solving optimization problems (e.g., portfolio optimization, arbitrage) presents the most promising commercially
relevant applications on NISQ hardware (Section 3.3). In this section we discuss the potential of using quantum
algorithms to help solve various optimization problems. We start by discussing quantum algorithms for NP-hard¹⁰
combinatorial optimization problems [171, 339] in Section 6.1, with particular focus on those with quadratic
cost functions. Neither classical nor quantum algorithms are believed to be able to solve these NP-hard problems
asymptotically efficiently (Section 4). However, the hope is that quantum computing can solve these problems
faster than classical computing on realistic problems such that it has an impact in practice. Section 6.2 focuses
on convex optimization [54]. In its most general form convex optimization is NP-hard. However, in a variety of
situations such problems can be solved efficiently [251], and they enjoy many useful properties and a rich theory [54].
There are quantum algorithms utilizing QLS (Section 4.5) and other techniques that provide polynomial speedups
for certain convex, specifically conic, optimization problems. These can have a significant impact in practice.
Section 6.3 discusses large-scale optimization, which utilizes hybrid approaches that decompose large problems
into smaller subtasks solvable on a near-term quantum device. The more general mixed-integer programming
problems, in which there are both discrete and continuous variables, can potentially be handled by using quantum
algorithms for optimization (both the ones mentioned in Section 6.1 and, in certain scenarios, Section 6.2) as
subroutines in a classical/quantum hybrid approach (i.e., decomposing the problem in a similar manner to the
ways mentioned in Section 6.3) [7, 57]. Many quantum algorithms for combinatorial optimization are NISQ
friendly and heuristic (i.e., with no asymptotically proven benefits). The convex optimization algorithms are
usually for the fault-tolerant era of quantum computation.
⁹ In the fault-tolerant era of quantum computation, T or Toffoli gates (i.e., non-Clifford gates, Section 3.2.1)
will be the most expensive to execute. During the NISQ era, however, the goal is to minimize the number of
two-qubit gates, such as CNOT, because of their high error rate.
¹⁰ By definition (Section 4), optimization problems are not in NP, but they can be NP-hard. However, there is
an analogous class for optimization problems called NPO [171].
A binary quadratic program without constraints, which is also NP-hard, is called a quadratic unconstrained
binary optimization (QUBO) problem [202] and lends itself naturally to many quantum algorithms. A QUBO
problem on N binary variables can be expressed as
$$\min_{\vec{x} \in \mathbb{B}^N} \vec{x}^T Q \vec{x} + \vec{x}^T \vec{b}, \quad (3)$$
where $\vec{b} \in \mathbb{R}^N$, $Q \in \mathbb{R}^{N \times N}$, and $\mathbb{B} = \{0, 1\}$. Unconstrained integer quadratic programming (IQP) problems can
be converted to QUBO in the same way that IP can be reduced to general binary integer programming [256].
Constraints can be accounted for by optimizing the Lagrangian function [54] of the constrained problem, where
each dual variable is a hyperparameter that signifies a penalty for violating the constraint [224].
QUBO has a connection with finding the ground state of the generalized Ising Hamiltonian from statistical
mechanics [224]. A simple example of an Ising model is a two-dimensional lattice under an external longitudinal
magnetic field that is potentially site-dependent; in other words, the strength of the field at a site on the lattice is
a function of location. At each site j, the direction of the magnetic moment of spin zj may take on values in the
set {−1, +1}. The value of zj is influenced by both the moments of its neighboring sites and the strength of the
external field. This model can be generalized to an arbitrary graph, where the vertices on the graph represent
the spin sites and weights on the edges signify the interaction strength between the sites, hence allowing for
long-range interactions. The classical Hamiltonian for this generalized model is
X X
H=− Jij zi zj − h j zj ,
ij j
where the magnitude of Jij represents the amount of interaction between sites i and j and its sign is the desired
relative orientation; hj represents the external field strength and direction at site j [180]. Using a change of
variables $z_j = 2x_j - 1$, a QUBO cost function as defined in Equation (3) can be transformed into a classical Ising
Hamiltonian, with $x_j$ being the jth element of $\vec{x}$. The classical Hamiltonian can then be "quantized" by replacing
the classical variable $z_j$ for the jth site with the Pauli-Z operator $\sigma_j^z$ (Section 3.1). Therefore, finding the optimal
solution of the QUBO is equivalent to finding the ground state of the corresponding Ising Hamiltonian.
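The change of variables can be checked mechanically. The sketch below, using randomly generated illustrative Q and b, converts a QUBO cost into Ising couplings J and h plus a constant offset and verifies that the two cost landscapes agree on every assignment:

```python
import numpy as np
from itertools import product

# Map a QUBO cost x^T Q x + b^T x over x in {0,1}^N to an Ising energy
# -sum_{i<j} J_ij z_i z_j - sum_j h_j z_j + offset over z in {-1,+1}^N,
# using the change of variables x_j = (1 + z_j) / 2. Q, b are illustrative.
rng = np.random.default_rng(3)
N = 4
Q = rng.normal(size=(N, N)); Q = (Q + Q.T) / 2   # symmetrize
b = rng.normal(size=N)

row = Q.sum(axis=1)
J = np.triu(-Q / 2, k=1)                         # couplings J_ij for i < j
h = -(row + b) / 2                               # local fields h_j
offset = Q.sum() / 4 + np.trace(Q) / 4 + b.sum() / 2

def qubo_cost(x):
    return x @ Q @ x + b @ x

def ising_energy(z):
    return -z @ J @ z - h @ z + offset

# The two cost landscapes agree on every assignment, so the QUBO optimum
# is the Ising ground state.
for bits in product((0, 1), repeat=N):
    x = np.array(bits, dtype=float)
    z = 2 * x - 1
    assert np.isclose(qubo_cost(x), ising_energy(z))
print("all assignments match")
```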
Higher-order combinatorial problems can be modeled by including terms for multispin interactions. Some of
the quantum algorithms presented in this section can handle such problems (e.g., QAOA in Section 6.1.2). From
the quantum point of view, these higher-order versions are observables that are diagonal in the computational
basis. The following subsections will more fully show why the quantum Ising or more generally the diagonal
observable formulation results in quantum algorithms for solving combinatorial problems. In Section 6.4 we
discuss a variety of financial problems that can be formulated as combinatorial optimization and solved by using
the quantum methods described in this section.
This formulation natively allows for solving QUBO. As discussed earlier, however, with proper encoding and
additional variables, unconstrained IQP problems can be converted to QUBO. Techniques also exist for converting
higher-order combinatorial problems into QUBO problems, at the cost of more variables [72, 113]. QUBO, as
mentioned earlier, can, through a transformation of variables, be encoded in the site-dependent longitudinal
magnetic field strengths $\{h_i\}$ and coupling terms $\{J_{ij}\}$ of an Ising Hamiltonian. The QA process initializes
the system in a uniform superposition of all states, which is the ground state of $-\sum_i \sigma_i^x$, where $\sigma_i^x$ is the Pauli-X
(Section 3.2) operator applied to the ith qubit. This initialization implies the presence of a large transverse-field
component, A(t) in Equation (4). The annealing schedule, controlled by tuning the strength of the transverse
magnetic field, is defined by the functions A(t) and B(t) in Equation (4). If the evolution is slow enough,
according to the adiabatic theorem (Section 3.2.2), the system will remain in the instantaneous ground state
for the entire duration and end in the ground state of the Ising Hamiltonian encoding the problem. This process
is called forward annealing. There also exists reverse annealing [196], which starts in a user-provided classical
state,11 increases the transverse-field strength, pauses, and then, assuming an adiabatic path, ends in a classical
state. The pause is useful when reverse quantum annealing is used in combination with other classical optimizers.
¹¹ The term "classical state" refers to any computational basis state. The quantum Ising Hamiltonian has an
eigenbasis consisting of computational basis states.
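A minimal numerical illustration of forward annealing, using a linear schedule A(s) = 1 − s, B(s) = s as a stand-in for a device schedule and an illustrative two-qubit Ising problem; at s = 1 the ground state is the computational basis state minimizing the classical energy:

```python
import numpy as np

# Spectrum of the annealing Hamiltonian H(s) = A(s) H_mix + B(s) H_ising
# with H_mix = -sum_i X_i, for a 2-qubit Ising problem. The linear schedule
# A(s) = 1 - s, B(s) = s and the couplings below are illustrative.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
I = np.eye(2)

J12, h1, h2 = 1.0, 0.5, -0.3
H_ising = -J12 * np.kron(Z, Z) - h1 * np.kron(Z, I) - h2 * np.kron(I, Z)
H_mix = -(np.kron(X, I) + np.kron(I, X))

for s in (0.0, 0.5, 1.0):
    H = (1 - s) * H_mix + s * H_ising
    evals, evecs = np.linalg.eigh(H)
    print(round(evals[0], 3))        # instantaneous ground-state energy

# At s = 1 the ground state is a computational basis state that minimizes
# the classical Ising energy over z in {-1,+1}^2.
ground = np.argmax(np.abs(evecs[:, 0]))
```

Tracking the instantaneous ground state along s is exactly what the adiabatic theorem guarantees for sufficiently slow evolution.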
The D-Wave devices are examples of commercially available quantum annealers [255]. These devices have
limited connectivity, so extra qubits are needed to embed arbitrary QUBO problems (i.e., Ising models with
interactions at arbitrary distances). Finding such an embedding, however, is in general a hard problem [219].
Thus heuristic algorithms [66] or precomputed templates [144] are typically used. Assuming the
hardware topology is fixed, however, an embedding can potentially be efficiently found for QUBOs with certain
structure [325]. Alternatively, the embedding problem associated with restricted connectivity can be avoided
if the hardware supports three- and four-qubit interactions, by using the LHZ encoding [208] or its extension
by Ender et al. [121]. The extension by Ender et al. even supports higher-order binary optimization problems
[116, 128]. Moreover, the current devices for QA provide no guarantees on the adiabaticity of the trajectory that
the system follows. Despite the mentioned limitations of today’s annealers, a number of financial problems still
remain that can be solved by using these devices (Section 6.4).
where $\gamma = (\gamma_1, \ldots, \gamma_p)$ and $\beta = (\beta_1, \ldots, \beta_p)$. $U(C, \gamma)$ is the phase operator defined as $U(C, \gamma) = e^{-i\gamma C}$, where C is a
diagonal observable that encodes the objective function f(z). In addition, $U(B, \beta)$ is the mixing operator defined
as $U(B, \beta) = e^{-i\beta B}$, where, in the initial QAOA formulation, $B = \sum_{i=1}^{N} \sigma_i^x$. The initial state, $|s\rangle$, is a uniform
superposition, which is an excited state of B with maximum energy. However, other choices for B exist that
allow QAOA to incorporate constraints without penalty terms [155]. Equation (5), where the choice of B can
vary, is called the alternating operator ansatz [155]. Preparation of the state is then followed by a measurement
in the computational basis (Section 3.1), indicating the assignments of the N binary variables. The parameters
are updated such that the expectation of C, that is, $\langle\gamma, \beta| C |\gamma, \beta\rangle$, is maximized. Note that we can multiply
the cost function by −1 to convert it to a minimization problem. The structure of the QAOA ansatz allows
for finding good parameters purely classically for certain problem instances [127]. In general, however, finding
such parameters is a challenging task. As mentioned earlier (Section 4.7), the training of quantum variational
algorithms is NP-hard. However, much recent research has proposed practical approaches for estimating effective
parameters and the required circuit depth [133, 295, 298, 308].
¹² In the case of standard QAOA, however, the adiabatic trajectory would move between instantaneous
eigenstates with the highest energy [127]. This still follows from the adiabatic theorem (Section 3.2.2).
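A depth-one QAOA sketch for MaxCut on a single edge, simulated as a statevector with a coarse classical grid search over (γ, β); the graph and grid are illustrative:

```python
import numpy as np

# Depth-1 QAOA for MaxCut on a single edge (2 qubits), simulated as a
# statevector. C is diagonal in the computational basis with the cut sizes.
c = np.array([0.0, 1.0, 1.0, 0.0])          # cut value of |00>,|01>,|10>,|11>
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)
B = np.kron(X, I2) + np.kron(I2, X)         # mixing observable, sum_i X_i
s = np.full(4, 0.5)                         # uniform superposition |s>

def qaoa_expectation(gamma, beta):
    psi = np.exp(-1j * gamma * c) * s       # U(C, gamma), diagonal phases
    # U(B, beta) = exp(-i beta B), applied via the spectral decomposition of B
    evals, evecs = np.linalg.eigh(B)
    psi = evecs @ (np.exp(-1j * beta * evals) * (evecs.conj().T @ psi))
    return float(np.real(np.vdot(psi, c * psi)))   # <gamma,beta| C |gamma,beta>

# Coarse grid search standing in for the classical outer-loop optimizer.
grid = np.linspace(0, np.pi, 41)
best = max(qaoa_expectation(g, b) for g in grid for b in grid)
print(round(best, 3))   # p = 1 QAOA solves a single edge exactly
```

On larger graphs the same structure applies, but the grid search is replaced by the classical optimizers discussed above, and finding good parameters becomes the hard part.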
VQE can utilize any PQC as an ansatz. However, certain ansätze are more suitable for particular problems because
they take into account prior information about the ground-state wave function(s) [263]. Alternatively, there exist
evolutionary, noise-resistant techniques for optimizing the structure of the ansatz and significantly increasing the
space of candidate wave functions [270].
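A minimal VQE-style sketch: a single-qubit Ry ansatz minimizing the Rayleigh quotient of an illustrative two-level Hamiltonian, with a classical parameter scan standing in for the optimizer:

```python
import numpy as np

# Minimal VQE sketch: the ansatz |psi(t)> = Ry(t)|0> minimizes the Rayleigh
# quotient of H = Z + 0.5 X. The Hamiltonian is illustrative; its exact
# ground energy is -sqrt(1.25).
H = np.array([[1.0, 0.5], [0.5, -1.0]])     # Z + 0.5 X

def energy(theta):
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])  # Ry(theta)|0>
    return psi @ H @ psi                    # Rayleigh quotient <psi|H|psi>

# A coarse classical scan stands in for the optimizer that updates the PQC
# parameters from measured expectation values.
thetas = np.linspace(0, 2 * np.pi, 2001)
e_min = min(energy(t) for t in thetas)
print(round(e_min, 4))   # approx -1.1180 = -sqrt(1.25)
```

On hardware, `energy(theta)` would be estimated from repeated measurements of the PQC output state rather than computed exactly.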
where H is a quantum Hamiltonian and $E_\tau = \langle\psi(\tau)| H |\psi(\tau)\rangle$. The time-evolved state (the solution to Equation
(6)) is $|\psi(\tau)\rangle = C(\tau) e^{-H\tau} |\psi(0)\rangle$, where $C(\tau)$ is a time-dependent normalization constant [229]. As $\tau \to \infty$, the
component associated with the smallest eigenvalue of H (the largest eigenvalue of the non-unitary operator $e^{-H\tau}$)
dominates, and the state approaches a ground state of H, assuming the initial state has nonzero overlap with a
ground state. Thus, ITE presents another method
for finding the minimum eigenvalue of an observable [229, 242].
For variational quantum ITE (VarQITE) [344], the time-dependent state |ψ(τ )i is represented by a param-
eterized ansatz (i.e., PQC), |φ(θ(τ ))i, with a set of time-varying parameters, θ(τ ). The evolution is projected
onto the circuit parameters, and the convergence is also dependent on the expressibility of the ansatz [344].
However, ITE is typically more expensive to implement than the previously mentioned methods because of the
required metric tensor terms [237, 344]. This approach can be used to solve a combinatorial optimization prob-
lem by letting H encode the combinatorial cost function, in a similar manner to QAOA and VQE, and is also
not restricted to QUBO. In addition, VarQITE has been used to prepare states that do not result from unitary
evolution in real time, such as the quantum Gibbs state [348]. Specifically for finance it has been used to simulate
the Feynman–Kac partial differential equation [10].
We briefly mention that the variational principles for real-time evolution, mentioned in Section 4.7, can be
used to approximate adiabatic trajectories [75]. This can potentially be used for combinatorial optimization as
well.
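Exact imaginary-time evolution on a small illustrative Hamiltonian shows the convergence that the variational method approximates:

```python
import numpy as np

# Imaginary-time evolution |psi(tau)> = C(tau) e^{-H tau} |psi(0)>, computed
# exactly for a small matrix: the state converges to the ground state of H
# whenever the initial state overlaps it. The 2x2 Hamiltonian is illustrative.
H = np.array([[1.0, 0.5], [0.5, -1.0]])
evals, evecs = np.linalg.eigh(H)
ground = evecs[:, 0]                        # eigenvector of the smallest eigenvalue

psi0 = np.array([1.0, 0.0])                 # nonzero overlap with the ground state
for tau in (0.0, 1.0, 5.0, 20.0):
    # e^{-H tau} via the spectral decomposition of H
    psi = evecs @ (np.exp(-evals * tau) * (evecs.T @ psi0))
    psi /= np.linalg.norm(psi)              # C(tau) normalization
    print(tau, round(abs(ground @ psi) ** 2, 4))   # overlap with ground state
```

VarQITE reproduces this trajectory approximately by projecting it onto the parameters of a PQC, at the cost of evaluating the metric tensor terms mentioned above.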
method of Arora and Kale [24] for SDP. Alternatively, Kerenidis and Prakash applied their quantum IPM to
SDP [188].
where W is an N × N diagonal matrix with the entries of $\vec{w} \in [0, 1]^N$, such that $\|\vec{w}\|_1 = 1$, on the diagonal; $\vec{w}$
contains a fixed weight for each asset j indicating the proportion of the portfolio that should hold a long or short
position in j. In both cases, the authors utilized quantum annealing (Section 6.1.1).
Instead of just risk minimization, the problem can be reformulated to specify a desired fixed expected return,
$\mu \in \mathbb{R}$,
$$\min_{\vec{x} \in \mathbb{B}^N} \vec{x}^T \Sigma \vec{x} \;:\; \vec{r}^T \vec{x} = \mu,\; \vec{x}^T \vec{1} = \beta, \quad (8)$$
or to trade off risk and return directly,
$$\min_{\vec{x} \in \mathbb{B}^N} q\,\vec{x}^T \Sigma \vec{x} - \vec{r}^T \vec{x} \;:\; \vec{x}^T \vec{1} = \beta, \quad (9)$$
where $\vec{x}$ is a vector of Boolean decision variables, $\vec{r} \in \mathbb{R}^N$ contains the expected returns of the assets, Σ is the
same as above, $q > 0$ is the risk level, and $\beta \in \mathbb{N}$ is the budget. In both cases, the Lagrangian of the problem,
where the dual variables are used to encode user-defined penalties, can be mapped to a QUBO (Section 6.1).
These formulations are known as mean-variance portfolio optimization problems [228]. More constraints can also
be added to these problems.
Hodson et al. [167] and Slate et al. [304] both utilized QAOA to solve a problem similar to Equation (9). They
allowed for three possible decisions: long position, short position, or neither. In addition, they utilized mixing
operators (Section 6.1.2) that accounted for constraints. Alternatively, Gilliam et al. [135] utilized an extension of
Grover adaptive search (Section 6.1.5) to solve Equation (9) with a quadratic speedup over classical unstructured
search. Rosenberg et al. [282] and Mugel et al. [244] solved dynamic versions of Equation (9), where portfolio
decisions are made for multiple time steps. Additionally, an analysis of benchmarking quantum annealers for
portfolio optimization can be found in a paper by Grant et al. [146]. Furthermore, an approach using reverse
quantum annealing (Section 6.1.1) was proposed by Venturelli and Kondratyev (Section 8.2). Note that any of
the methods from Section 6.1 can be applied whenever quantum annealing can. Currently, however, because
quantum annealers possess more qubits than existing gate-based devices, more experimental results with larger
problems have been obtained with annealers.
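A brute-force illustration of the QUBO formulation of Equation (9), with the budget constraint folded in as a Lagrangian penalty; the returns, covariance matrix, and penalty weight are illustrative:

```python
import numpy as np
from itertools import product

# Mean-variance portfolio selection, Equation (9), with the budget constraint
# added as a penalty term (the Lagrangian/QUBO approach): minimize
# q x^T Sigma x - r^T x + lam * (x^T 1 - beta)^2 over x in {0,1}^N.
# Returns, covariance, and weights below are illustrative.
r = np.array([0.08, 0.12, 0.10, 0.07])            # expected returns
Sigma = np.array([[0.10, 0.02, 0.01, 0.00],
                  [0.02, 0.12, 0.03, 0.01],
                  [0.01, 0.03, 0.09, 0.02],
                  [0.00, 0.01, 0.02, 0.08]])      # covariance matrix
q, beta, lam = 0.5, 2, 1.0                        # risk level, budget, penalty

def cost(x):
    return q * x @ Sigma @ x - r @ x + lam * (x.sum() - beta) ** 2

best = min((np.array(bits) for bits in product((0, 1), repeat=4)), key=cost)
print(best, int(best.sum()))   # the optimum respects the budget of 2 assets
```

A quantum annealer or QAOA would minimize the same penalized cost; the brute-force loop here simply certifies the optimum for this toy instance.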
Convex Formulations. This part of the section focuses on formulations of the portfolio optimization
problem that are convex. The original formulation of mean-variance portfolio optimization by Markowitz [228]
was convex and thus did not contain the integer constraints included in the discussion above. If the binary
variable constraints are relaxed, the optimization problem represented by Equation (8) can be reformulated as a
convex quadratic program with linear equality constraints, with both a fixed desired return and a fixed budget
(Equation 10):
$$\min_{\vec{w} \in \mathbb{R}^N} \vec{w}^T \Sigma \vec{w} \;:\; \vec{p}^T \vec{w} = \xi,\; \vec{r}^T \vec{w} = \mu, \quad (10)$$
where $\vec{w} \in \mathbb{R}^N$ is the allocation vector. The ith entry of $\vec{w}$ represents the proportion of the portfolio
that should be invested in asset i. This is in contrast to the combinatorial formulations, where the result was a
Boolean vector. $\vec{p} \in \mathbb{R}^N$ and $\vec{r} \in \mathbb{R}^N$ are the vectors of the assets' prices and returns, respectively;
$\xi \in \mathbb{R}$ is the budget; and $\mu \in \mathbb{R}$ is the desired expected return. In contrast to Equation (8), Equation (10)
admits a closed-form solution
[191]. However, more problem-specific conic constraints and cost terms (assuming these terms are convex) can be
added, increasing the complexity of the problem, in which case it can potentially be solved with more sophisticated
algorithms (e.g., interior-point methods). As mentioned, there exist polynomial speedups provided by quantum
computing for common cone programs (Section 6.2). Equation (10) with additional positivity constraints on the
allocation vector can be represented as an SOCP and solved with quantum IPMs [191].
Considering the exact formulation in Equation (10), however, we might be able to obtain superpolynomial
speedup if we relax the problem even further. Rebentrost and Lloyd [272] presented a relaxed problem where the
goals were to sample from the solution and/or compute statistics. Their approach was to apply the method of
Lagrange multipliers to Equation (10) and obtain a linear system. This system can be formulated as a quantum
linear systems problem and solved with QLS methods (Section 4.5). The result of solving the QLSP (Section 4.5)
is a normalized quantum state corresponding to the solution found with time complexity that is polylogarithmic
in the system size, N . Thus, if only partial information about the solution is required, QLS methods potentially
provide an exponential speedup, in N , over all known classical algorithms for this scenario, although the sparsity
and well-conditioned matrix assumptions mentioned in Section 4.5 have to be met to obtain this speedup.
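The Lagrange-multiplier linear system referred to above can be written down and solved directly for a small illustrative instance; this is the system that the QLS-based approach prepares as a quantum state:

```python
import numpy as np

# Lagrange-multiplier formulation of Equation (10): the stationarity and
# constraint conditions form one linear system, which the QLS-based approach
# of Rebentrost and Lloyd solves as a quantum state. All data are illustrative.
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.12, 0.03],
                  [0.01, 0.03, 0.09]])    # covariance
p = np.array([10.0, 20.0, 15.0])          # asset prices
r = np.array([0.08, 0.12, 0.10])          # expected returns
xi, mu = 100.0, 11.0                      # budget and target expected return

n = len(p)
A = np.zeros((n + 2, n + 2))
A[:n, :n] = 2 * Sigma                     # stationarity: 2 Sigma w + nu p + eta r = 0
A[:n, n] = p
A[:n, n + 1] = r
A[n, :n] = p                              # constraint: p^T w = xi
A[n + 1, :n] = r                          # constraint: r^T w = mu
rhs = np.concatenate([np.zeros(n), [xi, mu]])

sol = np.linalg.solve(A, rhs)             # classical stand-in for a QLS solver
w = sol[:n]
print(np.round(w, 3))
```

A QLS solver would return (a quantum state proportional to) `sol` with cost polylogarithmic in the system size, which is why only sampling or statistics of the solution preserve the speedup.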
First, partition the set of swaps associated with a counterparty into subsets that can be netted. Second, for a
subset $\mathcal{M}$ in the partition, with $N := |\mathcal{M}|$, construct the following QUBO:
$$\max_{\vec{x} \in \mathbb{B}^N} \vec{x}^T \vec{p} - \alpha \left( \sum_{i \in \mathcal{M}} x_i d_i p_i \right)^2 - \beta\, \vec{x}^T V \vec{x},$$
where $\vec{x} \in \mathbb{B}^N$ is a vector of decision variables indicating whether to include a swap in the netted subset,
$\vec{p} \in \mathbb{R}^N$ is the vector of notional values associated with the swaps, and $\vec{d} \in \{+1, -1\}^N$ indicates the direction of
the fixed-rate interest payments: +1 if the clearing house pays the counterparty and −1 if the opposite is true.
$V \in \mathbb{R}_{\ge 0}^{N \times N}$ signifies how incompatible two potentially "nettable" swaps are, and α and β weigh the importance
of the second and third terms. The combined goal of the three terms is to maximize the total notional value
netted (first term), such that the notional values of the selected swaps cancel (second term) and the swaps are
compatible (third term). The QUBO problems were solved by utilizing quantum annealing (Section 6.1.1).
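A brute-force evaluation of this QUBO objective on illustrative data (the notional values, directions, and incompatibility matrix are made up) shows the three terms trading off:

```python
import numpy as np
from itertools import product

# Brute-force evaluation of the swap-netting QUBO: maximize
# x^T p - alpha * (sum_i x_i d_i p_i)^2 - beta * x^T V x. Data are illustrative.
p = np.array([5.0, 3.0, 4.0, 2.0])          # notional values
d = np.array([+1, -1, +1, -1])              # fixed-rate payment directions
V = np.array([[0, 1, 0, 0],
              [1, 0, 0, 2],
              [0, 0, 0, 0],
              [0, 2, 0, 0]], dtype=float)   # pairwise incompatibility
alpha, beta = 0.5, 1.0                      # term weights

def objective(x):
    return x @ p - alpha * (x * d * p).sum() ** 2 - beta * x @ V @ x

best = max((np.array(b) for b in product((0, 1), repeat=4)), key=objective)
print(best, round(objective(best), 2))
```

On realistic instance sizes the enumeration is replaced by a quantum annealer (as in the cited work) or any other QUBO solver.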
By mapping the equilibrium condition to a QUBO problem, and perturbing the system to find the ground state
it reaches, we can determine the effect of small perturbations on the market.
We briefly discuss the formulation of Elliott et al. [120]. First, suppose the financial network consists of N
institutions and M assets, with prices $\vec{p} \in \mathbb{R}_{\ge 0}^M$, that the institutions can own. $D \in \mathbb{R}_{\ge 0}^{N \times M}$ contains the percentage
of each asset owned by each institution. In addition, the institutions can have a stake, called cross holdings, in each
other, represented by the matrix $C \in \mathbb{R}_{\ge 0}^{N \times N}$. The diagonal entries of C are set to 0, and self-ownership is
represented by the diagonal matrix $\tilde{C}$ such that $\tilde{C}_{jj} := 1 - \sum_i C_{ij}$. The equity valuations, $\vec{v}$, at equilibrium
satisfy a linear fixed-point equation in C, $\tilde{C}$, D, and $\vec{p}$ [120].
7.1 Regression
Regression is the process of fitting a numeric function from the training data set. This process is often used
to understand how the value changes when the attributes vary, and it is a key tool for economic forecasting.
Typically, the problem is solved by applying least-squares fitting, where the goal is to find a continuous function
that approximates a set of N data points {xi , yi }. The fit function is of the form
\[
f(x, \vec{\lambda}) := \sum_{j=1}^{M} f_j(x)\, \lambda_j,
\]
where $\vec{\lambda}$ is a vector of parameters and each $f_j(x)$ is a function of $x$ that can be nonlinear. The optimal parameters can
be found by minimizing the least-squares error:
\[
\min_{\vec{\lambda}} \sum_{i=1}^{N} \big( f(x_i, \vec{\lambda}) - y_i \big)^2 = \big| F \vec{\lambda} - \vec{y} \big|^2,
\]
where the N × M matrix F is defined through Fij = fj (xi ). Wiebe et al. [337] proposed a quantum algorithm to
solve this problem, where they encoded the optimal parameters of the model into the amplitudes of a quantum
state and developed a quantum algorithm for estimating the quality of the least-squares fit by building on the HHL
algorithm (Section 4.5). The algorithm consists of three subroutines: a quantum algorithm for performing the
pseudoinverse of F , an algorithm that estimates the fitting quality, and an algorithm for learning the parameter ~λ.
Later, Wang [330] proposed using the CKS method (Section 4.5) for matrix inversion, which has a better dependence
on the output precision than HHL, and used amplitude estimation (Section 4.4) to estimate the optimal
parameters. Kerenidis and Prakash developed algorithms utilizing QSVE (Section 4.5) to perform coherent
gradient descent for ordinary and weighted least-squares regression [187]. Moreover, quantum annealing (Section
6.1.1) has been used to solve least-squares regression when formulated as a QUBO (Section 6.1) [103].
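The classical least-squares problem these quantum approaches target can be sketched in a few lines; the quadratic polynomial basis below is an arbitrary illustrative choice, not one prescribed by the text:

```python
import numpy as np

# Hypothetical basis functions f_j(x): here 1, x, x^2.
basis = [lambda x: np.ones_like(x), lambda x: x, lambda x: x**2]

def fit_least_squares(xs, ys, basis):
    """Solve min_lambda |F lambda - y|^2 with design matrix F_ij = f_j(x_i)."""
    F = np.column_stack([f(xs) for f in basis])
    lam, *_ = np.linalg.lstsq(F, ys, rcond=None)
    return lam

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = 2.0 + 3.0 * xs           # data exactly representable in the basis
lam = fit_least_squares(xs, ys, basis)
```

Because the data are generated by a function in the span of the basis, the fitted parameters recover the generating coefficients (here approximately [2, 3, 0]).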
A Gaussian process (GP) is another widely used model for regression in supervised machine learning. It has
been used, for example, in predicting the price behavior of commodities in financial markets. Given a training
set of N data points {xi , yi }, the goal is to model a latent function f (x) such that
y = f(x) + noise,
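A minimal classical sketch of the GP posterior mean, assuming an RBF (squared-exponential) kernel and Gaussian observation noise; the kernel choice and hyperparameters are illustrative assumptions:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    # Squared-exponential kernel k(a, b) = exp(-|a - b|^2 / (2 ell^2)).
    return np.exp(-np.subtract.outer(a, b) ** 2 / (2 * ell**2))

def gp_posterior_mean(x_train, y_train, x_test, noise_var=1e-2):
    """Posterior mean of a GP: k_*^T (K + sigma^2 I)^{-1} y."""
    K = rbf(x_train, x_train) + noise_var * np.eye(len(x_train))
    k_star = rbf(x_test, x_train)
    return k_star @ np.linalg.solve(K, y_train)
```

With zero noise the posterior mean interpolates the training targets exactly; with noise it smooths them.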
7.2 Classification
Classification is the process of placing objects into predefined groups. This type of process is also called pattern
recognition. This area of machine learning can be used effectively in risk management and large data processing
when the group information is of particular interest, for example, in creditworthiness identification and fraud
detection.
where $\vec{\alpha} = (\alpha_1, \cdots, \alpha_M)^T$, subject to the constraints $\sum_{j=1}^{M} \alpha_j = 0$ and $y_j \alpha_j \geq 0$. Here, we have a key quantity
for the supervised machine learning problem [245], the kernel matrix $K_{jk} = k(\vec{x}_j, \vec{x}_k) = \vec{x}_j \cdot \vec{x}_k$. Solving the
dual form involves evaluating the dot products using the kernel matrix and then finding the optimal parameters
αj by quadratic programming, which takes O(M 3 ) in the non-sparse case. A quantum SVM was first proposed
by Rebentrost et al. [273], who showed that a quantum SVM can be implemented with O(log M N ) runtime in
both training and classification stages, under the assumption that there are oracles for the training data that
return quantum states $|\vec{x}_j\rangle = \frac{1}{\|\vec{x}_j\|_2} \sum_{k=1}^{N} (\vec{x}_j)_k |k\rangle$, the norms $\|\vec{x}_j\|_2$ and the labels $y_j$ are given, and the states
are constructed by using qRAM [137, 216]. The core of this quantum classification algorithm is a non-sparse
matrix exponentiation technique for efficiently performing a matrix inversion of the training data inner-product
(kernel) matrix. Lloyd et al. [216] showed that, effectively, a low-rank approximation to kernel matrix is used due
to the eigenvalue filtering procedure used by HHL [158]. As mentioned in Section 6.2, Kerenedis et al. proposed
a quantum enhancement to second-order cone programming that was subsequently used to provide a small
polynomial speedup to the training of the classical `1 SVM [193]. In addition, classical SVMs using quantum-
enhanced feature spaces to construct quantum kernels have been proposed by Havlíček et al. [159, 292].
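The kernel (Gram) matrix $K_{jk} = \vec{x}_j \cdot \vec{x}_k$ at the center of these SVM formulations, classical or quantum, can be sketched directly; any such Gram matrix is symmetric positive semidefinite, which the sketch checks numerically:

```python
import numpy as np

def linear_kernel_matrix(X):
    """Gram matrix K_jk = x_j . x_k for rows x_j of the M x N data matrix X."""
    return X @ X.T

# Hypothetical toy data: three 2-dimensional training points.
X = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
K = linear_kernel_matrix(X)
eigvals = np.linalg.eigvalsh(K)   # all eigenvalues are >= 0 for a Gram matrix
```

A quantum kernel method replaces the inner product with an overlap of feature-mapped quantum states, but the resulting matrix plays the same role in the dual program.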
the set, (2) sort the data points in a list indexed from closest to the query example to farthest, and (3) choose the
first k entries and, for regression, return the mean of the k labels or, for classification, the mode of the k labels.
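The three steps above admit a compact classical sketch (Euclidean distance assumed, as in the quantum proposal discussed next):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, query, k, classify=True):
    """k-nearest-neighbor prediction: mode (classification) or mean (regression)."""
    dists = np.linalg.norm(X_train - query, axis=1)   # (1) distances to all points
    nearest = np.argsort(dists)[:k]                   # (2)+(3) k closest indices
    labels = np.asarray(y_train)[nearest]
    if classify:
        return Counter(labels.tolist()).most_common(1)[0][0]
    return labels.mean()
```

Step (1) dominates the cost for large data sets, which is exactly the step the quantum algorithms below accelerate.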
The computationally expensive step is to compute the distance between elements in the data set. The quantum
nearest-neighbor classification algorithm was proposed by Wiebe et al. [336], who used the Euclidean distance
and the inner product as distance metrics. The distance between two points is encoded in the amplitude of a
quantum state. They used amplitude amplification (Section 4.2) to amplify the probability of creating a state
to store the distance estimate in a register, without measuring it. They then applied the Dürr–Høyer algorithm
(Section 6.1.5). Later, Ruan et al. proposed using the Hamming distance as the metric [284]. More recently,
Basheer et al. have proposed using a quantum k-maxima-finding algorithm to find the k nearest neighbors,
using the fidelity and inner product as measures of similarity [30].
7.3 Clustering
Clustering, or cluster analysis, is an unsupervised machine learning task. It explores and discovers the grouping
structure of the data. In finance, cluster analysis can be used to develop a trading approach that helps investors
build a diversified portfolio. It can also be used to analyze different stocks such that the stocks with high
correlations in returns fall into one basket.
For data sets that are high dimensional, incomplete, and noisy, the extraction of information is generally
challenging. Topological data analysis (TDA) is an approach for studying qualitative features of data and for
exploring the complex topological and geometric structures of data sets. Persistent homology (PH) is a method
used in TDA to study qualitative features of data that persist across multiple scales. Lloyd et al. [218] proposed
a quantum algorithm to compute the Betti numbers, that is, the numbers of connected components, holes, and
voids in PH. Essentially, the algorithm operates by finding the eigenvectors and eigenvalues of the combinatorial
Laplacian and estimating Betti numbers to all orders and to accuracy δ in time O(n5 /δ). The most significant
speedup is for dense clique complexes [154]. An improved version as well as a NISQ version of the quantum
algorithm for PH has been proposed by Ubaru et al. [317].
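As a toy illustration of the quantities such algorithms estimate, the Betti numbers of a small simplicial complex can be computed classically from the ranks of its boundary maps; the hollow-triangle example below is an assumption made for illustration:

```python
import numpy as np

# Hollow triangle: vertices {0, 1, 2}, edges {01, 02, 12}, no filled 2-simplex.
# Boundary matrix d1 maps edges (columns 01, 02, 12) to vertices (rows).
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]], dtype=float)
d2 = np.zeros((3, 0))        # no triangles are filled in

def betti(dk, dk_next):
    """Betti number b_k = dim ker(d_k) - rank(d_{k+1})."""
    rank = lambda A: np.linalg.matrix_rank(A) if A.size else 0
    return dk.shape[1] - rank(dk) - rank(dk_next)

b0 = betti(np.zeros((0, 3)), d1)   # d0 is the zero map out of the vertices
b1 = betti(d1, d2)
```

For the hollow triangle this yields one connected component (b0 = 1) and one loop (b1 = 1); the quantum algorithm estimates the same invariants via the spectrum of the combinatorial Laplacian.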
[34]: initialize a network architecture; specify a learning task, and implement a training algorithm; and simulate
the learning task.
Current proposals for QNNs, discussed in the following subsections, involve a diverse collection of ideas with
varying degrees of proximity to classical neural networks.
7.7 Quantum Reinforcement Learning
Reinforcement learning is an area of machine learning that considers how agents ought to take actions in an
environment in order to maximize their reward. It has been applied to many financial applications, including
pricing and hedging of contingent claims, investment and portfolio allocation, buying and selling of a portfolio
of securities subject to transaction costs, market making, asset liability management, and optimization of tax
consequences.
The standard framework of reinforcement learning is based on discrete-time, finite-state Markov decision
processes (MDPs). It comprises five components: $\{S, A_i, p_{ij}(a), r_{i,a}, V\}$, $i, j \in S$, $a \in A_i$, where $S$ is the state
space; $A_i$ is the action space for state $i$; $p_{ij}(a)$ is the transition probability; $r$ is the reward function defined as
$r : \Gamma \to (-\infty, +\infty)$, where $\Gamma = \{(i, a) \mid i \in S, a \in A_i\}$; and $V$ is a criterion function or objective function. The
MDP has a history of successive states and decisions, defined as hn = (s0 , a0 , s1 , a1 , · · · , sn−1 , an−1 , sn ), and the
policy π is a sequence π = (π0 , π1 , · · · ). When the history at n is hn , the agent will make a decision according to
the probability distribution πn (·|hn ) on Asn . At each step t, the agent observes the state of the environment st
and then chooses an action at . The agent will therefore receive a reward rt+1 . The environment then will change
to the next state $s_{t+1}$ under action $a_t$, and the agent will again observe and act. The goal of reinforcement
learning is to learn a mapping from states to actions that maximizes the expected sum of discounted rewards or,
formally, to learn a policy $\pi : S \times \bigcup_{i \in S} A_i \to [0, 1]$, such that
\[
\max V_s^{\pi} = \mathbb{E}[r_{t+1} + \gamma r_{t+2} + \cdots \mid s_t = s, \pi]
= \sum_{a \in A_s} \pi(s, a) \Big[ r_s^a + \gamma \sum_{s'} p_{ss'} V_{s'}^{\pi} \Big],
\]
where $\gamma \in [0, 1]$ is a discount factor and $\pi(s, a)$ is the probability that the agent selects action $a$ in state $s$ according
to policy $\pi$.
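The Bellman expectation equation above can be evaluated classically by fixed-point iteration; this tabular sketch assumes dense array representations for P, R, and the policy, which are illustrative choices rather than anything prescribed by the text:

```python
import numpy as np

def policy_evaluation(P, R, pi, gamma=0.9, tol=1e-10):
    """Iterate V(s) <- sum_a pi(s,a) [R(s,a) + gamma * sum_s' P(s,a,s') V(s')].

    P: (S, A, S) transition probabilities; R: (S, A) rewards; pi: (S, A) policy.
    """
    V = np.zeros(P.shape[0])
    while True:
        # Expected one-step return under pi, then bootstrap with gamma * V.
        Q = R + gamma * np.einsum('sap,p->sa', P, V)
        V_new = np.einsum('sa,sa->s', pi, Q)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```

For a single self-looping state with unit reward, the iteration converges to the geometric-series value 1/(1 - γ).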
Dong et al. [114] proposed to represent the state set with a quantum superposition state; the eigenstate
obtained by randomly observing the quantum state is the action. The probability of the eigen action is determined
by the probability amplitude, which is updated in parallel according to the rewards. The probability of the
“good” action is amplified by using iterations of Grover’s algorithm (Section 4.1). In addition, they showed
that the approach makes a good trade-off between exploration and exploitation using the probability amplitude
and can speed up learning through quantum parallelism. Paparo et al. [262] showed that the computational
complexity of a particular model, projective simulation, can be quadratically reduced. In addition, quantum
neural networks (Section 7.6) and quantum Boltzmann machines (Section 7.5.3) have been used for approximate
RL [76, 78, 99, 179, 220].
However, there remain concerns such as bias [49], energy consumption, and environmental impact [110]. Thus it
makes sense to look to quantum computing for potentially better language models for NLP.
The DisCoCat framework, developed by Coecke et al. [92], combines both meaning and grammar, something
not previously done, into a single language model utilizing category theory. For grammar, they utilized pregroups
[206], sets with relaxed group axioms, to validate sentence grammar. Semantics is represented by a vector space
model, similar to those commonly used in NLP [234]. However, DisCoCat also allows for higher-order tensors to
distinguish different parts of speech.
Coecke et al. noted that pregroups and vector spaces are asymmetric and symmetric strict monoidal categories,
respectively. Thus, they can be represented by utilizing the powerful diagrammatic framework associated
with monoidal categories [91]. The revelation made by this group [93] was that this same diagrammatic framework
is utilized by categorical quantum mechanics [3]. Because of the similarity between the two, the diagrams can
be represented by quantum circuits [93] utilizing the algebra of ZX-calculus [90]. Because of this connection and
the use of tensor operations, the group argued that this natural language model is “quantum-native,” meaning it
is more natural to run the DisCoCat model on a quantum computer instead of a classical one [93]. Coecke et al.
ran multiple experiments on NISQ devices for a variety of simple NLP tasks [104, 221, 232].
In addition, there are many ways in which QML models, such as QNNs (Section 7.6), could potentially
enhance the capacity of existing natural language neural architectures [31, 77, 311].
ripple-carry adder of Cuccaro et al. [100]. For each of these methods, they discussed the trade-off between circuit
depth, qubit count, and convergence rate. An example circuit for 3 evaluation qubits, corresponding to 23 = 8
samples (i.e., quantum queries, Section 5), is shown in Figure 1 for computing the value of a T-Bill. Increasing the
number of evaluation qubits above 3 yields smaller estimation errors than with classical Monte Carlo integration
(Section 5), demonstrating the predicted quantum speedup; see Figure 2.
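The claimed advantage rests on error scaling: classical Monte Carlo integration converges as O(1/√N) in the number of samples, while QAE converges as O(1/N) in the number of quantum queries. A sketch with constants and logarithmic factors omitted (so the crossover point shown is purely illustrative):

```python
import math

# Heuristic error scalings, constants and log factors omitted: classical
# Monte Carlo converges as O(1/sqrt(N)); amplitude estimation as O(1/N).
def mc_error(n):
    return 1.0 / math.sqrt(n)

def qae_error(n):
    return 1.0 / n

# m evaluation qubits correspond to N = 2^m quantum queries (samples).
table = [(m, 2**m, mc_error(2**m), qae_error(2**m)) for m in range(1, 7)]
for m, n, e_mc, e_qae in table:
    print(f"m={m} N={n:2d}  mc={e_mc:.4f}  qae={e_qae:.4f}")
```

Each additional evaluation qubit doubles N, halving the QAE error but shrinking the Monte Carlo error only by a factor of √2.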
Figure 1: Quantum amplitude estimation circuit for approximating the expected value of a portfolio
made up of one T-Bill. The algorithm uses 3 evaluation qubits, corresponding to 8 samples. (Image
source: [338]; use permitted under the Creative Commons Attribution 4.0 International License.)
Despite the observed speedup of the QAE-based algorithm, compared with classical Monte Carlo integration,
practical quantum advantage cannot be achieved using this algorithm on NISQ devices because of the relatively
small qubit count, qubit connectivity and quality, and decoherence limitations present in modern quantum
processors (Sections 3.3 and 5.2). Additionally, Monte Carlo for numerical integration can be efficiently parallelized,
imposing even stronger requirements on quantum algorithms in order to outperform standard classical methods.
Figure 2: Results of executing the QAE algorithm on the IBMQ 5 Yorktown (ibmqx2) chip for a range
of evaluation qubits, with 8,192 shots each. Because of the better convergence rate of the quantum
algorithm (blue line), it achieves lower error rates than does Monte Carlo (orange line) for the version
with 4 qubits, with the performance gap estimated to grow even more for larger instances on 5 and 6
qubits (blue dashed line). (Image source: [338]; use permitted under the Creative Commons Attribution
4.0 International License, http://creativecommons.org/licenses/by/4.0/)
Figure 3: Forward (a) and reverse (b) quantum annealing protocols. During reverse quantum annealing,
the system is initialized in a classical state (c, left), evolved following a “backward” annealing schedule to
some quantum superposition state. At that point the evolution is interrupted for a fixed amount of time,
allowing the system to perform a global search (c, middle). The system’s evolution then is completed by
the forward annealing schedule (c, right). Adapted from [196].
The novelty of this work by Venturelli and Kondratyev is the use of the reverse annealing protocol [196],
which was found to be on average 100+ times faster than forward annealing, as measured by the time to solution
(TTS) metric. Reverse quantum annealing, briefly mentioned in Section 6.1.1, is illustrated in Figure 3. It
is a new approach enabled by the latest D-Wave architecture designed to help escape local minima through a
combination of quantum and thermal fluctuations. The annealing schedule starts in some classical state provided
by the user, as opposed to the quantum superposition state, as in the case of a forward annealing. The transverse
field is then increased, driving backward annealing from the classical initial state to some intermediate quantum
superposition state. The annealing schedule functions A(t) and B(t) (Section 6.1.1) are then kept constant for
a fixed amount of time, allowing the system to perform the search of a global minimum. The reverse annealing
schedule is completed by removing the transverse field, effectively performing the forward annealing protocol
with the system evolving to the ground state of the problem Hamiltonian.
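The reverse-annealing schedule described above can be sketched as a piecewise-linear anneal fraction s(t); the ramp and pause durations and the turning point `s_min` are hypothetical parameters for illustration, not the D-Wave programming interface:

```python
def reverse_anneal_schedule(t, t_ramp=1.0, t_pause=2.0, s_min=0.45):
    """Piecewise-linear anneal fraction s(t) for a reverse anneal:
    s = 1 (classical start) -> s_min (backward ramp) -> hold (global search)
    -> s = 1 (forward ramp). The energy scales A(t), B(t) are functions of s."""
    if t <= t_ramp:                        # backward anneal from the classical state
        return 1.0 - (1.0 - s_min) * t / t_ramp
    if t <= t_ramp + t_pause:              # pause at constant A, B
        return s_min
    # forward anneal back to the problem Hamiltonian's ground state
    return min(1.0, s_min + (1.0 - s_min) * (t - t_ramp - t_pause) / t_ramp)
```

In a forward anneal, by contrast, s(t) would start at 0 (full transverse field) and increase monotonically to 1.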
The DW2000Q device used in [324] allows embedding up to 64 logical binary variables on a fully connected
graph; the authors considered portfolio optimization problems with 24–60 assets, with 30 randomly generated
instances for each size, and calculated the TTS. The results for both forward and reverse quantum annealing
were then compared with the industry-established genetic algorithm (GA) approach used as a classical benchmark
heuristic. The TTS for reverse annealing with the shortest annealing and pause times was found to be several
orders of magnitude smaller than for the forward annealing and GA approaches, demonstrating a promising step
toward practical applications of NISQ devices.
" #
0 0 ~rT η µ
0 0 ~T θ = ξ ,
p (12)
~r p
~ Σ w
~ ~0
where η, θ are the dual variables. This lends itself to quantum linear systems algorithms (Section 4.5).
To reiterate, solving a QLSP consists of returning a quantum state, with bounded error, that is proportional
to the solution of a linear system $A\vec{x} = \vec{b}$. Quantum algorithms for this problem on sparse and well-conditioned
matrices potentially achieve an exponential speedup in the system size N . Although a QLS algorithm cannot
provide classical access to amplitudes of the solution state while maintaining the exponential speedup in N ,
potentially useful financial statistics can be obtained from the state.
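To make the structure of Equation (12) concrete, the saddle-point system can be assembled and solved classically with dense linear algebra; a QLS algorithm would instead return a quantum state proportional to the same solution vector. The function name and inputs here are hypothetical:

```python
import numpy as np

def solve_portfolio_kkt(r, p, Sigma, mu, xi):
    """Assemble and classically solve the saddle-point system of Eq. (12):
    [[0 0 r^T], [0 0 p^T], [r p Sigma]] [eta, theta, w]^T = [mu, xi, 0]."""
    n = len(r)
    A = np.zeros((n + 2, n + 2))
    A[0, 2:] = r          # return-target constraint row
    A[1, 2:] = p          # budget constraint row
    A[2:, 0] = r          # dual variable eta couples back into the w block
    A[2:, 1] = p          # dual variable theta likewise
    A[2:, 2:] = Sigma     # covariance block
    b = np.concatenate(([mu, xi], np.zeros(n)))
    sol = np.linalg.solve(A, b)
    return sol[0], sol[1], sol[2:]   # eta, theta, portfolio weights w
```

For two assets with identity covariance, returns (0.1, 0.2), unit budget, and target return 0.15, the optimal weights come out to (0.5, 0.5).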
The QLS algorithm discussed in this section is HHL. This algorithm is typically viewed as one that is intended
for the fault-tolerant era of quantum computation. Two of the main components of the algorithm that contribute
to this constraint are quantum phase estimation (Section 4.3) and the loading of the inverse of the eigenvalue
estimations onto the amplitudes of an ancilla, called eigenvalue inversion [158].13 Yalovetzky et al. [342] developed
an algorithm called NISQ-HHL with the goal of applying it to portfolio optimization problems on NISQ devices.
While it does not retain the asymptotic speedup of HHL, the authors’ approach can potentially provide benefits
in practice. They applied their algorithm to small mean-variance portfolio optimization problems and executed
them on the trapped-ion Quantinuum System Model H1 device [264].
QPE is typically difficult to run on near-term devices because of the large number of ancillary qubits and
controlled operations required for the desired eigenvalue precision (depth scales as $O(\frac{1}{\delta})$ for precision $\delta$). An
alternative realization of QPE reduces the number of ancillary qubits to one, and replaces all controlled-phase
gates in the quantum Fourier transform component of QPE with classically controlled single-qubit gates [32].
This method requires mid-circuit measurements, ground-state resets, and quantum conditional logic (QCL). These
features are available on the Quantinuum System Model H1 device. The paper calls this method QCL-QPE. The
differences between these two QPE methods are highlighted in Figure 4.
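The single-ancilla, measurement-feedback idea behind QCL-QPE can be illustrated with a toy classical simulation: each bit of the phase is measured in turn, with earlier bits fed back as classically controlled rotations in place of the QFT's controlled-phase gates. This sketch assumes a noiseless eigenstate and a phase exactly representable in m bits (so outcomes are deterministic); it is not the Quantinuum implementation:

```python
import math

def iterative_qpe(phi, m):
    """Classically simulate m-bit single-ancilla iterative phase estimation
    for U|psi> = e^{2 pi i phi}|psi>.

    Assumes phi is exactly representable in m bits, making each
    measurement outcome deterministic."""
    bits = [0] * m
    for k in range(m, 0, -1):              # extract bits least -> most significant
        # Feedback rotation built from the bits already measured.
        feedback = 2 * math.pi * sum(bits[l - 1] / 2 ** (l - k + 1)
                                     for l in range(k + 1, m + 1))
        theta = 2 * math.pi * (2 ** (k - 1)) * phi - feedback
        p1 = math.sin(theta / 2) ** 2      # probability of measuring |1>
        bits[k - 1] = 1 if p1 > 0.5 else 0
    return bits                            # phi ~= sum_k bits[k-1] / 2^k
```

For example, the phase 0.625 = 0.101 in binary is recovered as the bit string [1, 0, 1] with three evaluation rounds but only one ancilla qubit.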
Figure 4: Circuits for estimating the eigenvalues of the unitary operator U to three bits using QPE (left
circuit) or QCL-QPE (right circuit). S is the register that U is applied to, and j is a classical register;
H refers to the Hadamard gate; and Rk , for k = 2, 3, are the standard phase gates used by the quantum
Fourier transform [253]. Adapted from [342].
With regard to eigenvalue inversion, implementing the controlled rotations on near-term devices typically
requires a multicontrolled rotation for every possible eigenvalue that can be estimated by QPE, such as the
uniformly controlled rotation gate [243]. This circuit is exponentially deep in the number of qubits. Fault-
tolerant implementations can take advantage of quantum arithmetic methods to implement efficient algorithms
for the required arcsin computation. An alternative approach for the near term is a hybrid version of HHL
[210], which the developers of NISQ-HHL extended. This approach first utilizes QPE to estimate the eigenvalues
for which controlled rotations need to be performed. While the number of controlled rotations is now reduced, this approach
does not retain the asymptotic speedup provided by HHL because of the loss of coherence. A novel procedure
was developed by the authors for scaling the matrix by a factor γ to make it easier to distinguish the eigenvalues
in the output distribution of QPE and improve performance. Figure 5 displays experimental results collected
from the Quantinuum H1 device to show the effect of the scaling procedure.14
These methods were executed on the Quantinuum H1 device for a few small portfolios (Table 2). A controlled-
SWAP test [33] was performed between the state from HHL and the solution to the linear system obtained
13. Hamiltonian simulation also contributes to this constraint.
14. Let $\Pi_\lambda$ denote the eigenspace projector for an eigenvalue $\lambda$ of the Hermitian matrix in Equation (12), and
let $|b\rangle$ be a quantum state proportional to the right side of Equation (12). To clarify, the y-axis value of the red
dot corresponding to $\lambda$, on either of the histograms in Figure 5, is $\|\Pi_\lambda |b\rangle\|_2^2$.
Figure 5: Probability distributions over the eigenvalue estimations from the QCL-QPE run using γ = 50
(left) and γ = 100 (right). The blue bars represent the experimental results on the H1 machine, with 1,000
shots (left) and 2,000 shots (right), and we compare them with the theoretical eigenvalues (classically
calculated using a numerical solver) represented with the red dots and the results from the Qiskit [9]
QASM simulator represented with the orange dots. Adapted from [342].
classically (loaded onto a quantum state) [342]. The number of H1 two-qubit ZZMax gates used is also displayed
in Table 2.
              Uniformly   NISQ-HHL   NISQ-HHL
Rotations         7           4          6
Depth           272         214        254
Simulation      0.59        0.49       0.83
H1              0.42         -         0.46

Table 2: Comparison of the number of rotations in the eigenvalue inversion, the ZZMax depth of the HHL
plus controlled-SWAP test circuits, and the inner product calculated from the Qiskit statevector
simulator and the Quantinuum H1 results with 3,000 shots. Adapted from [342].
Statistics such as risk can be computed efficiently from the quantum state. Sampling can also be performed
if the solution state is sparse enough [272].
quantum technology to transform the financial industry. Right now, the first applications that are expected
to benefit from quantum advantage are portfolio optimization, derivatives pricing, risk analysis, and several
problems in the realm of artificial intelligence and machine learning, such as fraud detection and NLP.
In this paper we have provided a comprehensive review of quantum computing for finance, specifically focusing
on quantum algorithms that can solve computationally challenging financial problems. We have described the
current state of the industry landscape. We hope this work will be used by the scientific community,
both in the industry and academia, not only as a reference, but also as a source of information to identify new
opportunities and advance the state of the art.
Competing Interests
The authors declare that they have no competing interests.
Funding
C.N.G’s work was supported in part by the U.S. Department of Energy, Office of Science, Office of Workforce
Development for Teachers and Scientists (WDTS) under the Science Undergraduate Laboratory Internships
Program (SULI). Y.A.’s work at Argonne National Laboratory was supported by the U.S. Department of Energy,
Office of Science, under contract DE-AC02-06CH11357.
Disclaimer
This paper was prepared for information purposes by the teams of researchers from the various institutions
identified above, including the Future Lab for Applied Research and Engineering (FLARE) group of JPMorgan
Chase Bank, N.A. This paper is not a product of the Research Department of JPMorgan Chase & Co. or its
affiliates. Neither JPMorgan Chase & Co. nor any of its affiliates make any explicit or implied representation
or warranty and none of them accept any liability in connection with this paper, including, but not limited to,
the completeness, accuracy, reliability of information contained herein and the potential legal, compliance, tax
or accounting effects thereof. This document is not intended as investment research or investment advice, or
a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial
product or service, or to be used in any way for evaluating the merits of participating in any transaction.
Authors’ Contributions
Dylan Herman, Cody Ning Googin, and Xiaoyuan Liu contributed equally to the manuscript. All authors
contributed to the writing and reviewed and approved the final manuscript.
Acronyms
NISQ Noisy Intermediate-Scale Quantum
AQC Adiabatic Quantum Computing
qRAM Quantum Random Access Memory
QCL Quantum Conditional Logic
VQE Variational Quantum Eigensolver
QAOA Quantum Approximate Optimization Algorithm
QML Quantum Machine Learning
PCA Principal Component Analysis
TDA Topological Data Analysis
QNN Quantum Neural Network
PQC Parameterized Quantum Circuit
GAN Generative Adversarial Network
SVM Support Vector Machine
GP Gaussian Process
NP Non-Deterministic Polynomial
VQA Variational Quantum Algorithm
QPE Quantum Phase Estimation
NLP Natural Language Processing
RL Reinforcement Learning
QAA Quantum Amplitude Amplification
QAE Quantum Amplitude Estimation
QLSP Quantum Linear Systems Problem
QLS Quantum Linear System
HHL Harrow–Hassidim–Lloyd
QSVT Quantum Singular Value Transform
QSVE Quantum Singular Value Estimation
CKS Childs, Kothari, and Somma
QA Quantum Annealing
IP Integer Programming
IQP Integer Quadratic Programming
QUBO Quadratic Unconstrained Binary Optimization
ITE Imaginary-Time Evolution
VarQITE Variational Quantum Imaginary Time Evolution
MCI Monte Carlo Integration
VaR Value at Risk
CVaR Conditional Value at Risk
CDO Collateralized Debt Obligation
ECR Economic Capital Requirement
CPU Central Processing Unit
GPU Graphics Processing Unit
References
[1] Scott Aaronson. Read the Fine Print. Nature Physics, 11(4):291–293, 04 2015. DOI: 10.1038/nphys3272.
[2] A. Abbas, D. Sutter, C. Zoufal, A. Lucchi, A. Figalli, and S. Woerner. The Power of Quantum Neural
Networks. Nature Computational Science, 1(6):403–409, 2021. DOI: 10.1038/s43588-021-00084-1.
[3] Samson Abramsky and Bob Coecke. Categorical quantum mechanics. Handbook of quantum logic and quantum
structures, 2:261–325, 2009.
[4] D. Aharonov, A. Ambainis, J. Kempe, and U. Vazirani. Quantum walks on graphs. In Proceedings of the thirty-
third annual ACM symposium on Theory of Computing, pages 50–59, 2001. DOI: 10.1145/380752.380758.
[5] Dorit Aharonov, Wim Van Dam, Julia Kempe, Zeph Landau, Seth Lloyd, and Oded Regev. Adiabatic
quantum computation is equivalent to standard quantum computation. SIAM Review, 50(4):755–787, 2008.
DOI: 10.1137/080734479.
[6] Mohiuddin Ahmed, Abdun Naser Mahmood, and Jiankun Hu. A survey of network anomaly detection
techniques. Journal of Network and Computer Applications, 60:19–31, 2016. DOI: 10.1016/j.jnca.2015.11.016.
[7] Akshay Ajagekar, Travis Humble, and Fengqi You. Quantum computing based hybrid solution strategies for
large-scale discrete-continuous optimization problems. Computers & Chemical Engineering, 132:106630, Jan
2020. ISSN 0098-1354. DOI: 10.1016/j.compchemeng.2019.106630.
[8] Tameem Albash and Daniel A. Lidar. Adiabatic quantum computation. Reviews of Modern Physics, 90(1),
Jan 2018. DOI: 10.1103/revmodphys.90.015002.
[9] Gadi Aleksandrowicz, Thomas Alexander, Panagiotis Barkoutsos, Luciano Bello, Yael Ben-Haim, David
Bucher, Francisco Jose Cabrera-Hernández, Jorge Carballo-Franquis, Adrian Chen, Chun-Fu Chen, et al.
Qiskit: An open-source framework for quantum computing. 2019. DOI: 10.5281/zenodo.2562111.
[10] Hedayat Alghassi, Amol Deshmukh, Noelle Ibrahim, Nicolas Robles, Stefan Woerner, and Christa Zoufal.
A variational quantum algorithm for the feynman-kac formula. arXiv preprint arXiv:2108.10846, 2021.
[11] Jonathan Allcock, Chang-Yu Hsieh, Iordanis Kerenidis, and Shengyu Zhang. Quantum algorithms for feed-
forward neural networks. ACM Transactions on Quantum Computing, 1(1):1–24, 2020. DOI: 10.1145/3411466.
[12] Andris Ambainis. Quantum walk algorithm for element distinctness. SIAM Journal on Computing, 37(1):
210–239, 2007. DOI: 10.1137/S0097539705447311.
[13] Andris Ambainis. Quantum search with variable times. Theory of Computing Systems, 47(3):786–807, 2010.
DOI: 10.1007/s00224-009-9219-1.
[14] Andris Ambainis. Variable time amplitude amplification and quantum algorithms for linear algebra problems.
In STACS’12 (29th Symposium on Theoretical Aspects of Computer Science), volume 14, pages 636–647.
LIPIcs, 2012. DOI: 10.4230/LIPIcs.STACS.2012.636.
[15] Andris Ambainis. Understanding quantum algorithms via query complexity. In Proceedings of the Inter-
national Congress of Mathematicians: Rio de Janeiro 2018, pages 3265–3285. World Scientific, 2018. DOI:
10.1142/9789813272880_0181.
[16] Andris Ambainis, Eric Bach, Ashwin Nayak, Ashvin Vishwanath, and John Watrous. One-dimensional
quantum walks. In Proceedings of the thirty-third annual ACM symposium on Theory of Computing, pages
37–49, 2001. DOI: 10.1145/380752.380757.
[17] Andris Ambainis, Julia Kempe, and Alexander Rivosh. Coins make quantum walks faster. In Proceedings
of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’05, page 1099–1108, USA,
2005. Society for Industrial and Applied Mathematics. ISBN 0898715857.
[18] Andris Ambainis, András Gilyén, Stacey Jeffery, and Martins Kokainis. Quadratic speedup for finding
marked vertices by quantum walks. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory
of Computing, page 412–424, New York, NY, USA, 2020. Association for Computing Machinery. ISBN
9781450369794. DOI: 10.1145/3357713.3384252.
[19] Mohammad H. Amin, Evgeny Andriyash, Jason Rolfe, Bohdan Kulchytskyy, and Roger Melko. Quantum
Boltzmann machine. Phys. Rev. X, 8:021050, May 2018. DOI: 10.1103/PhysRevX.8.021050.
[20] Matthew Amy, Olivia Di Matteo, Vlad Gheorghiu, Michele Mosca, Alex Parent, and John Schanck. Esti-
mating the cost of generic quantum pre-image attacks on SHA-2 and SHA-3. In International Conference on
Selected Areas in Cryptography, pages 317–337. Springer, 2016. DOI: 10.1007/978-3-319-69453-5_18.
[21] Simon Apers and Ronald de Wolf. Quantum speedup for graph sparsification, cut approximation and
Laplacian solving. In 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS),
pages 637–648, 2020. DOI: 10.1109/FOCS46700.2020.00065.
[22] Simon Apers, András Gilyén, and Stacey Jeffery. A unified framework of quantum walk search. arXiv
preprint arXiv:1912.04233, 2019.
[23] Sanjeev Arora and Boaz Barak. Computational complexity: a modern approach. Cambridge University
Press, 2009. DOI: 10.1017/CBO9780511804090.
[24] Sanjeev Arora and Satyen Kale. A combinatorial, primal-dual approach to semidefinite programs. J. ACM,
63(2), may 2016. ISSN 0004-5411. DOI: 10.1145/2837020.
[25] Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C. Bardin, Rami Barends, Rupak Biswas,
Sergio Boixo, Fernando G. S. L. Brandao, David A. Buell, et al. Quantum supremacy using a programmable
superconducting processor. Nature, 574(7779):505–510, 2019. DOI: 10.1038/s41586-019-1666-5.
[26] Ryan Babbush, Jarrod R. McClean, Michael Newman, Craig Gidney, Sergio Boixo, and Hartmut Neven.
Focus beyond quadratic speedups for error-corrected quantum advantage. PRX Quantum, 2(1), Mar 2021.
ISSN 2691-3399. DOI: 10.1103/prxquantum.2.010103.
[27] Tucker Bailey, Soumya Banerjee, Christopher Feeney, and Heather Hogsett. Cybersecurity: Emerging
challenges and solutions for the boards of financial-services companies, 2020.
[28] William P Baritompa, David W Bulger, and Graham R Wood. Grover’s quantum algorithm applied to
global optimization. SIAM Journal on Optimization, 15(4):1170–1184, 2005. DOI: 10.1137/040605072.
[29] Ole E. Barndorff-Nielsen. Normal inverse Gaussian distributions and stochastic volatility modelling. Scan-
dinavian Journal of Statistics, 24(1):1–13, 1997. DOI: 10.1111/1467-9469.00045.
[30] Afrad Basheer, A Afham, and Sandeep K Goyal. Quantum k-nearest neighbors algorithm. arXiv preprint
arXiv:2003.09187, 2020.
[31] Johannes Bausch. Recurrent quantum neural networks. In H. Larochelle, M. Ranzato, R. Hadsell, M. F.
Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 1368–1379.
Curran Associates, Inc., 2020.
[32] Stephane Beauregard. Circuit for Shor’s algorithm using 2n+3 qubits. Quantum Info. Comput., 3(2):
175–185, Mar 2003. DOI: 10.26421/QIC3.2-8.
[33] R. G. Beausoleil, W. J. Munro, T. P. Spiller, and W. K. van Dam. Tests of quantum information. US Patent
7,559,101 B2, 2008.
[34] Kerstin Beer, Dmytro Bondarenko, Terry Farrelly, Tobias J. Osborne, Robert Salzmann, Daniel Scheier-
mann, and Ramona Wolf. Training deep quantum neural networks. Nature Communications, 11(1), Feb
2020. ISSN 2041-1723. DOI: 10.1038/s41467-020-14454-2.
[35] Kerstin Beer, Dmytro Bondarenko, Terry Farrelly, Tobias J. Osborne, Robert Salzmann, Daniel Scheier-
mann, and Ramona Wolf. Training deep quantum neural networks. Nature Communications, 11(1), Feb
2020. ISSN 2041-1723. DOI: 10.1038/s41467-020-14454-2.
[36] S. Benati and R. Rizzi. A mixed integer linear programming formulation of the optimal mean/value-at-risk
portfolio problem. European Journal of Operational Research, 2007. DOI: 10.1016/j.ejor.2005.07.020.
[37] Marcello Benedetti, John Realpe-Gómez, Rupak Biswas, and Alejandro Perdomo-Ortiz. Estimation of
effective temperatures in quantum annealers for sampling applications: A case study with possible applications
in deep learning. Physical Review A, 94(2):022308, 2016. DOI: 10.1103/PhysRevA.94.022308.
[38] Marcello Benedetti, Delfina Garcia-Pintos, Oscar Perdomo, Vicente Leyton-Ortega, Yunseong Nam, and
Alejandro Perdomo-Ortiz. A generative modeling approach for benchmarking and training shallow quantum
circuits. npj Quantum Information, 5(1), May 2019. ISSN 2056-6387. DOI: 10.1038/s41534-019-0157-8.
[39] Charles H. Bennett, Ethan Bernstein, Gilles Brassard, and Umesh Vazirani. Strengths and weaknesses of
quantum computing. SIAM Journal on Computing, 26(5):1510–1523, 1997. DOI: 10.1137/s0097539796300933.
[40] D. J. Bernstein and T. Lange. Post-quantum cryptography. Nature, 2017. DOI: 10.1038/nature23461.
[41] Dominic W. Berry, Graeme Ahokas, Richard Cleve, and Barry C. Sanders. Efficient quantum algorithms for
simulating sparse Hamiltonians. Communications in Mathematical Physics, 270(2):359–371, Dec 2006. ISSN
1432-0916. DOI: 10.1007/s00220-006-0150-x.
[42] Dominic W Berry, Andrew M Childs, Richard Cleve, Robin Kothari, and Rolando D Somma. Exponential
improvement in precision for simulating sparse Hamiltonians. In Proceedings of the forty-sixth annual ACM
symposium on Theory of Computing, pages 283–292, 2014. DOI: 10.1145/2591796.2591854.
[43] Dominic W. Berry, Andrew M. Childs, and Robin Kothari. Hamiltonian simulation with nearly optimal
dependence on all parameters. 2015 IEEE 56th Annual Symposium on Foundations of Computer Science,
Oct 2015. DOI: 10.1109/focs.2015.54.
[44] Dimitris Bertsimas and John N Tsitsiklis. Introduction to linear optimization, volume 6. Athena Scientific
Belmont, MA, 1997.
[45] Jacob D. Biamonte and Peter J. Love. Realizable Hamiltonians for universal adiabatic quantum computers.
Physical Review A, 78(1), Jul 2008. ISSN 1094-1622. DOI: 10.1103/physreva.78.012352.
[46] James Bicksler and Andrew H. Chen. An economic analysis of interest rate swaps. The Journal of Finance,
41(3):645–655, 1986. DOI: 10.2307/2328495.
[47] Lennart Bittel and Martin Kliesch. Training variational quantum algorithms is NP-hard. Phys. Rev. Lett.,
127(12):120502, 2021. DOI: 10.1103/PhysRevLett.127.120502.
[48] Fischer Black and Myron Scholes. The pricing of options and corporate liabilities. In World Scientific
Reference on Contingent Claims Analysis in Corporate Finance: Volume 1: Foundations of CCA and Equity
Valuation, pages 3–21. World Scientific, 2019. DOI: 10.1142/9789814759588_0001.
[49] Su Lin Blodgett, Solon Barocas, Hal Daumé III, and Hanna Wallach. Language (technology) is power: A crit-
ical survey of “bias” in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational
Linguistics. Association for Computational Linguistics, Jul 2020. DOI: 10.18653/v1/2020.acl-main.485.
[50] Tomas Boothby, Andrew D King, and Aidan Roy. Fast clique minor generation in Chimera qubit connectivity
graphs. Quantum Information Processing, 15(1):495–508, 2016. DOI: 10.1007/s11128-015-1150-6.
[51] S. E. Borujeni, S. Nannapaneni, N. H. Nguyen, E. C. Behrman, and J. E. Steck. Quantum circuit represen-
tation of Bayesian networks. Expert Systems with Applications, 2021. DOI: 10.1016/j.eswa.2021.114768.
[52] Sima E Borujeni, Nam H Nguyen, Saideep Nannapaneni, Elizabeth C Behrman, and James E Steck.
Experimental evaluation of quantum Bayesian networks on IBM QX hardware. In 2020 IEEE International
Conference on Quantum Computing and Engineering (QCE), pages 372–378. IEEE, 2020. DOI:
10.1109/QCE49297.2020.00053.
[53] Adam Bouland, Wim van Dam, Hamed Joorati, Iordanis Kerenidis, and Anupam Prakash. Prospects and
challenges of quantum finance. arXiv preprint arXiv:2011.06492, 2020.
[54] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004. DOI:
10.1017/CBO9780511804441.
[55] M. Boyer, G. Brassard, P. Høyer, and A. Tapp. Tight bounds on quantum searching. Fortschritte der Physik:
Progress of Physics, 46(4-5):493–505, 1998. DOI: 10.1002/(SICI)1521-3978(199806)46:4/5<493::AID-
PROP493>3.0.CO;2-P.
[56] Phelim P Boyle. Options: A Monte Carlo approach. Journal of Financial Economics, 4(3):323–338, 1977.
DOI: 10.1016/0304-405X(77)90005-8.
[57] Lee Braine, Daniel J. Egger, Jennifer Glick, and Stefan Woerner. Quantum algorithms for mixed binary
optimization applied to transaction settlement. IEEE Transactions on Quantum Engineering, 2:1–8, 2021.
DOI: 10.1109/TQE.2021.3063635.
[58] Fernando G. S. L. Brandão, Amir Kalev, Tongyang Li, Cedric Yen-Yu Lin, Krysta M. Svore, and Xiaodi
Wu. Quantum SDP solvers: Large speed-ups, optimality, and applications to quantum learning. In 46th
International Colloquium on Automata, Languages, and Programming (ICALP 2019), volume 132 of Leibniz
International Proceedings in Informatics (LIPIcs), pages 27:1–27:14. Schloss Dagstuhl–Leibniz-Zentrum fuer
Informatik, 2019. ISBN 978-3-95977-109-2. DOI: 10.4230/LIPIcs.ICALP.2019.27.
[59] Fernando G. S. L. Brandão and Krysta M. Svore. Quantum speed-ups for solving semidefinite programs.
In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 415–426, 2017.
DOI: 10.1109/FOCS.2017.45.
[60] Gilles Brassard, Peter Høyer, Michele Mosca, and Alain Tapp. Quantum amplitude amplification and
estimation. Quantum Computation and Information, page 53–74, 2002. DOI: 10.1090/conm/305/05215.
[61] Carlos Bravo-Prieto, Ryan LaRose, Marco Cerezo, Yigit Subasi, Lukasz Cincio, and Patrick J Coles. Vari-
ational quantum linear solver. arXiv preprint arXiv:1909.05820, 2019.
[62] Sergey Bravyi, Sarah Sheldon, Abhinav Kandala, David C. McKay, and Jay M. Gambetta. Mitigating mea-
surement errors in multiqubit experiments. Phys. Rev. A, 103(4), 2021. DOI: 10.1103/physreva.103.042605.
[63] Colin D. Bruzewicz, John Chiaverini, Robert McConnell, and Jeremy M. Sage. Trapped-ion quantum
computing: Progress and challenges. Applied Physics Reviews, 6(2):021314, 2019. DOI: 10.1063/1.5088164.
[64] Samantha Buck, Robin Coleman, and Hayk Sargsyan. Continuous variable quantum algorithms: an intro-
duction. arXiv preprint arXiv:2107.02151, 2021.
[65] David Bulger, William P. Baritompa, and Graham R. Wood. Implementing pure adaptive search with
Grover’s quantum algorithm. Journal of Optimization Theory and Applications, 116(3):517–529, 2003. DOI:
10.1023/A:1023061218864.
[66] Jun Cai, William G Macready, and Aidan Roy. A practical heuristic for finding graph minors. arXiv preprint
arXiv:1406.2741, 2014.
[67] Yudong Cao, Gian Giacomo Guerreschi, and Alán Aspuru-Guzik. Quantum neuron: an elementary building
block for machine learning on quantum computers. arXiv preprint arXiv:1711.11240, 2017.
[68] D. Ceperley and B. Alder. Quantum Monte Carlo. Science, 231(4738):555–560, 1986. DOI: 10.1126/sci-
ence.231.4738.555.
[69] M. Cerezo, Andrew Arrasmith, Ryan Babbush, Simon C. Benjamin, Suguru Endo, Keisuke Fujii, Jarrod R.
McClean, Kosuke Mitarai, Xiao Yuan, Lukasz Cincio, et al. Variational quantum algorithms. Nature
Reviews Physics, 3(9):625–644, Aug 2021. ISSN 2522-5820. DOI: 10.1038/s42254-021-00348-9.
[70] Shouvanik Chakrabarti, Rajiv Krishnakumar, Guglielmo Mazzola, Nikitas Stamatopoulos, Stefan Woerner,
and William J. Zeng. A threshold for quantum advantage in derivative pricing. Quantum, 5:463, Jun 2021.
ISSN 2521-327X. DOI: 10.22331/q-2021-06-01-463.
[71] Shantanav Chakraborty, András Gilyén, and Stacey Jeffery. The power of block-encoded matrix powers:
improved regression techniques via faster Hamiltonian simulation. arXiv preprint arXiv:1804.01973, 2018.
[72] Nicholas Chancellor, Stefan Zohren, and Paul A Warburton. Circuit design for multi-body interactions
in superconducting quantum annealing systems with applications to a scalable architecture. npj Quantum
Information, 3(1):1–7, 2017. DOI: 10.1038/s41534-017-0022-6.
[73] Guillaume Chapuis, Hristo Djidjev, Georg Hahn, and Guillaume Rizk. Finding maximum cliques on the
D-Wave quantum annealer. Journal of Signal Processing Systems, 91(3):363–377, 2019. DOI: 10.1007/s11265-
018-1357-8.
[74] L. Chen, M. Pelger, and J. Zhu. Deep learning in asset pricing. SSRN, 2020. DOI: 10.2139/ssrn.3350138.
[75] Ming-Cheng Chen, Ming Gong, Xiaosi Xu, Xiao Yuan, Jian-Wen Wang, Can Wang, Chong Ying, Jin
Lin, Yu Xu, Yulin Wu, et al. Demonstration of adiabatic variational quantum computing with a su-
perconducting quantum coprocessor. Physical Review Letters, 125(18), Oct 2020. ISSN 1079-7114. DOI:
10.1103/physrevlett.125.180501.
[76] Samuel Yen-Chi Chen, Chao-Han Huck Yang, Jun Qi, Pin-Yu Chen, Xiaoli Ma, and Hsi-Sheng Goan.
Variational quantum circuits for deep reinforcement learning. IEEE Access, 8:141007–141024, 2020. DOI:
10.1109/ACCESS.2020.3010470.
[77] Samuel Yen-Chi Chen, Shinjae Yoo, and Yao-Lung L Fang. Quantum long short-term memory. arXiv
preprint arXiv:2009.01783, 2020.
[78] Samuel Yen-Chi Chen, Chih-Min Huang, Chia-Wei Hsing, Hsi-Sheng Goan, and Ying-Jer Kao. Variational
quantum reinforcement learning via evolutionary optimization. Machine Learning: Science and Technology,
2021. DOI: 10.1088/2632-2153/ac4559.
[79] Boris V Cherkassky and Andrew V Goldberg. Negative-cycle detection algorithms. Mathematical Program-
ming, 85(2), 1999. DOI: 10.1007/s101070050058.
[80] Nai-Hui Chia, András Gilyén, Tongyang Li, Han-Hsuan Lin, Ewin Tang, and Chunhao Wang. Sampling-
based sublinear low-rank matrix arithmetic framework for dequantizing quantum machine learning. Pro-
ceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, Jun 2020. DOI:
10.1145/3357713.3384314.
[81] Andrew M. Childs. On the relationship between continuous- and discrete-time quantum walk. Communica-
tions in Mathematical Physics, 294(2):581–603, Oct 2009. ISSN 1432-0916. DOI: 10.1007/s00220-009-0930-1.
[82] Andrew M. Childs and Jason M. Eisenberg. Quantum algorithms for subset finding. Quantum Info. Comput.,
5(7):593–604, Nov 2005. ISSN 1533-7146.
[83] Andrew M Childs and Wim Van Dam. Quantum algorithms for algebraic problems. Reviews of Modern
Physics, 82(1):1, 2010. DOI: 10.1103/RevModPhys.82.1.
[84] Andrew M. Childs, Edward Farhi, and Sam Gutmann. An example of the difference between quantum and
classical random walks. Quantum Information Processing, 2002. DOI: 10.1023/a:1019609420309.
[85] Andrew M. Childs, Richard Cleve, Enrico Deotto, Edward Farhi, Sam Gutmann, and Daniel A. Spielman.
Exponential algorithmic speedup by a quantum walk. Proceedings of the thirty-fifth ACM symposium on
Theory of computing - STOC ’03, 2003. DOI: 10.1145/780542.780552.
[86] Andrew M. Childs, Robin Kothari, and Rolando D. Somma. Quantum algorithm for systems of linear equa-
tions with exponentially improved dependence on precision. SIAM Journal on Computing, 46(6):1920–1950,
Jan 2017. ISSN 1095-7111. DOI: 10.1137/16m1087072.
[87] Andrew M. Childs, Yuan Su, Minh C. Tran, Nathan Wiebe, and Shuchen Zhu. Theory of Trotter error with
commutator scaling. Physical Review X, 11(1), Feb 2021. ISSN 2160-3308. DOI: 10.1103/physrevx.11.011020.
[88] Jaeho Choi, Seunghyeok Oh, and Joongheon Kim. A tutorial on quantum graph recurrent neural network
(QGRNN). In 2021 International Conference on Information Networking (ICOIN), pages 46–49, 2021. DOI:
10.1109/ICOIN50884.2021.9333917.
[89] R. Cleve, A. Ekert, C. Macchiavello, and M. Mosca. Quantum algorithms revisited. Proceedings of the Royal
Society of London. Series A: Mathematical, Physical and Engineering Sciences, 454(1969):339–354, Jan 1998.
ISSN 1471-2946. DOI: 10.1098/rspa.1998.0164.
[90] Bob Coecke and Ross Duncan. Interacting quantum observables: categorical algebra and diagrammatics.
New Journal of Physics, 13(4):043016, Apr 2011. ISSN 1367-2630. DOI: 10.1088/1367-2630/13/4/043016.
[91] Bob Coecke and Eric Oliver Paquette. Categories for the practising physicist. In New structures for physics,
pages 173–286. Springer, 2010. DOI: 10.1007/978-3-642-12821-9_3.
[92] Bob Coecke, Mehrnoosh Sadrzadeh, and Stephen Clark. Mathematical foundations for a compositional
distributional model of meaning. arXiv preprint arXiv:1003.4394, 2010.
[93] Bob Coecke, Giovanni de Felice, Konstantinos Meichanetzidis, and Alexis Toumi. Foundations for near-term
quantum natural language processing. arXiv preprint arXiv:2012.03755, 2020.
[94] Iris Cong, Soonwon Choi, and Mikhail D. Lukin. Quantum convolutional neural networks. Nature Physics,
15(12):1273–1278, 2019. DOI: 10.1038/s41567-019-0648-8.
[95] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to algorithms.
MIT press, 2009.
[96] Gerard Cornuejols, Marshall L Fisher, and George L Nemhauser. Exceptional paper—location of bank
accounts to optimize float: An analytic study of exact and approximate algorithms. Management Science, 23
(8):789–810, 1977. DOI: 10.1287/mnsc.23.8.789.
[97] Pedro Costa, Dong An, Yuval R Sanders, Yuan Su, Ryan Babbush, and Dominic W Berry. Optimal scaling
quantum linear systems solver via discrete adiabatic theorem. arXiv preprint arXiv:2111.08152, 2021.
[98] Brian Coyle, Maxwell Henderson, Justin Chan Jin Le, Niraj Kumar, Marco Paini, and Elham Kashefi.
Quantum versus classical generative modelling in finance. Quantum Science and Technology, 6(2):024013,
2021. DOI: 10.1088/2058-9565/abd3db.
[99] Daniel Crawford, Anna Levit, Navid Ghadermarzy, Jaspreet S Oberoi, and Pooya Ronagh. Reinforcement
learning using quantum Boltzmann machines. arXiv preprint arXiv:1612.05695, 2016.
[100] Steven A Cuccaro, Thomas G Draper, Samuel A Kutin, and David Petrie Moulton. A new quantum
ripple-carry addition circuit. arXiv preprint quant-ph/0410184, 2004.
[101] Ammar Daskin. Quantum spectral clustering through a biased phase estimation algorithm. TWMS Journal
of Applied and Engineering Mathematics, 10(1):24–33, 2017.
[102] Xolani Dastile, Turgay Celik, and Moshe Potsane. Statistical and machine learning models in credit scoring:
A systematic literature survey. Applied Soft Computing, 91:106263, 2020. DOI: 10.1016/j.asoc.2020.106263.
[103] Prasanna Date and Thomas Potok. Adiabatic quantum linear regression. Scientific reports, 11(1):21905,
2021. DOI: 10.1038/s41598-021-01445-6.
[104] Giovanni de Felice, Alexis Toumi, and Bob Coecke. DisCoPy: Monoidal categories in Python. Elec-
tronic Proceedings in Theoretical Computer Science, 333:183–197, Feb 2021. ISSN 2075-2180. DOI:
10.4204/eptcs.333.13.
[105] Freddy Delbaen and Walter Schachermayer. The mathematics of arbitrage. Springer Science & Business
Media, 2006. DOI: 10.1007/978-3-540-31299-4.
[106] Victor DeMiguel, Lorenzo Garlappi, Francisco Nogales, and Raman Uppal. A generalized approach to
portfolio optimization: Improving performance by constraining portfolio norms. Management Science, 55(5):
798–812, May 2009. DOI: 10.1287/mnsc.1080.0986.
[107] Vasil S. Denchev, Sergio Boixo, Sergei V. Isakov, Nan Ding, Ryan Babbush, Vadim Smelyanskiy, John
Martinis, and Hartmut Neven. What is the computational value of finite-range tunneling? Physical Review
X, 6(3), Aug 2016. ISSN 2160-3308. DOI: 10.1103/physrevx.6.031015.
[108] Danial Dervovic, Mark Herbster, Peter Mountney, Simone Severini, Naïri Usher, and Leonard Wossnig.
Quantum linear systems algorithms: a primer. arXiv preprint arXiv:1802.08227, 2018.
[109] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirec-
tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American
Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long
and Short Papers). Association for Computational Linguistics, Jun 2019. DOI: 10.18653/v1/N19-1423.
[110] Payal Dhar. The carbon impact of artificial intelligence. Nature Machine Intelligence, 2(8):423–425, 2020.
DOI: 10.1038/s42256-020-0219-9.
[111] Vivek Dixit, Raja Selvarajan, Muhammad A Alam, Travis S Humble, and Sabre Kais. Training re-
stricted Boltzmann machines with a D-Wave quantum annealer. Front. Phys., 9:589626, 2021. DOI:
10.3389/fphy.2021.589626.
[112] John D. Dollard and Charles N. Friedman. Product Integration with Application to Differential
Equations. Encyclopedia of Mathematics and its Applications. London: Addison-Wesley, 1984. DOI:
10.1017/CBO9781107340701.
[113] Krzysztof Domino, Akash Kundu, Özlem Salehi, and Krzysztof Krawiec. Quadratic and higher-order
unconstrained binary optimization of railway dispatching problem for quantum computing. arXiv preprint
arXiv:2107.03234, 2021.
[114] Daoyi Dong, Chunlin Chen, Hanxiong Li, and Tzyh-Jong Tarn. Quantum reinforcement learning. IEEE
Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 38(5):1207–1220, 2008. DOI:
10.1109/TSMCB.2008.925743.
[115] Alexander Dranov, Johannes Kellendonk, and Rudolf Seiler. Discrete time adiabatic theorems for quantum
mechanical systems. Journal of Mathematical Physics, 39(3):1340–1349, 1998. DOI: 10.1063/1.532382.
[116] Maike Drieb-Schön, Younes Javanmard, Kilian Ender, and Wolfgang Lechner. Parity quantum optimiza-
tion: Encoding constraints. arXiv preprint arXiv:2105.06235, 2021.
[117] Darrell Duffie and Ming Huang. Swap rates and credit quality. The Journal of Finance, 51(3):921–949,
1996. DOI: 10.2307/2329227.
[118] Christoph Dürr and Peter Høyer. A quantum algorithm for finding the minimum. arXiv preprint quant-
ph/9607014, 1996.
[119] D. J. Egger, R. Garcia Gutierrez, J. Mestre, and S. Woerner. Credit risk analysis using quantum computers.
IEEE Transactions on Computers, 70(12):2136–2145, Dec 2021. DOI: 10.1109/TC.2020.3038063.
[120] Matthew Elliott, Benjamin Golub, and Matthew O. Jackson. Financial networks and contagion. American
Economic Review, 104(10):3115–53, October 2014. DOI: 10.1257/aer.104.10.3115.
[121] Kilian Ender, Roeland ter Hoeven, Benjamin E Niehoff, Maike Drieb-Schön, and Wolfgang Lechner. Parity
quantum optimization: Compiler. arXiv preprint arXiv:2105.06233, 2021.
[122] Arturo Estrella and Frederic S. Mishkin. Predicting U.S. recessions: Financial variables as leading indica-
tors. The Review of Economics and Statistics, 1998.
[123] Edward Farhi and Sam Gutmann. Quantum computation and decision trees. Physical Review A, 58(2):
915–928, Aug 1998. ISSN 1094-1622. DOI: 10.1103/physreva.58.915.
[124] Edward Farhi and Hartmut Neven. Classification with quantum neural networks on near term processors.
arXiv preprint arXiv:1802.06002, 2018.
[125] Edward Farhi, Jeffrey Goldstone, Sam Gutmann, and Michael Sipser. Quantum computation by adiabatic
evolution. arXiv preprint quant-ph/0001106, 2000.
[126] Edward Farhi, Jeffrey Goldstone, and Sam Gutmann. Quantum adiabatic evolution algorithms versus
simulated annealing. arXiv preprint quant-ph/0201031, 2002.
[127] Edward Farhi, Jeffrey Goldstone, and Sam Gutmann. A quantum approximate optimization algorithm.
arXiv preprint arXiv:1411.4028, 2014.
[128] Michael Fellner, Kilian Ender, Roeland ter Hoeven, and Wolfgang Lechner. Parity quantum optimization:
Benchmarks. arXiv preprint arXiv:2105.06240, 2021.
[129] Luciano Floridi and Massimo Chiriatti. GPT-3: Its nature, scope, limits, and consequences. Minds and
Machines, 30:1–14, 2020. DOI: 10.1007/s11023-020-09548-1.
[130] A. G. Fowler, M. Mariantoni, J. M. Martinis, and A. N. Cleland. Surface codes: Towards practical large-
scale quantum computation. Physical Review A, 86(3), Sep 2012. DOI: 10.1103/physreva.86.032324.
[131] Austin G Fowler and John M Martinis. Quantifying the effects of local many-qubit errors and nonlocal two-
qubit errors on the surface code. Physical Review A, 89(3):032316, 2014. DOI: 10.1103/PhysRevA.89.032316.
[132] Alan Frieze, Ravi Kannan, and Santosh Vempala. Fast Monte-Carlo algorithms for finding low-rank ap-
proximations. Journal of the ACM (JACM), 51(6):1025–1041, 2004. DOI: 10.1145/1039488.1039494.
[133] Alexey Galda, Xiaoyuan Liu, Danylo Lykov, Yuri Alexeev, and Ilya Safro. Transferability of optimal QAOA
parameters between random graphs. In 2021 IEEE International Conference on Quantum Computing and
Engineering (QCE), pages 171–180. IEEE, 2021. DOI: 10.1109/QCE52317.2021.00034.
[134] Craig Gidney and Martin Ekerå. How to factor 2048 bit RSA integers in 8 hours using 20 million noisy
qubits. Quantum, 5:433, Apr 2021. ISSN 2521-327X. DOI: 10.22331/q-2021-04-15-433.
[135] Austin Gilliam, Stefan Woerner, and Constantin Gonciulea. Grover adaptive search for constrained poly-
nomial binary optimization. Quantum, 5:428, April 2021. ISSN 2521-327X. DOI: 10.22331/q-2021-04-08-428.
[136] András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe. Quantum singular value transformation and
beyond: exponential improvements for quantum matrix arithmetics. Proceedings of the 51st Annual ACM
SIGACT Symposium on Theory of Computing, Jun 2019. DOI: 10.1145/3313276.3316366.
[137] Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone. Quantum random access memory. Physical Review
Letters, 100(16), Apr 2008. ISSN 1079-7114. DOI: 10.1103/physrevlett.100.160501.
[138] Nicolas Gisin, Grégoire Ribordy, Wolfgang Tittel, and Hugo Zbinden. Quantum cryptography. Reviews of
Modern Physics, 74(1):145, 2002. DOI: 10.1103/RevModPhys.74.145.
[139] Tudor Giurgica-Tiron, Iordanis Kerenidis, Farrokh Labib, Anupam Prakash, and William Zeng. Low depth
algorithms for quantum amplitude estimation. arXiv preprint arXiv:2012.03348, 2020.
[140] Tudor Giurgica-Tiron, Sonika Johri, Iordanis Kerenidis, Jason Nguyen, Neal Pisenti, Anupam Prakash,
Ksenia Sosnova, Ken Wright, and William Zeng. Low depth amplitude estimation on a trapped ion quantum
computer. arXiv preprint arXiv:2109.09685, 2021.
[141] Paul Glasserman. Monte Carlo methods in financial engineering, volume 53. Springer, 2004. DOI:
10.1007/978-0-387-21617-1.
[142] Niladri Gomes, Anirban Mukherjee, Feng Zhang, Thomas Iadecola, Cai-Zhuang Wang, Kai-Ming Ho,
Peter P Orth, and Yong-Xin Yao. Adaptive variational quantum imaginary time evolution approach for ground
state preparation. Advanced Quantum Technologies, page 2100114, 2021. DOI: 10.1002/qute.202100114.
[143] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.
[144] Timothy Goodrich, Blair D. Sullivan, and T. Humble. Optimizing adiabatic quantum program compilation
using a graph-theoretic framework. Quantum Information Processing, 17:1–26, 2018. DOI: 10.1007/s11128-
018-1863-4.
[145] Ilene Grabel. Predicting financial crisis in developing economies: astronomy or astrology? Eastern Eco-
nomic Journal, 29(2):243–258, 2003.
[146] Erica Grant, Travis S. Humble, and Benjamin Stump. Benchmarking quantum annealing controls with
portfolio optimization. Physical Review Applied, 15(1), Jan 2021. DOI: 10.1103/physrevapplied.15.014012.
[147] Sander Gribling, Iordanis Kerenidis, and Dániel Szilágyi. Improving quantum linear system solvers via a
gradient descent perspective. arXiv preprint arXiv:2109.04248, 2021.
[148] Dmitry Grinko, Julien Gacon, Christa Zoufal, and Stefan Woerner. Iterative quantum amplitude estima-
tion. npj Quantum Information, 7(1), Mar 2021. ISSN 2056-6387. DOI: 10.1038/s41534-021-00379-1.
[149] Lov Grover and Terry Rudolph. Creating superpositions that correspond to efficiently integrable probability
distributions. arXiv preprint quant-ph/0208112, 2002.
[150] Lov K Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the twenty-
eighth annual ACM symposium on Theory of computing, pages 212–219, 1996. DOI: 10.1145/237814.237866.
[151] Lov K. Grover. Fixed-point quantum search. Phys. Rev. Lett., 95(15):150501, 2005. DOI: 10.1103/PhysRevLett.95.150501.
[152] Shihao Gu, Bryan Kelly, and Dacheng Xiu. Empirical asset pricing via machine learning. The Review of
Financial Studies, 33(5):2223–2273, 2020. DOI: 10.1093/rfs/hhaa009.
[153] Ming-Chao Guo, Hai-Ling Liu, Yong-Mei Li, Wen-Min Li, Su-Juan Qin, Qiao-Yan Wen, and Fei Gao.
Quantum algorithms for anomaly detection using amplitude estimation. arXiv preprint arXiv:2109.13820,
2021.
[154] Casper Gyurik, Chris Cade, and Vedran Dunjko. Towards quantum advantage via topological data analysis.
arXiv preprint arXiv:2005.02607, 2020.
[155] Stuart Hadfield, Zhihui Wang, Bryan O’Gorman, Eleanor Rieffel, Davide Venturelli, and Rupak Biswas.
From the quantum approximate optimization algorithm to a quantum alternating operator ansatz. Algorithms,
12(2):34, Feb 2019. ISSN 1999-4893. DOI: 10.3390/a12020034.
[156] Thomas Häner, Martin Roetteler, and Krysta M Svore. Optimizing quantum circuits for arithmetic. arXiv
preprint arXiv:1805.12445, 2018.
[157] Aram W Harrow. Small quantum computers and large classical data sets. arXiv preprint arXiv:2004.00026,
2020.
[158] Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd. Quantum algorithm for linear systems of equations.
Phys. Rev. Lett., 103:150502, Oct 2009. DOI: 10.1103/PhysRevLett.103.150502.
[159] Vojtěch Havlíček, Antonio D. Córcoles, Kristan Temme, Aram W. Harrow, Abhinav Kandala, Jerry M.
Chow, and Jay M. Gambetta. Supervised learning with quantum-enhanced feature spaces. Nature, 567(7747):
209–212, Mar 2019. ISSN 1476-4687. DOI: 10.1038/s41586-019-0980-2.
[160] Brett Hemenway and Sanjeev Khanna. Sensitivity and computational complexity in financial networks.
Algorithmic Finance, 5(3-4):95–110, 2016. DOI: 10.3233/AF-160166.
[161] Maxwell Henderson, Samriddhi Shakya, Shashindra Pradhan, and Tristan Cook. Quanvolutional neural
networks: powering image recognition with quantum circuits. Quantum Machine Intelligence, 2(1):1–9, 2020.
DOI: 10.1007/s42484-020-00012-y.
[162] Loïc Henriet, Lucas Beguin, Adrien Signoles, Thierry Lahaye, Antoine Browaeys, Georges-Olivier Reymond,
and Christophe Jurczak. Quantum computing with neutral atoms. Quantum, 4:327, September 2020. ISSN
2521-327X. DOI: 10.22331/q-2020-09-21-327.
[163] Steven Herbert. No quantum speedup with Grover-Rudolph state preparation for quantum Monte Carlo
integration. Physical Review E, 103(6), Jun 2021. ISSN 2470-0053. DOI: 10.1103/physreve.103.063302.
[164] Steven Herbert. Quantum Monte Carlo integration: The full advantage in minimal circuit depth. arXiv
preprint arXiv:2105.09100, 2021.
[165] Daniel Herr, Benjamin Obert, and Matthias Rosenkranz. Anomaly detection with variational quantum
generative adversarial networks. Quantum Science and Technology, 6(4):045004, Jul 2021. ISSN 2058-9565.
DOI: 10.1088/2058-9565/ac0d4d.
[166] Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. Neural computation,
14(8):1771–1800, 2002. DOI: 10.1162/089976602760128018.
[167] Mark Hodson, Brendan Ruck, Hugh Ong, David Garvin, and Stefan Dulman. Portfolio rebalancing exper-
iments using the quantum alternating operator ansatz. arXiv preprint arXiv:1911.05296, 2019.
[168] Tito Homem-de-Mello and Güzin Bayraksan. Monte Carlo sampling-based methods for stochastic opti-
mization. Surveys in Operations Research and Management Science, 19(1):56–85, 2014. ISSN 1876-7354.
DOI: 10.1016/j.sorms.2014.05.001.
[169] L. Jeff Hong, Zhaolin Hu, and Guangwu Liu. Monte Carlo methods for value-at-risk and conditional
value-at-risk: A review. ACM Trans. Model. Comput. Simul., 24(4), Nov 2014. ISSN 1049-3301. DOI:
10.1145/2661631.
[170] Akinori Hosoyamada and Yu Sasaki. Quantum collision attacks on reduced SHA-256 and SHA-512. IACR
Cryptol. ePrint Arch., 2021:292, 2021. DOI: 10.1007/978-3-030-84242-0_22.
[171] J. Hromkovič. Algorithmics for hard problems: introduction to combinatorial optimization, randomization,
approximation, and heuristics. Springer Science & Business Media, 2013. DOI: 10.1007/978-3-662-05269-3.
[172] Henry TC Hu. Swaps, the modern process of financial innovation and the vulnerability of a regulatory
paradigm. U. Pa. L. Rev., 138:333, 1989. DOI: 10.2307/3312314.
[173] Hsin-Yuan Huang, Kishor Bharti, and Patrick Rebentrost. Near-term quantum algorithms for linear systems
of equations. arXiv preprint arXiv:1909.07344, 2019.
[174] Hsin-Yuan Huang, Michael Broughton, Masoud Mohseni, Ryan Babbush, Sergio Boixo, Hartmut Neven,
and Jarrod R McClean. Power of data in quantum machine learning. Nature Communications, 12(1):1–9,
2021. DOI: 10.1038/s41467-021-22539-9.
[175] Patrick Huembeli and Alexandre Dauphin. Characterizing the loss landscape of variational quantum cir-
cuits. Quantum Science and Technology, 6(2):025011, Feb 2021. ISSN 2058-9565. DOI: 10.1088/2058-
9565/abdbc9.
[176] Peter Høyer, Michele Mosca, and Ronald de Wolf. Quantum search on bounded-error inputs. Lecture Notes
in Computer Science, page 291–299, 2003. ISSN 0302-9743. DOI: 10.1007/3-540-45061-0_25.
[177] Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization
in neural networks. arXiv preprint arXiv:1806.07572, 2018.
[178] Samuel Jaques, Michael Naehrig, Martin Roetteler, and Fernando Virdia. Implementing Grover oracles
for quantum key search on AES and LowMC. Advances in Cryptology–EUROCRYPT 2020, 12106:280, 2020.
DOI: 10.1007/978-3-030-45724-2_10.
[179] Sofiene Jerbi, Lea M. Trenkwalder, Hendrik Poulsen Nautrup, Hans J. Briegel, and Vedran Dunjko. Quan-
tum enhancements for deep reinforcement learning in large spaces. PRX Quantum, 2:010328, Feb 2021. DOI:
10.1103/PRXQuantum.2.010328.
[180] Tadashi Kadowaki and Hidetoshi Nishimori. Quantum annealing in the transverse Ising model. Physical
Review E, 58(5):5355–5363, Nov 1998. ISSN 1095-3787. DOI: 10.1103/physreve.58.5355.
[181] Angad Kalra, Faisal Qureshi, and Michael Tisi. Portfolio asset identification using graph algorithms on a
quantum annealer. Available at SSRN 3333537, 2018. DOI: 10.2139/ssrn.3333537.
[182] Kostyantyn Kechedzhi and Vadim N. Smelyanskiy. Open-system quantum annealing in mean-field models
with exponential degeneracy. Physical Review X, 6(2), May 2016. ISSN 2160-3308. DOI: 10.1103/phys-
revx.6.021028.
[183] J. Kempe. Quantum random walks: An introductory overview. Contemporary Physics, 44(4):307–327, Jul
2003. ISSN 1366-5812. DOI: 10.1080/00107151031000110776.
[184] Julia Kempe. Quantum random walks hit exponentially faster. arXiv preprint quant-ph/0205083, 2002.
[185] I. Kerenidis and J. Landman. Quantum spectral clustering. Physical Review A, 103(4):042415, 2021. DOI:
10.1103/PhysRevA.103.042415.
[186] I. Kerenidis and A. Prakash. Quantum recommendation systems. arXiv preprint arXiv:1603.08675, 2016.
[187] Iordanis Kerenidis and Anupam Prakash. Quantum gradient descent for linear systems and least squares.
Physical Review A, 101(2), Feb 2020. ISSN 2469-9934. DOI: 10.1103/physreva.101.022316.
[188] Iordanis Kerenidis and Anupam Prakash. A quantum interior point method for LPs and SDPs. ACM
Transactions on Quantum Computing, 1(1), Oct 2020. ISSN 2643-6809. DOI: 10.1145/3406306.
[189] Iordanis Kerenidis, Jonas Landman, Alessandro Luongo, and Anupam Prakash. q-means: A quantum
algorithm for unsupervised machine learning. arXiv preprint arXiv:1812.03584, 2018.
[190] Iordanis Kerenidis, Jonas Landman, and Anupam Prakash. Quantum algorithms for deep convolutional
neural networks. arXiv preprint arXiv:1911.01117, 2019.
[191] Iordanis Kerenidis, Anupam Prakash, and Dániel Szilágyi. Quantum algorithms for portfolio optimization.
In Proceedings of the 1st ACM Conference on Advances in Financial Technologies, AFT ’19, page 147–155,
New York, NY, USA, 2019. Association for Computing Machinery. DOI: 10.1145/3318041.3355465.
[192] Iordanis Kerenidis, Jonas Landman, and Natansh Mathur. Classical and quantum algorithms for orthogonal
neural networks. arXiv preprint arXiv:2106.07198, 2021.
[193] Iordanis Kerenidis, Anupam Prakash, and Dániel Szilágyi. Quantum algorithms for second-order cone
programming and support vector machines. Quantum, 5:427, Apr 2021. ISSN 2521-327X. DOI: 10.22331/q-
2021-04-08-427.
[194] Sumsam Ullah Khan, Ahsan Javed Awan, and Gemma Vall-Llosera. K-means clustering on noisy interme-
diate scale quantum computers. arXiv preprint arXiv:1909.12183, 2019.
[195] Nathan Killoran, Thomas R. Bromley, Juan Miguel Arrazola, Maria Schuld, Nicolás Quesada, and Seth
Lloyd. Continuous-variable quantum neural networks. Phys. Rev. Research, 1:033063, Oct 2019. DOI:
10.1103/PhysRevResearch.1.033063.
[196] James King, Masoud Mohseni, William Bernoudy, Alexandre Fréchette, Hossein Sadeghi, Sergei V
Isakov, Hartmut Neven, and Mohammad H Amin. Quantum-assisted genetic algorithm. arXiv preprint
arXiv:1907.00707, 2019.
[197] A. Yu. Kitaev. Quantum measurements and the Abelian stabilizer problem. arXiv preprint quant-
ph/9511026, 1995.
[198] A. Yu. Kitaev. Fault-tolerant quantum computation by anyons. Annals of Physics, 303(1):2–30, 2003.
ISSN 0003-4916. DOI: 10.1016/S0003-4916(02)00018-0.
[199] A. Yu. Kitaev, A. H. Shen, and M. N. Vyalyi. Classical and Quantum Computation. American Mathematical
Society, USA, 2002. ISBN 0821832298. DOI: 10.1090/gsm/047.
[200] F. C. Klebaner. Introduction to Stochastic Calculus with Applications. Imperial College Press, 3rd edition,
2012. DOI: 10.1142/p821.
[201] G. Klepac. Chapter 12 – The Schrödinger equation as inspiration for a client portfolio simulation hybrid
system based on dynamic Bayesian networks and the REFII model. In Quantum Inspired Computational
Intelligence, pages 391–416. Morgan Kaufmann, Boston, 2017. ISBN 978-0-12-804409-4. DOI: 10.1016/B978-
0-12-804409-4.00012-7.
[202] Gary Kochenberger, Jin-Kao Hao, Fred Glover, Mark Lewis, Zhipeng Lü, Haibo Wang, and Yang Wang.
The unconstrained binary quadratic programming problem: a survey. Journal of combinatorial optimization,
28(1):58–81, 2014. DOI: 10.1007/s10878-014-9734-0.
[203] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques. The MIT
Press, 2009. ISBN 0262013193.
[204] P. Krantz, M. Kjaergaard, F. Yan, T. P. Orlando, S. Gustavsson, and W. D. Oliver. A quantum engineer’s
guide to superconducting qubits. Applied Physics Reviews, 6(2):021318, 2019. DOI: 10.1063/1.5089550.
[205] Gagnesh Kumar and Sunil Agrawal. CMOS limitations and futuristic carbon allotropes. In 2017 8th IEEE
Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), pages 68–
71. IEEE, 2017. DOI: 10.1109/IEMCON.2017.8117151.
[206] Joachim Lambek. The mathematics of sentence structure. Journal of Symbolic Logic, 33(4):627–628, 1968.
DOI: 10.2307/2271418.
[207] Gregory F. Lawler and Vlada Limic. Random Walk: A Modern Introduction. Cambridge Studies in
Advanced Mathematics. Cambridge University Press, 2010. DOI: 10.1017/CBO9780511750854.
[208] Wolfgang Lechner, Philipp Hauke, and Peter Zoller. A quantum annealing architecture with all-to-all
connectivity from local interactions. Science Advances, 1(9):e1500838, 2015. DOI: 10.1126/sciadv.1500838.
[209] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 86(11):2278–2324, 1998. DOI: 10.1109/5.726791.
[210] Yonghae Lee, Jaewoo Joo, and Soojoon Lee. Hybrid quantum linear equation algorithm and its experimental
test on IBM Quantum Experience. Scientific Reports, 9, Mar 2019. DOI: 10.1038/s41598-019-41324-9.
[211] David X. Li. On default correlation: A copula function approach. The Journal of Fixed Income, 9(4):
43–54, 2000. ISSN 1059-8596. DOI: 10.3905/jfi.2000.319253.
[212] Shuai Li, Kui Jia, Yuxin Wen, Tongliang Liu, and Dacheng Tao. Orthogonal deep neural networks. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 43(4):1352–1368, Apr 2021. ISSN 0162-8828.
DOI: 10.1109/tpami.2019.2948352.
[213] Daniel A. Lidar and Todd A. Brun. Quantum Error Correction. Cambridge University Press, 2013. DOI:
10.1017/CBO9781139034807.
[214] Jin-Guo Liu and Lei Wang. Differentiable learning of quantum circuit Born machines. Physical Review A,
98(6), Dec 2018. ISSN 2469-9934. DOI: 10.1103/physreva.98.062324.
[215] Seth Lloyd and Christian Weedbrook. Quantum generative adversarial learning. Phys. Rev. Lett., 121:
040502, Jul 2018. DOI: 10.1103/PhysRevLett.121.040502.
[216] Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. Quantum algorithms for supervised and unsuper-
vised machine learning. arXiv preprint arXiv:1307.0411, 2013.
[217] Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. Quantum principal component analysis. Nature
Physics, 10(9):631–633, Jul 2014. ISSN 1745-2481. DOI: 10.1038/nphys3029.
[218] Seth Lloyd, Silvano Garnerone, and Paolo Zanardi. Quantum algorithms for topological and geometric
analysis of data. Nat. Commun., 7(1):1–7, 2016. DOI: 10.1038/ncomms10138.
[219] Elisabeth Lobe and Annette Lutz. Minor embedding in broken Chimera and Pegasus graphs is NP-complete.
arXiv preprint arXiv:2110.08325, 2021.
[220] Owen Lockwood and Mei Si. Reinforcement learning with quantum variational circuits. In Proceedings
of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, volume 16, pages
245–251. AAAI Press, 2020.
[221] R. Lorenz, A. Pearson, K. Meichanetzidis, D. Kartsaklis, and B. Coecke. QNLP in practice: Running
compositional models of meaning on a quantum computer. arXiv preprint arXiv:2102.12846, 2021.
[222] László Lovász. Random walks on graphs: A survey. Combinatorics, Paul Erdős is Eighty, 2, 1993.
[223] Guang Hao Low and Isaac L. Chuang. Optimal Hamiltonian simulation by quantum signal processing.
Phys. Rev. Lett., 118(1), Jan 2017. ISSN 1079-7114. DOI: 10.1103/physrevlett.118.010501.
[224] Andrew Lucas. Ising formulations of many NP problems. Frontiers in Physics, 2:5, 2014. ISSN 2296-424X.
DOI: 10.3389/fphy.2014.00005.
[225] James MacQueen. Some methods for classification and analysis of multivariate observations. In Proceedings
of the fifth Berkeley symposium on mathematical statistics and probability, volume 1, pages 281–297. Oakland,
CA, USA, 1967.
[226] Frédéric Magniez, Ashwin Nayak, Jérémie Roland, and Miklos Santha. Search via quantum walk. SIAM
Journal on Computing, 40(1):142–164, Jan 2011. ISSN 1095-7111. DOI: 10.1137/090745854.
[227] Renata Mansini, Włodzimierz Ogryczak, and M. Grazia Speranza. Linear and mixed integer programming
for portfolio optimization. Springer, 2015. DOI: 10.1007/978-3-319-18482-1.
[228] Harry Markowitz. Portfolio selection. The Journal of Finance, 7(1):77–91, 1952. DOI: 10.1111/j.1540-
6261.1952.tb01525.x.
[229] Sam McArdle, Tyson Jones, Suguru Endo, Ying Li, Simon C. Benjamin, and Xiao Yuan. Variational
ansatz-based quantum simulation of imaginary time evolution. npj Quantum Information, 5(1), Sep 2019.
ISSN 2056-6387. DOI: 10.1038/s41534-019-0187-2.
[230] Jarrod R McClean, Jonathan Romero, Ryan Babbush, and Alán Aspuru-Guzik. The theory of variational
hybrid quantum-classical algorithms. New Journal of Physics, 18(2):023023, Feb 2016. DOI: 10.1088/1367-
2630/18/2/023023.
[231] A.D. McLachlan. A variational solution of the time-dependent Schrödinger equation. Molecular Physics, 8
(1):39–44, 1964. DOI: 10.1080/00268976400100041.
[232] Konstantinos Meichanetzidis, Alexis Toumi, Giovanni de Felice, and Bob Coecke. Grammar-aware question-
answering on quantum computers. arXiv preprint arXiv:2012.03756, 2020.
[233] Alexandre Ménard, Ivan Ostojic, Mark Patel, and Daniel Volz. A game plan for quantum computing, 2020.
[234] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations
in vector space. In ICLR: Proceeding of the International Conference on Learning Representations Workshop,
2013.
[235] Andrew Milne, Maxwell Rounds, and Phil Goddard. Optimal feature selection in credit scoring and
classification using a quantum annealer. White Paper 1Qbit, 2017.
[236] K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii. Quantum circuit learning. Physical Review A, 98(3),
Sep 2018. ISSN 2469-9934. DOI: 10.1103/physreva.98.032309.
[237] Kosuke Mitarai and Keisuke Fujii. Methodology for replacing indirect measurements with direct measure-
ments. Physical Review Research, 1(1), Aug 2019. ISSN 2643-1564. DOI: 10.1103/physrevresearch.1.013006.
[238] Ashley Montanaro. Quantum speedup of Monte Carlo methods. Proceedings of the Royal Society A: Math-
ematical, Physical and Engineering Sciences, 471(2181):20150301, Sep 2015. DOI: 10.1098/rspa.2015.0301.
[239] Renato Monteiro and Takashi Tsuchiya. Polynomial convergence of primal-dual algorithms for the second-
order cone program based on the MZ-family of directions. Mathematical programming, 88(1):61–83, 2000.
DOI: 10.1007/PL00011378.
[240] Cristopher Moore and Alexander Russell. Quantum walks on the hypercube. In Proceedings of the 6th
International Workshop on Randomization and Approximation Techniques, page 164–178. Springer, 2002.
DOI: 10.1007/3-540-45726-7_14.
[241] Catarina Moreira and Andreas Wichert. Quantum-like Bayesian networks for modeling decision making.
Frontiers in Psychology, 7:11, 2016. ISSN 1664-1078. DOI: 10.3389/fpsyg.2016.00011.
[242] Mario Motta, Chong Sun, Adrian T. K. Tan, Matthew J. O’Rourke, Erika Ye, Austin J. Minnich, Fernando
G. S. L. Brandão, and Garnet Kin-Lic Chan. Determining eigenstates and thermal states on a quantum
computer using quantum imaginary time evolution. Nature Physics, 16(2):205–210, Nov 2019. ISSN 1745-
2481. DOI: 10.1038/s41567-019-0704-4.
[243] Mikko Möttönen, Juha J. Vartiainen, Ville Bergholm, and Martti M. Salomaa. Transformation of quantum
states using uniformly controlled rotations. Quantum Info. Comput., 5(6):467–473, Sep 2005. ISSN 1533-7146.
DOI: 10.26421/QIC5.6-5.
[244] Samuel Mugel, Carlos Kuchkovsky, Escolástico Sánchez, Samuel Fernández-Lorenzo, Jorge Luis-Hita, En-
rique Lizaso, and Román Orús. Dynamic portfolio optimization with real datasets using quantum processors
and quantum-inspired tensor networks. Phys. Rev. Research, 4:013006, Jan 2022. DOI: 10.1103/PhysRevRe-
search.4.013006.
[245] K.-R. Müller, S. Mika, G. Rätsch, K. Tsuda, and B. Schölkopf. An introduction to kernel-based learning
algorithms. IEEE Transactions on Neural Networks, 12(2):181–201, 2001. DOI: 10.1109/72.914517.
[246] Gregory L Naber. The Geometry of Minkowski Spacetime: An Introduction to the Mathematics of the
Special Theory of Relativity, volume 92. Springer Science & Business Media, 2012. ISBN 978-1-4419-7837-0.
DOI: 10.1007/978-1-4419-7838-7.
[247] Kouhei Nakaji, Hiroyuki Tezuka, and Naoki Yamamoto. Quantum-enhanced neural networks in the neural
tangent kernel framework. arXiv preprint arXiv:2109.03786, 2021.
[248] Giacomo Nannicini. Fast quantum subroutines for the simplex method. In Integer Programming and
Combinatorial Optimization - 22nd International Conference, IPCO 2021, Atlanta, GA, USA, May 19-21,
2021, Proceedings, volume 12707 of Lecture Notes in Computer Science, pages 311–325. Springer, 2021. DOI:
10.1007/978-3-030-73879-2.
[249] Usman Naseem, Imran Razzak, Shah Khalid Khan, and Mukesh Prasad. A comprehensive survey on word
representation models: From classical to state-of-the-art word representation language models. ACM Trans.
Asian Low-Resour. Lang. Inf. Process., 20(5), Jun 2021. ISSN 2375-4699. DOI: 10.1145/3434237.
[250] Radford M Neal. Probabilistic inference using Markov chain Monte Carlo methods. Department of
Computer Science, University of Toronto, Toronto, ON, Canada, 1993.
[251] Yurii Nesterov and Arkadii Nemirovskii. Interior-point polynomial algorithms in convex programming.
SIAM, 1994. DOI: 10.1137/1.9781611970791.
[252] Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm.
In Proceedings of the 14th International Conference on Neural Information Processing Systems: Natural and
Synthetic, page 849–856, 2001.
[253] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information. Cambridge
University Press, 2010. DOI: 10.1017/CBO9780511976667.
[254] J. R. Norris. Markov Chains. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge
University Press, 1997. DOI: 10.1017/CBO9780511810633.
[255] Masayuki Ohzeki. Breaking limitation of quantum annealer in solving optimization problems under con-
straints. Scientific Reports, 10, Feb 2020. DOI: 10.1038/s41598-020-60022-5.
[256] S. Okada, M. Ohzeki, and S. Taguchi. Efficient partition of integer optimization problems with one-hot
encoding. Scientific Reports, 2019. DOI: 10.1038/s41598-019-49539-6.
[257] Basel Committee on Banking Supervision. High-level summary of Basel III reforms, 2017.
[258] Román Orús, Samuel Mugel, and Enrique Lizaso. Forecasting financial crashes with quantum computing.
Phys. Rev. A, 99(6), Jun 2019. DOI: 10.1103/physreva.99.060301.
[259] Konrad Osterwalder and Robert Schrader. Axioms for Euclidean Green’s functions. Communications in
mathematical physics, 31(2):83–112, 1973. DOI: 10.1007/BF01645738.
[260] Daniel W. Otter, Julian R. Medina, and Jugal K. Kalita. A survey of the usages of deep learning for
natural language processing. IEEE Transactions on Neural Networks and Learning Systems, 32(2):604–624,
2021. DOI: 10.1109/TNNLS.2020.2979670.
[261] Guansong Pang, Chunhua Shen, Longbing Cao, and Anton Van Den Hengel. Deep learning for anomaly
detection: A review. ACM Comput. Surv., 54(2), Mar 2021. ISSN 0360-0300. DOI: 10.1145/3439950.
[262] G. D. Paparo, V. Dunjko, A. Makmal, M. A. Martin-Delgado, and H. J. Briegel. Quantum speedup for
active learning agents. Phys. Rev. X, 4:031002, 2014. DOI: 10.1103/PhysRevX.4.031002.
[263] Alberto Peruzzo, Jarrod McClean, Peter Shadbolt, Man-Hong Yung, Xiao-Qi Zhou, Peter J. Love, Alán
Aspuru-Guzik, and Jeremy L. O’Brien. A variational eigenvalue solver on a photonic quantum processor.
Nat. Commun., 5(1), Jul 2014. DOI: 10.1038/ncomms5213.
[264] J. Pino, Joan Dreiling, C. Figgatt, J. Gaebler, S. Moses, M. Allman, C. Baldwin, Michael Foss-Feig,
D. Hayes, K. Mayer, C. Ryan-Anderson, and Brian Neyenhuis. Demonstration of the trapped-ion quantum
CCD computer architecture. Nature, 2021. DOI: 10.1038/s41586-021-03318-4.
[265] Marco Pistoia, Syed Farhan Ahmad, Akshay Ajagekar, Alexander Buts, Shouvanik Chakrabarti, Dylan Her-
man, Shaohan Hu, Andrew Jena, Pierre Minssen, Pradeep Niroula, Arthur Rattew, Yue Sun, and Romina
Yalovetzky. Quantum machine learning for finance. Proceedings of the 40th IEEE/ACM International Con-
ference on Computer Aided Design (ICCAD), November 2021. Invited Special Session Paper.
[266] Martin Plesch and Časlav Brukner. Quantum-state preparation with universal gate decompositions. Phys.
Rev. A, 83:032302, Mar 2011. DOI: 10.1103/PhysRevA.83.032302.
[267] Anupam Prakash. Quantum algorithms for linear algebra and machine learning. University of California,
Berkeley, 2014.
[268] John Preskill. Quantum computing and the entanglement frontier. arXiv preprint arXiv:1203.5813, 2012.
[269] John Preskill. Quantum computing in the NISQ era and beyond. Quantum, 2:79, Aug 2018. ISSN 2521-
327X. DOI: 10.22331/q-2018-08-06-79.
[270] Arthur G. Rattew, Shaohan Hu, Marco Pistoia, Richard Chen, and Steve Wood. A Domain-
agnostic, Noise-resistant, Hardware-efficient Evolutionary Variational Quantum Eigensolver. arXiv preprint
arXiv:1910.09694, 2020.
[271] Arthur G Rattew, Yue Sun, Pierre Minssen, and Marco Pistoia. The efficient preparation of normal
distributions in quantum registers. Quantum, 5:609, 2021. DOI: 10.22331/q-2021-12-23-609.
[272] Patrick Rebentrost and Seth Lloyd. Quantum computational finance: quantum algorithm for portfolio
optimization. arXiv preprint arXiv:1811.03975, 2018.
[273] Patrick Rebentrost, Masoud Mohseni, and Seth Lloyd. Quantum support vector machine for big data
classification. Phys. Rev. Lett., 113:130503, 2014. DOI: 10.1103/PhysRevLett.113.130503.
[274] Patrick Rebentrost, Brajesh Gupt, and Thomas R. Bromley. Quantum computational finance: Monte Carlo
pricing of financial derivatives. Physical Review A, 98(2), Aug 2018. ISSN 2469-9934. DOI: 10.1103/phys-
reva.98.022321.
[275] Research and Markets. Financial Services Global Market Report 2021: COVID-19 Impact and Recovery
to 2030, 2021.
[276] Christian P Robert and George Casella. Monte Carlo statistical methods, volume 2. Springer, 2004. DOI:
10.1007/978-1-4757-4145-2.
[277] Martin Roetteler, Michael Naehrig, Krysta M Svore, and Kristin Lauter. Quantum resource estimates for
computing elliptic curve discrete logarithms. In International Conference on the Theory and Application of
Cryptology and Information Security, pages 241–270, 2017. DOI: 10.1007/978-3-319-70697-9_9.
[278] Jérémie Roland and Nicolas J. Cerf. Quantum search by local adiabatic evolution. Physical Review A, 65
(4), Mar 2002. ISSN 1094-1622. DOI: 10.1103/physreva.65.042308.
[279] Steven Roman. Advanced linear algebra. Springer, 2008. DOI: 10.1007/978-0-387-72831-5.
[280] G. Rosenberg, C. Adolphs, A. Milne, and A. Lee. Swap netting using a quantum annealer. White Paper
1Qbit, 2016.
[281] Gili Rosenberg and Maxwell Rounds. Long-short minimum risk parity optimization using a quantum or
digital annealer. White Paper 1Qbit, 2018.
[282] Gili Rosenberg, Poya Haghnegahdar, Phil Goddard, Peter Carr, Kesheng Wu, and Marcos Lopez de Prado.
Solving the optimal trading trajectory problem using a quantum annealer. IEEE Journal of Selected Topics
in Signal Processing, 10(6):1053–1060, Sep 2016. DOI: 10.1109/jstsp.2016.2574703.
[283] Gili Rosenberg. Finding optimal arbitrage opportunities using a quantum annealer. White Paper 1Qbit,
2016.
[284] Yue Ruan, Xiling Xue, Heng Liu, Jianing Tan, and Xi Li. Quantum algorithm for k-nearest neighbors
classification based on the metric of Hamming distance. International Journal of Theoretical Physics, 56(11):
3496–3507, 2017. DOI: 10.1007/s10773-017-3514-4.
[285] Yue Ruan, Samuel Marsh, Xilin Xue, Xi Li, Zhihao Liu, and Jingbo Wang. Quantum approximate algorithm
for NP optimization problems with constraints. arXiv preprint arXiv:2002.00943, 2020.
[286] M. Saffman, T. G. Walker, and K. Mølmer. Quantum information with Rydberg atoms. Rev. Mod. Phys.,
82:2313–2363, Aug 2010. DOI: 10.1103/RevModPhys.82.2313.
[287] T. Sakuma. Application of deep quantum neural networks to finance. arXiv preprint arXiv:2011.07319,
2020.
[288] Ruslan Salakhutdinov and Geoffrey Hinton. Deep Boltzmann machines. In Proceedings of the Twelth
International Conference on Artificial Intelligence and Statistics, volume 5, pages 448–455. PMLR, 2009.
[289] Salesforce. Trends in Financial Services, 2020.
[290] Mohan Sarovar, Timothy Proctor, Kenneth Rudinger, Kevin Young, Erik Nielsen, and Robin Blume-
Kohout. Detecting crosstalk errors in quantum information processors. Quantum, 4:321, Sep 2020. ISSN
2521-327X. DOI: 10.22331/q-2020-09-11-321.
[291] T. Schlegl, P. Seeböck, S. M. Waldstein, U. Schmidt-Erfurth, and G. Langs. Unsupervised anomaly detec-
tion with generative adversarial networks to guide marker discovery. In Information Processing in Medical
Imaging. Springer, 2017. DOI: 10.1007/978-3-319-59050-9_12.
[292] Maria Schuld. Supervised quantum machine learning models are kernel methods. arXiv preprint
arXiv:2101.11020, 2021.
[293] Maria Schuld, Ryan Sweke, and Johannes Jakob Meyer. Effect of data encoding on the expressive power of
variational quantum-machine-learning models. Physical Review A, 103(3):032430, 2021. DOI: 10.1103/Phys-
RevA.103.032430.
[294] Ramamurti Shankar. Principles of quantum mechanics. Springer Science & Business Media, 1994. DOI:
10.1007/978-1-4757-0576-8.
[295] Ruslan Shaydulin, Ilya Safro, and Jeffrey Larson. Multistart methods for quantum approximate optimiza-
tion. In 2019 IEEE High Performance Extreme Computing Conference (HPEC), pages 1–8, 2019. DOI:
10.1109/HPEC.2019.8916288.
[296] Ruslan Shaydulin, Hayato Ushijima-Mwesigwa, Christian FA Negre, Ilya Safro, Susan M Mniszewski, and
Yuri Alexeev. A hybrid approach for solving optimization problems on small quantum computers. Computer,
52(6):18–26, 2019. DOI: 10.1109/MC.2019.2908942.
[297] Ruslan Shaydulin, Hayato Ushijima-Mwesigwa, Ilya Safro, Susan Mniszewski, and Yuri Alexeev. Network
community detection on small quantum computers. Advanced Quantum Technologies, 2(9):1900029, 2019.
DOI: 10.1002/qute.201900029.
[298] Ruslan Shaydulin, Stuart Hadfield, Tad Hogg, and Ilya Safro. Classical symmetries and the quantum
approximate optimization algorithm. Quantum Information Processing, 20(11):359, Oct 2021. ISSN 1573-
1332. DOI: 10.1007/s11128-021-03298-4.
[299] Feihong Shen and Jun Liu. QFCNN: Quantum Fourier convolutional neural network. arXiv preprint
arXiv:2106.10421, 2021.
[300] Neil Shenvi, Julia Kempe, and K. Birgitta Whaley. Quantum random-walk search algorithm. Physical
Review A, 67(5), May 2003. DOI: 10.1103/physreva.67.052307.
[301] Andrei Shleifer and Robert W. Vishny. The limits of arbitrage. The Journal of Finance, 52(1):35–55, 1997.
DOI: 10.2307/2329555.
[302] Peter W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum
computer. SIAM Journal on Computing, 26(5):1484–1509, Oct 1997. DOI: 10.1137/s0097539795293172.
[303] Henrique Silvério, Sebastián Grijalva, Constantin Dalyac, Lucas Leclerc, Peter J Karalekas, Nathan
Shammah, Mourad Beji, Louis-Paul Henry, and Loïc Henriet. Pulser: An open-source package for the design
of pulse sequences in programmable neutral-atom arrays. arXiv preprint arXiv:2104.15044, 2021.
[304] N. Slate, E. Matwiejew, S. Marsh, and J. B. Wang. Quantum walk-based portfolio optimisation. Quantum,
5:513, July 2021. ISSN 2521-327X. DOI: 10.22331/q-2021-07-28-513.
[305] R. D. Somma, S. Boixo, H. Barnum, and E. Knill. Quantum simulations of classical annealing processes.
Phys. Rev. Lett., 101:130504, Sep 2008. DOI: 10.1103/PhysRevLett.101.130504.
[306] Wanmei Soon and Heng-Qing Ye. Currency arbitrage detection using a binary integer programming model.
International Journal of Mathematical Education in Science and Technology, 42(3):369–376, 2011. DOI:
10.1080/0020739X.2010.526248.
[307] Nikitas Stamatopoulos, Daniel J. Egger, Yue Sun, Christa Zoufal, Raban Iten, Ning Shen, and Stefan
Woerner. Option pricing using quantum computers. Quantum, 4:291, Jul 2020. ISSN 2521-327X. DOI:
10.22331/q-2020-07-06-291.
[308] Michael Streif and Martin Leib. Training the quantum approximate optimization algorithm without access
to a quantum processing unit. Quantum Science and Technology, 5(3):034008, 2020. DOI: 10.1088/2058-
9565/ab8c2b.
[309] René M Stulz. Credit default swaps and the credit crisis. Journal of Economic Perspectives, 24(1):73–92,
2010. DOI: 10.1257/jep.24.1.73.
[310] Mario Szegedy. Quantum speed-up of Markov chain based algorithms. In 45th Annual IEEE symposium
on foundations of computer science, 2004. DOI: 10.1109/FOCS.2004.53.
[311] Yuto Takaki, Kosuke Mitarai, Makoto Negoro, Keisuke Fujii, and Masahiro Kitagawa. Learning temporal
data with a variational quantum recurrent neural network. Physical Review A, 103(5), May 2021. ISSN
2469-9934. DOI: 10.1103/physreva.103.052414.
[312] Tomoki Tanaka, Yohichi Suzuki, Shumpei Uno, Rudy Raymond, Tamiya Onodera, and Naoki Yamamoto.
Amplitude estimation via maximum likelihood on noisy quantum computer. Quantum Inf. Process., 20:293,
2021. DOI: 10.1007/s11128-021-03215-9.
[313] Ewin Tang. A quantum-inspired classical algorithm for recommendation systems. In Proceedings of the 51st
Annual ACM SIGACT Symposium on Theory of Computing. Association for Computing Machinery, 2019.
ISBN 9781450367059. DOI: 10.1145/3313276.3316310.
[314] Hao Tang, Anurag Pal, Tian-Yu Wang, Lu-Feng Qiao, Jun Gao, and Xian-Min Jin. Quantum computation
for pricing the collateralized debt obligations. Quantum Engineering, 3(4):e84, 2021. DOI: 10.1002/que2.84.
[315] Heath P. Terry, Debra Schwartz, and Tina Sun. The Future of Finance: Volume 3: The socialization of
finance, 2015.
[316] Robert R Tucci. Quantum Bayesian nets. International Journal of Modern Physics B, 09(03):295–337,
1995. DOI: 10.1142/S0217979295000148.
[317] Shashanka Ubaru, Ismail Yunus Akhalwaya, Mark S Squillante, Kenneth L Clarkson, and Lior
Horesh. Quantum topological data analysis with linear depth and exponential speedup. arXiv preprint
arXiv:2108.02811, 2021.
[318] Hayato Ushijima-Mwesigwa, Ruslan Shaydulin, Christian F. A. Negre, Susan M. Mniszewski, Yuri Alexeev,
and Ilya Safro. Multilevel combinatorial optimization across quantum architectures. ACM Transactions on
Quantum Computing, 2(1), Feb 2021. ISSN 2643-6809. DOI: 10.1145/3425607.
[319] Joran van Apeldoorn, András Gilyén, Sander Gribling, and Ronald de Wolf. Quantum SDP-solvers: Better
upper and lower bounds. Quantum, 4:230, Feb 2020. ISSN 2521-327X. DOI: 10.22331/q-2020-02-14-230.
[320] Peter JM Van Laarhoven and Emile HL Aarts. Simulated annealing. In Simulated annealing: Theory and
applications, pages 7–15. Springer, 1987. DOI: 10.1007/978-94-015-7744-1.
[321] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz
Kaiser, and Illia Polosukhin. Attention is all you need. In Proceedings of the 31st International Conference
on Neural Information Processing Systems, NIPS’17, 2017.
[322] Oswald Veblen and John Wesley Young. Projective geometry, volume 2. Ginn, 1918.
[323] Salvador Elías Venegas-Andraca. Quantum walks: a comprehensive review. Quantum Information Pro-
cessing, 11(5):1015–1106, Jul 2012. ISSN 1573-1332. DOI: 10.1007/s11128-012-0432-5.
[324] Davide Venturelli and Alexei Kondratyev. Reverse quantum annealing approach to portfolio optimization
problems. Quantum Machine Intelligence, 1(1-2):17–30, Apr 2019. ISSN 2524-4914. DOI: 10.1007/s42484-
019-00001-w.
[325] Davide Venturelli, Salvatore Mandrà, Sergey Knysh, Bryan O’Gorman, Rupak Biswas, and Vadim Smelyan-
skiy. Quantum optimization of fully connected spin glasses. Phys. Rev. X, 5:031040, Sep 2015. DOI:
10.1103/PhysRevX.5.031040.
[326] Antti P. Vepsäläinen, Amir H. Karamlou, John L. Orrell, Akshunna S. Dogra, Ben Loer, Francisca Vascon-
celos, David K. Kim, Alexander J. Melville, Bethany M. Niedzielski, Jonilyn L. Yoder, et al. Impact of ionizing
radiation on superconducting qubit coherence. Nature, 584(7822):551–556, 2020. DOI: 10.1038/s41586-020-
2619-8.
[327] Guillaume Verdon, Trevor McCourt, Enxhell Luzhnica, Vikash Singh, Stefan Leichenauer, and Jack Hidary.
Quantum graph neural networks. arXiv preprint arXiv:1909.12264, 2019.
[328] Dong-Sheng Wang. A comparative study of universal quantum computing models: towards a physical
unification. Quantum Engineering, page e85, 2021. DOI: 10.1002/que2.85.
[329] Gang Wang, Jinxing Hao, Jian Ma, and Hongbing Jiang. A comparative assessment of ensemble learn-
ing for credit scoring. Expert Systems with Applications, 38(1):223–230, 2011. ISSN 0957-4174. DOI:
10.1016/j.eswa.2010.06.048.
[330] Guoming Wang. Quantum algorithm for linear regression. Phys. Rev. A, 96:012335, Jul 2017. DOI:
10.1103/PhysRevA.96.012335.
[331] Yuchen Wang, Zixuan Hu, Barry C. Sanders, and Sabre Kais. Qudits and high-dimensional quantum
computing. Frontiers in Physics, 8, Nov 2020. ISSN 2296-424X. DOI: 10.3389/fphy.2020.589504.
[332] Stefan Weinzierl. Introduction to Monte Carlo methods. arXiv preprint hep-ph/0006269, 2000.
[333] Jacob R. West, Daniel A. Lidar, Bryan H. Fong, and Mark F. Gyure. High fidelity quantum gates via dynam-
ical decoupling. Phys. Rev. Lett., 105(23), Dec 2010. ISSN 1079-7114. DOI: 10.1103/physrevlett.105.230503.
[334] Jarrod West and Maumita Bhattacharya. Intelligent financial fraud detection: A comprehensive review.
Computers & Security, 57:47–66, 2016. ISSN 0167-4048. DOI: 10.1016/j.cose.2015.09.005.
[335] G. C. Wick. Properties of Bethe-Salpeter wave functions. Phys. Rev., 96:1124–1134, Nov 1954. DOI:
10.1103/PhysRev.96.1124.
[336] N. Wiebe, A. Kapoor, and K. Svore. Quantum algorithms for nearest-neighbor methods for supervised and
unsupervised learning. Quantum Information & Computation, 15, 2015.
[337] Nathan Wiebe, Daniel Braun, and Seth Lloyd. Quantum algorithm for data fitting. Phys. Rev. Lett., 109:
050505, Aug 2012. DOI: 10.1103/PhysRevLett.109.050505.
[338] Stefan Woerner and Daniel J. Egger. Quantum risk analysis. npj Quantum Information, 5(1), Feb 2019.
ISSN 2056-6387. DOI: 10.1038/s41534-019-0130-6.
[339] Laurence A. Wolsey and George L Nemhauser. Integer and combinatorial optimization, volume 55. John
Wiley & Sons, 1999. DOI: 10.1002/9781118627372.
[340] Leonard Wossnig, Zhikuan Zhao, and Anupam Prakash. Quantum linear system algorithm for dense
matrices. Physical Review Letters, 120(5), Jan 2018. ISSN 1079-7114. DOI: 10.1103/physrevlett.120.050502.
[341] Dongkuan Xu and Yingjie Tian. A comprehensive survey of clustering algorithms. Annals of Data Science,
2(2):165–193, 2015. DOI: 10.1007/s40745-015-0040-1.
[342] Romina Yalovetzky, Pierre Minssen, Dylan Herman, and Marco Pistoia. NISQ-HHL: Portfolio optimization
for near-term quantum hardware. arXiv preprint arXiv:2110.15958, 2021.
[343] Elena Yndurain, Stefan Woerner, and Daniel Egger. Exploring quantum computing use cases for financial
services, 2019.
[344] Xiao Yuan, Suguru Endo, Qi Zhao, Ying Li, and Simon C. Benjamin. Theory of variational quantum
simulation. Quantum, 3:191, Oct 2019. ISSN 2521-327X. DOI: 10.22331/q-2019-10-07-191.
[345] Zhikuan Zhao, Jack K. Fitzsimons, and Joseph F. Fitzsimons. Quantum-assisted Gaussian process regres-
sion. Physical Review A, 99(5):052331, 2019. DOI: 10.1103/PhysRevA.99.052331.
[346] Elton Yechao Zhu, Sonika Johri, Dave Bacon, Mert Esencan, Jungsang Kim, Mark Muir, Nikhil Mur-
gai, Jason Nguyen, Neal Pisenti, Adam Schouela, et al. Generative quantum learning of joint probability
distribution functions. arXiv preprint arXiv:2109.06315, 2021.
[347] Christa Zoufal, Aurélien Lucchi, and Stefan Woerner. Quantum generative adversarial networks for learning
and loading random distributions. npj Quantum Information, 5(1), 2019. DOI: 10.1038/s41534-019-0223-2.
[348] Christa Zoufal, Aurélien Lucchi, and Stefan Woerner. Variational quantum Boltzmann machines. Quantum
Machine Intelligence, 3(1), Feb 2021. ISSN 2524-4914. DOI: 10.1007/s42484-020-00033-7.