
Quantum Turing Machine:

A Window into the Realm of Quantum Computation


Lauro Teixeira1 , Paulo Palmuti2
1
Departamento de Ciências da Computação da UFMG
DCC - UFMG

Abstract. This article delves into the realm of quantum computing, exploring
the fundamental concepts of Turing Machines, Quantum Turing Machines, and
their complexity classes. The primary focus is on the advancements and implications
of quantum computing. The article begins with an overview of classical
computation models, including the Turing Machine, recognizable and decidable
problems, the Church-Turing thesis, and complexity classes in terms of
time and space. This establishes a foundational understanding of classical computation.
The Quantum Turing Machine is then introduced, with discussions on
its formalization, computation concept, transition functions, states, tape, measurement,
halting, and its equivalence to classical Turing Machines. Moreover,
the article explores quantum complexity classes, such as QMA and QIP, which
showcase the power and limitations of quantum computation. Throughout the
article, the focus remains on quantum computing, aiming to provide readers with
a comprehensive understanding of this rapidly evolving field. By examining the
foundations, complexities, and advancements of quantum computing, the article
offers valuable insights into the potential applications and implications of this
groundbreaking technology.
Contents

1 Introduction and historical context

2 Review of Classical Computation Models and the Turing Machine
2.1 Turing Machine
2.2 The Halting Problem and Limits of Computation
2.3 Recognizable and Decidable Problems
2.3.1 Turing-Recognizable
2.3.2 Turing-Decidable
2.4 Church-Turing thesis
2.5 Strong Church-Turing thesis
2.6 Computation Complexity Classes
2.6.1 Time
2.6.2 Space

3 Church–Turing–Deutsch principle
3.1 Physical Implications of the Turing Machine
3.2 Quick brief about Quantum Physics

4 Quantum Turing Machine
4.1 Computation on a Quantum Turing Machine
4.2 The halting of the Quantum Turing machine
4.3 Example: the computational advantages

5 Probabilistic Turing Machine
5.1 Probabilistic Turing Machine
5.2 PTM Complexity Classes
5.3 QTM Complexity Class
5.4 Conjectured and Proved Relations

6 Bibliography
1. Introduction and historical context
The development of quantum mechanics, the theory that describes the properties of na-
ture on the scale of atoms and subatomic particles, was one of the greatest intellectual
revolutions of the 20th century, if not for the entire history of mankind. The strange
and counterintuitive predictions of this theoretical achievement induced new interpreta-
tions of reality, philosophical questions, and technological innovations. In particular, the
surprising properties of superposition, interference, and entanglement have brought new
horizons to the fields of computer science and information theory. This led to the emer-
gence of quantum information and quantum computing.
Model building is the art of selecting those aspects of a process that are
relevant to the question being asked. As with any art, this selection is
guided by taste, elegance, and metaphor; it is a matter of induction, rather
than deduction. High science depends on this art. – J.H Holland
The most accepted model to define classical computation is the Turing Machine, the
model that Turing created in order to prove the impossibility of the Entscheidungsproblem,
and which brought the notion of "computability" to the world. Shortly before Turing,
Alonzo Church, Turing's doctoral advisor, independently proved the same result using
his λ-calculus, a model of computation based on the concept of applying functions
to arguments. The two models are proven to be equivalent and have the same computational
power. Turing's model became more famous and widespread, however. The
reason? Turing's model is more representative of a mechanical process, as the name of
the model says, a "machine", which was the way computers were seen back in the 1930s.
Church's model is more similar to modern programming languages and has great theoretical
importance in that area.
Something similar happened with quantum computing models. Unlike classical
computing (the abacus can be considered one of the first devices used to perform com-
putation, for example), the mathematical formalization of quantum computing occurred
well before the realization of a device that performs quantum computing. So, historically,
quantum computing became known in terms of inputting qubits, transforming them with
the application of unitary operators (the quantum gates), and making measurements of
the output. Thus, the quantum circuit model turned out to be the most appropriate in this
realm, especially in the field of quantum algorithms.
However, in the field of computational complexity theory, things are different.
Classically, the complexity classes that classify the 'difficulty' of a problem are defined
in terms of Turing machines. The class P, for example, is the class of problems that can
be solved with a polynomial number of operations on a deterministic Turing machine,
and the class NP is the class of problems that can be solved in polynomial time by a
non-deterministic Turing machine. Determining whether P = NP is still one of the biggest
open questions in computational theory.
For this reason, computational complexity theorists began to pay more attention
to the Quantum Turing Machine (QTM) model. By the Church-Turing thesis, any computable
problem can be represented by a Classical Turing Machine (CTM), so a quantum
algorithm can be represented by a CTM. However, this simulation may incur an exponential
overhead in the number of operations relative to what the quantum algorithm actually performs,
producing a mismatch in the classification of the difficulty of problems that benefit
from the quantum computational advantage. It was with this in mind that Bernstein
and Vazirani (1993, 1997) proposed a formalization of the QTM, the model first proposed by
Deutsch (1985). It is the B&V model that we deal with in this text.

2. Review of Classical Computation Models and the Turing Machine


2.1. Turing Machine
In the early 20th century, David Hilbert presented, along with Wilhelm Ackermann, the
Entscheidungsproblem (decision problem, in German). The problem asks for an algorithm
that takes a statement as input and answers whether it is true or false according to the
axioms of some specified system. Later on, it became a precursor to the emergence of
computation as a new branch of mathematics, since the program sought to determine
whether it was possible to find an algorithm that could establish if a proposition can
be proven using the axioms of a given system.
Kurt Gödel contributed to this study by proving incompleteness theorems, which
showed that there are statements within formal systems that cannot be proven or dis-
proven within those systems. Subsequently, Alan Turing demonstrated that the Entschei-
dungsproblem was unsolvable by showing the existence of undecidable problems. He
achieved this by formalizing the concept of algorithms as sequences of precise instruc-
tions capable of execution by a theoretical machine, which would receive the name Turing
Machine.
The Turing Machine (TM) served as a theoretical model for performing mathe-
matical computations and could be implemented respecting physical laws and constraints.
Computation using the TM involved a tape where values could be read from and written
to, as well as a head that could move left or right along the tape.

Definition 2.1 (Classical Turing Machine). A Classical Turing Machine is a 7-tuple

M = (Q, Σ, □, Γ, δ, q0 , F ) where:
• Q = {q0 , q1 , . . . qm } is a finite and non-empty set of states;
• Σ is a finite set of input symbols;
• □ is the blank symbol;
• Γ = Σ ∪ {□} is the set of tape symbols;
• δ : Q × Γ → Q × Γ × {−1, 1} is a partial function, called transition function,
and {−1, 1} specifies the shift of the machine’s head (left or right);
• q0 ∈ Q is the initial state;
• F ⊆ Q is the set of final states.
The set Q can be viewed as the processor, the tape is the memory, and the transition
function specifies the program.
A Turing machine M receives as input a string of symbols of the input alphabet
Σ, s = s1 s2 . . . sn ∈ Σ∗, written on n squares of the tape, with the head positioned over s1.
The rest of the tape is filled with blank symbols □. Since □ ∉ Σ, if we scan the tape's
content, we will know that the entire input has been read when we first find a □.
At any given moment, the Turing Machine exhibits the following dynamics: if the
machine M is in state q ∈ Q and the head reads the symbol a ∈ Γ on the tape, then the
machine takes an "action", which is a tuple δ(q, a) = (q′, a′, D) ∈ Q × Γ × {−1, 1}. The
machine then transits from state q to state q′ on the processor, the head writes the symbol
a′ on the tape, overwriting a, and then shifts its position by D ∈ {−1, 1}, and so on.
Since δ is a partial function, typically there will be some pairs (q, a) ∈ Q × Γ for
which δ(q, a) is not defined. If it happens, we say that the machine halts. If the machine
halts on some q ∈ F then the input string is accepted, otherwise, it is rejected.
One can notice that it is perfectly possible that the machine never halts. Suppose,
for example, that δ(q, a) = (q, a, 1) and that δ(q, b) = (q, b, −1). If this machine reaches
state q with the symbol a under the head and a b in the next cell to the right, it will zigzag
between these two cells forever without changing state.
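To make the dynamics above concrete, here is a minimal sketch in Python (an illustrative encoding, not part of the original text): δ is represented as a dictionary mapping (state, symbol) pairs to (new state, written symbol, head shift), and the machine halts when δ is undefined. The example machine is the zigzag machine from the previous paragraph.

# Minimal sketch of the Turing Machine dynamics described above (hypothetical
# encoding): delta maps (state, symbol) -> (new_state, written_symbol, move).
BLANK = "_"

def run_tm(delta, tape, state, head, max_steps=20):
    """Run a TM until delta is undefined (halt) or max_steps is reached."""
    tape = dict(enumerate(tape))          # sparse tape, indexed by integers
    for _ in range(max_steps):
        symbol = tape.get(head, BLANK)
        if (state, symbol) not in delta:  # partial function: the machine halts
            return state, tape
        state, write, move = delta[(state, symbol)]
        tape[head] = write                # overwrite the scanned cell
        head += move                      # shift the head by -1 or +1
    return None, tape                     # did not halt within max_steps

# The zigzag machine from the text: it never halts on the input "ab".
zigzag = {("q", "a"): ("q", "a", 1), ("q", "b"): ("q", "b", -1)}
print(run_tm(zigzag, "ab", "q", 0))       # (None, ...): no halt within the bound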
The Turing Machine has many variations that include machines with infinite tape
in only one direction, machines with multitrack tape, as well as multi-tape machines con-
trolled by single or multiple heads. In some models, the tape is allowed to remain station-
ary or to shift more than a cell at one transition. Additionally, there are variations in the
halting conditions as well.
We can define a machine with subsets Qreject and Qaccept of Q such that once the
machine enters one of the states in the subsets, the computation immediately stops and
the string is accepted or rejected depending on the label of the halting state. Each one of
these variants was proven to be equivalent. In other words, they all can solve the same
problems (any computable function), and some algorithms can convert one variant into
another.

2.2. The Halting Problem and Limits of Computation


Turing's whole point in coming up with the idea of the Turing Machine in his seminal paper
"On Computable Numbers, with an Application to the Entscheidungsproblem" (1936)
was to prove that the Entscheidungsproblem is not computable, by defining a device
capable of arbitrary computations and then showing that it cannot solve the Entscheidungsproblem.
Central to the proof is the fact that Turing machines can simulate other Turing
machines. A Turing machine that simulates other Turing machines is called a Universal
Turing Machine (UTM). It is easy to see that a UTM could exist. In the last section, we
basically described an algorithm for running a Turing machine, provided we know its
transition function. We could implement this algorithm on some Universal Turing Machine
as its transition function. After that, we could just encode an arbitrary transition
function in the UTM alphabet and use it as input. We can also encode arbitrary strings in
the UTM alphabet, then take these two strings, the one that describes an arbitrary Turing
machine and the one that describes an arbitrary string in this machine's alphabet, and
run the UTM on them.
In the language of Turing machines, the Entscheidungsproblem can be described
as:

Is it possible for a Universal Turing Machine to exist which is capable of


determining whether an arbitrary Turing Machine will either halt or loop
indefinitely when provided with its description and input?
This formulation is known as the Halting Problem, and Turing proved that this is
impossible. An informal sketch of the proof follows.

Figure 1. Turing Machine

Theorem 2.1 (Undecidability of the Halting Problem). The halting problem, as formulated
above, is undecidable; that is, there is no Universal Turing Machine that can decide
whether an arbitrary Turing Machine halts or loops forever on an arbitrary input.

Proof. (informal) Let's abuse notation and consider a Turing machine M to be, depending
on the context, the machine itself or its encoding as a string to be given as input
to a Universal Turing Machine. Let M⟨i⟩ be the notation for the machine M computing on
input i written on its tape.
Suppose that there exists a machine H that solves the halting problem. That is, if
M is an arbitrary Turing Machine and i is an arbitrary string, let:

H⟨M, i⟩ =
• outputs 0 and then halts, if M⟨i⟩ loops forever;
• outputs 1 and then halts, if M⟨i⟩ halts.

Now, let N be the machine that performs the action that is the negation of its
input. That is:

N⟨x⟩ =
• outputs 0 and then loops forever, if x = 1;
• outputs 1 and then halts, if x = 0;
• undefined for other values of x.

And finally, consider the following Turing machine U :

U ⟨i⟩ := N ⟨H⟨i, i⟩⟩, i is any string input

What if we run U on itself, producing the computation U⟨U⟩? We have the following
possibilities:
• U⟨U⟩ outputs 0 and then loops forever. But this implies that H⟨U, U⟩ outputs 1,
meaning that U⟨U⟩ halts, a contradiction.
• U⟨U⟩ outputs 1 and then halts. But this implies that H⟨U, U⟩ outputs 0, meaning
that U⟨U⟩ loops forever, a contradiction.
• There is no other possibility, because H halts on every input and always outputs
either 0 or 1, so U⟨U⟩ is always defined.
In any case, we reach a contradiction, so a machine like H cannot exist.
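The same diagonal trick can be mimicked in ordinary code. The sketch below assumes, impossibly, that a halting decider h exists; the function names h, n, and u are illustrative and correspond to H, N, and U in the proof, and none of them are real library calls.

# Illustrative sketch of the diagonal argument, under the (impossible)
# assumption that a halting decider `h` exists.

def h(machine, inp):
    """Hypothetical oracle: return 1 if machine(inp) halts, 0 if it loops."""
    raise NotImplementedError("Turing's theorem: no such total decider exists")

def n(x):
    """Negate the oracle's answer: loop forever if x == 1, halt if x == 0."""
    if x == 1:
        while True:       # output 0 and loop forever
            pass
    return 1              # output 1 and halt

def u(machine):
    """U<i> := N<H<i, i>>: feed the machine its own description."""
    return n(h(machine, machine))

# Running u(u) would force the contradiction above: if h claims u(u) halts,
# then u(u) loops, and if h claims u(u) loops, then u(u) halts.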

2.3. Recognizable and Decidable Problems


Using the previous formulation of the Turing Machine (TM), Alan Turing succeeded in
proving that some problems are undecidable and, more than that, in delimiting the recognizable
and decidable languages of the theoretical model, settling the Entscheidungsproblem.
To begin with, let's provide a practical description of the capabilities of the Turing
Machine. Given a TM M and a string w ∈ Σ∗, the TM performs transition steps according
to its well-defined algorithm over w in order to say whether w is accepted by M or not. Using
formal terminology, we can describe a decision problem by means of a Turing machine
and a formal language. The Turing machine serves as a means to define and execute the
computational process, while the language represents the set of inputs that the machine
accepts. An important fact is that we could write a TM that enters a loop
and never stops at a final state, neither rejecting nor accepting.

2.3.1. Turing-Recognizable

A language is Turing-Recognizable if there is a Turing Machine that accepts every
string belonging to the language; on strings that do not belong to the language, the machine
may reject or may never halt. So the Turing-Recognizable languages are those for which
some TM correctly confirms that a string belongs to the language, although it may loop
forever on strings outside it.

2.3.2. Turing-Decidable

A Turing-Decidable language is one that has a TM which always halts and correctly says
whether a string belongs to the language or not. The TM for this kind of language never
loops and always reaches a correct answer.
According to the definitions above, we can now treat decision problems as
languages, instances of these problems as strings, and Turing Machines as procedures
that check the membership of a string in a language, and then classify the problems as
decidable or recognizable. Turing settled the Entscheidungsproblem by showing the
existence of undecidable problems and, as a secondary consequence, opened an entirely
new branch of mathematics: computability theory.

2.4. Church-Turing thesis


In 1936, Alan Turing and his contemporary Alonzo Church proposed a formulation that
had a profound impact on the field of mathematics. While the Church-Turing thesis has
not been formally proven, it is supported by several arguments that reinforce its validity.
According to this thesis, if a function is effectively computable, it can be computed by
a Turing Machine. The main purpose of this thesis is to assert that a Turing Machine
represents the bounds of what is computationally achievable in the real world, adhering
to all physical laws.
The thesis proposes that all computation is limited by what a Turing Machine can
accomplish. Currently, every theoretical model that has been devised is at most equivalent
in power to a Turing Machine. The Universal Turing Machine is Turing-complete, meaning
it can simulate any other Turing Machine. This concept forms the basis for algorithms and
programming languages as we know them nowadays.

2.5. Strong Church-Turing thesis


The Strong Church-Turing thesis extends the idea of the Church-Turing thesis from
computability to efficiency. It suggests that any algorithmic computation can be performed
by a Turing machine, or an equivalent model of computation, with at most a polynomial
overhead. This includes other theoretical models such as the Lambda Calculus, Register
Machines, and other universal computational devices.
A formulation that will help us understand the probabilistic behavior of the quantum
computational model appears in the Strong Church-Turing Thesis: any efficient computation
that can be performed by any physical machine can be simulated by a probabilistic
Turing machine in polynomial time.

2.6. Computation Complexity Classes

Once we have a definition of computation, we can focus on the main types of problems
that can be handled computationally. Decidable problems certainly have TMs that solve
them, but what resources are required to carry out the search for a solution on a TM?
From these questions the subject of complexity classes arose, together with methods to
classify problems according to the resources that a TM needs in order to solve them.

2.6.1. Time

The most important subdivisions of complexity classes concern time and space. Consider
first time complexity: even if a problem is decidable and has a TM proven to solve
the problem, how much time is required for the TM to stop at the rejection or
acceptance state? How many transitions (computation steps) are required?
Let M be a deterministic TM that solves a decidable problem, which means that
M always stops with the correct response for every input. Then we can define f : N →
N, where f(n) is the maximum number of steps required for M to stop at a final state when it
receives an input string of size n. We usually say that the function that defines the time
complexity of M is f(n), or simply that M has time complexity f(n).
Asymptotic analysis is a methodology used to measure the complexity of a
problem implemented on a computational model, as defined earlier. It involves the use
of asymptotic notation, which represents function classes that serve as upper bounds,
limiting the growth of functions. The most commonly used notation is the Big O notation
(O), which describes the worst-case behavior of a function in relation to its input size.
By employing this notation, we can analyze and characterize the efficiency of algorithms
based on their asymptotic behavior.

Let's analyze in detail some of the categories of time complexity problems:

1. Class P
This time complexity class contains all decidable languages for which there is a
deterministic Turing Machine M of time complexity f(n) with f(n) = O(n^k) for
some k ∈ N. It describes the class of problems whose worst-case complexity
function is bounded by some polynomial.
Some examples of problems in this class are Matrix Multiplication, where the
algorithm receives two matrices and computes their product; the Shortest Path
in a Graph, where, given a weighted graph G and two vertices, the problem is to
find the shortest path between the two vertices; and Polynomial Interpolation,
where the task is to find a polynomial function that passes through all received
data points.

2. Class NP
In order to describe the problems in the complexity class NP (Nondeterministic
Polynomial time), we need to introduce the concept of a verifier. Consider a computational
problem where we want to determine whether there is a subset, denoted as
S′, of a given set S ⊂ Z such that the elements in S′ sum up to zero. From this
problem, we can formulate a related problem: verify whether a particular solution
satisfies the original problem. This involves checking whether a given subset S′ indeed
adds up to zero (a small verifier sketch in Python is given right after this list of classes).

Formally, a verifier for a language A is a Turing Machine V such that
A = {w | V accepts ⟨w, c⟩ for some certificate c}. Here, w represents the input
string, and c is the certificate, which serves as evidence to support the claim that
w belongs to A. Consequently, NP problems are defined as the set of decision
problems that can be verified in polynomial time by a deterministic Turing
Machine. It's important to note that the class NP includes the class P, which
consists of problems solvable in polynomial time.

Several examples of problems in NP include the k-clique decision problem,
where we determine if a given graph G contains a clique of size k. Many
problems in this class involve combinatorial search, such as the SAT problem:
in SAT, we are given a Boolean formula F and seek a set of variable
assignments that satisfies F, making it evaluate to true.

3. Class NP-Complete (NPC)

Complexity theory provides a framework for understanding the computational
difficulty of problems. One essential class is NPC, which stands for Nondeterministic
Polynomial-time Complete. This class has two fundamental
requirements that we will explore in this section.

Firstly, a problem or language L must belong to the class NP, the set of problems
that can be decided in polynomial time by a non-deterministic Turing machine or,
equivalently, efficiently verified. This means that, given a solution, we can verify
its correctness in polynomial time.

The second requirement for a problem to be classified as NPC is that every
problem A in NP can be polynomially reduced to L. This reduction, denoted
A ≤p L, implies that we can transform an instance of problem A into an instance
of problem L in polynomial time while preserving the solution's validity.

Formally, let L be a language/problem; then L ∈ NPC if and only if
L ∈ NP and every problem A ∈ NP can be reduced in polynomial time to
L, that is, A ≤p L. Many famous problems are in this class, for example, the
SAT problem, the Knapsack problem, the Clique problem, and Vertex Cover.

4. Class Exponential Time (EXPTIME)


The class EXPTIME encompasses decision problems that a Deterministic
Turing Machine can solve within an exponential amount of time. In terms of
asymptotic notation, problems in this class have solutions that follow a time
complexity of O(2p(n) ), where p(n) is a polynomial function dependent on the
size of the input.

One notable problem belonging to the EXPTIME class is the task of determin-
ing if a Deterministic Turing Machine halts within a maximum of k steps. This
problem captures the fundamental challenge of predicting the termination behav-
ior of a specific computational process.
5. Class Non-deterministic Exponential Time (NEXPTIME)
The class NEXPTIME, also known as NEXP, plays a significant role.
This class encompasses problems whose languages can be decided by a Non-Deterministic
Turing Machine within an exponential number of steps. In terms of
time complexity, problems in NEXPTIME have solutions that can be expressed
as O(2^{n^k}), where n represents the input size and k is a constant.

NEXPTIME represents a class of problems that go beyond the scope
of everyday computational challenges. It serves as a superset of the previously mentioned
complexity classes, encapsulating a broader range of computational complexities.
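As promised in the description of NP above, here is a small illustrative Python sketch of the verifier idea for the zero-sum-subset problem (the function names and the sample set are hypothetical examples, not taken from the text): checking a certificate is cheap, while finding one by brute force takes exponential time in the worst case.

from itertools import combinations

def verify_zero_subset(s, certificate):
    """Polynomial-time verifier: does the certificate pick a non-empty
    subset of s that sums to zero?"""
    return (certificate is not None
            and len(certificate) > 0
            and all(x in s for x in certificate)
            and sum(certificate) == 0)

def find_zero_subset(s):
    """Brute-force search for a certificate: exponential in |s|."""
    for k in range(1, len(s) + 1):
        for subset in combinations(s, k):
            if verify_zero_subset(s, subset):
                return subset
    return None

s = [3, -9, 5, 4, -2, 7]
cert = find_zero_subset(s)               # e.g. (-9, 5, 4) sums to zero
print(cert, verify_zero_subset(s, cert))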

2.6.2. Space

One approach to comprehending the complexity of a Turing Machine is to analyze the


required space or memory needed to solve problems. By examining memory usage, we
can gain insights into the computational resources necessary to handle specific problem
instances. As we have previously discussed, there are various subdivisions within this
complexity class. Many of them are closely related due to similar naming conventions or
definitions derived from time complexity. To delve further, let us formally define space
complexity and explore its sub-classifications.
We define the space complexity of a Deterministic Turing Machine, denoted as M,
as a function f : N → N. Here, f(n) represents the maximum number of tape cells that
M scans when processing an input of size n. Using this concept, we can establish and
define each subclass within the realm of space complexity measurement.
Within the domain of space complexity, we encounter several subcategories that are
defined based on space requirements. These subclasses provide a refined understanding
of the space complexities associated with different computational tasks. Here are some
notable subcategories:
1. Class PSPACE

It is the class of languages that are decidable within a polynomial amount of space on a
Deterministic Turing Machine. A formalization that defines this class explicitly uses the
following notation:

PSPACE = ⋃_k SPACE(n^k)

where SPACE(f(n)) = {L | L is a language decided by a Deterministic
Turing Machine using O(f(n)) space}.

2. Class NPSPACE

The class NPSPACE delves into the realm of languages that can be decided
within a polynomial amount of space by a Non-Deterministic Turing Machine.
Similar to the previous definition, we can provide a formal definition using the
following notation:

NPSPACE = ⋃_k NSPACE(n^k)

In this equation, NSPACE(f(n)) represents the set of languages that can be decided
by a Non-Deterministic Turing Machine using O(f(n)) space. Combining these sets
over all polynomial space bounds, we arrive at the comprehensive class NPSPACE.

3. Class L

We encounter L, which represents the class of problems decidable in logarithmic
(and hence sublinear) space. When discussing time complexity, the notion of
sublinearity is of little use, as reading the entire input already requires a number
of steps at least linear in its size. However, in the context of space complexity,
sublinear-space Turing Machines present an interesting area of study.

Using the notation defined previously, we can describe how L classifies decidable
languages. This class specifically focuses on problems that exhibit logarithmic
space complexity. It captures the subset of computational tasks that can
be solved using significantly less memory than a linear function of the input size.

While the concept of sublinearity loses significance in the realm of time complexity,
it gains relevance in the domain of space complexity. By exploring the
intricacies of L, we uncover a distinct class of problems that defy conventional
linear space requirements. Understanding the nature of sublinear-space Turing
Machines allows us to better grasp the complexities and possibilities inherent in
computational tasks with limited memory resources.

Formally, we define L = SPACE(log n).

4. Class NL

Analogously to the class L for deterministic space complexity, we have the NL
complexity class, which relies on the same definition as before, now considering
a Non-deterministic Turing Machine. We can then formally define
NL = NSPACE(log n).

5. Class NEXPSPACE

The class NEXPSPACE holds a prominent position. It represents the complexity
of problems that can be solved within an exponential amount of space
by a Non-deterministic Turing Machine. By using the same methodology as
before, we can define NEXPSPACE as follows:

NEXPSPACE = ⋃_k NSPACE(2^{n^k})

NEXPSPACE encompasses problems that require an exponential amount of
space for their solutions. These problems often involve complex languages and
computationally challenging tasks. Within the hierarchy of complexity classes,
NEXPSPACE emerges as a superset, encompassing other related complexity
classes.

In general, we observe the following inclusions: NL ⊆ PSPACE ⊆
NPSPACE ⊆ NEXPSPACE ⊆ EXPSPACE. This hierarchy indicates that
NEXPSPACE represents a more extensive set of problems compared to its subclasses.
However, it is important to note that establishing which of these inclusions
are strict requires substantial time and effort, and some of these questions
remain unresolved to date.

3. Church–Turing–Deutsch principle
In 1985, David Deutsch, a renowned researcher and pioneer in the field of quantum
computation, proposed a revised version of the Church-Turing thesis. This redefinition
had significant implications for the understanding of quantum computation and its
capabilities. Deutsch’s thesis stemmed from a critical observation: the prevailing notion
that computation must be carried out by a physical computing device bound by the laws
of physics.

Deutsch, however, introduced the idea that the Church-Turing thesis could
be restated by asserting that every physical process can be simulated by a universal
computing device. This proposition connected computation with the theoretical physics
underlying quantum mechanics, which stands as the most successful theory to date,
capable of explaining nearly all known physical phenomena, even though some aspects
still remain elusive.

Within this framework, a Quantum Computer emerges as a strong candidate for


a universal computing device capable of simulating and harnessing the unique aspects
of quantum physics. By leveraging the principles of quantum mechanics, such as su-
perposition and entanglement, a Quantum Computer offers the potential for exponential
computational speedup and novel problem-solving capabilities.

Deutsch’s revised thesis opens up exciting possibilities for the future of comput-
ing and the exploration of quantum algorithms. It suggests that harnessing the power of
quantum mechanics can lead to a paradigm shift in computational capabilities, providing
new avenues for solving complex problems that are beyond the reach of classical
computers.

In summary, Deutsch’s redefined Church-Turing thesis highlights the potential of


Quantum Computers as universal computing devices capable of simulating and capital-
izing on the principles of quantum physics. This proposition offers a fresh perspective
on computation and lays the foundation for further advancements in the field of quantum
computation.

3.1. Physical Implications of the Turing Machine

Throughout history, the concept of computation has been rooted in our understanding
of physical laws and limitations. The Turing Machine, as a foundational model of
computation, was developed in the real world based on our current knowledge and capa-
bilities. However, it raises an intriguing question: What if there are yet-to-be-discovered
principles or laws that could revolutionize computation?

It is plausible that future discoveries could unveil new concepts, enabling the
development of computational models that surpass the capabilities of the classical
Turing Machine. Even though the classical Turing Machine is limited to decidable
problems, new laws might open up new possibilities. We must recognize that our current
understanding of physics might not encompass the complete truth, and there could be
hidden insights waiting to be unveiled.
This line of thinking gave rise to the concept of the Quantum Turing Machine. It
represents a theoretical computational model that adheres to the principles of quantum
physics, which have already demonstrated their validity, even if not yet fully compre-
hended or precisely defined. Quantum physics provides a rich tapestry of laws and
regulations that have proven to accurately describe the behavior of quantum systems.

These new theories and computational models based on quantum physics present a
fertile ground for exploration and innovation. They challenge us to rethink the fundamen-
tal nature of computation and provide avenues for transformative breakthroughs. As we
delve deeper into the realm of quantum computing, we uncover exciting possibilities and
untapped potential, offering new perspectives and solutions in the realm of computation.

3.2. Quick brief about Quantum Physics


Quantum Computation is intricately linked to the principles of Quantum Physics, a discipline
that encompasses advanced mathematics, foundational physics concepts, and a deep
understanding of theoretical computation. Given the complexity and depth of these subjects,
it is challenging to delve into each one comprehensively within the scope of this
discussion. However, we can establish some fundamental assumptions and postulates to
provide a basis for the study of quantum computing. Let's explore these postulates (a
short numerical sketch follows the list):
1. The concept of Quantum Bit Information:

One of the fundamental postulates of quantum mechanics is that any isolated
physical system can be represented by a Hilbert Space. A Hilbert Space is a
real or complex vector space equipped with a well-defined inner product and
complete with respect to the metric that this inner product induces. This space
serves as the state space of the system, where each possible state of the system
corresponds to a vector within this space.

The state of a quantum system is described by a state vector, which is a unit


vector within the state space. This vector captures the essential information about
the system’s properties and allows for the calculation of probabilities associated
with different measurement outcomes.

To illustrate this concept, let’s consider the example of a single qubit. A qubit
represents the fundamental unit of quantum information and is analogous to a
classical bit. However, unlike classical bits that can only take on the values of 0
or 1, a qubit can exist in a superposition of both states.

The state space of a single qubit is a two-dimensional Hilbert Space. This space
represents the possible states of the qubit, with each state corresponding to a
unique vector within the space. For example, the basis states |0⟩ and |1⟩ represent
the qubit being in the state ”0” or ”1”, respectively.

In summary, the postulate that any isolated physical system can be represented as
a Hilbert Space provides the foundation for understanding the state of quantum
systems. This representation enables us to describe the properties and behavior of
quantum phenomena, such as the state of a single qubit within a two-dimensional
state space. By leveraging these concepts, we can explore the rich possibilities
offered by quantum information and computation.

2. Quantum Evolution:

The evolution of a closed quantum system can be described by a unitary


transformation. Formally, this means that the state of the system at an initial
time, denoted as |φ⟩ at t0 , is related to the state |φ′ ⟩ at a later time, denoted as
t1 , through a unitary operator U . The unitary operator is determined solely by
the times t0 and t1 and acts as a mathematical representation of the transformation.

In essence, the postulate states that the evolution of a quantum state is governed
by a unitary operator, which ensures that the transformation preserves important
properties of quantum mechanics. This unitary operator can be represented
by a unitary complex matrix, which captures the mathematical essence of the
transformation.

By employing unitary transformations, we can understand how a quantum system


evolves and transitions from one state to another over time. These transformations
play a vital role in quantum mechanics and are fundamental to various applica-
tions, including quantum computing and quantum simulations.

3. Quantum Measurement:

Quantum measurement can be described using a set of measurement operators


represented by matrices, denoted as Mm . These operators act on the state
space of the quantum system being measured. Each index, denoted as m, cor-
responds to a distinct measurement outcome that can occur during the experiment.

Considering a system in the state |φ⟩ just before the measurement, we can determine
the probability of obtaining outcome m as p(m) = ⟨φ| Mm† Mm |φ⟩. The state of
the system after the measurement, often referred to as the collapsed state, is
given by Mm |φ⟩ / √(⟨φ| Mm† Mm |φ⟩).

To ensure that the probabilities form a complete distribution, we have the condition
Σm p(m) = Σm ⟨φ| Mm† Mm |φ⟩ = 1. This equation ensures that the sum of
all probabilities equals one, indicating a consistent and complete probability
distribution for the possible measurement outcomes.

In summary, quantum measurement involves a set of measurement operators
acting on the state space of a quantum system. The probabilities of different
outcomes are determined by the inner product of the initial state with the
corresponding measurement operators. The state of the system collapses
according to the observed outcome, and the probabilities of all possible
outcomes sum to one, ensuring a comprehensive probability distribution.

4. System Composition:
The tensor product finds particularly useful applications in representing the state
space of composite physical systems. Consider a scenario where we have multiple
systems labeled from 1 to n, each with its corresponding state denoted as
|φi⟩. The joint state of the entire system can be succinctly described using the
tensor product as |φ1⟩ ⊗ |φ2⟩ ⊗ . . . ⊗ |φn⟩, or simply |φ⟩⊗n when all subsystems
are in the same state |φ⟩. By combining these individual states using the tensor
product, we can construct more complex and comprehensive descriptions of
composite systems.
The tensor product is not limited to its application in describing the state space
of composite systems; it serves as a versatile mathematical tool with broad im-
plications. Its properties, such as linearity, distributive property, associativity, and
the generation of basis elements, empower mathematicians to perform intricate
calculations, manipulate expressions, and explore the dimensions of composite
spaces.
Furthermore, the tensor product finds significance across various mathematical
fields. It forms the foundation for advanced concepts in tensor calculus, multilin-
ear algebra, and differential geometry. Through the understanding and utilization
of the tensor product, researchers gain a powerful framework for comprehend-
ing the intricacies of composite systems and uncovering profound mathematical
relationships.
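Before moving on, here is a small numerical sketch of the four postulates for a single qubit, using numpy; the particular state, gate, and measurement chosen below are illustrative examples and are not taken from the text.

import numpy as np

# 1. State: a unit vector in a 2-dimensional Hilbert space, here (|0> + |1>)/sqrt(2).
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
phi = (ket0 + ket1) / np.sqrt(2)

# 2. Evolution: a unitary operator U (here a simple phase gate), with U^dagger U = I.
U = np.array([[1, 0], [0, 1j]], dtype=complex)
phi_later = U @ phi

# 3. Measurement in the computational basis: M_0 = |0><0|, M_1 = |1><1|.
M = [np.outer(ket0, ket0.conj()), np.outer(ket1, ket1.conj())]
probs = [np.vdot(phi_later, m.conj().T @ m @ phi_later).real for m in M]
print(probs, sum(probs))          # each outcome has probability 0.5; they sum to 1

# 4. Composition: the joint state of two qubits is the tensor (Kronecker) product.
joint = np.kron(phi_later, ket0)  # a 4-dimensional state vector
print(joint.shape)                # (4,)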

4. Quantum Turing Machine


The Quantum Turing Machine is similar to the probabilistic one (see 5.1). The change is
in the transition function. In this case it does not map the current configuration of the machine
to some probability distribution over the set of actions, but rather maps it to a function that
describes the probability amplitude of each possible action.

Definition 4.1 (Quantum Turing Machine). A Quantum Turing Machine is a 7-tuple

M = (Q, Σ, □, Γ, δ, q0 , F ) where:
• Q = {q0 , q1 , . . . qm } is a finite and non-empty set of states;
• Σ is a finite set of input symbols;
• □ is the blank symbol;
• Γ = Σ ∪ {□} is the set of tape symbols;
• δ : Q × Γ → C Q×Γ×{−1,1} is the transition function, and {−1, 1} specifies the
shift of the machine’s head (left or right);
• q0 ∈ Q is the initial state;
• F ⊆ Q is the set of final states.
This definition is similar to those of the other classes of Turing Machines. Note that if we
replace C with {0, 1} we get the classical model, and if we replace it with [0, 1] we get the
probabilistic one. Now δ represents the probability amplitudes of the machine
executing a particular action. We keep the restriction that δ(q, a)(p, b, D) ∈ C
be restricted to the set of computable complex numbers, to prohibit the construction of
a model built on top of non-computable information. From the point of view of complexity
theory, it is particularly interesting to add the further restriction that the amplitudes
be computable in polynomial time. Moreover, we want the sum of the squares of the
magnitudes of the amplitudes to equal 1. That is, if δ(q, a)(p, b, D) = α, then the machine takes the
action (p, b, D) with probability |α|² when it is measured.
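As a quick illustration of the normalization requirement, here is a hedged Python sketch (the states, symbols, and amplitudes are made up for the example): for a fixed (state, symbol) pair, the squared magnitudes of the amplitudes assigned by δ to all actions must sum to 1.

import math

amp = 1 / math.sqrt(2)
delta_q_a = {                       # delta(q, a): action -> complex amplitude
    ("p", "0", +1): amp,            # go to state p, write 0, move right
    ("p", "1", -1): amp,            # go to state p, write 1, move left
}

total = sum(abs(alpha) ** 2 for alpha in delta_q_a.values())
assert math.isclose(total, 1.0)     # well-formed: the |alpha|^2 sum to 1
print(total)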
4.1. Computation on a Quantum Turing Machine
How does a computation on the Quantum Turing Machine take place? In the classical case,
it was enough to carry out successive evaluations of the transition function, according
to the machine's current state and symbol. In the probabilistic Turing machine, there
was a probability distribution over the sample space of possible actions Q × Γ × {−1, 1}
that the machine could take, which induces a probability distribution over the pairs
Q × Γ that the machine could assume. In the quantum case, the possible actions do
not carry a probability distribution, but probability amplitudes whose squared magnitudes
add up to 1. This means, as specified by quantum mechanics, that the machine takes a
superposition of actions, which induces a superposition of possible pairs of states and
symbols that may be on the processor and under the head of the machine. So it makes
sense that the time evolution of the steps of a quantum computation can be formalized
through a unitary transformation. This section shows how this can be done.

Definition 4.2 (Configuration). Suppose that the tape of the Quantum Turing Machine is
indexed by the set of integers, Z. Then:

(q, T, i) ∈ Q × Γ^Z × Z

is a configuration of the Quantum Turing Machine, where q is the current state of the
machine, T : Z → Γ is a function such that T(x) is the symbol written on the cell indexed
by x, and i is the index of the cell under the head.
In this way, the configuration gives us a complete description of the machine at
some computation step. Note that only functions T : Z → Γ such that T(i) ≠ □ for a finite
number of i's are of interest to us, since the inputs are finite strings and at any one time
only a finite number of new symbols could have been written to the tape.
By the format of our transition function, and quantum postulates, at any moment
the machine does not have a single configuration but is in a superposition of possible
configurations. To account for this, we can formalize the set of possible configurations
that the machine can assume as a Hilbert space.

Definition 4.3 (Hilbert Space). A Hilbert Space is a real or complex vector space, with
an inner product ⟨·, ·⟩, that is also a complete metric space with respect to the norm

||x|| = √⟨x, x⟩

Hilbert Spaces are a generalization of Euclidean vector spaces to the infinite-dimensional
case.

Definition 4.4 (Sequences). Let I = {i1, . . . , ik, . . .} be an infinite countable set. A sequence
on a set X, x• = (xi)_{i∈I}, is a map x• : I → X, whose value at ik is denoted by
x_{ik}.
Definition 4.5 (ℓ2(B) space). Let (xb)b∈B, xb ∈ C, be a sequence of complex numbers
indexed by a countable set B. Then:

ℓ2(B) = { (xb)b∈B | Σ_{b∈B} |xb|² < ∞ }

ℓ2(B) is a Hilbert space with the inner product

⟨x|y⟩ = Σ_{b∈B} xb* yb and norm √⟨x|x⟩, for x, y ∈ ℓ2(B).

We can see a vector in ℓ2(B) as a vector with countably infinitely many entries indexed
by B:

(. . . , xbi , xbi+1 , . . . ), xbj ∈ C, bj ∈ B, j ∈ N.
Let C be the set of configurations of a Quantum Turing Machine. We can work
with the configurations in the quantum formalism, treating them within a Hilbert space,
namely the ℓ2(C) space. But, to do this, we need C to be countably infinite.

Proposition 4.1. The set C of possible configurations of a Quantum Turing Machine is
countably infinite.

Proof. The elements of C have the form (q, T, i), with T(i) ≠ □ for only a finite
number of i's. Then:
1. Q is finite and Z is countably infinite.
2. Let TS = {f | f : S → Σ} for some S ∈ X = {S | S ⊂ Z, |S| < ∞}.
3. The set of all finite subsets of a countably infinite set is countable; therefore X is
countable.
4. Fix S. We have that |TS| = |Σ|^|S|. Therefore, |⋃_{S∈X} TS| ≤ Σ_{S∈X} |Σ|^|S|,
and then ⋃_{S∈X} TS is countably infinite.
5. Extend the domain of each function in ⋃_{S∈X} TS to Z by sending Z \ S to □.
Then the set of all possible T's that describe the tape of some QTM is countably
infinite.
6. A finite product of countably infinite sets is countably infinite, so C = Q ×
{possible tapes} × Z is countably infinite.

Let |C⟩ = (q, T, i) ∈ C be a configuration. Then

|C⟩ = |q, T, i⟩

represents, in Dirac notation, the vector (. . . , 0, 0, 1, 0, 0, . . . ) ∈ ℓ2(C) that has a 1 in the
entry indexed by C and 0 in the entries indexed by the other configurations; that is, |Ci⟩
is the (infinite) column vector whose only non-zero entry is a 1 in the row labeled by the
configuration Ci. We can see that the vectors |q, T, i⟩ form a Hilbert basis for ℓ2(C),
called the computational basis.

At any moment, the QTM is in a superposition of configurations that can be
represented as Σ_i αi |Ci⟩, Ci ∈ C. Now we define the evolution of the QTM's computation
in time through an operator U acting in the configuration space.
Definition 4.6 (Evolution operator of a QTM). Let C = (q, T, i) be some QTM configuration.
Then we define an operator Uδ acting on ℓ2(C) as:

Uδ |C⟩ = Uδ |q, T, i⟩ = Σ_{p,a,D} δ(q, T(i))[(p, a, D)] |p, T(a→i), i + D⟩

where T(a→i) is defined as:

T(a→i)(j) = a, if j = i; T(j), otherwise.

That is, T(a→i) is the same function as T, but with the symbol of the cell indexed by i
overwritten with a.
Since the vectors |C⟩ form a basis, Uδ extends to the whole space by linearity. Each application
of Uδ to the current superposition of configurations is a computational step. Computing
T steps is Uδ^T |Cinitial⟩. Since the notation is heavy, we give a step-by-step illustration of
Uδ's dynamics. Suppose that the machine is in configuration (q, T, i) and Uδ is applied,
to perform one computation step. The following happens:
1. The transition function evaluates the current state of the machine, (q, T(i)), for
every configuration in the superposition Σ_i αi |Ci⟩, Ci ∈ C, returning a function
δ(q, T(i)) : Q × Γ × {−1, 1} → C that describes the probability amplitudes of the
actions.
2. Then the machine performs every action in superposition, with probability amplitude
given by the function δ(q, T(i)). That is, if (p, a, D) is an action, then the
QTM performs it with probability amplitude δ(q, T(i))[(p, a, D)].
3. Then the machine enters a superposition of configurations given by the definition
of Uδ, weighted by the probability amplitudes given above. That is,
δ(q, T(i))[(p, a, D)] is the probability amplitude of the configuration that the machine
assumes after performing (p, a, D): the configuration |p, T(a→i), i + D⟩.
The operator Uδ can be viewed as a countably infinite matrix, with rows and
columns indexed by the configurations. The entry [Uδ](i,j) is the probability amplitude,
induced by δ, for configuration Cj to transition to configuration Ci. Note that the majority
of the entries will be 0's. In general,

⟨q, Tx, i| Uδ |p, Ty, k⟩ = δ(p, Ty(k))[(q, Tx(k), i − k)] if |p, Ty, k⟩ can reach |q, Tx, i⟩ in
one step (so that |i − k| = 1 and Tx agrees with Ty outside the cell k), and 0 otherwise.
The diagonals are 0 because there is no action that can take a configuration to itself
since in each step the head must move to another cell.
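To make the action of Uδ concrete, here is a hedged Python sketch (the representation of configurations as tuples and the particular δ below are illustrative, not part of the formal model): a superposition is stored as a dictionary from configurations to amplitudes, and one step sends each configuration to a weighted sum of successor configurations.

import math

amp = 1 / math.sqrt(2)

def delta(q, symbol):
    # Illustrative rule: from state q0 the machine branches, with equal
    # amplitude, into a left-moving and a right-moving configuration.
    if q == "q0":
        return {("qL", symbol, -1): amp, ("qR", symbol, +1): amp}
    return {}

def step(superposition):
    """Apply U_delta once: every configuration branches into a weighted sum."""
    new = {}
    for (q, tape, head), alpha in superposition.items():
        symbol = tape[head]
        for (p, write, move), beta in delta(q, symbol).items():
            new_tape = tape[:head] + write + tape[head + 1:]
            key = (p, new_tape, head + move)
            new[key] = new.get(key, 0) + alpha * beta   # amplitudes accumulate
    return new

start = {("q0", "010", 1): 1.0}       # a single classical configuration
after = step(start)                   # two configurations, amplitude 1/sqrt(2) each
print(after, sum(abs(a) ** 2 for a in after.values()))   # squares still sum to 1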
Up to this point, we haven't placed any restrictions on δ, and said nothing
about the properties of Uδ; but for Uδ to be a unitary operator, in a way that respects the
postulates of quantum mechanics, we need δ to satisfy certain conditions.
Theorem 4.1. The operator Uδ is unitary if, and only if, δ satisfies the following conditions:
1. (Unit length) For all (q, a) ∈ Q × Γ,

Σ_{p,b,D} |δ(q, a)[(p, b, D)]|² = 1

2. (Orthogonality) For all (q1, a1), (q2, a2) ∈ Q × Γ, with (q1, a1) ≠ (q2, a2),

Σ_{p,b,D} δ(q2, a2)[(p, b, D)]∗ δ(q1, a1)[(p, b, D)] = 0

3. (Separability) For all (q1, a1, b), (q2, a2, b′) ∈ Q × Γ × Γ,

Σ_{p∈Q} δ(q2, a2)[(p, b′, −1)]∗ δ(q1, a1)[(p, b, 1)] = 0

The first condition tells us that the amplitudes of the actions leaving any state-symbol
pair have unit total squared magnitude. The second says that the superpositions of actions
leaving two different state-symbol pairs must be orthogonal. And the third says that, for
any two state-symbol pairs, the superposition of actions that write b and move right must
be orthogonal to the superposition of actions that write b′ and move left.

Proof. If Uδ is unitary then Uδ∗Uδ = I, that is, the columns have length 1 and are mutually
orthogonal. Each column of Uδ corresponds to a configuration, which has a fixed state-symbol
pair (q, a), so its non-zero entries are given by the evaluation of the single function
δ(q, a). The squared magnitudes of these amplitudes sum to 1 exactly when condition 1
holds, so the diagonal entries of Uδ∗Uδ are 1 if and only if condition 1 holds.
The following pairs of configurations cannot reach the same configuration in one step and
are therefore automatically orthogonal (their columns never have two non-zero values in
the same row):
• those whose tapes have different symbols in cells other than the ones under the
heads;
• those whose heads are in different cells whose positions do not differ by exactly
two units.
So, we need to take care of the remaining cases.
• Configurations with heads in the same cell and the same tape content, except
possibly for the symbol under the head and the state, are orthogonal if and only if
condition 2 holds;
• condition 3 holds if and only if the configurations whose head positions differ by
exactly two units are orthogonal;
  – these pairs can reach the same configuration in one step only if they differ
in their states and in the symbols written under their heads;
  – such a pair is one configuration in state q1 that writes b and moves right,
and another configuration in state q2, two cells to the right of the first, that
writes b′ and moves left.

4.2. The halting of the Quantum Turing machine


There are divergences in the definition of a QTM's halting. Deutsch (1985) suggested
adding a qubit at the beginning of the tape to mark |0⟩ if the machine has not yet stopped,
and |1⟩ if it has reached a halting configuration. Then, periodic measurements would be
made on this qubit to verify whether the halting condition had been reached. Myers (1997)
showed that the halting qubit could become entangled with the remaining qubits, so that a
measurement of the halting qubit could spoil the computation.
A naive idea: after reaching a final configuration (a configuration in a final state)
|Cf⟩, the computation would keep looping on it until we decide to measure, that is, Uδ |Cf⟩ =
|Cf⟩. This is impossible. Assume that |D⟩ is the configuration that preceded |Cf⟩ in the
computation, that is, Uδ |D⟩ = |Cf⟩ with |D⟩ ≠ |Cf⟩. Then:

1 = ⟨Cf |Cf⟩ = ⟨Cf | Uδ |D⟩ = ⟨Cf |D⟩ = 0

where the third equality uses Uδ∗ |Cf⟩ = |Cf⟩ (apply Uδ∗ to both sides of Uδ |Cf⟩ = |Cf⟩),
and the last one holds because |D⟩ and |Cf⟩ are distinct basis configurations.
A contradiction.
Thinking about this idea, Guerrini, Martini & Masini (2020) proposed a model
with additional configurations that simulate the final configuration. So with n final config-
urations we could loop through them n times before deciding to halt and measure. Thus,
it would suffice to stipulate a minimum number to require the halting and measurement
of the tape.
The most accepted definition is the one given by Bernstein and Vazirani (1993):

Definition 4.7. A final configuration is any configuration in some final state qf . If with
input x, at time T the superposition contains only final configurations and at a time before
T the superposition had no final configurations, then QTM M stops with runtime T at
input x.
After the halting, one must measure the superposition configurations, and then
check the contents of the tape. The result of the measure is probabilistic, as stipulated by
quantum theory.

4.3. Example: the computational advantages


A notation for a Turing Machine configuration is |a1 a2 . . . aj−1 Qq aj . . . an⟩, where the
string a1 . . . an is the (finite) portion of the tape that is in use, that is, not blank; ai is the
symbol written on the i-th cell (counting over the used portion of the tape, not the integer
indexing suggested earlier); Qq represents the position of the head, which is about to read
the next cell, in this case aj; and q is the state of the processor.
Consider the following problem: receiving as input an odd-length string over the
alphabet {0, 1}, decide whether it contains any digit 1, starting with the head in the middle
of the string. The computation on a classical Turing machine is the following:
1. The machine scans the smaller half of the tape. At any moment, if it finds a 1, it
halts and accepts the input.
2. If the machine didn’t find a 1, then it changes direction and scans the tape until it
finds a □. At any moment, if it finds a 1, it halts and accepts the input. If it reaches
the □, it halts and rejects the input.

Figure 2. Classical computation of the problem

In the worst case, this computation takes n + n/2 operations. It can be optimized to
a maximum of n, starting from the tape start.
In the quantum case, the QTM takes advantage of the superposition of configura-
tions, and then scans the tape to the left in one configuration and to the right in another
configuration.

Figure 3. Transition table of the QTM that solves the problem

Figure 4. Quantum computation of the problem

In this case, the computation takes n/2 operations in the worst case.
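To make the step counting concrete, here is a hedged classical sketch of the two-branch scan (amplitudes are ignored, since they play no role in this counting argument; the function name and representation are illustrative): one quantum "step" advances the left-scanning and right-scanning branches simultaneously.

def quantum_style_steps(s):
    """Count parallel steps until some branch finds a 1 or both branches
    run off the ends of the string."""
    mid = len(s) // 2
    left, right = mid, mid
    steps = 0
    while left >= 0 or right < len(s):
        if (left >= 0 and s[left] == "1") or (right < len(s) and s[right] == "1"):
            return steps + 1, True             # some branch accepts
        left, right = left - 1, right + 1
        steps += 1
    return steps, False                        # no branch found a 1: reject

print(quantum_style_steps("0000000"))          # (4, False): about n/2 steps
print(quantum_style_steps("0001000"))          # (1, True): the 1 is in the middle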
To illustrate that quantum machines are not so different from quantum circuits,
here is an implementation of a transition function that applies the famous Hadamard Gate
to the input.

Figure 5. Transition function for Hadamard’s Gate
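Since Figure 5 is not reproduced here, the following is a hedged Python sketch of what such a transition table could look like for a single-cell input (the state and symbol names are made up for the example): the Hadamard gate maps |0⟩ to (|0⟩ + |1⟩)/√2 and |1⟩ to (|0⟩ − |1⟩)/√2, so the two rows of δ carry exactly those amplitudes and satisfy the conditions of Theorem 4.1.

import math

h = 1 / math.sqrt(2)
delta = {
    ("q0", "0"): {("qf", "0", 1): h, ("qf", "1", 1): h},
    ("q0", "1"): {("qf", "0", 1): h, ("qf", "1", 1): -h},
}

# Check the unit-length and orthogonality conditions of Theorem 4.1 for these rows.
rows = list(delta.values())
unit = sum(abs(v) ** 2 for v in rows[0].values())
dot = sum(rows[0][a].conjugate() * rows[1][a] for a in rows[0])
print(unit, dot)    # approximately 1.0 and 0.0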

5. Probabilistic Turing Machine


After introducing a new computational model, it becomes essential to define complexity
classes that capture the computational steps and limitations within that model. This was
demonstrated previously in the context of the Turing Machine and its complexity classes
based on computation steps and requirements.
Similarly, with the Quantum Turing Machine, once it is defined, we can explore
the classes and asymptotic behavior that arise from this notion of ”computation” within
the model. As discussed earlier, the Quantum Turing Machine incorporates a probabilis-
tic approach, inheriting properties, and behaviors from the Probabilistic Turing Machine
(PTM), another computational model. To gain a better understanding of our new complex-
ity classes, which utilize probability information to establish their bounds, it is worthwhile
to delve into the PTM and its significance.
By leveraging the insights from the PTM, we can develop a comprehensive un-
derstanding of the Quantum Turing Machine’s complexity classes and their implications.
These classes incorporate the probabilistic nature of quantum computation, allowing us
to explore and analyze the limits and capabilities of quantum algorithms. The interplay
between probability and computation within the quantum realm introduces novel com-
plexities that were not present in classical models.

5.1. Probabilistic Turing Machine


The definition of the PTM is almost the same as that of the Classical Turing
Machine; we can formalize it as follows:

Definition 5.1 (Probabilistic Turing Machine). A Probabilistic Turing Machine is a 7-tuple
M = (Q, Σ, □, Γ, δ, q0 , F ) where:
• Q = {q0 , q1 , . . . qm } is a finite and non-empty set of states;
• Σ is a finite set of input symbols;
• □ is the blank symbol;
• Γ = Σ ∪ {□} is the set of tape symbols;
• δ : Q × Γ → [0, 1]Q×Γ×{−1,1} is the transition function, and {−1, 1} specifies the
shift of the machine's head (left or right);
• q0 ∈ Q is the initial state;
• F ⊆ Q is the set of final states.
This is essentially the same definition we had for the classical Turing Machine. What changes is the transition function δ, which now assigns to each possible action of the machine’s head a probability of being executed. Since we are interested in a model of computation that does not allow hard-to-compute or uncomputable information to be hidden inside a transition function, it is important that the values δ(q, a)(p, b, D) ∈ [0, 1] are restricted to the set of computable numbers. Additionally, there is a restriction placed on δ: for a given state and input symbol, the probabilities of all possible transitions must sum to 1.
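As a hedged sketch (a hypothetical Python encoding, not taken from the text), such a transition function can be stored as a table mapping (state, symbol) pairs to a distribution over (next state, written symbol, head shift), with each row checked to sum to 1:

    import math

    # Hypothetical delta: (state, symbol) -> {(next_state, write, shift): probability}
    delta = {
        ("q0", "0"): {("q0", "0", +1): 0.5, ("q1", "1", -1): 0.5},
        ("q0", "1"): {("q1", "1", +1): 1.0},
    }

    # Each row must be a valid probability distribution over the possible actions.
    for (q, a), row in delta.items():
        assert math.isclose(sum(row.values()), 1.0), f"row {(q, a)} is not normalized"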
In this type of computational model, as in the Quantum Turing Machine, computation proceeds probabilistically: each non-deterministic step taken along the computation is called a coin-flip step. We can then define:

Definition 5.2 (Coin-flip Step on a PTM). Let M be a Probabilistic Turing Machine in which each nondeterministic step, called a coin-flip step, has two legal next moves. The probability of a computation branch b of M on an input w is then defined as:

Pr[b] = 2^{−k}, where k is the number of coin-flip steps that occur on branch b.

The total probability that M accepts an input w is then defined as:

Pr[M accepts w] = Σ_{b is an accepting branch} Pr[b]
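To make the definition concrete, here is a small sketch (with hypothetical branches, not from the text) that assigns Pr[b] = 2^{−k} to each branch and sums the probabilities of the accepting ones:

    # Each branch is described by (number of coin-flip steps k, whether it accepts).
    branches = [(2, True), (2, False), (3, True), (3, True)]   # hypothetical computation tree

    def branch_probability(k: int) -> float:
        return 2.0 ** (-k)                     # Pr[b] = 2^{-k}

    accept_prob = sum(branch_probability(k) for k, accepts in branches if accepts)
    print(accept_prob)                         # 0.25 + 0.125 + 0.125 = 0.5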

With the given definitions, we can now turn our attention to the languages defined
by a PTM that halts within a polynomial number of steps relative to the size of the input.
This particular criterion allows us to establish boundaries for complexity classes.

5.2. PTM Complexities Classes


Definition 5.3 (Class BPP). We can first define the complexity class bounded-error probabilistic polynomial time (BPP) as the set of all languages L decided by a PTM M in polynomial time, with the acceptance/rejection correctness bounded away from error by a constant factor. The most common convention in the literature takes this constant to be 2/3. So let x be an input and let M decide L ∈ BPP; then:
1. x ∈ L =⇒ Prob[M accepts x] > 2/3 and,
2. x ∉ L =⇒ Prob[M rejects x] > 2/3.
In the probabilistic approach of this computational model, errors can occur when accepting or rejecting an input. The factor of 2/3 represents the fraction of acceptance branches that correctly lead to an accepting state; the same bound applies to rejection branches.
However, many computational problems require algorithms that leave no room for false-positive or false-negative errors. Let us first focus on problems that admit only false-negative errors; this delimits a new class over the PTM that handles this type of problem, the RP class. Formally:

Definition 5.4 (Class RP). We can define the complexity class randomized polynomial time (RP), where RP ⊂ BPP, as the set of all languages L decided by a PTM M in polynomial time, with the acceptance correctness bounded by a constant factor and with no false positives admitted. So let x be an input and let M decide L ∈ RP; then:
1. x ∈ L =⇒ Prob[M accepts x] > 2/3 and,
2. x ∉ L =⇒ Prob[M rejects x] = 1
Analogously, we can define the class of problems that do not admit false-negative errors; this is the main proposition of the co-RP class:

Definition 5.5 (Class co-RP). We can define the complexity class complement of randomized polynomial time (co-RP), where co-RP ⊂ BPP, as the set of all languages L decided by a PTM M in polynomial time, with the rejection correctness bounded by a constant factor and with no false negatives admitted. So let x be an input and let M decide L ∈ co-RP; then:
1. x ∈ L =⇒ Prob[M accepts x] = 1 and,
2. x ∉ L =⇒ Prob[M rejects x] > 2/3
We thus naturally have that co-RP ⊂ BPP and RP ⊂ BPP are more restricted versions of the BPP definition. We can also relate the RP class to the classical Turing Machine classes, for instance by proving that P ⊆ RP.

Theorem 5.1. P ⊆ RP .

Proof. This is the most basic relationship between the TM and PTM computational models. It is almost immediate that P is a subset of RP, since for any language L ∈ P we can construct a PTM that ignores the random bits it receives and acts deterministically on its input.
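The construction in this proof amounts to a wrapper that simply discards the random bits; a minimal sketch (with a hypothetical decider, not from the text) looks like this:

    def deterministic_decider(x: str) -> bool:
        # Hypothetical polynomial-time decider for some language L in P.
        return x.count("1") % 2 == 0

    def rp_decider(x: str, random_bits: str) -> bool:
        # Ignores the random bits entirely, so x in L is accepted with
        # probability 1 (> 2/3) and x not in L is rejected with probability 1.
        return deterministic_decider(x)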
Theorem 5.2. RP ⊆ N P .

Proof. Let L ∈ RP be an arbitrary language. By definition there is a PTM M such that, for x ∈ L, there is at least one random string r ∈ {0, 1}^{p(|x|)} (with p a polynomial in the size of the input x) for which M(x, r) reaches an accepting state, and no such r exists when x ∉ L. We can therefore use r as a certificate for any x ∈ L: it is possible to construct a deterministic Turing Machine that accepts exactly when it is given an x ∈ L together with its corresponding certificate r.
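As a small sketch (hypothetical machines, not from the text), the random string of the RP machine is reused verbatim as the NP certificate:

    def rp_machine(x: str, r: str) -> bool:
        # Hypothetical RP decider: it only accepts x when the random string r
        # happens to witness membership, and never accepts when x is not in L.
        return x in {"ab", "ba"} and r.startswith("1")

    def np_verifier(x: str, certificate: str) -> bool:
        # Deterministic verifier: accept iff the candidate certificate makes
        # the RP machine accept; such a certificate exists exactly when x is in L.
        return rp_machine(x, certificate)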
Going deeper into the BPP class, we find the ZPP class, which is simply the intersection of the co-RP and RP classes. In this class no errors at all are admitted, neither false positives nor false negatives, as we formally describe:

Definition 5.6 (Class ZPP). The complexity class zero-error probabilistic polynomial time (ZPP), where ZPP = co-RP ∩ RP, is the set of all languages L decided by a PTM M in polynomial time with zero errors. For L ∈ ZPP and an input x, M halts in an "I don't know" state with probability less than 1/2; otherwise:
1. x ∈ L =⇒ Prob[M accepts x] = 1 and,
2. x ∉ L =⇒ Prob[M rejects x] = 1
The ZPP complexity class, although intriguing, constitutes only a small fraction of
the problems within the broader BPP class. It is noteworthy to consider the relationship
between classical Turing Machines and Probabilistic Turing Machines. In this context,
we can assert that the class P, which consists of classical polynomial problems with exact
solutions, is a subset of ZPP. However, it is important to emphasize that a single run of a ZPP machine is not guaranteed to produce a definite answer: it may halt in the "I don't know" state.
Furthermore, an interesting observation is that the ZPP problems form no larger a class than the RP problems. This leads us to formulate and prove the following theorem:

Theorem 5.3. ZP P ⊆ RP .

Proof. This proof can be done by a simple construction. As noted before, the RP class does not have to be defined with a bounded error probability of exactly 2/3; it only needs some constant factor. Most of the literature uses 2/3, but for this proof let us take the bound to be 1/2. Let M be a PTM that decides a language L ∈ ZPP; by definition, M halts in the "don't know" state less than 1/2 of the time. We can then build a PTM M* that receives an input x and runs M(x): if the answer is "acceptance", M* returns "acceptance"; otherwise (a "rejection" or a "don't know" answer), M* returns "rejection". Then we get:
1. x ∈ L =⇒ Prob[M* accepts x] > 1/2 (M never rejects an x ∈ L and answers "don't know" with probability less than 1/2) and,
2. x ∉ L =⇒ Prob[M* rejects x] = 1 (M never accepts an x ∉ L)
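A minimal sketch of the wrapper M* used above (with a hypothetical three-valued decider, not from the text):

    import random

    def zpp_decider(x: str) -> str:
        # Hypothetical ZPP-style decider: it is never wrong, but it may answer
        # "unknown" with probability strictly below 1/2.
        if random.random() < 0.4:
            return "unknown"
        return "accept" if x in {"aa", "bb"} else "reject"

    def rp_wrapper(x: str) -> str:
        # M*: treat "unknown" as a rejection. Members of L are then accepted
        # with probability > 1/2, and non-members are rejected with probability 1.
        return "accept" if zpp_decider(x) == "accept" else "reject"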

In the diagram provided (Figure 6), we observe the overall relationships between complexity classes. Two classes stand out: BQP, which we will explore further in the upcoming section, and PP, a comprehensive superset encompassing all probabilistic languages solvable in polynomial time.
Figure 6. Probabilistic Complexities Classes

One advantage of employing probabilistic deciders within a polynomial-time framework, despite the presence of a bounded error factor, is the ability to run the decision procedure multiple times on an input. By leveraging these multiple responses, we can obtain a more precise estimation of the probability of acceptance or rejection, which improves the accuracy of the decision-making process.
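A rough numerical sketch of this repeated-runs amplification (assuming independent runs, each correct with probability 2/3, and a majority vote over the answers):

    from math import comb

    def majority_error(p_correct: float, runs: int) -> float:
        # Probability that at most half of the independent runs are correct,
        # i.e. that the majority vote gives the wrong answer.
        k = runs // 2
        return sum(comb(runs, i) * p_correct**i * (1 - p_correct)**(runs - i)
                   for i in range(k + 1))

    for runs in (1, 11, 51, 101):
        print(runs, majority_error(2 / 3, runs))   # the error decays exponentially in runs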

5.3. QTM Complexities class


Having seen the PTM, the QTM, and the classical TM, we are ready to apply the acquired knowledge to the complexity classes of the Quantum Turing Machine. Initially, we will focus on the basic complexity classes of the QTM and then explore different kinds of classes.
Immediately from the previous section, we have two classes that are almost identical to the PTM classes: BQP and ZQP are the counterparts of BPP and ZPP, respectively, for the Quantum Turing Machine. Formally:

Definition 5.7 (Class BQP). We can define the complexity class bounded-error quantum polynomial time (BQP) as the set of all languages L decided by a QTM M in polynomial time, with the acceptance/rejection correctness bounded away from error by a constant factor. The most common convention in the literature takes this constant to be 2/3. So let x be an input and let M decide L ∈ BQP; then:
1. x ∈ L =⇒ Prob[M accepts x] > 2/3 and,
2. x ∉ L =⇒ Prob[M rejects x] > 2/3.
The problems that belong to BQP are those abstractly viewed as efficiently solvable by quantum computers: solvable by a polynomial number of quantum computation steps with a small probability of error.

Definition 5.8 (Class ZQP). The complexity class zero-error quantum polynomial time (ZQP) is the set of all languages L decided by a QTM M in polynomial time with zero errors. For L ∈ ZQP and an input x, M halts in an "I don't know" state with probability less than 1/2; otherwise:
1. x ∈ L =⇒ Prob[M accepts x] = 1 and,
2. x ∉ L =⇒ Prob[M rejects x] = 1
As we can see, these two classes closely resemble their probabilistic versions. We can then prove an evident relationship:

Theorem 5.4. BP P ⊆ BQP .

Proof. BQP contains BPP because a Quantum Turing Machine M can simulate any classical circuit and can also generate random bits through quantum gates such as the Hadamard gate; this ability to generate randomized computation paths is precisely the defining capability of the BPP class.
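A small sketch (NumPy, assuming an ideal measurement) of the mechanism mentioned in the proof: measuring the state H|0⟩ in the computational basis yields an unbiased random bit, which is exactly the coin flip a PTM needs:

    import numpy as np

    rng = np.random.default_rng(0)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    state = H @ np.array([1.0, 0.0])       # H|0> = (|0> + |1>)/sqrt(2)
    probs = np.abs(state) ** 2             # measurement probabilities: [0.5, 0.5]

    samples = rng.choice([0, 1], size=10_000, p=probs)
    print(samples.mean())                  # close to 0.5: an unbiased coin flip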

Then we could define a new special class that arises from the ZQP definition.

Definition 5.9 (Class EQP). The complexity class exact quantum polynomial time (EQP) is the set of all languages L decided by a QTM M in polynomial time with zero errors and without ever halting in an undefined ("I don't know") final state. For L ∈ EQP and an input x we get:
1. x ∈ L =⇒ Prob[M accepts x] = 1 and,
2. x ∉ L =⇒ Prob[M rejects x] = 1
It is clear that EQP ⊆ ZQP ⊆ BQP, as follows from the construction and definition of these classes, and also that P ⊆ EQP: the classical Turing Machine can be simulated by a Quantum Turing Machine, so any polynomial-time classical algorithm can be implemented on the QTM.
So far we have seen the basic definitions of complexity classes for the Quantum Turing Machine; most of them are analogues of classical complexity classes. We can also define different ways of viewing a classical class through the quantum lens. One of the first non-typical classes we can define is QMA (Quantum Merlin-Arthur), but before giving a formal definition, let us discuss what a verifier is in quantum computation.

Definition 5.10 (Quantum Proof). In order to build an NP-like class delimited by the Quantum Turing Machine, we need to specify a verifier, or in this case a quantum proof. A quantum proof is a quantum state that works as a certificate/witness for a quantum computer, which runs in polynomial time on a given input and decides with high confidence. This is what we call efficient verification in quantum computation.
To understand this better, let us introduce the Merlin (M) and Arthur (A) game. Merlin is computationally all-powerful, while Arthur is bounded by polynomial time. Consider a Boolean function over n variables, f(x1, x2, . . . , xn). Merlin generates an assignment (a1, a2, . . . , an) and sends it to Arthur, who decides, with high probability, whether or not the assignment satisfies f. If Arthur can do this with a PTM in polynomial time, then we say that f belongs to the MA class, a variant of NP that arises from Arthur's ability to use randomness.
We can then define QMA as an analogue of NP for the quantum setting, standing to the quantum classes roughly as NP stands to P, and as MA stands to BPP in the probabilistic approach:

Definition 5.11 (Class QMA). QMA is the class of decision problems that can be solved by a quantum verification algorithm V on a Quantum Turing Machine such that, for all possible inputs x, a language L ∈ QMA satisfies:
1. if x ∈ L =⇒ there is a |ϕ⟩ for which V(x, |ϕ⟩) = 1 with probability at least a = 3/4, and
2. if x ∉ L =⇒ for all |ϕ⟩ the probability that V(x, |ϕ⟩) = 1 is at most b = 1/4,
where |ϕ⟩ is the quantum proof, the verifier's witness quantum state.
An interesting fact about QMA is that we are not tied to the 3/4 probability: we can amplify the success probability to any constant factor lower than 1. There is also a proof of the robustness of the error bounds of this class, which can be stated as follows.

Theorem 5.5. Let a, b : N → [0, 1] and let q be a polynomial function satisfying a(n) − b(n) ≥ 1/q(n) for all n ∈ N. Then QMA(a, b) ⊆ QMA(1 − 2^{−r}, 2^{−r}) for every r ∈ poly.

Proof. The first part of the proof has a simple idea: if we have a verification algorithm V with completeness and soundness probabilities a and b, we can construct a new verification procedure that runs V over a large number of copies of the original certificate and accepts if the fraction of acceptances of V is greater than (a + b)/2. The second and harder part is to handle the new certificate, which we cannot assume consists of several copies of the original certificate but may be an arbitrary, possibly highly entangled quantum state.
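A rough numerical sketch of the repeat-and-threshold idea from the first part of the proof (assuming independent runs, which is precisely the assumption the second part removes; the gap 0.6 vs. 0.5 is a hypothetical 1/q(n)):

    from math import comb

    def prob_at_least(p: float, n: int, threshold: int) -> float:
        # Probability of at least `threshold` acceptances in n independent runs.
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(threshold, n + 1))

    a, b, n = 0.6, 0.5, 1001                  # completeness, soundness, number of runs
    threshold = int(n * (a + b) / 2) + 1      # accept iff more than (a+b)/2 of the runs accept

    print(prob_at_least(a, n, threshold))     # completeness of the new verifier: close to 1
    print(prob_at_least(b, n, threshold))     # soundness of the new verifier: close to 0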

While it is trivial to establish that N P ⊆ QM A, it is noteworthy that QMA


encompasses a broader range of problems beyond those in NP. In fact, there are problems
in QMA that are not currently known to be in NP. This highlights the significant power of
quantum proofs in tackling a wider class of problems.
Several well-known problems fall within the realm of QMA, including the Local
Hamiltonian problem, Density Matrix Consistency, the Quantum Clique problem, and
the Group Non-Membership problem. These problems exemplify the diverse range of
challenges that can be addressed using quantum proofs.
By examining these problems more closely, we can gain deeper insights into the
capabilities and implications of quantum proofs in the context of QMA.
1. Local Hamiltonian Problem
Let us define a k-local Hamiltonian H as a Hermitian matrix acting on n qubits that can be represented as the sum of m Hamiltonian terms, each acting on at most k qubits:

H = Σ_{i=1}^{m} H_i

The objective of this problem is, given such a matrix H, to find its smallest eigenvalue λ, usually called the ground state energy of the Hamiltonian (a small numerical sketch follows after this list).

2. Group Non-Membership Problem
The Group Non-Membership problem involves determining whether a given element g lies outside a subgroup H of a finite group G. The input consists of the subgroup H, the finite group G, and the element g; the objective is to accept when g is a non-member of the subgroup H and to reject when it is a member.
This problem has various variants depending on the representation of the group elements. The choice of representation can significantly impact the difficulty of the problem, introducing additional nuances and challenges to overcome.
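For item 1 above, here is a small numerical sketch (a hypothetical 2-local Hamiltonian on 3 qubits, written with NumPy; not an instance taken from the text) showing that the ground state energy is simply the smallest eigenvalue of the Hermitian matrix H:

    import numpy as np

    I2 = np.eye(2)
    Z = np.array([[1.0, 0.0], [0.0, -1.0]])

    def kron(*ops):
        # Tensor product of single-qubit operators.
        out = np.array([[1.0]])
        for op in ops:
            out = np.kron(out, op)
        return out

    # Hypothetical 2-local Hamiltonian on 3 qubits: each term touches at most 2 qubits.
    H = kron(Z, Z, I2) + kron(I2, Z, Z) + kron(Z, I2, Z)

    ground_energy = np.linalg.eigvalsh(H).min()   # smallest eigenvalue = ground state energy
    print(ground_energy)                          # -1.0 for this choice of terms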
From the definition of quantum proofs, we can move on to another type of certification: the interactive proof system, another concept used in computational complexity theory.

Definition 5.12 (Interactive Proof System). An interactive proof (IP) system is a theoretical computational model in which a prover and a verifier exchange messages to establish whether a given input string belongs to a specific language. The model is characterized by certain properties: the verifier has bounded computational power but is always honest in its decision-making, while the prover has unlimited computational power but is not inherently trustworthy.
The computation takes place as the verifier receives a string and engages in a process of communication with the prover through message exchanges. The goal is for the verifier to reach an answer and become convinced of its correctness.
Interactive proof systems can recognize and verify problems within the boundaries of the verifier's computational abilities; these bounds determine the verifier's capability to effectively assess and validate the solution provided by the prover. Usually, the verifier's computational power is bounded by polynomial time.
We can extend the concept of interactive proofs to the realm of quantum computation, giving rise to the Quantum Interactive Proof system. In this system, the messages exchanged between the prover and the verifier consist of quantum information, introducing novel branches and intriguing properties that differ from the classical definition. This quantum variant of interactive proof holds promise for exploring the capabilities and limitations of quantum communication and verification.

Figure 7. Message protocol in an IP system

Definition 5.13 (Class QIP). Based on the properties arising from the quantum interaction between a prover and a verifier, we can propose a new complexity class. Let L be a language, m a polynomially bounded function, and a, b : N → [0, 1] polynomial-time computable functions. We say that L ∈ QIP(m, a, b) if and only if there is an m-message quantum verifier V with the following properties:
1. Completeness. ∀x ∈ L there is a quantum prover P that causes V to accept x with probability at least a(n), where n is the size of x.
2. Soundness. ∀x ∉ L, every quantum prover P causes V to accept x with probability at most b(n), where n is the size of x.
Then QIP = ∪_m QIP(m), the union over all polynomially bounded functions m.

Figure 8. Quantum Interactive Proof System

This class is quite robust with respect to the choice of completeness and soundness probabilities. We can also state a relationship with the classical complexity classes: QIP ⊆ EXP.

5.4. Conjectured and Proved Relations


There are many open questions about the relationships between the classical, probabilistic, and quantum complexity classes arising from the Turing Machine. This subject is an important starting point for studying how much computation is possible and currently efficient, and what the advantages and drawbacks are of moving a problem between complexity classes.

Figure 9. Conjectured and Proved Relationships

In Figure 9, the left side illustrates the established relationships in terms of class inclusion, moving from top to bottom. This representation captures the relationships between the classes derived from the three computational models based on the Turing Machine that have been discussed.

6. Bibliographic
The main bibliography used in the present article is listed below.

References
[Bennett et al. 1997] Bennett, C. H., Bernstein, E., Brassard, G., and Vazirani, U. (1997).
Strengths and weaknesses of quantum computing. SIAM Journal on Computing,
26(5):1510–1523.
[Bernstein and Vazirani 1993] Bernstein, E. and Vazirani, U. (1993). Quantum complexity
theory. In Proceedings of the Twenty-Fifth Annual ACM Symposium on Theory of
Computing, STOC ’93, page 11–20, New York, NY, USA. Association for Computing
Machinery.
[Bernstein and Vazirani 1997] Bernstein, E. and Vazirani, U. (1997). Quantum complexity
theory. SIAM Journal on Computing, 26(5):1411–1473.
[Carpentieri 2003] Carpentieri, M. (2003). On the simulation of quantum turing machines.
Theoretical Computer Science, 304(1):103–128.
[Deutsch and Penrose 1985] Deutsch, D. and Penrose, R. (1985). Quantum theory, the
church–turing principle and the universal quantum computer. Proceedings of the Royal
Society of London. A. Mathematical and Physical Sciences, 400(1818):97–117.
[Fortnow 2000] Fortnow, L. (2000). One complexity theorist’s view of quantum computing.
Electronic Notes in Theoretical Computer Science, 31:58–72. CATS 2000 Computing:
the Australasian Theory Symposium.
[Fouche et al. 2007] Fouche, W., Heidema, J., Jones, E. G., and Potgieter, P. (2007).
Deutsch’s universal quantum turing machine (revisited).
[Guerrini et al. 2020] Guerrini, S., Martini, S., and Masini, A. (2020). Quantum turing ma-
chines: Computations and measurements. Applied Sciences, 10(16).
[Hu et al. 2019] Hu, S., Liu, P., Chen, C. R., Pistoia, M., and Gambetta, J. (2019).
Reduction-based problem mapping for quantum computing. Available at https:
//hushaohan.github.io/pdf/hu2019computer.pdf.
[Kaye et al. 2007] Kaye, P., Laflamme, R., and Mosca, M. (2007). An Introduction to Quan-
tum Computing. OUP Oxford.
[Linden and Popescu 1998] Linden, N. and Popescu, S. (1998). The halting problem for
quantum computers. arXiv: Quantum Physics.
[Molina and Watrous 2018] Molina, A. and Watrous, J. (2018). Revisiting the simulation of
quantum turing machines by quantum circuits. CoRR, abs/1808.01701.
[Ozawa 2002] Ozawa, M. (2002). Quantum Turing Machines: Local Transition, Prepara-
tion, Measurement, and Halting, pages 241–248. Springer US, Boston, MA.
[Sipser 2013] Sipser, M. (2013). Introduction to the Theory of Computation. Course Tech-
nology, Boston, MA, third edition.
[Watrous ] Watrous, J. Quantum computational complexity. Available at https://cs.uwaterloo.ca/~watrous/Papers/QuantumComputationalComplexity.pdf.
[Yamakami 1999] Yamakami, T. (1999). A foundation of programming a multi-tape quan-
tum turing machine. In Kutyłowski, M., Pacholski, L., and Wierzbicki, T., editors,
Mathematical Foundations of Computer Science 1999, pages 430–441, Berlin, Heidel-
berg. Springer Berlin Heidelberg.
[Yanofsky and Mannucci 2008] Yanofsky, N. and Mannucci, M. (2008). Quantum Comput-
ing for Computer Scientists. Cambridge University Press.
