
Image Classification Using a Hybrid Quantum Neural Network:
A Comparative Study
Design Supervisor Name: Dr. KIRUTHIKA DEVI
Name of the Student: ELKANA VILVIN EASTER
Register Number: 42611034



Presentation Outline
• Abstract
• Introduction
• Existing Availabilities
• Disadvantages of the Existing Product
• Proposed Product
• Advantages of the Proposed Product
• Design Requirements
• Design Diagram
• Product Working Principle
• Product Output View
• References
Abstract

• This study focuses on comparing the efficacy of classical Convolutional Neural Networks (CNN), AlexNet, MobileNet, and a Hybrid Quantum Neural Network (QNN) for the task of classifying MNIST digits. The objective is to analyze and compare the performance of these models to ascertain whether Quantum Machine Learning (QML) techniques can offer enhanced efficiency in digit recognition tasks. The research methodology involves a systematic comparison of the architecture, computational requirements, and accuracy of each model. Initially, the classical CNN, known for its robustness in image classification tasks, will serve as the benchmark. Then, the deeper and more complex AlexNet and the lightweight MobileNet architectures will be evaluated for their accuracy and computational efficiency in comparison to the classical CNN.

• Furthermore, the study introduces a Hybrid Quantum Neural Network (QNN) designed to leverage quantum computation techniques for enhanced performance. The QNN model is expected to demonstrate the potential for improved efficiency in the classification of MNIST digits, combining classical and quantum methodologies.



Abstract

• The comparative analysis will encompass various parameters, including model accuracy, training and inference speed, computational complexity, and resource requirements. Evaluating these aspects will provide insight into the strengths and limitations of each model. Through this comparative analysis, the study aims to draw conclusions regarding the effectiveness of QML techniques in digit classification tasks. The findings will be supported by empirical evidence obtained through rigorous experimentation and analysis.
• In conclusion, this research endeavours to contribute valuable insights into the
potential of Quantum Machine Learning in enhancing the efficiency of image
classification tasks. The findings will shed light on the practical application and
viability of QNNs in comparison to classical CNN architectures.



Introduction
In the grand tapestry of artificial intelligence, image classification stands as one of its most visually
resonant threads. From deciphering medical scans to enabling autonomous vehicles and sorting
our digital photo albums, the accuracy of image classification models carries real-world
consequences. To improve this vital aspect of AI, we must set our sights on a quantum horizon,
where classical machine learning finds its quantum counterpart in the form of Hybrid Quantum-
Classical Neural Networks (QNNs).

As we embark on this transformative journey, imagine a world where image recognition transcends
the limitations of classical deep learning and delves into the realm of quantum computing. Hybrid
QNNs, the flag-bearers of this new era, represent the marriage of two computational paradigms
that, at first glance, seem irreconcilable—the classical and the quantum.

The Classical Realm:

Convolutional Neural Networks (CNNs) and deep learning architectures have revolutionized image
classification, achieving human-level accuracy in a myriad of applications.
Introduction
The Quantum Revolution:

The emergence of quantum computing, a technological tour de force that has long dwelled in the realm of theory and speculation, is finally transitioning into practicality. Quantum processors from IBM Quantum, Google Quantum AI, Rigetti Computing, and others have begun to showcase the tangible progress that beckons the union of quantum and classical computing.
Introduction
Quantum Computing: A Different Paradigm:

What sets quantum computing apart is its fundamental deviation from classical computing. Classical computers, driven by bits, handle data as either 0s or 1s, straightforward and linear. Quantum computing, on the other hand, relies on quantum bits or "qubits." These qubits dance in a superposition, a mesmerizing state of being 0 and 1 simultaneously.

The Quantum-Classical Convergence:

In the world of Hybrid QNNs, quantum and classical computing converge harmoniously. This is not merely a matter of concatenating quantum and classical operations but rather a fusion of their innate capabilities, a symphony where qubits interact gracefully with classical bits.

Leveraging Quantum Phenomena:

The secret of the Hybrid QNN lies in harnessing the quantum phenomena of superposition and entanglement to elevate image classification. Quantum data encoding, variational quantum circuits, quantum error correction, and quantum speed-up techniques all have pivotal roles to play.
Introduction
The motivation behind quantum machine learning (QML) is to integrate notions from quantum
computing and classical machine learning to open the way for new and improved learning schemes.
QNNs apply this generic principle by combining classical neural networks and parametrized quantum
circuits. Because they lie at an intersection between two fields, QNNs can be viewed from two
perspectives:

From a machine learning perspective, QNNs are, once again, algorithmic models that can be trained
to find hidden patterns in data in a similar manner to their classical counterparts. These models can
load classical data (inputs) into a quantum state, and later process it with quantum gates
parametrized by trainable weights. Figure 1 shows a generic QNN example including the data loading
and processing steps. The output from measuring this state can then be plugged into a loss function
to train the weights through backpropagation.
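
As a concrete illustration of the loading and processing steps just described, the sketch below builds a small parametrized quantum circuit and wraps it as a QNN. It assumes Qiskit and the qiskit-machine-learning package are installed; the two-qubit width, ZZFeatureMap encoding, and RealAmplitudes ansatz are illustrative choices rather than the exact circuit used in this work.

# QNN sketch: a feature map loads classical inputs, a parametrized ansatz
# processes them, and the measured expectation value can feed a loss function.
from qiskit import QuantumCircuit
from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes
from qiskit_machine_learning.neural_networks import EstimatorQNN

num_qubits = 2
feature_map = ZZFeatureMap(num_qubits)       # encodes classical data into a quantum state
ansatz = RealAmplitudes(num_qubits, reps=1)  # quantum gates parametrized by trainable weights

circuit = QuantumCircuit(num_qubits)
circuit.compose(feature_map, inplace=True)
circuit.compose(ansatz, inplace=True)

qnn = EstimatorQNN(
    circuit=circuit,
    input_params=feature_map.parameters,
    weight_params=ansatz.parameters,
)
# qnn.forward(inputs, weights) returns an expectation value that can be plugged
# into a loss function and optimized through backpropagation.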



EXISTING AVAILABILITIES
• Quantum neural networks (QNNs) are an emerging and active area of research in quantum computing. Some existing availabilities and considerations related to QNNs are outlined below.

• Quantum Computing Hardware: Quantum neural networks require access to quantum computing hardware, such as quantum processors from companies like IBM, Google, and Rigetti. Access to these quantum devices often depends on partnerships, collaborations, or memberships with quantum computing providers.

• Quantum Programming Frameworks: Various quantum programming frameworks, such as Qiskit (IBM), Cirq (Google), and Forest (Rigetti), are available for developing and running quantum algorithms, including QNNs.
EXISTING AVAILABILITIES
• Open-Source Quantum Libraries: Several open-source quantum libraries and tools are available to the research community. These libraries provide quantum circuit simulation capabilities and QNN-specific components. Examples include PennyLane, TensorFlow Quantum, and Q# (Microsoft).

• Quantum Machine Learning Platforms: Companies and organizations offer quantum machine learning platforms, which may include QNN capabilities. Examples include Amazon Braket, D-Wave Leap, and Azure Quantum (Microsoft).

• Quantum Software Development Kits (SDKs): Quantum SDKs, like the Quantum Development Kit from Microsoft and IBM's Qiskit SDK, often include tools for developing and simulating quantum neural networks.
Disadvantages of the existing Product
• Quantum Hardware: QNNs require quantum computers to run, which
are currently expensive and not widely available.
• Error Correction: Quantum systems are prone to errors due to
environmental noise, requiring complex error correction techniques.
• Quantum machine learning (QML) is an exciting and evolving field with the potential to revolutionize various industries. However, like any technology, it has its disadvantages and challenges:
• Limited Hardware Access: Quantum computers with a sufficient number
of qubits and low error rates are still in the early stages of development.
Access to quantum hardware for practical quantum machine learning
tasks can be limited, and researchers often rely on simulations.



Disadvantages of the existing Product
• Error Rates: Quantum computers are inherently noisy, and qubits are susceptible to
errors, which can lead to inaccurate results. Error correction techniques are still an
active area of research.
• Quantum-Only Advantage: Quantum algorithms typically provide an advantage
over classical algorithms for specific problems. Not all machine learning tasks
benefit significantly from quantum computing, and identifying problems where
quantum advantage exists can be challenging.
• Algorithm Development: Developing quantum algorithms and mapping classical
machine learning problems to quantum systems can be complex and require
expertise in both quantum physics and machine learning.
Proposed Product
This methodology unveils our journey into the realm of quantum-enhanced image classification, a fusion of quantum computing's transformative potential and classical machine learning's established prowess. The initial phase of our approach delves into data preparation: we normalize classical MNIST dataset images and encode them into quantum states, a pivotal process that serves as the foundation for subsequent quantum processing. The heart of our methodology lies in the design and orchestration of efficient quantum circuits.

We tailor these quantum circuits using TensorFlow Quantum, empowering them to work in unison with classical Convolutional Neural Networks (CNNs). This harmonic convergence of quantum and classical components holds the promise of substantially augmenting image classification accuracy, effectively transcending the limitations of classical methods. A simplified sketch of the encoding step appears below.
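
To make the data-preparation step above concrete, the following is a minimal sketch of one common TensorFlow Quantum encoding scheme: normalized MNIST images are downscaled, bright pixels become X gates on a grid of qubits, and the resulting circuits are processed by a trainable parametrized circuit. The 4x4 resolution, 0.5 threshold, and single-qubit readout are illustrative assumptions, not the project's final design.

# Sketch only: assumes Cirq and TensorFlow Quantum are installed.
import cirq
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq

def encode_image(image_4x4, threshold=0.5):
    # Map a normalized 4x4 image to a circuit: an X gate on each "bright" pixel's qubit.
    qubits = cirq.GridQubit.rect(4, 4)
    circuit = cirq.Circuit()
    for value, qubit in zip(image_4x4.flatten(), qubits):
        if value > threshold:
            circuit.append(cirq.X(qubit))
    return circuit

# Trainable part: one parametrized rotation per qubit, read out on a single qubit.
qubits = cirq.GridQubit.rect(4, 4)
thetas = sympy.symbols("theta0:16")
model_circuit = cirq.Circuit(cirq.rx(thetas[i])(q) for i, q in enumerate(qubits))
readout = cirq.Z(qubits[0])

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(), dtype=tf.string),  # circuits arrive as serialized tensors
    tfq.layers.PQC(model_circuit, readout),
])
# tfq.convert_to_tensor([encode_image(img) for img in images]) turns encoded
# circuits into inputs this model can consume.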
Proposed Product
To ensure the reliability and robustness of quantum computations, quantum error correction techniques are judiciously integrated. Subsequently, our Quantum-Enhanced Image Classifier undergoes intensive training and optimization. A meticulous performance evaluation follows, scrutinizing crucial metrics such as classification accuracy, processing speed, and computational efficiency.

As our journey unfolds, the documentation of this methodology emerges as an indispensable component. We envisage this documentation as an invaluable resource, a testament to the synergy of quantum computing and machine learning. Furthermore, the commitment to open-source contribution reflects our vision, igniting the spark of innovation in the quantum computing and machine learning communities. Ultimately, our approach holds the potential to redefine the boundaries of image recognition and, in the larger context, catalyze transformative advancements in quantum computing and machine learning.
Advantages of the proposed product
• Quantum Speedup: HQNNs can harness the power of quantum computing
to accelerate specific parts of the machine learning process, such as
quantum data encoding, quantum feature mapping, and quantum circuit
execution. This can lead to faster training and inference times for certain
tasks.
• Improved Robustness: By combining classical and quantum elements,
HQNNs can enhance the robustness of quantum algorithms. Classical
components can help mitigate quantum noise and errors, making the
overall model more reliable.
• Compatibility: HQNNs can be integrated into existing classical machine
learning frameworks, making it easier for organizations to adopt quantum
technologies without the need for a full transition to quantum computing.
• Transfer Learning: HQNNs can facilitate transfer learning by leveraging pre-
trained classical neural networks and fine-tuning quantum components for
specific tasks. This reduces the training time and resource requirements.
Advantages of the proposed product
• Quantum Advantage Identification: Hybrid models allow researchers to
identify tasks where quantum computing provides a clear advantage over
classical methods. This can help guide the selection of quantum
components for specific machine learning problems.
• Resource Efficiency: Quantum computers are still resource-limited, and
hybrid models enable a more efficient use of quantum hardware.
Quantum processors can be reserved for the parts of the problem that
truly require quantum speedup.
• Practical Implementation: Quantum hardware is not always available, and
HQNNs can be implemented on classical hardware or cloud-based
quantum simulators, making quantum-inspired machine learning more
accessible.
• Privacy and Security: Quantum cryptography and secure multiparty
computation techniques can be integrated with HQNNs to enhance data
privacy and security in machine learning applications.
Design Requirements
HARDWARE REQUIREMENTS

Computer Device:
• High-performance CPU (multi-core)
• Dedicated GPU (NVIDIA GeForce or AMD Radeon) with CUDA support for accelerated training (if available)
• Minimum 8 GB VRAM for larger models and faster computations

Memory (RAM):
• Minimum 16 GB DDR4 RAM (higher RAM recommended for more complex models and parallel processing)

Storage:
• SSD storage (minimum 256 GB) for faster data access and model storage
• Additional storage space for datasets, models, and experimental results

Quantum Computing Hardware (if applicable):
• Quantum processing unit (QPU) compatible with the Hybrid Quantum Neural Network implementation
Design Requirements
SOFTWARE REQUIREMENTS

Operating System:
• Linux (Ubuntu, CentOS) or Windows 10/11 for compatibility with popular deep learning frameworks

Frameworks and Libraries:
• Python 3.x
• TensorFlow, PyTorch, or other preferred deep learning frameworks
• Quantum computing libraries or simulators for QNN implementation (e.g., Qiskit, Cirq)

Development Environment:
• Integrated Development Environment (IDE) such as PyCharm, Jupyter Notebook, or VS Code for code development and experimentation
• Version control systems (e.g., Git) for tracking code changes and collaboration


Design Requirements
SOFTWARE REQUIREMENTS
Data Processing and Visualization:
• NumPy, Pandas for data manipulation
• Matplotlib, Seaborn for visualization
• Docker for containerizing the development environment and experiments

Additional Tools:
• Anaconda or Miniconda for managing Python environments and packages
• Virtual environments for maintaining project-specific dependencies



Design diagram



Product working principle
MNIST CLASSIFICATION, CLASSICAL CNN:

Step 1: Setting Up Python Environment for Data Analysis and Machine Learning

This Python code snippet establishes an environment for data analysis and machine learning. It imports essential libraries for data manipulation, visualization, and machine learning tasks, sets the style for plots, and provides comments for locating and accessing input data files.
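
A minimal sketch of the setup described in Step 1 is shown below; the exact libraries imported in the original notebook are assumed, not reproduced.

# Step 1 (sketch): core imports for data handling, visualization, and modeling.
import numpy as np                # numerical arrays
import pandas as pd               # tabular data handling
import matplotlib.pyplot as plt   # plotting
import seaborn as sns             # statistical visualization
import tensorflow as tf           # deep learning framework

sns.set_style("whitegrid")        # consistent plot style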

Step 2: Image resizing and plotting

In this step, we resize the images and plot a few examples to see what the data looks like.
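
A sketch of this step, assuming the raw data is available as a list named images and that OpenCV is used for resizing:

# Step 2 (sketch): resize raw images to 28x28 and plot a few examples.
import cv2
import matplotlib.pyplot as plt

resized_images = [cv2.resize(img, (28, 28)) for img in images]

fig, axes = plt.subplots(1, 5, figsize=(10, 2))
for ax, img in zip(axes, resized_images[:5]):
    ax.imshow(img, cmap="gray")
    ax.axis("off")
plt.show()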

Step 3: Data Preprocessing for Image Classification with One-Hot Encoding

This code snippet performs essential data preprocessing tasks for image classification. It first converts a list of resized images into a NumPy array for efficient handling. Then, it uses one-hot encoding to transform categorical class labels into a binary format, making them suitable for training machine learning models, especially for tasks like image classification.
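
A sketch of this preprocessing, assuming resized_images and integer labels from the previous steps:

# Step 3 (sketch): stack images into a NumPy array and one-hot encode the labels.
import numpy as np
from tensorflow.keras.utils import to_categorical

X = np.array(resized_images).reshape(-1, 28, 28, 1).astype("float32") / 255.0
y = to_categorical(labels, num_classes=10)   # e.g. 3 -> [0,0,0,1,0,0,0,0,0,0]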
Product working principle
Step 4: Creating a Convolutional Neural Network (CNN) Model for Image Classification using
TensorFlow and Keras.

A convolutional neural network (CNN) model is defined using TensorFlow and Keras. The model consists
of convolutional layers with ReLU activation, max-pooling layers, and fully connected (dense) layers. It is
designed for image classification tasks and takes input images of size (28, 28, 1) with a final output layer
having 10 units and sigmoid activation for multi-class classification. This architecture is a common
choice for image recognition tasks.
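
A sketch of such a model is shown below; the specific filter and unit counts are assumptions, and the final sigmoid layer follows the description above even though softmax is the more common choice for mutually exclusive digit classes.

# Step 4 (sketch): a small CNN for 28x28x1 inputs and 10 output classes.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="sigmoid"),   # one unit per digit class
])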

Step 5: Training a Convolutional Neural Network Model for Image Classification

In this code, the previously defined Convolutional Neural Network (CNN) model is trained using the “fit”
method. It takes training data (‘X_train’ and ‘y_train’) and runs for 15 epochs, optimizing the model's
weights to fit the training data. The validation data (‘X_val’ and ‘y_val’) is used to monitor the model's
performance during training. This code is a crucial step in the machine learning workflow, where the
model learns from the training data and validates its performance to make improvements.
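
A sketch of this training step; the optimizer and loss function are typical choices for one-hot labels and are assumed rather than taken from the original code.

# Step 5 (sketch): compile the model and train for 15 epochs with validation data.
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

history = model.fit(X_train, y_train,
                    epochs=15,
                    validation_data=(X_val, y_val))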



Product working principle
Step 6: Visualizing Training and Validation Accuracy of a CNN Model

In this code, the training and validation accuracy of a Convolutional Neural Network (CNN) model is
visualized over the course of training. The ‘plt.plot’ function is used to create a line plot of training
accuracy and validation accuracy as a function of the number of training epochs. This visualization
allows you to monitor how well the model is learning from the training data and if it's overfitting or
underfitting. The ‘plt.xlabel’, ‘plt.ylabel’, and ‘plt.legend’ functions are used to label and format the plot,
making it easier to interpret the model's performance.
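
A sketch of this visualization, assuming the History object returned by fit in Step 5:

# Step 6 (sketch): plot training vs. validation accuracy per epoch.
import matplotlib.pyplot as plt

plt.plot(history.history["accuracy"], label="training accuracy")
plt.plot(history.history["val_accuracy"], label="validation accuracy")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.legend()
plt.show()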

Step 7: Making Predictions on Test Images and Preparing Output for Submission

In this code, test images are resized to match the model's input size (28x28), and the resulting data is
stored in the test_set_resized array. The model.predict function is then used to make predictions on the
resized test images, and the results are stored in the predictions array. Additionally, an array of image
IDs is created using np.arange, which will be useful for tracking the order of predictions, typically used
when preparing predictions for submission. The code prepares the essential components for evaluating
and submitting the model's performance on unseen test data.
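
A sketch of this step, assuming a list test_images of grayscale arrays; taking the argmax to obtain a digit label per image is an assumption on top of the description above.

# Step 7 (sketch): resize test images, run inference, and build ImageId values.
import numpy as np
import cv2

test_set_resized = np.array([cv2.resize(img, (28, 28)) for img in test_images])
test_set_resized = test_set_resized.reshape(-1, 28, 28, 1).astype("float32") / 255.0

predictions = model.predict(test_set_resized).argmax(axis=1)  # most likely digit per image
image_ids = np.arange(1, len(predictions) + 1)                # 1-based IDs for the submission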
Product working principle
Step 9: Creating a DataFrame

The image IDs and their corresponding labels (predictions) are combined horizontally using np.hstack to
create a NumPy array named full. This array contains two columns, 'ImageId' and 'Label'. Then, a Pandas
DataFrame called submission is constructed from this array, specifying column names and data types.
The submission DataFrame is used to organize the model's predictions for submission in a Kaggle
competition. It allows for easy export of the results in the required format.

Step 10: Saving the Model Predictions as a CSV Submission File

In this code, the submission DataFrame containing the model's predictions is saved as a CSV file named
'submissions.csv'.
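
A sketch covering Steps 9 and 10, assuming the image_ids and predictions arrays from Step 7:

# Steps 9-10 (sketch): build the submission DataFrame and save it as CSV.
import numpy as np
import pandas as pd

full = np.hstack([image_ids.reshape(-1, 1), predictions.reshape(-1, 1)])
submission = pd.DataFrame(full, columns=["ImageId", "Label"]).astype(int)
submission.to_csv("submissions.csv", index=False)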



Product working principle
MNIST CLASSIFICATION, ALEXNET vs. MOBILENET

AlexNet and MobileNet are two distinct convolutional neural network architectures widely used
in the field of computer vision, including image classification tasks such as the MNIST dataset.
AlexNet, introduced in 2012, was a pioneering deep learning model that significantly advanced
the accuracy of image classification tasks. It consists of eight layers, including five convolutional
layers, and was designed for high-performance computing, making it computationally intensive.
In contrast, MobileNet, introduced in 2017, focuses on efficient deployment on mobile and
embedded devices. It employs depthwise separable convolutions to reduce the number of
parameters and computational cost while maintaining competitive accuracy. When applied to the
MNIST dataset, which is relatively small compared to other image datasets, both models can
achieve high accuracy. However, MobileNet's design prioritizes efficiency, making it more suitable
for resource-constrained environments, while AlexNet might be overkill for such scenarios due to
its computational demands.
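
To make the efficiency argument concrete, the sketch below shows the depthwise separable convolution block that MobileNet stacks; the filter count and stride are placeholders rather than MobileNet's actual configuration.

# MobileNet-style block (sketch): a depthwise convolution filters each channel
# independently, then a 1x1 pointwise convolution mixes channels, using far
# fewer parameters than a standard convolution of the same output width.
from tensorflow.keras import layers

def depthwise_separable_block(x, filters, stride=1):
    x = layers.DepthwiseConv2D(3, strides=stride, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(filters, 1, padding="same")(x)   # pointwise 1x1 convolution
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)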



Product working principle
MNIST CLASSIFICATION, HYBRID QNN:

Step 1: Defining DataLoaders for train and test

We take advantage of the torchvision API to directly load a subset of the MNIST dataset and define torch DataLoaders for train and test.

Step 2: Defining the QNN and Hybrid Model

This second step shows the power of the TorchConnector. After defining our quantum neural network layer (in this case, an EstimatorQNN), we can embed it into a layer in our torch Module by initializing a torch connector as TorchConnector(qnn). A sketch of such a hybrid module appears after Step 4 below.

Step 3: Training

Step 4: Evaluation

We start by recreating the model and loading the state from the previously saved file. A QNN layer can be created using a different simulator or real hardware, so you can train a model on real hardware available in the cloud and then use a simulator for inference, or vice versa. For the sake of simplicity, we create a new quantum neural network in the same way as above.
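
A minimal sketch of the hybrid model described in Step 2, assuming torch, qiskit, and qiskit-machine-learning are installed; the layer sizes and the two-qubit circuit are illustrative assumptions rather than the exact architecture used here.

# Hybrid model sketch: a classical layer feeds a 2-qubit EstimatorQNN wrapped by
# TorchConnector, and the quantum output is mapped to 10 class scores.
import torch
from torch import nn
from qiskit import QuantumCircuit
from qiskit.circuit.library import ZZFeatureMap, RealAmplitudes
from qiskit_machine_learning.neural_networks import EstimatorQNN
from qiskit_machine_learning.connectors import TorchConnector

def build_qnn(num_qubits=2):
    feature_map = ZZFeatureMap(num_qubits)
    ansatz = RealAmplitudes(num_qubits, reps=1)
    circuit = QuantumCircuit(num_qubits)
    circuit.compose(feature_map, inplace=True)
    circuit.compose(ansatz, inplace=True)
    return EstimatorQNN(circuit=circuit,
                        input_params=feature_map.parameters,
                        weight_params=ansatz.parameters)

class HybridNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(28 * 28, 2)          # classical layer producing 2 qubit inputs
        self.qnn = TorchConnector(build_qnn())   # quantum layer with trainable circuit weights
        self.out = nn.Linear(1, 10)              # expectation value -> 10 class scores

    def forward(self, x):
        x = torch.flatten(x, 1)
        x = torch.relu(self.fc(x))
        x = self.qnn(x)                          # one expectation value per sample
        return self.out(x)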



Product output view



Product output view



References
[1] LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to
document recognition. Proceedings of the IEEE 86(11), 2278–2324 (1998)

[2] Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep
convolutional neural networks. Advances in neural information processing systems 25
(2012)

[3] Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image
recognition. arXiv preprint arXiv:1409.1556 (2014)

[4] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9 (2015)

[5] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)



References
[6] Ngiam, J., Chen, Z., Chia, D., Koh, P., Le, Q., Ng, A.: Tiled convolutional neural
networks. Advances in neural information processing systems 23 (2010)

[7] Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional networks. In: Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6–12, 2014, Proceedings, Part I 13, pp. 818–833 (2014). Springer

[8] Yu, F., Koltun, V.: Multi-scale context aggregation by dilated convolutions. arXiv
preprint arXiv:1511.07122 (2015)

[9] Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto,
M., Adam, H.: Mobilenets: Efficient convolutional neural networks for mobile vision
applications. arXiv preprint arXiv:1704.04861 (2017)

[10] Biamonte, J., Wittek, P., Pancotti, N., Rebentrost, P., Wiebe, N., Lloyd, S.: Quantum
machine learning. Nature 549(7671), 195–202 (2017)



THANK YOU

