As we embark on this transformative journey, imagine a world where image recognition transcends
the limitations of classical deep learning and delves into the realm of quantum computing. Hybrid
QNNs, the flag-bearers of this new era, represent the marriage of two computational paradigms
that, at first glance, seem irreconcilable—the classical and the quantum.
Convolutional Neural Networks (CNNs) and deep learning architectures have revolutionized image
classification, achieving human-level accuracy in a myriad of applications.
October 30, 2023 Department of CSE 5
Introduction
The Quantum Revolution:
The emergence of quantum computing, a technological tour de force that has long dwelled in the
realm of theory and speculation, is finally transitioning into practicality. Quantum processors
from IBM Quantum, Google Quantum AI, Rigetti Computing, and others have begun
to showcase
the tangible progress that beckons the union of quantum and classical computing.
Quantum Computing: A Different Paradigm:
What sets quantum computing apart is its fundamental deviation from classical computing. Classical
computers, driven by bits, handle data as either 0s or 1s, straightforward and linear. Quantum
computing, on the other hand, relies on quantum bits or "qubits." These qubits dance in a
superposition, a mesmerizing state of being 0 and 1 simultaneously.
The Quantum-Classical Convergence:
In the world of Hybrid QNNs, quantum and classical computing converge harmoniously. This is not
merely a matter of concatenating quantum and classical operations but rather a fusion of their innate
capabilities, a symphony where qubits interact gracefully with classical bits.
Leveraging Quantum Phenomena:
The secret of the Hybrid QNN lies in harnessing the quantum phenomena of superposition and
entanglement to elevate image classification. Quantum data encoding, variational quantum circuits,
quantum error correction, and quantum speed-up techniques all have pivotal roles to play.
The motivation behind quantum machine learning (QML) is to integrate notions from quantum
computing and classical machine learning to open the way for new and improved learning schemes.
QNNs apply this generic principle by combining classical neural networks and parametrized quantum
circuits. Because they lie at an intersection between two fields, QNNs can be viewed from two
perspectives:
From a machine learning perspective, QNNs are algorithmic models that can be trained
to find hidden patterns in data in a similar manner to their classical counterparts. These models can
load classical data (inputs) into a quantum state, and later process it with quantum gates
parametrized by trainable weights. Figure 1 shows a generic QNN example including the data loading
and processing steps. The output from measuring this state can then be plugged into a loss function
to train the weights through backpropagation.
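The load-process-measure pipeline above can be sketched for a single qubit in plain NumPy. This is an illustrative toy, not the report's implementation: the gate choice (RY rotations) and the function names are assumptions made for the example.

```python
import numpy as np

def ry(theta):
    # single-qubit Y-rotation gate
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

Z = np.diag([1.0, -1.0])  # Pauli-Z observable

def qnn_output(x, w):
    # load classical input x into the quantum state, apply a gate
    # parametrized by trainable weight w, then measure <Z>
    state = ry(w) @ ry(x) @ np.array([1.0, 0.0])
    return state @ Z @ state
```

The measured expectation works out to cos(x + w), a smooth function of the weight, which is exactly what allows a loss function to train w through gradient-based optimization.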
Linux (Ubuntu, CentOS) or Windows 10/11 for compatibility with popular deep learning frameworks
Python 3.x
TensorFlow, PyTorch, or other preferred deep learning frameworks
Quantum computing libraries or simulators for QNN implementation (e.g., Qiskit, Cirq)
Development Environment:
Integrated Development Environment (IDE) such as PyCharm, Jupyter Notebook, or VS Code for
code development and experimentation
Version control systems (e.g., Git) for tracking code changes and collaboration
Additional Tools:
Step 1: Setting Up Python Environment for Data Analysis and Machine Learning
This Python code snippet establishes an environment for data analysis and machine learning. It
imports essential libraries for data manipulation, visualization, and machine learning tasks, sets the
style for plots, and provides comments for locating and accessing input data files.
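A minimal version of such a setup might look like the following. The specific library choices (seaborn for styling, the commented input path) are assumptions; the report only names the general categories of tools.

```python
# Core libraries for data handling, visualization, and machine learning
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

sns.set_style("whitegrid")  # consistent styling for all plots

# Input data files would typically be read from a local directory, e.g.:
# train = pd.read_csv("input/train.csv")
```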
In this step, we resize the images and plot a few examples to see what the data
looks like.
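One way to sketch this step is shown below. The resize helper, target size, and random placeholder images are assumptions for illustration; in practice a library call such as cv2.resize or PIL's Image.resize would do the resizing.

```python
import numpy as np
import matplotlib.pyplot as plt

def resize_nearest(img, size=(28, 28)):
    # simple nearest-neighbour resize (cv2.resize or PIL would also work)
    rows = (np.arange(size[0]) * img.shape[0] / size[0]).astype(int)
    cols = (np.arange(size[1]) * img.shape[1] / size[1]).astype(int)
    return img[np.ix_(rows, cols)]

images = [np.random.rand(64, 64) for _ in range(6)]  # placeholder images
resized = [resize_nearest(im) for im in images]

# plot a few examples to inspect the data
fig, axes = plt.subplots(1, 6, figsize=(12, 2))
for ax, im in zip(axes, resized):
    ax.imshow(im, cmap="gray")
    ax.axis("off")
```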
• This code snippet performs essential data preprocessing tasks for image classification. It first
converts a list of resized images into a NumPy array for efficient handling. Then, it uses one-hot
encoding to transform categorical class labels into a binary format, making them suitable for
training machine learning models, especially for tasks like image classification.
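These two preprocessing operations can be sketched as follows; the placeholder images and labels are assumptions, and to_categorical is one common way to one-hot encode (the report does not name the exact helper used).

```python
import numpy as np
from tensorflow.keras.utils import to_categorical

resized_images = [np.zeros((28, 28)) for _ in range(4)]  # placeholder image list
labels = [0, 3, 9, 3]                                    # placeholder class ids

X = np.array(resized_images)                 # stack list into one NumPy array
y = to_categorical(labels, num_classes=10)   # one-hot encode the class labels
```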
Product working principle
Step 4: Creating a Convolutional Neural Network (CNN) Model for Image Classification using
TensorFlow and Keras.
A convolutional neural network (CNN) model is defined using TensorFlow and Keras. The model consists
of convolutional layers with ReLU activation, max-pooling layers, and fully connected (dense) layers. It is
designed for image classification tasks and takes input images of size (28, 28, 1) with a final output layer
having 10 units and sigmoid activation for multi-class classification. This architecture is a common
choice for image recognition tasks.
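A sketch of such a model is below. The input shape, ten-unit sigmoid output, and layer types follow the text; the filter counts and dense-layer width are assumptions, since the report does not specify them. (For multi-class outputs, softmax is the more conventional final activation; sigmoid is kept here to match the description.)

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),            # grayscale 28x28 input
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),       # fully connected layer
    layers.Dense(10, activation="sigmoid"),     # 10-class output, as described
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```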
In this code, the previously defined Convolutional Neural Network (CNN) model is trained using the “fit”
method. It takes training data (‘X_train’ and ‘y_train’) and runs for 15 epochs, optimizing the model's
weights to fit the training data. The validation data (‘X_val’ and ‘y_val’) is used to monitor the model's
performance during training. This code is a crucial step in the machine learning workflow, where the
model learns from the training data and validates its performance to make improvements.
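The training call might look like the sketch below. A tiny stand-in model and random data are used so the example is self-contained; the real code would pass the CNN above and the actual MNIST arrays, and the report trains for 15 epochs rather than 2.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# stand-in model and random data, so the snippet runs on its own
model = models.Sequential([layers.Input(shape=(28, 28, 1)),
                           layers.Flatten(),
                           layers.Dense(10, activation="softmax")])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

X_train = np.random.rand(32, 28, 28, 1)
y_train = tf.keras.utils.to_categorical(np.random.randint(0, 10, 32), 10)
X_val = np.random.rand(8, 28, 28, 1)
y_val = tf.keras.utils.to_categorical(np.random.randint(0, 10, 8), 10)

history = model.fit(X_train, y_train,
                    validation_data=(X_val, y_val),  # monitored each epoch
                    epochs=2,      # the report uses 15 epochs
                    verbose=0)
```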
In this code, the training and validation accuracy of a Convolutional Neural Network (CNN) model is
visualized over the course of training. The ‘plt.plot’ function is used to create a line plot of training
accuracy and validation accuracy as a function of the number of training epochs. This visualization
allows you to monitor how well the model is learning from the training data and if it's overfitting or
underfitting. The ‘plt.xlabel’, ‘plt.ylabel’, and ‘plt.legend’ functions are used to label and format the plot,
making it easier to interpret the model's performance.
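The plotting code described here can be sketched as follows; the accuracy numbers are placeholders standing in for the history object that model.fit returns.

```python
import matplotlib.pyplot as plt

# placeholder values; in the real code these come from history.history
history = {"accuracy": [0.80, 0.91, 0.95],
           "val_accuracy": [0.78, 0.88, 0.90]}

plt.plot(history["accuracy"], label="Training accuracy")
plt.plot(history["val_accuracy"], label="Validation accuracy")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.legend()
```

A widening gap between the two curves is the usual visual sign of overfitting; two low, close curves suggest underfitting.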
Step 7: Making Predictions on Test Images and Preparing Output for Submission
In this code, test images are resized to match the model's input size (28x28), and the resulting data is
stored in the test_set_resized array. The model.predict function is then used to make predictions on the
resized test images, and the results are stored in the predictions array. Additionally, an array of image
IDs is created using np.arange, which will be useful for tracking the order of predictions, typically used
when preparing predictions for submission. The code prepares the essential components for evaluating
and submitting the model's performance on unseen test data.
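A self-contained sketch of this step is below. The stand-in model and random test images are assumptions so the snippet runs on its own; in the real workflow, model is the trained CNN and test_set_resized holds the resized test images.

```python
import numpy as np
from tensorflow.keras import layers, models

# stand-in for the trained CNN
model = models.Sequential([layers.Input(shape=(28, 28, 1)),
                           layers.Flatten(),
                           layers.Dense(10, activation="softmax")])

test_set_resized = np.random.rand(5, 28, 28, 1)   # test images at 28x28
predictions = model.predict(test_set_resized, verbose=0)
labels = np.argmax(predictions, axis=1)           # predicted class per image
image_ids = np.arange(1, len(test_set_resized) + 1)  # ids for submission order
```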
Step 9: Creating a DataFrame
The image IDs and their corresponding labels (predictions) are combined horizontally using np.hstack to
create a NumPy array named full. This array contains two columns, 'ImageId' and 'Label'. Then, a Pandas
DataFrame called submission is constructed from this array, specifying column names and data types.
The submission DataFrame is used to organize the model's predictions for submission in a Kaggle
competition. It allows for easy export of the results in the required format.
In this code, the submission DataFrame containing the model's predictions is saved as a CSV file named
'submissions.csv'.
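This step can be sketched as follows; the ids and labels are placeholder values, while np.hstack, the 'ImageId'/'Label' columns, and the 'submissions.csv' filename follow the text.

```python
import numpy as np
import pandas as pd

image_ids = np.arange(1, 6).reshape(-1, 1)          # placeholder ids
labels = np.array([3, 0, 7, 1, 9]).reshape(-1, 1)   # placeholder predictions

full = np.hstack([image_ids, labels])               # two columns side by side
submission = pd.DataFrame(full, columns=["ImageId", "Label"]).astype(int)
submission.to_csv("submissions.csv", index=False)   # Kaggle-style output file
```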
AlexNet and MobileNet are two distinct convolutional neural network architectures widely used
in the field of computer vision, including image classification tasks such as the MNIST dataset.
AlexNet, introduced in 2012, was a pioneering deep learning model that significantly advanced
the accuracy of image classification tasks. It consists of eight layers, including five convolutional
layers, and was designed for high-performance computing, making it computationally intensive.
In contrast, MobileNet, introduced in 2017, focuses on efficient deployment on mobile and
embedded devices. It employs depthwise separable convolutions to reduce the number of
parameters and computational cost while maintaining competitive accuracy. When applied to the
MNIST dataset, which is relatively small compared to other image datasets, both models can
achieve high accuracy. However, MobileNet's design prioritizes efficiency, making it more suitable
for resource-constrained environments, while AlexNet might be overkill for such scenarios due to
its computational demands.
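The parameter saving from depthwise separable convolutions is easy to demonstrate. The sketch below, with assumed channel counts (32 in, 64 out, 3x3 kernels), compares a standard convolution against a MobileNet-style separable one.

```python
from tensorflow.keras import layers, models

# standard 3x3 convolution mapping 32 channels to 64
standard = models.Sequential([layers.Input(shape=(28, 28, 32)),
                              layers.Conv2D(64, 3, padding="same")])

# MobileNet-style depthwise separable convolution with the same shapes
separable = models.Sequential([layers.Input(shape=(28, 28, 32)),
                               layers.SeparableConv2D(64, 3, padding="same")])

print(standard.count_params(), separable.count_params())
```

The standard layer needs 3x3x32x64 weights plus biases, while the separable layer factors this into a 3x3 depthwise filter per channel followed by a 1x1 pointwise convolution, roughly an eightfold reduction at these sizes.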
Step 2: Defining the QNN and Hybrid Model
This second step shows the power of the TorchConnector. After defining our quantum
neural network layer (in this case, an EstimatorQNN), we can embed it as a layer in our
torch Module by initializing a torch connector as TorchConnector(qnn).
Step 3: Training
Step 4: Evaluation
We start by recreating the model and loading its state from the previously saved file.
The QNN layer can be created with a different simulator or with real hardware, so you
can train a model on real hardware available in the cloud and then run inference on a
simulator, or vice versa. For the sake of simplicity, we create a new quantum neural
network in the same way as above.
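The save-then-restore workflow can be sketched with standard PyTorch calls; a plain Linear layer stands in for the hybrid model here, and the filename is an assumption.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 2))       # stand-in for the trained hybrid model
torch.save(model.state_dict(), "model.pt")   # save weights after training

model2 = nn.Sequential(nn.Linear(4, 2))      # recreate the same architecture
model2.load_state_dict(torch.load("model.pt"))  # restore the saved state
model2.eval()                                # switch to inference mode
```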