Quantum Neural Networks

Eniola Sobimpe
Dec 24, 2020

This article outlines the investigation, progress, and prospects of quantum neural networks, a flourishing new field that combines classical neurocomputing with quantum computation. It is argued that the study of quantum neural networks may give us both new insight into how the brain works and remarkable possibilities for creating new architectures for information processing, including tackling classically hard problems. At the end of this article, I implement a quantum neural network (QNN), a small classical comparison network, and a full classical neural network (CNN) on a simple MNIST classification task to see how the QNN performs.

Classical neural networks can classify handwritten digits. The model data come from the MNIST data set, which consists of 55,000 training samples that are 28 by 28 pixelated images of handwritten digits, each labeled by humans as representing one of the ten digits 0 through 9. Many introductory machine learning classes use this data set as a testbed for studying basic neural networks. So it seems natural to check whether a quantum neural network can handle the MNIST data. There is no obvious way to attack this analytically, so I resort to simulation. The limitation here is that I can only comfortably handle, say, 16-bit data using a classical simulator of a 17-qubit quantum computer with one readout qubit. So I use a downsampled version of the MNIST data consisting of 4 by 4 pixelated images. With one readout bit I cannot label ten digits, so instead I pick two digits, say 7 and 9, reduce the data set to only those samples labeled as 7 or 9, and ask whether the quantum network can distinguish the samples.

The 55,000 training samples break into groups of roughly 5,500 samples per digit. But on closer examination we see that the samples corresponding to, say, the digit 7 consist of 797 distinct 16-bit strings, while for the digit 9 there are 617 distinct 16-bit strings. The images are blurry, and in fact there are 197 distinct strings that are labeled as both 7 and 9. For my digit-distinction task I decided to reduce the Bayes error to 0 by removing the ambiguous strings. Going back to the 5,500 samples per digit and removing the ambiguous strings leaves 3,514 samples labeled as 7s and 2,517 labeled as 9s. I combine these to make a training set of 6,031 samples. As a preliminary step I present the labeled samples to a classical neural network. Here I run a TensorFlow classifier with one hidden layer consisting of 10 neurons. Each neuron has 16 coefficient weights and one bias weight, so there are 170 parameters on the hidden layer and 4 on the output layer. The classical network has no trouble finding weights that give less than one percent classification error on the training set. The Python program also looks at the generalization error, but to do so it picks a random 15 percent of the input data to use as a test set. Since the data set has repeated occurrences of the same 16-bit strings, the test set does not consist entirely of unseen examples. Still, the generalization error is less than one percent.
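For concreteness, here is a minimal sketch of what a classical baseline like the one just described could look like in Keras. The layer sizes follow the text; the activations and loss are my assumptions, as they are not specified above:

```python
import tensorflow as tf

# A minimal sketch of the classical baseline described above, assuming
# relu/sigmoid activations and binary cross-entropy (not specified in the text).
baseline = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),            # one input per bit of the 4x4 image
    tf.keras.layers.Dense(10, activation='relu'),  # 16*10 weights + 10 biases = 170 parameters
    tf.keras.layers.Dense(1, activation='sigmoid'),  # single readout neuron
])
baseline.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```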

I now turn to the quantum classifier. Here I have little guidance on how to design the quantum circuit. I decided to restrict my toolkit of unitaries to one- and two-qubit operators of the form exp(iθΣ), where Σ is a Pauli operator or a product of two Pauli operators. I take the one-qubit Σ's to be X, Y, and Z acting on any of the 17 qubits. For the two-qubit unitaries I take Σ to be XY, YZ, ZX, XX, YY, and ZZ between any pair of distinct qubits. The first thing I tried was a random selection of 500 (or 1,000) of these unitaries. The randomness applies both to which of the 9 gate types is chosen and to which qubits the gates are applied. Starting with a random set of 500 (or 1,000) angles, after presenting a few hundred training samples the total error settled in at around 10 percent. But the sample loss for individual strings was typically just a bit below 1, which corresponds to a quantum success probability of a little more than 50 percent for most strings. Here the trend was in the right direction, but I was hoping to do better.

After some playing around, I tried restricting my gate set to ZX and XX, with the second qubit always being the readout qubit and the first qubit being one of the other 16. The motivation here is that the associated unitaries effectively rotate the readout qubit about the x axis by an amount controlled by the data qubits. A full layer of ZX has 16 parameters, as does a full layer of XX. I tried a variant with 3 layers of ZX and 3 layers of XX, for a total of 96 parameters. Here I found that, starting from a random set of angles, I could achieve two percent categorical error after seeing less than the full sample set. The achievement here is that I showed that a quantum neural network can learn to classify real-world data. Admittedly the data set could easily be classified by a classical network. Furthermore, working at a fixed low number of bits precludes any discussion of scaling. But this work is exploratory, and without much effort I have a quantum circuit that can classify real-world data. Now the task is to refine the quantum neural network so that it performs better. Hopefully I can find some principles (or just inspiration) to guide the choice of gate sets.

Implementation

Import dependencies
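The code gists embedded in the original post are not preserved here; a sketch of the imports, following the TensorFlow Quantum MNIST tutorial this section is based on, would be:

```python
import collections

import numpy as np
import sympy
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq

# Visualization tools
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
```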
Import the dataset
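A sketch of the loading step, using the MNIST copy that ships with Keras (the rescaling to [0, 1] is my assumption):

```python
# Load raw MNIST and rescale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., np.newaxis] / 255.0
x_test = x_test[..., np.newaxis] / 255.0
```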

Filter the dataset to keep just the 7s and 9s, remove the other classes. At the same time convert the label, y, to boolean: True for 7 and False for 9.
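A sketch of that filter (the helper name filter_79 is my own):

```python
def filter_79(x, y):
    """Keep only the 7s and 9s; relabel 7 -> True, 9 -> False."""
    keep = (y == 7) | (y == 9)
    x, y = x[keep], y[keep]
    y = (y == 7)
    return x, y

x_train, y_train = filter_79(x_train, y_train)
x_test, y_test = filter_79(x_test, y_test)
```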

Downscale the images

An image size of 28x28 is much too large for current quantum computers. Resize the image down to 4x4:
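A sketch of the resize, using tf.image.resize (which expects the channel dimension added during loading):

```python
x_train_small = tf.image.resize(x_train, (4, 4)).numpy()
x_test_small = tf.image.resize(x_test, (4, 4)).numpy()
```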

Remove contradictory examples

Filter the dataset to remove images that are labeled as belonging to both classes.
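A sketch of one way to do this, keeping only images whose observed label set has exactly one element. Note that this variant also deduplicates images, and different procedures will give different counts:

```python
def remove_contradicting(xs, ys):
    mapping = collections.defaultdict(set)
    orig_x = {}
    # Record every label observed for each unique image.
    for x, y in zip(xs, ys):
        key = tuple(x.flatten())
        orig_x[key] = x
        mapping[key].add(y)

    new_x, new_y = [], []
    for key, labels in mapping.items():
        if len(labels) == 1:
            new_x.append(orig_x[key])
            new_y.append(next(iter(labels)))
        # Images labeled as both 7 and 9 are dropped entirely.
    return np.array(new_x), np.array(new_y)

x_train_nocon, y_train_nocon = remove_contradicting(x_train_small, y_train)
```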

The resulting counts do not closely match the values reported above, but the exact filtering procedure is not specified.

It is also worth noting that filtering contradictory examples at this point does not totally prevent the model from receiving contradictory training examples: the next step binarizes the data, which will cause more collisions.

Encode the data as quantum circuits

To process images using a quantum computer, Farhi et al. proposed representing each pixel with a qubit, with the state depending on the value of the pixel. The first step is to convert to a binary encoding.
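A sketch of that encoding: binarize the pixels at an assumed threshold of 0.5, then apply an X gate to each qubit whose pixel is on:

```python
THRESHOLD = 0.5  # assumed binarization threshold
x_train_bin = np.array(x_train_nocon > THRESHOLD, dtype=np.float32)
x_test_bin = np.array(x_test_small > THRESHOLD, dtype=np.float32)

def convert_to_circuit(image):
    """Encode a binarized 4x4 image as a circuit: one X gate per lit pixel."""
    values = np.ndarray.flatten(image)
    qubits = cirq.GridQubit.rect(4, 4)
    circuit = cirq.Circuit()
    for i, value in enumerate(values):
        if value:
            circuit.append(cirq.X(qubits[i]))
    return circuit

x_train_circ = [convert_to_circuit(x) for x in x_train_bin]
x_test_circ = [convert_to_circuit(x) for x in x_test_bin]

# TensorFlow Quantum consumes circuits as tensors of serialized strings.
x_train_tfcirc = tfq.convert_to_tensor(x_train_circ)
x_test_tfcirc = tfq.convert_to_tensor(x_test_circ)
```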

Quantum Neural Networks

Since the classification is based on the expectation of the readout qubit, Farhi et al. propose using two-qubit gates, with the readout qubit always acted upon.

The following example shows this layered approach. Each layer uses n instances of the same gate, with each of the data qubits acting on the readout qubit. Start with a simple class that adds a layer of these gates to a circuit:
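A sketch of that class, with one sympy symbol per gate so every rotation angle is trainable:

```python
class CircuitLayerBuilder:
    def __init__(self, data_qubits, readout):
        self.data_qubits = data_qubits
        self.readout = readout

    def add_layer(self, circuit, gate, prefix):
        # One parameterized two-qubit gate per data qubit, all acting on the readout.
        for i, qubit in enumerate(self.data_qubits):
            symbol = sympy.Symbol(prefix + '-' + str(i))
            circuit.append(gate(qubit, self.readout) ** symbol)
```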

Now build a two-layered model, matching the data-circuit size, and include the preparation and readout operations.
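A sketch of the model circuit; the choice of an XX layer followed by a ZZ layer is an assumption (the narrative above experimented with ZX and XX layers instead):

```python
def create_quantum_model():
    """Create a QNN model circuit and its readout operation."""
    data_qubits = cirq.GridQubit.rect(4, 4)  # one qubit per pixel of the 4x4 image
    readout = cirq.GridQubit(-1, -1)         # a dedicated readout qubit
    circuit = cirq.Circuit()

    # Prepare the readout qubit.
    circuit.append(cirq.X(readout))
    circuit.append(cirq.H(readout))

    builder = CircuitLayerBuilder(data_qubits=data_qubits, readout=readout)

    # Two layers of parameterized two-qubit gates.
    builder.add_layer(circuit, cirq.XX, "xx1")
    builder.add_layer(circuit, cirq.ZZ, "zz1")

    # Rotate back so the readout is measured in the Z basis.
    circuit.append(cirq.H(readout))

    return circuit, cirq.Z(readout)

model_circuit, model_readout = create_quantum_model()
```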

Build the Keras model with the quantum components. This model is fed the “quantum data”, from x_train_circ, that encodes the classical data. It uses a Parametrized Quantum Circuit layer, tfq.layers.PQC, to train the model circuit, on the quantum data.
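A sketch of that wrapper:

```python
model = tf.keras.Sequential([
    # The input is the data circuit, encoded as a tf.string tensor.
    tf.keras.layers.Input(shape=(), dtype=tf.string),
    # The PQC layer trains the circuit parameters and returns the readout expectation.
    tfq.layers.PQC(model_circuit, model_readout),
])
```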

To classify these images, Farhi et al. proposed taking the expectation of a readout qubit in a parameterized circuit. The expectation is a value between -1 and 1.
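Since the output lies in [-1, 1], hinge loss is a natural fit. First, convert the boolean labels to the -1/+1 values hinge loss expects:

```python
y_train_hinge = 2.0 * np.array(y_train_nocon, dtype=np.float32) - 1.0
y_test_hinge = 2.0 * np.array(y_test, dtype=np.float32) - 1.0
```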

Second, use a custom hinge_accuracy metric that correctly handles [-1, 1] as the y_true labels argument (tf.losses.BinaryAccuracy(threshold=0.0) expects y_true to be a boolean, so it can't be used with hinge loss).
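A sketch of the metric and the compile step:

```python
def hinge_accuracy(y_true, y_pred):
    # Interpret the sign of each prediction as the class decision.
    y_true = tf.squeeze(y_true) > 0.0
    y_pred = tf.squeeze(y_pred) > 0.0
    result = tf.cast(y_true == y_pred, tf.float32)
    return tf.reduce_mean(result)

model.compile(
    loss=tf.keras.losses.Hinge(),
    optimizer=tf.keras.optimizers.Adam(),
    metrics=[hinge_accuracy])
```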

Train the quantum model

Using fewer examples just ends training earlier (around 5 minutes), but it runs long enough to show that the model is making progress in the validation logs.
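A sketch of the training call on a subset of the data (the subset size and epoch count are my assumptions):

```python
EPOCHS = 3
BATCH_SIZE = 32
NUM_EXAMPLES = 500  # assumed; use len(x_train_tfcirc) for the full set

x_train_tfcirc_sub = x_train_tfcirc[:NUM_EXAMPLES]
y_train_hinge_sub = y_train_hinge[:NUM_EXAMPLES]

qnn_history = model.fit(
    x_train_tfcirc_sub, y_train_hinge_sub,
    batch_size=BATCH_SIZE,
    epochs=EPOCHS,
    verbose=1,
    validation_data=(x_test_tfcirc, y_test_hinge))

qnn_results = model.evaluate(x_test_tfcirc, y_test_hinge)
```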

Classical neural network

While the quantum neural network works for this simplified MNIST problem, a basic classical neural network can easily outperform a QNN on this task. After a single epoch, a classical neural network can achieve >98% accuracy on the holdout set.

In the following example, a classical neural network is used for the 7 vs. 9 classification problem using the entire 28x28 image instead of subsampling it. This easily converges to nearly 100% accuracy on the test set.
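A sketch of such a network; the exact architecture is my assumption, but a small convolutional stack like this lands near the 1.2M-parameter count quoted below:

```python
def create_classical_model():
    # A small convolutional network on the full 28x28 images.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, [3, 3], activation='relu',
                               input_shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(64, [3, 3], activation='relu'),
        tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
        tf.keras.layers.Dropout(0.25),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(1),  # single logit: 7 vs. 9
    ])
    return model

cnn = create_classical_model()
cnn.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
            optimizer=tf.keras.optimizers.Adam(),
            metrics=['accuracy'])
```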

The above model has nearly 1.2M parameters. For a fairer comparison, try a 37-parameter model on the subsampled images:
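A sketch of such a model: 16 flattened pixels into two hidden units and one output gives 16*2 + 2 + 2 + 1 = 37 parameters:

```python
def create_fair_classical_model():
    # A tiny dense network on the subsampled 4x4 images: 37 parameters total.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(4, 4, 1)),
        tf.keras.layers.Dense(2, activation='relu'),  # 16*2 + 2 = 34 parameters
        tf.keras.layers.Dense(1),                     # 2 + 1 = 3 parameters
    ])
    return model
```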

Comparison

Higher-resolution input and a more powerful model make this problem easy for the CNN, while a classical model of similar power (~37 parameters) trains to a similar accuracy as the QNN in a fraction of the time. One way or the other, the classical neural network easily outperforms the quantum neural network. For classical data, it is difficult to beat a classical neural network.

References

Farhi, E., and Neven, H. (2018). Classification with Quantum Neural Networks on Near Term Processors. arXiv:1802.06002.

TensorFlow Quantum MNIST classification tutorial: https://www.tensorflow.org/quantum/tutorials/mnist

