Use Cases

We have been working hard to demonstrate quantum artificial intelligence using our second-generation quantum processing unit. Our first two use cases are cell membrane wall detection and satellite imaging. Below, we detail how we used quantum coherent noise from our machine to significantly improve the training of neural networks.

1. Cell Membrane Wall Detection

The first example we chose is a small set of 30 consecutive 256×256-pixel monochrome images from an electron microscopy (ssTEM) dataset (ref) of the Drosophila first instar larva ventral nerve cord. The figure below shows two examples of the raw images alongside the corresponding cell walls, which are the targets for our semantic segmentation task.

[Figure: example raw ssTEM sections and their corresponding cell-wall labels]

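For readers who want to reproduce the setup, here is a minimal sketch of loading such a stack in Python. The file names train-volume.tif and train-labels.tif are assumptions about how the data is packaged (multi-page TIFFs), not a description of our internal pipeline.

```python
# Minimal sketch: load a 30-section ssTEM stack and its membrane labels.
# Assumes the data ships as multi-page TIFFs named "train-volume.tif" and
# "train-labels.tif" -- adjust names/paths to your copy of the dataset.
import numpy as np
import tifffile

images = tifffile.imread("train-volume.tif").astype(np.float32) / 255.0
labels = tifffile.imread("train-labels.tif").astype(np.float32) / 255.0

# Add a channel axis so the arrays match the (batch, height, width, 1)
# layout expected by a Keras U-Net.
images = images[..., np.newaxis]  # shape: (30, 256, 256, 1)
labels = labels[..., np.newaxis]

# Hold out the last few sections for validation.
x_train, x_val = images[:25], images[25:]
y_train, y_val = labels[:25], labels[25:]
```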
The images are representative of real-world data: there is some noise; there are image registration errors; there is even a small stitching error in one section. None of these would cause any difficulty for an expert human neuroanatomist manually labelling each element in the image stack. A software application that aims to remove or reduce human involvement must be able to cope with all of these issues.

We inject quantum coherent noise generated by our 424-quantum-dot array structure into the neural network used to detect membrane walls. Using this enhanced network to make predictions results in improved Dice coefficient scores. The images below compare a U-Net model trained with default settings against a model trained with the unitary noise. The clearest difference can be seen in the last image of Figure 2 (purple segmentation), where we apply harder thresholds to produce sharper segmentations.
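The noise source itself is proprietary, but the injection pattern can be sketched as a custom Keras layer that behaves like keras.layers.GaussianNoise, except that it draws from a pre-recorded buffer of samples (standing in here for values captured from the quantum dot array). The layer below, together with a standard Dice coefficient metric, is a hypothetical illustration of the approach rather than our exact implementation.

```python
import tensorflow as tf

class SampledNoise(tf.keras.layers.Layer):
    """Adds noise drawn from a pre-recorded sample buffer at train time.

    A hypothetical stand-in for the unitary noise layer: `samples` would
    hold a 1-D array of values captured from the quantum dot array,
    normalized and rescaled to the requested standard deviation (5% in
    the figures below).
    """

    def __init__(self, samples, stddev=0.05, **kwargs):
        super().__init__(**kwargs)
        samples = tf.convert_to_tensor(samples, dtype=tf.float32)
        samples = (samples - tf.reduce_mean(samples)) / tf.math.reduce_std(samples)
        self.samples = samples * stddev

    def call(self, inputs, training=None):
        if not training:
            return inputs  # noise is applied during training only
        # Draw one buffered noise value per activation.
        idx = tf.random.uniform(
            tf.shape(inputs), 0, tf.size(self.samples), dtype=tf.int32)
        return inputs + tf.gather(self.samples, idx)

def dice_coefficient(y_true, y_pred, eps=1e-6):
    """Dice coefficient, 2|A∩B| / (|A| + |B|): the score reported above."""
    y_true = tf.reshape(y_true, [-1])
    y_pred = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    return (2.0 * intersection + eps) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + eps)
```

In training, such a layer slots in anywhere keras.layers.GaussianNoise would, and dice_coefficient can be passed to model.compile(metrics=[...]) to track segmentation quality.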

Figure 1: U-Net prediction with the default configuration.

Figure 2: U-Net prediction with 5% standard-deviation unitary noise added.

2. Satellite Imaging

The second use case focuses on how satellite imagery could be used by humanitarian organizations (such as www.crowdai.org). Following a natural disaster, it would be extremely useful to map impassable sections of roads, as well as to identify the most damaged residential areas and the most vulnerable schools, hospitals, and public buildings.

The objective is to adapt to the situation as quickly as possible to enable intervention procedures as the crisis evolves.

In the first days following such an event, it is essential to have detailed maps of communication networks, housing areas, infrastructure, areas dedicated to agriculture, and so on. Images are available from a variety of sources, including nano-satellites, drones, and conventional high-altitude satellites.

Currently, when new maps are required, they are drawn by hand, often by volunteers who participate in so-called Mapathons. A machine learning approach can automate the production of maps with relevant features in a short timeframe and from disparate data sources.

Our training data consists of individual tiles of satellite imagery in RGB format, with labels (colour segmentation superimposed on the images) used to annotate recognized features. The goal is to train a model which, given a new tile (satellite image), can annotate all buildings. The full training dataset contains 280,741 tiles (300×300-pixel RGB images), so we restrict our test case to 0.5% of the data in the interest of speed.
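As a rough illustration of that subsampling step, the sketch below draws a reproducible 0.5% sample from a tile list; the directory layout and file naming are invented for the example.

```python
import random
from pathlib import Path

# Hypothetical layout: one PNG per 300x300 RGB tile, with a label image
# of the same name in a sibling directory.
tile_paths = sorted(Path("tiles/images").glob("*.png"))

random.seed(0)  # make the subset reproducible
subset_size = max(1, int(0.005 * len(tile_paths)))  # 0.5% of 280,741 ≈ 1,400
subset = random.sample(tile_paths, subset_size)

pairs = [(path, Path("tiles/labels") / path.name) for path in subset]
```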


We generate predictions with the trained model to see how these improvements come through visually. The results are illustrated below, where we compare a U-Net with an added Gaussian noise layer (Figure 3) against the performance achieved with the Equal1 unitary noise layer (Figure 4); a sketch of how this swap looks in code follows the figures.

Figure 3: Prediction using the default model (Gaussian noise layer).

Figure 4: Prediction using unitary noise.
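To make the comparison concrete: the two models differ only in which noise layer is used. The sketch below shows how that swap might look in one U-Net encoder stage (the placement and filter sizes are illustrative, not our exact architecture; SampledNoise refers to the hypothetical layer sketched earlier).

```python
import tensorflow as tf
from tensorflow.keras import layers

def encoder_block(x, filters, noise_layer=None):
    """One U-Net encoder stage with an optional noise layer after the convs."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    if noise_layer is not None:
        x = noise_layer(x)
    skip = x
    return layers.MaxPooling2D(2)(x), skip

inputs = tf.keras.Input(shape=(300, 300, 3))

# Baseline: a standard Gaussian noise layer (Figure 3)...
x, skip = encoder_block(inputs, 32, noise_layer=layers.GaussianNoise(0.05))

# ...vs. the quantum-sampled layer (Figure 4): pass an instance of the
# SampledNoise layer sketched above as `noise_layer`; nothing else changes.
```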

These use cases are available on our GitHub for you to explore further. In the meantime, we are busy exploring use cases in medical imaging, materials science, and other areas – please contact us to discuss yours.