The image above shows 25 examples of GAN-generated images.
Generative Adversarial Networks were first described in 2014 by deep learning pioneer Ian Goodfellow. A GAN consists of two networks: a generator and a discriminator. During training, the generator learns a mapping from random noise to an image (in our case, an image of a chest radiograph), while the discriminator learns to tell the difference between generated images and real images. After training, the discriminator is discarded, leaving just the generator. The trained generator can produce realistic images that are then used in the next section of the pipeline to detect abnormal images.
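To make the generator/discriminator split concrete, here is a minimal sketch of the two networks in PyTorch. The latent dimension, layer sizes and the 64×64 greyscale image resolution are illustrative assumptions, not the exact architecture used in this project.

```python
import torch
import torch.nn as nn

LATENT_DIM = 100          # size of the noise vector z (assumed)
IMG_PIXELS = 64 * 64      # flattened greyscale radiograph (assumed resolution)

class Generator(nn.Module):
    """Maps a noise vector z to a flattened image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, IMG_PIXELS), nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Outputs the probability that an image is real rather than generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_PIXELS, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)
```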
In this work Goodfellow et al. propose a minimax game (a zero-sum game, where one player's loss is the other player's gain) in which one player tries to generate images that the other player cannot discriminate from real data. "The generative model can be thought of as analogous to a team of counterfeiters, trying to produce fake currency and to spend it without detection, while the discriminative model is analogous to the police, trying to detect the counterfeit currency." In the initial experiments, the generator and discriminator both take the form of a multilayer perceptron (MLP) and are optimised in turn, with k steps of discriminator optimisation followed by one step of generator optimisation. In the generator training phase we minimise log(1 − D(G(z))).
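Putting the pieces together, the overall objective is min over G and max over D of E[log D(x)] + E[log(1 − D(G(z)))], and training alternates between the two players. Below is a hedged sketch of that alternating loop, continuing the PyTorch sketch above. The optimiser settings, k and batch size are illustrative assumptions, and `real_batches` is a hypothetical iterator over flattened real radiographs.

```python
G, D = Generator(), Discriminator()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
k, batch_size = 1, 64   # assumed hyperparameters

def noise(n):
    return torch.randn(n, LATENT_DIM)

def train_step(real_batches):
    # --- k steps of discriminator training: push D(x) -> 1, D(G(z)) -> 0 ---
    for _ in range(k):
        x = next(real_batches)                        # batch of real images
        fake = G(noise(batch_size)).detach()          # generated images, no gradient to G
        d_loss = -(torch.log(D(x)).mean()
                   + torch.log(1 - D(fake)).mean())   # maximise log D(x) + log(1 - D(G(z)))
        opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # --- one step of generator training: minimise log(1 - D(G(z))) ---
    g_loss = torch.log(1 - D(G(noise(batch_size)))).mean()
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()
```

In practice the generator loss above saturates early in training, which is one reason later GAN variants modify this objective, but it is the form proposed in the original paper.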
Where is this project going from here? Next, I will be extending this work with different flavours of GANs, including unrolled GANs and AdaGAN.