
Sketch2Color anime translation using Generative Adversarial Networks (GANs)

A step-by-step guide for building a GAN that colors an anime sketch.


Dec 28 · 9 min read

“Generative Adversarial Networks is the most interesting idea in the last 10 years in Machine Learning” — Yann LeCun.


GAN is based on the zero-sum non-cooperative game (minimax) | Photo by Yin and Yang on alpha coders

Problem Statement:

The task is to generate a compatible colored anime image from a given black-and-white anime sketch with the help of Generative Adversarial Networks (GANs).

The outline of this article is as follows:

  1. Introduction,
  2. Getting and preprocessing the data,
  3. Generator architecture,
  4. Discriminator architecture,
  5. Generator and discriminator loss,
  6. Training the generator and discriminator,
  7. Tensorboard logs,
  8. In-progress training results,
  9. The output of the sample sketches and
  10. Conclusion.

So, let’s keep going…

1. Introduction:

Generative Adversarial Networks (GANs) are an approach to generative modeling using deep learning methods, such as convolutional neural networks (CNNs).

Generative modeling is an unsupervised machine learning task that involves automatically discovering and learning the patterns in input data in such a way that the model can be used to generate new examples that plausibly could have been drawn from the original dataset.

The GAN model architecture involves two sub-models:

1. A generator model for generating new examples,

2. A discriminator model for classifying whether generated examples are real, from the domain, or fake, generated by the generator model.


A GAN is based on the zero-sum non-cooperative (minimax) game, i.e., if one wins, the other loses. In game theory, the GAN model converges when the discriminator and the generator reach a Nash equilibrium. This is the optimal point for the minimax equation below.

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$
Source: https://arxiv.org/abs/1406.2661

GANs are an exciting and rapidly changing field, delivering on the promise of generative models in their ability to generate realistic examples across a range of problem domains.

They are most notable in image-to-image translation tasks, such as translating photos of summer to winter or day to night, and in generating photorealistic photos of objects, scenes, and people that even humans cannot tell are fake.

Today we’re going to use GANs for an image-to-image translation task: automatically generating compatible colors for a given black-and-white anime sketch (not even a grayscale one).

If you would like to dig deeper into the math, check out the original paper by Ian J. Goodfellow; I really enjoyed reading it.

2. Getting and preprocessing the data:

The anime sketch colorization dataset that I’ve used for training the GAN can be downloaded from the Kaggle website.

After downloading and unzipping the dataset, I had to preprocess it because both the sketch and the colored anime were in the same image.

Once the sketch and colored images are saved to separate folders for both the training and validation/test data, we normalize them while loading so that all values in the range [0, 255] are mapped to the range [-1, 1], as follows.
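
A minimal sketch of this normalization step, assuming a TensorFlow pipeline (the function name and details are illustrative, not the article’s actual code):

```python
import tensorflow as tf

def normalize(sketch, color):
    # Map pixel values from [0, 255] to [-1, 1] to match the generator's tanh output.
    sketch = (tf.cast(sketch, tf.float32) / 127.5) - 1.0
    color = (tf.cast(color, tf.float32) / 127.5) - 1.0
    return sketch, color
```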

The reason for this is that, according to well-studied GAN hacks, normalizing the input image values to the range [-1, 1] and using tanh as the generator’s output-layer activation yields much better results.

3. Generator architecture:

Sketch2Color anime GAN is a supervised learning model, i.e., given a black-and-white sketch, it generates a colored image based on the sketch-color image pairs used in the training data.

The architecture of the generator used for the sketch-to-color anime translation is a kind of “U-Net”.


Instead of using fully connected layers in the encoder-decoder units, as in many previous solutions, here we use convolutions to downsample the input and transposed convolutions to upsample it back to the output size, which avoids the information loss incurred when passing through fully connected layers.

Especially in this Sketch2Color anime problem, we need to keep the edges as the most important information from the input to ensure the quality of the output image.

Hence, a “U-net” kind of architecture is employed by concatenating layers in the encoder to the corresponding layers of the decoder.


B and B’ are concatenated to obtain A’ through deconvolution.

The yellow blocks represent layers in the encoder and the blue blocks layers in the decoder. In each decoding layer, the corresponding encoder layer is concatenated to the current layer before decoding the next layer.
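
To make this concrete, here is a minimal Keras sketch of such a U-Net-style generator with skip connections. The filter counts, depth, and kernel sizes below are assumptions for illustration; the article does not specify the exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def downsample(filters, size=4, apply_batchnorm=True):
    """Encoder block: strided Conv2D -> (BatchNorm) -> LeakyReLU."""
    block = tf.keras.Sequential()
    block.add(layers.Conv2D(filters, size, strides=2, padding='same', use_bias=False))
    if apply_batchnorm:
        block.add(layers.BatchNormalization())
    block.add(layers.LeakyReLU())
    return block

def upsample(filters, size=4):
    """Decoder block: strided Conv2DTranspose -> BatchNorm -> ReLU."""
    block = tf.keras.Sequential()
    block.add(layers.Conv2DTranspose(filters, size, strides=2, padding='same', use_bias=False))
    block.add(layers.BatchNormalization())
    block.add(layers.ReLU())
    return block

def build_generator(img_size=256):
    """U-Net generator: each encoder activation is concatenated to the mirrored decoder layer."""
    inputs = layers.Input(shape=[img_size, img_size, 3])

    down_stack = [downsample(64, apply_batchnorm=False), downsample(128),
                  downsample(256), downsample(512), downsample(512)]
    up_stack = [upsample(512), upsample(256), upsample(128), upsample(64)]

    x, skips = inputs, []
    for down in down_stack:
        x = down(x)
        skips.append(x)

    # Skip connections: concatenate encoder outputs with decoder layers (deepest skip excluded).
    for up, skip in zip(up_stack, reversed(skips[:-1])):
        x = layers.Concatenate()([up(x), skip])

    # tanh keeps the output in [-1, 1], matching the normalized training images.
    outputs = layers.Conv2DTranspose(3, 4, strides=2, padding='same', activation='tanh')(x)
    return tf.keras.Model(inputs=inputs, outputs=outputs)
```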

4. Discriminator architecture:

Compared to the generator, the discriminator has only encoder units, and it aims to classify whether the input sketch-color image pair is “real” or “fake”, i.e., whether the colored image comes from the actual data or from the generator.


The input of the discriminator is either the pair of sketch (yellow) and real target image (red), or the pair of sketch (yellow) and generated image (blue).

The discriminator network is trained to maximize classification accuracy.

The discriminator output is a matrix of probabilities of shape 30x30x1 , in which each element gives the probability of being real for a pair of corresponding patches from the input sketch and colored anime image.

We also avoid using fully connected layers at the end, to prevent information loss deep in the network, and instead use global average pooling to get a single value.

The convolutional layers between the input and the output extract the high-level features of the input pairs to output the probability.
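
As a rough sketch of this discriminator (reusing the downsample helper from the generator sketch above), the layer configuration below is an assumption, chosen so that the output matches the 30x30x1 patch map described here:

```python
def build_discriminator(img_size=256):
    """PatchGAN-style discriminator over (sketch, color) pairs."""
    sketch = layers.Input(shape=[img_size, img_size, 3], name='sketch')
    color = layers.Input(shape=[img_size, img_size, 3], name='color')
    x = layers.Concatenate()([sketch, color])       # condition on the sketch

    x = downsample(64, apply_batchnorm=False)(x)    # 128x128
    x = downsample(128)(x)                          # 64x64
    x = downsample(256)(x)                          # 32x32

    x = layers.ZeroPadding2D()(x)                   # 34x34
    x = layers.Conv2D(512, 4, strides=1, use_bias=False)(x)  # 31x31
    x = layers.BatchNormalization()(x)
    x = layers.LeakyReLU()(x)
    x = layers.ZeroPadding2D()(x)                   # 33x33
    # 30x30x1 map of per-patch "real" probabilities.
    patch = layers.Conv2D(1, 4, strides=1, activation='sigmoid')(x)
    # Global average pooling reduces the patch map to a single real/fake probability.
    score = layers.GlobalAveragePooling2D()(patch)
    return tf.keras.Model(inputs=[sketch, color], outputs=score)
```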

5. Generator and discriminator loss:

The task of generating a colored anime image from a black-and-white sketch is much harder because a simple line sketch contains far less useful information than a grayscale image.

Because of this, we have to impose more constraints to yield better results.

Since our sketch2anime GAN is a supervised learning setup, we will be using conditional GANs for this purpose.

The loss function for general conditional GANs is as shown below.

$$\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}\big[\log D(x, y)\big] + \mathbb{E}_{x,z}\big[\log\big(1 - D(x, G(x, z))\big)\big]$$
“x” is the input sketch, “y” is the target (the colored cartoon image) and “G(x, z)” is the generated color image

The conditional GANs learn a mapping from a random noise vector “z” to the output image “y”, conditioned on the observed “x”.

In our case, the GAN is conditioned on a black-and-white sketch “x” for generating a colored anime.

While the generator tries to minimize the loss, the discriminator simultaneously tries to maximize it, and eventually they reach an equilibrium.

The objective that the generator tries to minimize during training, so that it produces plausible colored anime images, is:

$$G^{*} = \arg\min_G \max_D \ \mathcal{L}_{cGAN}(G, D)$$

Training the discriminator simultaneously encourages more variation in the colored image generation. But in order to produce realistic colored images, we mix the GAN loss with some more traditional loss functions.

The first loss that we use is the Pixel-Level loss, i.e., the L1 distance between each pixel of the target color image and the generated color image:

$$\mathcal{L}_{pixel}(G) = \mathbb{E}_{x,y,z}\big[\,\lVert y - G(x, z)\rVert_1\,\big]$$

The second loss that we use is the Feature-Level loss, i.e., the L2 distance between the activations (φj) of the 4th layer of the 16-layer VGG network (VGG16) pre-trained on the ImageNet dataset, which helps retain high-level features such as object-specific colors and shapes:

$$\mathcal{L}_{feat}(G) = \mathbb{E}_{x,y,z}\big[\,\lVert \phi_j(y) - \phi_j(G(x, z))\rVert_2^2\,\big]$$

The final loss that we use is the Total-Variation loss, so that the GAN produces colors similar to those in the sketch-color image pairs of the training data.

This encourages smoothness (acting as a form of regularization) and helps suppress noise in the output.

$$\mathcal{L}_{tv}(G) = \sum_{i,j}\Big[\big(\hat{y}_{i+1,j} - \hat{y}_{i,j}\big)^2 + \big(\hat{y}_{i,j+1} - \hat{y}_{i,j}\big)^2\Big], \quad \hat{y} = G(x, z)$$

Finally, the GAN loss function is a weighted combination of all the above losses, as follows:

$$\mathcal{L} = W_p\,\mathcal{L}_{pixel} + W_f\,\mathcal{L}_{feat} + W_g\,\mathcal{L}_{cGAN} + W_{tv}\,\mathcal{L}_{tv}$$
Wp, Wf, Wg, and Wtv are the weights given to the Pixel-Level, Feature-Level, Generator, and Total-Variation losses.

Hence by minimizing this final loss function, the GAN learns better patterns between the sketch-color image pairs.

The weights Wp, Wf, Wg, and Wtv are adjusted accordingly to control the importance of each of the losses.
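
As an illustration of how this weighted loss could be assembled in TensorFlow, here is a hedged sketch; the weight values, the choice of VGG16 layer for φ, and the helper names are assumptions, not the author’s actual settings:

```python
# Builds on the TensorFlow import and models from the earlier sketches.
bce = tf.keras.losses.BinaryCrossentropy()

# Illustrative weights; the values actually used in the article are not stated.
W_P, W_F, W_G, W_TV = 100.0, 1.0, 1.0, 1e-4

def build_vgg_feature_extractor():
    """VGG16 (ImageNet) truncated at an intermediate layer, used as the feature map φ.
    The choice of 'block4_conv3' is an assumption."""
    vgg = tf.keras.applications.VGG16(include_top=False, weights='imagenet')
    vgg.trainable = False
    return tf.keras.Model(vgg.input, vgg.get_layer('block4_conv3').output)

def generator_loss(disc_fake_output, generated, target, feature_extractor):
    """Weighted sum of adversarial, pixel-level (L1), feature-level (VGG16)
    and total-variation losses."""
    adv_loss = bce(tf.ones_like(disc_fake_output), disc_fake_output)
    pixel_loss = tf.reduce_mean(tf.abs(target - generated))
    # Note: images here are in [-1, 1]; a real pipeline would rescale them to
    # VGG16's expected preprocessing before extracting features.
    feat_loss = tf.reduce_mean(tf.square(
        feature_extractor(target) - feature_extractor(generated)))
    tv_loss = tf.reduce_mean(tf.image.total_variation(generated))
    return W_G * adv_loss + W_P * pixel_loss + W_F * feat_loss + W_TV * tv_loss
```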

6. Training the generator and discriminator:

The GAN was trained for 43 epochs with a batch size of 8, based on the in-progress generator results for seed sketches after every epoch.

Label smoothing was used during the training process, i.e., soft labels instead of hard labels (0.9 instead of 1).

The discriminator was trained less often (say, only on even-numbered batches), because if it learns too much about the real and generated color images, it starts dominating the generator.

The generator then becomes weak and never learns to capture the distribution of the real target images. Hence, it’s hard to find a good schedule for the number of discriminator and generator iterations.

The Adam optimizer was used with a learning rate of 0.0002 and beta_1 = 0.5.

The discriminator and generator were trained alternatively in a loop: first, train the discriminator, then the generator, then the discriminator again, etc.
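
Tying the earlier sketches together, a simplified eager-mode training step could look like the following; the alternating schedule and label smoothing follow the description above, while the rest is illustrative:

```python
generator = build_generator()
discriminator = build_discriminator()
vgg_extractor = build_vgg_feature_extractor()

# Adam with lr=0.0002 and beta_1=0.5, as described above.
gen_opt = tf.keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5)
disc_opt = tf.keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5)

def train_step(sketch, target, batch_index):
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        generated = generator(sketch, training=True)
        disc_real = discriminator([sketch, target], training=True)
        disc_fake = discriminator([sketch, generated], training=True)

        gen_loss = generator_loss(disc_fake, generated, target, vgg_extractor)
        # Label smoothing: real examples get a soft target of 0.9 instead of 1.
        disc_loss = (bce(0.9 * tf.ones_like(disc_real), disc_real) +
                     bce(tf.zeros_like(disc_fake), disc_fake))

    gen_grads = gen_tape.gradient(gen_loss, generator.trainable_variables)
    gen_opt.apply_gradients(zip(gen_grads, generator.trainable_variables))

    # Train the discriminator less often (here: only on even-numbered batches)
    # so it does not dominate the generator.
    if batch_index % 2 == 0:
        disc_grads = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
        disc_opt.apply_gradients(zip(disc_grads, discriminator.trainable_variables))
    return gen_loss, disc_loss
```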

7. TensorBoard logs:

After every training batch, the loss was calculated for the discriminator on either the pair of real target image and sketch or the pair of generated color image and sketch, and for the generator on the black-and-white sketches.

At the end of each epoch, the mean of these losses was calculated for the corresponding model.

The discriminator and generator mean losses were logged after every epoch to monitor their progress using TensorBoard callbacks.

As expected, the discriminator loss fluctuates between 0.23 and 0.35, while the generator loss decreases steadily, which is a good sign that our generator is capturing the distribution of the real target images.


A ModelCheckpoint was saved for the generator after every epoch so that the best model so far can be used to predict colors after training completes.
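
With a custom training loop like the sketch above, the equivalent of these TensorBoard and ModelCheckpoint callbacks can be written manually; the paths and scalar names below are illustrative:

```python
import os

log_writer = tf.summary.create_file_writer('logs/sketch2color')

def log_epoch(epoch, gen_losses, disc_losses):
    """Log the mean generator/discriminator losses for the epoch and
    checkpoint the generator."""
    with log_writer.as_default():
        tf.summary.scalar('generator_mean_loss', tf.reduce_mean(gen_losses), step=epoch)
        tf.summary.scalar('discriminator_mean_loss', tf.reduce_mean(disc_losses), step=epoch)
    os.makedirs('checkpoints', exist_ok=True)
    generator.save(f'checkpoints/generator_epoch_{epoch:03d}.h5')
```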

8. In-progress training results:

After every epoch, the generator was made to predict the colors for some fixed/seed sketches to check whether it was learning the correlations between a sketch and its associated colored image.

Some of these results at different epochs are as shown below,


Left: Epoch 30 | Middle: Epoch 40 | Right: Epoch 43

We can see that as the training progresses, the colors become more plausible and realistic. The colors also stay within the edges and do not spread unevenly.

Hence at this stage, the generator has almost learned the correlations between the anime sketch and its associated colored image.

9. The output of sample sketches:

Finally… the wait is over, the very thing that we’re excited about has come.

After the completion of the training process, the generator was used to predict colors for some sample sketches.


We can see that the generator produces reasonable colors for the given simple line sketches.

The GAN was trained on 13K sketch-color image pairs; this can be improved further by collecting and adding more image pairs to the training data to produce richer colors.

10. Conclusion:

In this case study, I’ve worked on a GAN with the “U-Net” structure, which allows the output image to retain both the low-level information of the sketch and the learned high-level color information.

And also, more constraints were introduced based on previous papers to obtain better performance.

The two networks are not really fighting each other; they have to work cooperatively to achieve their joint goal. The discriminator teaches the generator by providing feedback on the generated colored images during the training process, while it also learns to be a better teacher over time.

They both get stronger together and hopefully reach an equilibrium.

