
Predict League of Legends Matches While Learning PyTorch (Part 2)


Training on a GPU

As neural network models get more complex, the computational demands of training them rise astronomically. Graphics Processing Units, known as GPUs or graphics cards, are specially designed to perform massive matrix operations. If you hadn't noticed already, PyTorch runs every computation on your CPU unless you tell it otherwise, which is nowhere near as efficient as a GPU for this kind of work. This time, we'll see how to harness the GPU to crunch numbers for our neural network.

Before we start, note that only NVIDIA GPUs are supported (sorry, AMD fans :cry:).

PyTorch offers a function, torch.cuda.is_available(), which outputs a boolean indicating the presence of a compatible (NVIDIA) GPU with CUDA installed. You could go through the setup process if you have a supported GPU, or you can make a Kaggle or Google Colab account and get access to a free GPU for deep learning purposes (with some limitations, of course). Let's use the is_available() function to set up for GPU use, but fall back to the CPU if a GPU is absent:
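A minimal sketch of that setup (this is the standard idiom; the `device` variable below is the same one used in the later snippets):

```python
import torch

# Use the GPU if CUDA is available, otherwise fall back to the CPU.
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
print(device)  # "cuda" on a machine with a supported NVIDIA GPU, "cpu" otherwise
```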

`torch.device(...)` is how you refer to the available hardware in PyTorch.

With PyTorch, you can move data in and out of the GPU by using the .to() method on any tensor or model. So, to start working with the GPU, you first have to move your model onto it:
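A one-line sketch, assuming `LOLModelmk2` is the model class defined in Part 1:

```python
# Instantiate the network from Part 1 and move its parameters onto the GPU.
model = LOLModelmk2().to(device)  # device = torch.device("cuda") when a GPU is present
```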

We initialize the model `LOLModelmk2()` and move it to the GPU with the `.to(device)` method, where `device = torch.device("cuda")`.

Now, we start training:
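The training loop itself barely changes; the one GPU-specific detail is that every batch has to be moved to the same device as the model before the forward pass. The loop below is a rough sketch under that assumption (the `fit` signature, the SGD optimizer, and the cross-entropy loss are placeholders, not necessarily the notebook's exact choices):

```python
import torch
import torch.nn.functional as F

def fit(epochs, lr, model, train_loader, val_loader):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for epoch in range(epochs):
        # Training pass: move each batch onto the same device as the model.
        model.train()
        for xb, yb in train_loader:
            xb, yb = xb.to(device), yb.to(device)
            loss = F.cross_entropy(model(xb), yb)
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

        # Validation pass: same device handling, but no gradient tracking.
        model.eval()
        val_loss, correct, total = 0.0, 0, 0
        with torch.no_grad():
            for xb, yb in val_loader:
                xb, yb = xb.to(device), yb.to(device)
                out = model(xb)
                val_loss += F.cross_entropy(out, yb).item() * len(yb)
                correct += (out.argmax(dim=1) == yb).sum().item()
                total += len(yb)
        print(f"epoch {epoch}: val_loss={val_loss / total:.4f}, val_acc={correct / total:.4f}")

# e.g. fit(num_epochs, learning_rate, model, train_loader, val_loader)
```

Note that `DataLoader` batches stay on the CPU by default, so forgetting the `.to(device)` calls produces a device-mismatch error rather than silently running on the CPU.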

Testing the model with the test data before training: loss hovers around 16, and accuracy around 50%.
You can see a sharp decrease in validation loss and a spike in accuracy.
The trend continues, though at a much smaller magnitude.

And here are some pretty graphs below :grin::

And here are our results from the test dataset:

hmmm…

Hmm… It looks like our model performed about the same as our logistic regression model (74%). There are a few possible explanations for this outcome:

  1. Some piece of code is incorrect
  2. A neural network is generally worse than a logistic regression model
  3. The neural network was overfitting
  4. The logistic regression model was lucky in its training (which was possible since the dataset was randomly split into train, validation, and test sets for both the regressor and neural network)
  5. Using a neural network for this scenario may not be advantageous, and we are experiencing diminishing returns.

Well, let's use the process of elimination, shall we?

After a long debugging session, I couldn't find anything wrong in the code (if you do find something, please let me know!!!), so #1 is out. #2 is likely not the case: we established earlier that neural networks are built on top of linear regression models, which are essentially logistic regressors without the sigmoid/softmax function. They should be able to draw more relationships out of the data, which should yield better accuracy, not worse.

#3 is much more probable than the other two, since a neural network is much more complex than a logistic regressor and is thus more susceptible to this sort of issue. Usually, overfitting can be mitigated with dropout, which simply means disabling a randomly picked fraction of the model's nodes during training. In PyTorch, that means initializing an nn.Dropout() layer in __init__() and placing it between the layers with ReLU. Here is the implementation:
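(The sketch below uses placeholder layer sizes rather than the notebook's exact architecture; the relevant part is where the single `nn.Dropout` instance sits.)

```python
import torch.nn as nn
import torch.nn.functional as F

class LOLModelmk2(nn.Module):
    # Placeholder sizes below, not the notebook's exact architecture.
    def __init__(self, in_features=30, hidden_size=64, num_classes=2, p=0.2):
        super().__init__()
        self.linear1 = nn.Linear(in_features, hidden_size)
        self.linear2 = nn.Linear(hidden_size, hidden_size)
        self.linear3 = nn.Linear(hidden_size, num_classes)
        self.dropout = nn.Dropout(p)  # one instance, reused after every ReLU

    def forward(self, x):
        x = self.dropout(F.relu(self.linear1(x)))
        x = self.dropout(F.relu(self.linear2(x)))
        return self.linear3(x)
```

Keep in mind that dropout is only active in `model.train()` mode; `model.eval()` disables it automatically during validation and testing.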

We only need to initialize one instance of `nn.Dropout`, since it can be reused multiple times in the model's `forward` function.
Still, the model’s accuracy on the test dataset remained in the low 70s.

Surprisingly, even this didn't help, which suggests the model wasn't overfitting the training data in the first place. Finally, to test hypothesis #4, I revisited my old notebook on the logistic regressor and ran a few more trials with the model. It turns out that the regressor's 74% accuracy last time was pretty lucky. In fact, let's look at the plot of accuracy over the number of epochs again:

The accuracy was fairly unstable for the most part, but overall it hovered in the low 70s, which is much closer to the later trials I ran on the logistic regressor and to the neural network in this article.

Conclusion

There's a lot to be learned about deep learning from this example. Mainly, deep learning is no voodoo magic; it can't magically solve every classification problem you give it. It can't predict every single League of Legends match; in many cases, the first 10 minutes of a match just aren't enough to determine which team is going to win (I can testify to that from experience). Nonetheless, there's a lot to be gained from this exercise: understanding the concept of a neural network and implementing it in PyTorch, utilizing the GPU, and applying dropout in case the model overfits. On that note, I hope you enjoyed your journey with me building a PyTorch model for this League of Legends dataset. Happy coding (and keep on playing League)!

If you want the source for the Jupyter notebook used for this mini-project, look here: https://jovian.ml/richardso21/lol-nn.

