PyTorch [Basics] — Intro to Dataloaders and Loss Functions

Source: https://towardsdatascience.com/pytorch-basics-intro-to-dataloaders-and-loss-functions-868e86450047?gi=5fbdf2adf8b7


Photo by Kees Streefkerk on Unsplash [Image [0]]

How to train your neural net


This blog post takes you through Dataloaders and different types of Loss Functions in PyTorch.


Feb 1 · 6 min read

In this blog post, we will walk through a short implementation of a custom dataset and dataloader, and see some of the common loss functions in action.

Datasets and Dataloaders

A custom dataset class is created using 3 main components.

__init__
__len__
__getitem__

class CustomDataset(Dataset):
    def __init__(self):
        pass

    def __getitem__(self, index):
        pass

    def __len__(self):
        pass

__init__ : used to perform initializing operations such as reading data and preprocessing.

__len__ : returns the size of the input data.

__getitem__ : returns one sample of the data (input and output) at the given index. Batching is handled by the dataloader.

A dataloader is then used on this dataset class to read the data in batches.

train_loader = DataLoader(custom_dataset_object, batch_size=32, shuffle=True)

Let’s implement a basic PyTorch dataset and dataloader. Assume you have the following input and output data:

X = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [0, 0, 0, 1, 0, 1, 1, 0, 0, 1]

Let’s define the dataset class. We will return a tuple of (input, output).

class CustomDataset(Dataset):
    
    def __init__(self, X_data, y_data):
        self.X_data = X_data
        self.y_data = y_data
        
    def __getitem__(self, index):
        return self.X_data[index], self.y_data[index]
        
    def __len__(self):
        return len(self.X_data)

Initialise the dataset object. The inputs have to be of the type Tensor.

data = CustomDataset(torch.FloatTensor(X), torch.FloatTensor(y))

Let’s use the methods __len__() and __getitem__(). __getitem__() takes the index as input.

data.__len__()

################### OUTPUT #####################
10

Printing out the 4th element (3rd index) from our data.

data.__getitem__(3)

################### OUTPUT #####################
(tensor(4.), tensor(1.))
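
Note that in everyday code you would rarely call these dunder methods directly; Python’s built-in len() and square-bracket indexing dispatch to them:

print(len(data))  # calls data.__len__()      -> 10
print(data[3])    # calls data.__getitem__(3) -> (tensor(4.), tensor(1.))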

Let’s initialise our dataloader now. Here we specify the batch size and shuffle.

data_loader = DataLoader(dataset=data, batch_size=2, shuffle=True)
data_loader_iter = iter(data_loader)
print(next(data_loader_iter))

################### OUTPUT #####################
[tensor([3., 6.]), tensor([0., 1.])]

Let’s use the dataloader with a for loop.

for i, j in data_loader:
    print(i, j)

################### OUTPUT #####################
tensor([ 1., 10.]) tensor([0., 1.])
tensor([4., 6.]) tensor([1., 1.])
tensor([7., 5.]) tensor([1., 0.])
tensor([9., 3.]) tensor([0., 0.])
tensor([2., 8.]) tensor([0., 0.])
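
To see where the dataloader fits in a real workflow, here is a minimal sketch of one training epoch driven by it. The tiny linear model, the SGD optimizer, and the learning rate are illustrative assumptions, not part of the original example; the loss function is covered in the next section.

# A minimal sketch: one epoch over our dataloader (model/optimizer are assumptions).
model = nn.Linear(1, 1)                             # hypothetical 1-in, 1-out model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.BCEWithLogitsLoss()                  # see the Loss Functions section

for X_batch, y_batch in data_loader:
    optimizer.zero_grad()                           # reset gradients from the last step
    y_hat = model(X_batch.unsqueeze(1)).squeeze(1)  # (batch_size,) raw logits
    loss = criterion(y_hat, y_batch)
    loss.backward()                                 # backpropagate
    optimizer.step()                                # update the weights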

Loss Functions

The following are commonly used loss functions for different deep learning tasks.

Regression:

torch.nn.L1Loss()
torch.nn.MSELoss()

Classification:

torch.nn.BCELoss()
torch.nn.BCEWithLogitsLoss()
torch.nn.NLLLoss()
torch.nn.CrossEntropyLoss()

Learn more about the loss functions in the official PyTorch docs.

Import Libraries

import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

Regression

Let’s begin by defining the actual and predicted output tensors in order to calculate the loss.

y_pred = torch.tensor([[1.2, 2.3, 3.4], [4.5, 5.6, 6.7]], requires_grad=True)
print("Y Pred: \n", y_pred)
print("\nY Pred shape: ", y_pred.shape, "\n")

print("=" * 50)

y_train = torch.tensor([[1.2, 2.3, 3.4], [7.8, 8.9, 9.1]])
print("\nY Train: \n", y_train)
print("\nY Train shape: ", y_train.shape)

###################### OUTPUT ######################
Y Pred: 
 tensor([[1.2000, 2.3000, 3.4000],
        [4.5000, 5.6000, 6.7000]], requires_grad=True)

Y Pred shape:  torch.Size([2, 3])

==================================================

Y Train: 
 tensor([[1.2000, 2.3000, 3.4000],
        [7.8000, 8.9000, 9.1000]])

Y Train shape:  torch.Size([2, 3])

Mean Absolute Error — torch.nn.L1Loss()

The input and output have to be the same size and have the dtype float.

y_pred = (batch_size, *) and y_train = (batch_size, *).

mae_loss = nn.L1Loss()

print("Y Pred: \n", y_pred)
print("Y Train: \n", y_train)

output = mae_loss(y_pred, y_train)
print("MAE Loss\n", output)

output.backward()

###################### OUTPUT ######################
Y Pred: 
 tensor([[1.2000, 2.3000, 3.4000],
        [4.5000, 5.6000, 6.7000]], requires_grad=True)
Y Train: 
 tensor([[1.2000, 2.3000, 3.4000],
        [7.8000, 8.9000, 9.1000]])
MAE Loss
 tensor(1.5000, grad_fn=<L1LossBackward>)
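
As a quick sanity check, the same 1.5000 falls straight out of the definition of mean absolute error:

# Manual check: MAE is the mean of the element-wise absolute differences.
print(torch.mean(torch.abs(y_pred - y_train)))  # tensor(1.5000, ...)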

Mean Squared Error — torch.nn.MSELoss()

The input and output have to be the same size and have the dtype float.

y_pred = (batch_size, *) and y_train = (batch_size, *).

mse_loss = nn.MSELoss()

print("Y Pred: \n", y_pred)
print("Y Train: \n", y_train)

output = mse_loss(y_pred, y_train)
print("MSE Loss\n", output)

output.backward()

###################### OUTPUT ######################
Y Pred: 
 tensor([[1.2000, 2.3000, 3.4000],
        [4.5000, 5.6000, 6.7000]], requires_grad=True)
Y Train: 
 tensor([[1.2000, 2.3000, 3.4000],
        [7.8000, 8.9000, 9.1000]])
MSE Loss
 tensor(4.5900, grad_fn=<MseLossBackward>)
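
All of these loss classes also take an optional reduction argument ('mean' by default). For example, with MSE:

# reduction='none' keeps the per-element losses; reduction='sum' adds them up.
print(nn.MSELoss(reduction='none')(y_pred, y_train))
# tensor([[ 0.0000,  0.0000,  0.0000],
#         [10.8900, 10.8900,  5.7600]], ...)
print(nn.MSELoss(reduction='sum')(y_pred, y_train))  # tensor(27.5400, ...)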

Binary Classification

y_train has two classes: 0 and 1. We use the BCE loss when the final output from the network is a single value (final dense layer of size 1) that lies between 0 and 1.

Binary classification can also be reframed to use NLLLoss or CrossEntropyLoss if the output from the network is a tensor of length 2 (final dense layer of size 2), i.e., one raw score per class, as in the sketch below.
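
Here is a minimal sketch of that reframing, with made-up logits. A single-logit output scored with BCEWithLogitsLoss matches a two-logit output scored with CrossEntropyLoss (up to floating-point precision) when the class-0 logit is pinned to zero:

labels = torch.tensor([1, 0, 1])            # made-up binary targets

one_logit = torch.tensor([2.0, -1.0, 0.5])  # final dense layer of size 1
two_logits = torch.tensor([[0.0, 2.0],      # final dense layer of size 2:
                           [0.0, -1.0],     # class-0 logit pinned to 0,
                           [0.0, 0.5]])     # class-1 logit as above

print(nn.BCEWithLogitsLoss()(one_logit, labels.float()))  # same value ...
print(nn.CrossEntropyLoss()(two_logits, labels))          # ... as this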

Let’s define the actual and predicted output tensors in order to calculate the loss.

y_pred = torch.tensor([[1.2, 2.3, 3.4], [7.8, 8.9, 9.1]], requires_grad=True)
print("Y Pred: \n", y_pred)
print("\nY Pred shape: ", y_pred.shape, "\n")

print("=" * 50)

y_train = torch.tensor([[1, 0, 1], [0, 0, 1]])
print("\nY Train: \n", y_train)
print("\nY Train shape: ", y_train.shape)

###################### OUTPUT ######################
Y Pred: 
 tensor([[1.2000, 2.3000, 3.4000],
        [7.8000, 8.9000, 9.1000]], requires_grad=True)

Y Pred shape:  torch.Size([2, 3])

==================================================

Y Train: 
 tensor([[1, 0, 1],
        [0, 0, 1]])

Y Train shape:  torch.Size([2, 3])

Binary Cross Entropy Loss — torch.nn.BCELoss()

The input and output have to be the same size and have the dtype float.

y_pred = (batch_size, *), Float (values should be passed through a Sigmoid function so they lie between 0 and 1)

y_train = (batch_size, *), Float

bce_loss = nn.BCELoss()

y_pred_sigmoid = torch.sigmoid(y_pred)

print("Y Pred: \n", y_pred)
print("\nY Pred Sigmoid: \n", y_pred_sigmoid)
print("\nY Train: \n", y_train.float())

output = bce_loss(y_pred_sigmoid, y_train.float())
print("\nBCE Loss\n", output)

output.backward()

###################### OUTPUT ######################
Y Pred: 
 tensor([[1.2000, 2.3000, 3.4000],
        [7.8000, 8.9000, 9.1000]], requires_grad=True)

Y Pred Sigmoid: 
 tensor([[0.7685, 0.9089, 0.9677],
        [0.9996, 0.9999, 0.9999]], grad_fn=<SigmoidBackward>)

Y Train: 
 tensor([[1., 0., 1.],
        [0., 0., 1.]])

BCE Loss
 tensor(3.2321, grad_fn=<BinaryCrossEntropyBackward>)
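
The 3.2321 can be reproduced directly from the BCE formula, -mean(y * log(p) + (1 - y) * log(1 - p)):

# Manual check of the binary cross entropy formula.
t = y_train.float()
p = y_pred_sigmoid
print(-(t * torch.log(p) + (1 - t) * torch.log(1 - p)).mean())  # tensor(3.2321, ...)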

Binary Cross Entropy with Logits Loss — torch.nn.BCEWithLogitsLoss()

The input and output have to be the same size and have the dtype float. This class combines a Sigmoid layer and BCELoss in a single class, and is numerically more stable than applying Sigmoid and BCELoss separately.

y_pred = (batch_size, *), Float

y_train = (batch_size, *), Float

bce_logits_loss = nn.BCEWithLogitsLoss()

print("Y Pred: \n", y_pred)
print("\nY Train: \n", y_train.float())

output = bce_logits_loss(y_pred, y_train.float())
print("\nBCE Loss\n", output)

output.backward()

###################### OUTPUT ######################
Y Pred: 
 tensor([[1.2000, 2.3000, 3.4000],
        [7.8000, 8.9000, 9.1000]], requires_grad=True)

Y Train: 
 tensor([[1., 0., 1.],
        [0., 0., 1.]])

BCE Loss
 tensor(3.2321, grad_fn=<BinaryCrossEntropyWithLogitsBackward>)
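
Note that the loss value matches the previous section exactly; on the same inputs, the two formulations are interchangeable:

# Sanity check: sigmoid + BCELoss agrees with BCEWithLogitsLoss.
a = nn.BCELoss()(torch.sigmoid(y_pred), y_train.float())
b = nn.BCEWithLogitsLoss()(y_pred, y_train.float())
print(torch.isclose(a, b))  # tensor(True)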

Multiclass Classification

Let’s define the actual and predicted output tensors in order to calculate the loss.

y_train has 3 classes: 0, 1, and 2.

y_pred = torch.tensor([[1.2, 2.3, 3.4], [4.5, 5.6, 6.7], [7.8, 8.9, 9.1]], requires_grad=True)
print("Y Pred: \n", y_pred)
print("\nY Pred shape: ", y_pred.shape, "\n")

print("=" * 50)

y_train = torch.tensor([0, 1, 2])
print("\nY Train: \n", y_train)
print("\nY Train shape: ", y_train.shape)

###################### OUTPUT ######################
Y Pred: 
 tensor([[1.2000, 2.3000, 3.4000],
        [4.5000, 5.6000, 6.7000],
        [7.8000, 8.9000, 9.1000]], requires_grad=True)

Y Pred shape:  torch.Size([3, 3])

==================================================

Y Train: 
 tensor([0, 1, 2])

Y Train shape:  torch.Size([3])

Negative Log Likelihood — torch.nn.NLLLoss()

y_pred = (batch_size, num_classes), Float (values should be log-probabilities, obtained by passing the raw scores through the log_softmax function)

y_train = (batch_size), Long (range of values = 0 to num_classes-1). The classes must be numbered 0, 1, 2, ...

nll_loss = nn.NLLLoss()

y_pred_logsoftmax = torch.log_softmax(y_pred, dim=1)

print("Y Pred: \n", y_pred)
print("\nY Pred LogSoftmax: \n", y_pred_logsoftmax)
print("\nY Train: \n", y_train)

output = nll_loss(y_pred_logsoftmax, y_train)
print("\nNLL Loss\n", output)

output.backward()

###################### OUTPUT ######################
Y Pred: 
 tensor([[1.2000, 2.3000, 3.4000],
        [4.5000, 5.6000, 6.7000],
        [7.8000, 8.9000, 9.1000]], requires_grad=True)

Y Pred LogSoftmax: 
 tensor([[-2.5672, -1.4672, -0.3672],
        [-2.5672, -1.4672, -0.3672],
        [-2.0378, -0.9378, -0.7378]], grad_fn=<LogSoftmaxBackward>)

Y Train: 
 tensor([0, 1, 2])

NLL Loss
 tensor(1.5907, grad_fn=<NllLossBackward>)
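
Under the hood, NLLLoss simply picks out the log-probability of the true class in each row and averages the negatives:

# Manual check: gather the true-class log-probabilities, negate, and average.
rows = torch.arange(len(y_train))
print(-y_pred_logsoftmax[rows, y_train].mean())  # tensor(1.5907, ...)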

CrossEntropyLoss — torch.nn.CrossEntropyLoss()

This class combines LogSoftmax and NLLLoss into a single class.

y_pred = (batch_size, num_classes), Float (raw, unnormalized scores)

y_train = (batch_size), Long (range of values = 0 to num_classes-1). The classes must be numbered 0, 1, 2, ...

ce_loss = nn.CrossEntropyLoss()

print("Y Pred: \n", y_pred)
print("\nY Train: \n", y_train)

output = ce_loss(y_pred, y_train)
print("\nCE Loss\n", output)

output.backward()

###################### OUTPUT ######################
Y Pred: 
 tensor([[1.2000, 2.3000, 3.4000],
        [4.5000, 5.6000, 6.7000],
        [7.8000, 8.9000, 9.1000]], requires_grad=True)

Y Train: 
 tensor([0, 1, 2])

CE Loss
 tensor(1.5907, grad_fn=<NllLossBackward>)
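
The value matches the NLLLoss section because the inputs are the same; CrossEntropyLoss is exactly log_softmax followed by NLLLoss:

# Sanity check: CrossEntropyLoss == NLLLoss(log_softmax(x)) on these tensors.
a = nn.CrossEntropyLoss()(y_pred, y_train)
b = nn.NLLLoss()(torch.log_softmax(y_pred, dim=1), y_train)
print(torch.isclose(a, b))  # tensor(True)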

Thank you for reading. Suggestions and constructive criticism are welcome. :) You can find me on LinkedIn. You can view the full code here. Check out the GitHub repo here and star it if you like it.

You can also check out my other blog posts here.

