How to train your neural net
Pytorch [Basics] — Intro to Dataloaders and Loss Functions
This blog post takes you through Dataloaders and different types of Loss Functions in PyTorch.
Feb 1 · 6 min read
In this blog post, we will see a short implementation of a custom dataset and dataloader, as well as some of the common loss functions in action.
Datasets and Dataloaders
A custom dataset class is created using three main components: __init__, __len__, and __getitem__.
from torch.utils.data import Dataset, DataLoader

class CustomDataset(Dataset):
    def __init__(self):
        pass

    def __getitem__(self, index):
        pass

    def __len__(self):
        pass
__init__: used to perform initialising operations such as reading data and preprocessing.
__len__: returns the size of the input data.
__getitem__: returns one sample (input and output) at the given index; batching is handled by the dataloader.
A dataloader is then used on this dataset class to read the data in batches.
train_loader = DataLoader(custom_dataset_object, batch_size=32, shuffle=True)
Let’s implement a basic PyTorch dataset and dataloader. Assume the following input and output data:

X = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [0, 0, 0, 1, 0, 1, 1, 0, 0, 1]
Let’s define the dataset class. We will return a tuple of (input, output).
class CustomDataset(Dataset):
    def __init__(self, X_data, y_data):
        self.X_data = X_data
        self.y_data = y_data

    def __getitem__(self, index):
        return self.X_data[index], self.y_data[index]

    def __len__(self):
        return len(self.X_data)
Initialise the dataset object. The inputs have to be of type Tensor.
data = CustomDataset(torch.FloatTensor(X), torch.FloatTensor(y))
Let’s use the methods __len__() and __getitem__(). __getitem__() takes an index as input.
data.__len__()

################### OUTPUT #####################

10
Printing out the 4th element (index 3) from our data.
data.__getitem__(3)

################### OUTPUT #####################

(tensor(4.), tensor(1.))
Let’s initialise our dataloader now. Here we specify the batch size and shuffle.
data_loader = DataLoader(dataset=data, batch_size=2, shuffle=True)

data_loader_iter = iter(data_loader)
print(next(data_loader_iter))

################### OUTPUT #####################

[tensor([3., 6.]), tensor([0., 1.])]
Let’s use the dataloader with a for loop.
for i, j in data_loader:
    print(i, j)

################### OUTPUT #####################

tensor([ 1., 10.]) tensor([0., 1.])
tensor([4., 6.]) tensor([1., 1.])
tensor([7., 5.]) tensor([1., 0.])
tensor([9., 3.]) tensor([0., 0.])
tensor([2., 8.]) tensor([0., 0.])
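As a quick aside, here is a minimal sketch (my own illustration, not part of the original post) of how such a dataloader typically feeds batches into a training step. The nn.Linear model is a hypothetical stand-in for a real network, and BCEWithLogitsLoss is one of the loss functions covered below.

import torch
import torch.nn as nn

model = nn.Linear(1, 1)                           # hypothetical stand-in for a real network
criterion = nn.BCEWithLogitsLoss()                # binary targets, raw logits
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for X_batch, y_batch in data_loader:
    optimizer.zero_grad()                         # clear gradients from the previous batch
    logits = model(X_batch.unsqueeze(1))          # (batch_size,) -> (batch_size, 1)
    loss = criterion(logits.squeeze(1), y_batch)  # targets are float 0/1
    loss.backward()                               # backpropagate
    optimizer.step()                              # update the weights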
Loss Functions
Following are the commonly used loss functions for different deep learning tasks.
Regression:
torch.nn.L1Loss()
torch.nn.MSELoss()
Classification:
torch.nn.BCELoss()
torch.nn.BCEWithLogitsLoss()
torch.nn.NLLLoss()
torch.nn.CrossEntropyLoss()
Learn more about the loss functions from the official PyTorch docs.
Import Libraries
import torch
import torch.nn as nn
Regression
Let’s begin by defining the actual and predicted output tensors in order to calculate the loss.
y_pred = torch.tensor([[1.2, 2.3, 3.4], [4.5, 5.6, 6.7]], requires_grad=True)

print("Y Pred: \n", y_pred)
print("\nY Pred shape: ", y_pred.shape, "\n")

print("=" * 50)

y_train = torch.tensor([[1.2, 2.3, 3.4], [7.8, 8.9, 9.1]])

print("\nY Train: \n", y_train)
print("\nY Train shape: ", y_train.shape)

###################### OUTPUT ######################

Y Pred:
 tensor([[1.2000, 2.3000, 3.4000],
        [4.5000, 5.6000, 6.7000]], requires_grad=True)

Y Pred shape:  torch.Size([2, 3])

==================================================

Y Train:
 tensor([[1.2000, 2.3000, 3.4000],
        [7.8000, 8.9000, 9.1000]])

Y Train shape:  torch.Size([2, 3])
Mean Absolute Error — torch.nn.L1Loss()
The input and output have to be the same size and have the dtype float.

y_pred = (batch_size, *)
y_train = (batch_size, *)
mae_loss = nn.L1Loss()

print("Y Pred: \n", y_pred)
print("Y Train: \n", y_train)

output = mae_loss(y_pred, y_train)
print("MAE Loss\n", output)

output.backward()

###################### OUTPUT ######################

Y Pred:
 tensor([[1.2000, 2.3000, 3.4000],
        [4.5000, 5.6000, 6.7000]], requires_grad=True)
Y Train:
 tensor([[1.2000, 2.3000, 3.4000],
        [7.8000, 8.9000, 9.1000]])
MAE Loss
 tensor(1.5000, grad_fn=<L1LossBackward>)
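To sanity-check the value by hand: the first rows of y_pred and y_train match, so only the second row contributes, with absolute errors |4.5 - 7.8| = 3.3, |5.6 - 8.9| = 3.3, and |6.7 - 9.1| = 2.4. The mean over all six elements is (3.3 + 3.3 + 2.4) / 6 = 1.5.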
Mean Squared Error — torch.nn.MSELoss()
The input and output have to be the same size and have the dtype float.

y_pred = (batch_size, *)
y_train = (batch_size, *)
mse_loss = nn.MSELoss()

print("Y Pred: \n", y_pred)
print("Y Train: \n", y_train)

output = mse_loss(y_pred, y_train)
print("MSE Loss\n", output)

output.backward()

###################### OUTPUT ######################

Y Pred:
 tensor([[1.2000, 2.3000, 3.4000],
        [4.5000, 5.6000, 6.7000]], requires_grad=True)
Y Train:
 tensor([[1.2000, 2.3000, 3.4000],
        [7.8000, 8.9000, 9.1000]])
MSE Loss
 tensor(4.5900, grad_fn=<MseLossBackward>)
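Checking by hand again: squaring the same errors gives (3.3² + 3.3² + 2.4²) / 6 = (10.89 + 10.89 + 5.76) / 6 = 27.54 / 6 = 4.59.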
Binary Classification
y_train has two classes: 0 and 1. We use the BCE loss function when the final output from the network is a single value (final dense layer of size 1) that lies between 0 and 1.
Binary classification can be re-framed to use NLLLoss or CrossEntropyLoss if the output from the network is a tensor of length 2 (final dense layer of size 2), i.e. one score per class: raw scores for CrossEntropyLoss, log-probabilities for NLLLoss. A minimal sketch of this reframing follows.
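Here is a minimal sketch of that reframing (my own illustration, not from the original post; the tensor values are made up), reusing the imports from above:

# Two raw scores per sample (final dense layer of size 2)
two_class_logits = torch.tensor([[0.3, 1.5],
                                 [2.0, -1.0]], requires_grad=True)
two_class_targets = torch.tensor([1, 0])   # class indices, dtype Long

ce = nn.CrossEntropyLoss()                 # applies log_softmax internally
loss = ce(two_class_logits, two_class_targets)
loss.backward()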
Let’s define the actual and predicted output tensors in order to calculate the loss.
y_pred = torch.tensor([[1.2, 2.3, 3.4], [7.8, 8.9, 9.1]], requires_grad=True)

print("Y Pred: \n", y_pred)
print("\nY Pred shape: ", y_pred.shape, "\n")

print("=" * 50)

y_train = torch.tensor([[1, 0, 1], [0, 0, 1]])

print("\nY Train: \n", y_train)
print("\nY Train shape: ", y_train.shape)

###################### OUTPUT ######################

Y Pred:
 tensor([[1.2000, 2.3000, 3.4000],
        [7.8000, 8.9000, 9.1000]], requires_grad=True)

Y Pred shape:  torch.Size([2, 3])

==================================================

Y Train:
 tensor([[1, 0, 1],
        [0, 0, 1]])

Y Train shape:  torch.Size([2, 3])
Binary Cross Entropy Loss — torch.nn.BCELoss()
The input and output have to be the same size and have the dtype float.

y_pred = (batch_size, *), Float (the values should be passed through a sigmoid function so that they lie between 0 and 1)
y_train = (batch_size, *), Float
bce_loss = nn.BCELoss()

y_pred_sigmoid = torch.sigmoid(y_pred)

print("Y Pred: \n", y_pred)
print("\nY Pred Sigmoid: \n", y_pred_sigmoid)
print("\nY Train: \n", y_train.float())

output = bce_loss(y_pred_sigmoid, y_train.float())
print("\nBCE Loss\n", output)

output.backward()

###################### OUTPUT ######################

Y Pred:
 tensor([[1.2000, 2.3000, 3.4000],
        [7.8000, 8.9000, 9.1000]], requires_grad=True)

Y Pred Sigmoid:
 tensor([[0.7685, 0.9089, 0.9677],
        [0.9996, 0.9999, 0.9999]], grad_fn=<SigmoidBackward>)

Y Train:
 tensor([[1., 0., 1.],
        [0., 0., 1.]])

BCE Loss
 tensor(3.2321, grad_fn=<BinaryCrossEntropyBackward>)
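Under the hood, BCELoss computes -(1/N) * sum(y * log(p) + (1 - y) * log(1 - p)) over all elements. Here the average is dominated by the second row, where the sigmoid outputs are confidently close to 1 while two of the corresponding targets are 0.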
Binary Cross Entropy with Logits Loss — torch.nn.BCEWithLogitsLoss()
The input and output have to be the same size and have the dtype float. This class combines Sigmoid and BCELoss into a single class, and is numerically more stable than using Sigmoid and BCELoss individually.
y_pred = (batch_size, *), Float
y_train = (batch_size, *), Float
bce_logits_loss = nn.BCEWithLogitsLoss()

print("Y Pred: \n", y_pred)
print("\nY Train: \n", y_train.float())

output = bce_logits_loss(y_pred, y_train.float())
print("\nBCE Loss\n", output)

output.backward()

###################### OUTPUT ######################

Y Pred:
 tensor([[1.2000, 2.3000, 3.4000],
        [7.8000, 8.9000, 9.1000]], requires_grad=True)

Y Train:
 tensor([[1., 0., 1.],
        [0., 0., 1.]])

BCE Loss
 tensor(3.2321, grad_fn=<BinaryCrossEntropyWithLogitsBackward>)
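Note that the loss (3.2321) matches the BCELoss value above exactly: BCEWithLogitsLoss simply applies the sigmoid internally before computing the same binary cross entropy.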
Multiclass Classification
Let’s define the actual and predicted output tensors in order to calculate the loss.
y_train has 3 classes: 0, 1, and 2.
y_pred = torch.tensor([[1.2, 2.3, 3.4], [4.5, 5.6, 6.7], [7.8, 8.9, 9.1]], requires_grad=True)

print("Y Pred: \n", y_pred)
print("\nY Pred shape: ", y_pred.shape, "\n")

print("=" * 50)

y_train = torch.tensor([0, 1, 2])

print("\nY Train: \n", y_train)
print("\nY Train shape: ", y_train.shape)

###################### OUTPUT ######################

Y Pred:
 tensor([[1.2000, 2.3000, 3.4000],
        [4.5000, 5.6000, 6.7000],
        [7.8000, 8.9000, 9.1000]], requires_grad=True)

Y Pred shape:  torch.Size([3, 3])

==================================================

Y Train:
 tensor([0, 1, 2])

Y Train shape:  torch.Size([3])
Negative Log Likelihood — torch.nn.NLLLoss()
y_pred = (batch_size, num_classes), Float (the values should be log-probabilities, obtained using the log_softmax function)
y_train = (batch_size), Long (range of values = 0 to num_classes - 1; the classes must start from 0, 1, 2, ...)
nll_loss = nn.NLLLoss()

y_pred_logsoftmax = torch.log_softmax(y_pred, dim=1)

print("Y Pred: \n", y_pred)
print("\nY Pred LogSoftmax: \n", y_pred_logsoftmax)
print("\nY Train: \n", y_train)

output = nll_loss(y_pred_logsoftmax, y_train)
print("\nNLL Loss\n", output)

output.backward()

###################### OUTPUT ######################

Y Pred:
 tensor([[1.2000, 2.3000, 3.4000],
        [4.5000, 5.6000, 6.7000],
        [7.8000, 8.9000, 9.1000]], requires_grad=True)

Y Pred LogSoftmax:
 tensor([[-2.5672, -1.4672, -0.3672],
        [-2.5672, -1.4672, -0.3672],
        [-2.0378, -0.9378, -0.7378]], grad_fn=<LogSoftmaxBackward>)

Y Train:
 tensor([0, 1, 2])

NLL Loss
 tensor(1.5907, grad_fn=<NllLossBackward>)
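NLLLoss simply picks out the log-probability of the true class in each row and negates the average: (2.5672 + 1.4672 + 0.7378) / 3 = 4.7722 / 3 ≈ 1.5907.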
CrossEntropyLoss — torch.nn.CrossEntropyLoss()
This class combines LogSoftmax and NLLLoss into a single class.
y_pred = (batch_size, num_classes), Float (raw, unnormalised scores)
y_train = (batch_size), Long (range of values = 0 to num_classes - 1; the classes must start from 0, 1, 2, ...)
ce_loss = nn.CrossEntropyLoss()

print("Y Pred: \n", y_pred)
print("\nY Train: \n", y_train)

output = ce_loss(y_pred, y_train)
print("\nCE Loss\n", output)

output.backward()

###################### OUTPUT ######################

Y Pred:
 tensor([[1.2000, 2.3000, 3.4000],
        [4.5000, 5.6000, 6.7000],
        [7.8000, 8.9000, 9.1000]], requires_grad=True)

Y Train:
 tensor([0, 1, 2])

CE Loss
 tensor(1.5907, grad_fn=<NllLossBackward>)
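The result (1.5907) is identical to the NLLLoss value above, confirming that CrossEntropyLoss is just log_softmax followed by NLLLoss.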
Thank you for reading. Suggestions and constructive criticism are welcome. :) You can find me on LinkedIn. You can view the full code here. Check out the GitHub repo here and star it if you like it.
You can also check out my other blog posts here.