The One PyTorch Trick Which You Should Know



How hooks can improve your workflow significantly

If you have ever used deep learning before, you know that debugging a model can be really hard sometimes. Tensor shape mismatches, exploding gradients, and countless other issues can surprise you. Solving these requires looking at the model under the microscope. The most basic methods include littering the forward() methods with print statements or introducing breakpoints. These are of course not very scalable, because they require guessing where things went wrong, and they are quite tedious overall.

However, there is a solution: hooks. These are specific functions that can be attached to any layer and are called each time the layer is used. They basically allow you to freeze the execution of the forward or backward pass at a specific module and process its inputs and outputs.

Let’s see them in action!

Hooks crash course

So, a hook is just a callable object with a predefined signature, which can be registered to any nn.Module object. When the trigger method is used on the module (i.e. forward() or backward()), the module itself, along with its inputs and possible outputs, is passed to the hook, which executes before the computation proceeds to the next module.

In PyTorch, you can register a hook as a

  • forward prehook (executing before the forward pass),
  • forward hook (executing after the forward pass),
  • backward hook (executing during the backward pass, once gradients have been computed for the module).
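As a quick sketch (the function names and print statements are only illustrative), the three hook types have the following signatures. Note that recent PyTorch versions prefer register_full_backward_hook over the older register_backward_hook:

```python
import torch
from torch import nn

def forward_pre_hook(module, inputs):
    # Runs before module.forward(); returning a value replaces the inputs.
    print(f"pre-forward: {type(module).__name__}")

def forward_hook(module, inputs, output):
    # Runs after module.forward(); returning a value replaces the output.
    print(f"forward: {type(module).__name__} -> {tuple(output.shape)}")

def backward_hook(module, grad_input, grad_output):
    # Runs during the backward pass, once this module's gradients are ready.
    print(f"backward: {type(module).__name__}")

layer = nn.Linear(2, 3)
layer.register_forward_pre_hook(forward_pre_hook)
layer.register_forward_hook(forward_hook)
layer.register_full_backward_hook(backward_hook)

out = layer(torch.randn(1, 2))
out.sum().backward()
```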

It might sound complicated at first, so let’s take a look at a concrete example!

An example: saving the outputs of each convolutional layer

Suppose that we want to inspect the output of each convolutional layer in a ResNet34 architecture. This task is perfectly suitable for hooks. In the next part, I will show you how this can be performed. If you would like to follow along interactively, you can find the accompanying Jupyter notebook at https://github.com/cosmic-cortex/pytorch-hooks-tutorial .

Our model is defined by the following.

Creating a hook to save outputs is very simple; a basic callable object is perfectly enough for our purposes.

An instance of SaveOutput will simply record the output tensor of the forward pass and store it in a list.
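The class listing itself did not survive in this copy; a minimal version matching the description would be:

```python
class SaveOutput:
    def __init__(self):
        self.outputs = []

    def __call__(self, module, module_in, module_out):
        # Forward-hook signature: (module, inputs, output).
        self.outputs.append(module_out)

    def clear(self):
        self.outputs = []
```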

A forward hook can be registered with the register_forward_hook(hook) method. (For the other types of hooks, we have register_backward_hook and register_forward_pre_hook.) The return value of these methods is a hook handle, which can be used to remove the hook from the module.

Now we register the hook to each convolutional layer.

When this is done, the hook will be called after each forward pass of each convolutional layer. To test it out, we are going to use the following image.


Photo by Manja Vitolic on Unsplash

The forward pass:

As expected, the outputs were stored properly.

>>> len(save_output.outputs)
36

By inspecting the tensors in this list, we can visualize what the network sees.
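One way to do this (a sketch, since the original plotting code is not shown) is to plot the first few channels of a stored activation with matplotlib:

```python
import matplotlib.pyplot as plt
import torch

def show_feature_maps(activation: torch.Tensor):
    """Plot the first 16 channels of a (1, C, H, W) activation tensor."""
    maps = activation.squeeze(0).detach().cpu()
    fig, axes = plt.subplots(4, 4, figsize=(8, 8))
    for channel, ax in enumerate(axes.flat):
        ax.imshow(maps[channel], cmap='gray')
        ax.axis('off')
    return fig

# For example: show_feature_maps(save_output.outputs[0]) for the first layer.
```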


Outputs of the first layer of ResNet34.

Just out of curiosity, we can check what happens later. If we go deeper in the network, the learned features become more and more high-level. For instance, there is a filter that seems to be responsible for detecting the eyes.


Outputs of the 16th convolutional layer of ResNet34.

Going beyond

Of course, this is just the tip of the iceberg. Hooks can do much more than simply store the outputs of intermediate layers. For instance, neural network pruning, a technique to reduce the number of parameters, can also be performed with hooks.

To summarize, applying hooks is a very useful technique to learn if you want to enhance your workflow. With this under your belt, you'll be able to do much more, and do it more effectively.

