The One PyTorch Trick Which You Should Know


How hooks can improve your workflow significantly

If you have ever used deep learning before, you know that debugging a model can be really hard sometimes. Tensor shape mismatches, exploding gradients, and countless other issues can surprise you. Solving these requires looking at the model under the microscope. The most basic methods include littering the forward() methods with print statements or introducing breakpoints. These are of course not very scalable, because they require guessing where things went wrong, and they are quite tedious overall.

However, there is a solution: hooks. These are functions that can be attached to any layer and are called each time the layer is used. They essentially allow you to freeze the execution of the forward or backward pass at a specific module and process its inputs and outputs.

Let’s see them in action!

Hooks crash course

So, a hook is just a callable object with a predefined signature, which can be registered to any nn.Module object. When the trigger method is called on the module (that is, forward() or backward()), the module itself, its inputs, and possibly its outputs are passed to the hook, which executes before the computation proceeds to the next module.

In PyTorch, you can register a hook as a

  • forward prehook (executing before the forward pass),
  • forward hook (executing after the forward pass),
  • backward hook (executing after the backward pass).
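In code, the three registrations look roughly like this. This is a minimal sketch: the layer and the hook bodies are placeholders, but the method names and hook signatures are the actual PyTorch API.

import torch.nn as nn

def forward_pre_hook(module, inputs):
    # runs before module.forward(); inputs is a tuple of the positional arguments
    pass

def forward_hook(module, inputs, output):
    # runs after module.forward(), with access to both the inputs and the output
    pass

def backward_hook(module, grad_input, grad_output):
    # runs after the gradients with respect to the module have been computed
    pass

layer = nn.Conv2d(3, 16, kernel_size=3)
layer.register_forward_pre_hook(forward_pre_hook)
layer.register_forward_hook(forward_hook)
layer.register_backward_hook(backward_hook)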

It might sound complicated at first, so let’s take a look at a concrete example!

An example: saving the outputs of each convolutional layer

Suppose that we want to inspect the output of each convolutional layer in a ResNet34 architecture. This task is perfectly suited to hooks. In the next part, I will show you how this can be done. If you would like to follow along interactively, you can find the accompanying Jupyter notebook at https://github.com/cosmic-cortex/pytorch-hooks-tutorial .

Our model is defined by the following.
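A minimal sketch, assuming the pretrained ResNet34 that ships with torchvision:

import torch
from torchvision.models import resnet34

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# pretrained ResNet34; newer torchvision versions use the weights= argument
# instead of pretrained=True
model = resnet34(pretrained=True).to(device)
model.eval()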

Creating a hook to save outputs is very simple; a basic callable object is perfectly adequate for our purposes.
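A sketch of such a callable, consistent with how save_output is used later in this article:

class SaveOutput:
    def __init__(self):
        self.outputs = []

    def __call__(self, module, module_in, module_out):
        # forward hook signature: (module, inputs, output)
        self.outputs.append(module_out)

    def clear(self):
        self.outputs = []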

An instance of SaveOutput will simply record the output tensor of the forward pass and store it in a list.

A forward hook can be registered with the register_forward_hook(hook) method. (For the other types of hooks, we have register_backward_hook and register_forward_pre_hook.) The return value of these methods is the hook handle, which can be used to remove the hook from the module.

Now we register the hook to each convolutional layer.
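Assuming the model and the SaveOutput class defined above, the registration loop can look like this:

save_output = SaveOutput()
hook_handles = []

for layer in model.modules():
    if isinstance(layer, torch.nn.Conv2d):
        handle = layer.register_forward_hook(save_output)
        hook_handles.append(handle)

Keeping the handles around lets us detach the hooks later with handle.remove(), which matters if you don't want them to keep firing (and accumulating tensors) during subsequent passes.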

When this is done, the hook will be called after each forward pass of each convolutional layer. To test it out, we are going to use the following image.


Photo by Manja Vitolic on Unsplash

The forward pass:
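A minimal sketch, assuming the image above is saved locally as cat.jpg (the file name is illustrative):

from PIL import Image
from torchvision import transforms

image = Image.open('cat.jpg')
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
X = transform(image).unsqueeze(0).to(device)  # add a batch dimension

with torch.no_grad():
    out = model(X)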

As expected, the outputs were stored properly.

>>> len(save_output.outputs)
36

By inspecting the tensors in this list, we can visualize what the network sees.
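One way to do this, sketched with matplotlib (the 4×4 grid is an arbitrary choice):

import matplotlib.pyplot as plt

# output of the first convolutional layer: shape (1, 64, 112, 112)
# for ResNet34 with a 224x224 input
first_layer_output = save_output.outputs[0].detach().cpu().numpy()

# show the first 16 feature maps in a 4x4 grid
fig, axes = plt.subplots(4, 4, figsize=(8, 8))
for idx, ax in enumerate(axes.flat):
    ax.imshow(first_layer_output[0, idx])
    ax.axis('off')
plt.show()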


Outputs of the first layer of ResNet34.

Just out of curiosity, we can check what happens deeper in the network. As we go deeper, the learned features become increasingly high-level. For instance, there is a filter that seems to be responsible for detecting eyes.


Outputs of the 16th convolutional layer of ResNet34.
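The same plotting code from above works here; only the index into the saved outputs changes:

# output of the 16th convolutional layer (index 15 in the list)
deep_output = save_output.outputs[15].detach().cpu().numpy()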

Going beyond

Of course, this is just the tip of the iceberg. Hooks can do much more than simply store the outputs of intermediate layers. For instance, neural network pruning, a technique to reduce the number of parameters, can also be performed with hooks.

To summarize, hooks are a very useful technique to learn if you want to enhance your workflow. With them under your belt, you'll be able to do much more, and do it more effectively.

