Understand Local Receptive Fields In Convolutional Neural Networks



Ever wondered why all the neurons in a convolutional neural network aren't connected?

This article is aimed at all levels of individuals that practice machine learning or more specifically deep learning.

Convolutional Neural Networks (CNNs) have characteristics that enable invariance to the affine transformations of images that are fed through the network. This provides the ability to recognize patterns that are shifted, tilted or slightly warped within images.

These characteristics of affine invariance are introduced due to three main properties of the CNN architecture.

  1. Local Receptive Fields
  2. Shared Weights
  3. Spatial Sub-sampling

In this article, we’ll explore local receptive fields to understand their purpose and the advantages they provide within the CNN architecture.

Introduction

Within a CNN architecture, there are compositions of several layers which have within them a set of units or neurons.

These units receive input from a corresponding subsection of the previous layer. In a traditional fully connected feed-forward neural network, by contrast, each unit/neuron within a layer receives input from all units of the prior layer.

Ever wondered why all the neurons in a convolutional neural network aren't connected?

Well, it's rather impractical to connect all units from the previous layer to units within the current layer. The computational resources required to train such a network would be vast due to the increase in connections. Such a network would also require a more extensive training dataset to utilize its full capacity.

But more importantly, each neuron within a CNN is responsible for a defined region of the input data, and this enables neurons to learn patterns such as lines, edges and small details that make up the image.

This defined region of space that a neuron or unit is exposed to in the input data is called the Local Receptive Field.

Receptive Fields

A receptive field is a defined portion of space, or spatial construct, containing the units that provide input to a set of units within a corresponding layer.

The receptive field is defined by the filter size of a layer within a convolutional neural network. It also indicates the scope of the input data that a neuron or unit within a layer is exposed to (see image below).

Example

The image below illustrates the input data (red) with an input volume of 32x32x3.

The input volume essentially tells us that the images within the input data have the dimensions 32x32 (height/width), across three colour channels: Red, Green, and Blue.

The second object in the image (blue) represents a convolutional layer. The conv layer has a filter size of 5x5, which corresponds to the area of the input data that forms the local receptive field of each neuron in the layer.

The receptive field covers not only the spatial area of the input volume but also its full depth, which in this case is 3.

We can derive the number of trainable parameters that each neuron has based on the input volume for the example in the image below. This is the receptive field area multiplied by the depth of the input volume (5x5x3 = 75 trainable parameters).

Suppose we have an input volume of (32, 32, 3) and the receptive field of a convolutional layer is 5x5; each neuron in the convolutional layer will then have weights connecting to a 5x5x3 region, that is, 75 weights per neuron.
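The arithmetic above can be sketched in a few lines of plain Python (the dimensions are the ones from the example; note that real convolutional layers usually also carry one bias term per filter):

```python
# Weights per neuron = receptive field area x input depth
filter_height, filter_width = 5, 5
input_depth = 3  # three colour channels: R, G, B

weights_per_neuron = filter_height * filter_width * input_depth
print(weights_per_neuron)  # 75
```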


Illustration of local receptive fields

The output of a convolutional layer is a set of feature maps. The number of feature maps within a layer is a defined hyperparameter, and the number of connections within a feature map can be derived by multiplying the feature map dimensions by the number of trainable parameters per neuron.
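As a quick sketch of that derivation, using the 32x32x3 input and 5x5 receptive field from the example above (and assuming a stride of 1 with no padding, so each feature map is 28x28):

```python
# A 'valid' convolution with stride 1 shrinks each spatial
# dimension by (kernel_size - 1).
input_size, kernel_size, input_depth = 32, 5, 3

feature_map_size = input_size - kernel_size + 1      # 28
weights_per_neuron = kernel_size ** 2 * input_depth  # 75

# Every unit in the feature map makes one connection per weight.
connections_per_feature_map = feature_map_size ** 2 * weights_per_neuron
print(connections_per_feature_map)  # 58800
```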

The local receptive field is a defined, segmented area of the input data that a neuron within a convolutional layer is exposed to during the convolution process.

The LeNet paper introduced one of the first applications of convolutional neural networks to character recognition. It also introduced the idea and implementation of local receptive fields within CNNs.


Photo by Cole Wyland on Unsplash

But the idea of local receptive fields, or rather of units exposed only to a segment of the input data (local connections), was introduced as early as the 1960s within a study that explored the visual cortex of a cat.

Advantages

The advantage of local receptive fields for recognizing visual patterns lies in the fact that each unit or neuron within a layer is directly tasked with learning visual features from a small region of the input data. This isn't the case in fully connected neural networks, where a unit receives input from every unit in the previous layer.

In the lower layers of a CNN, the units/neurons learn low-level features of the image such as lines, edges and contours. The higher layers learn more abstract features such as shapes, since the region of the image that a unit in a higher layer is exposed to grows as the receptive fields of the lower layers accumulate.
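This accumulation can be sketched with a small recurrence (a minimal illustration; the 3x3 kernel sizes and strides here are hypothetical examples, not taken from the article):

```python
def effective_receptive_field(layers):
    """Each layer is a (kernel_size, stride) pair.
    rf: region of the input seen by one unit at the current depth;
    jump: distance in input pixels between adjacent units."""
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump
        jump *= stride
    return rf

# Two stacked 3x3 convolutions (stride 1) see a 5x5 input region,
# three see 7x7 -- so higher layers cover progressively larger areas.
print(effective_receptive_field([(3, 1), (3, 1)]))          # 5
print(effective_receptive_field([(3, 1), (3, 1), (3, 1)]))  # 7
```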


Neural Network Simulation credit to Denis Dmitriev

To conclude, below is a snippet of code that shows how a convolutional layer is defined using the TensorFlow deep learning Python library.

The Conv2D class constructor takes the argument 'filters', which corresponds to the number of filters in the layer and hence the number of output feature maps. The argument 'kernel_size' takes an integer representing the height and width of the kernel/filter; in this case, the integer 5 corresponds to the dimensions 5x5.

simple_conv_layer = tf.keras.layers.Conv2D(filters=6, kernel_size=5, activation='relu', input_shape=(28,28,1))
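Since Conv2D defaults to strides=1 and padding='valid', the output shape of this layer can be checked by hand with the same arithmetic as before, without running TensorFlow:

```python
# Conv2D defaults: strides=1, padding='valid'
input_height = input_width = 28  # from input_shape=(28, 28, 1)
kernel_size = 5
filters = 6

out_height = input_height - kernel_size + 1  # 24
out_width = input_width - kernel_size + 1    # 24
print((out_height, out_width, filters))      # (24, 24, 6)
```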

I hope you found the article useful.

To connect with me or find more content similar to this article, do the following:

  1. Subscribe to my YouTube channel for video contents coming soon here
  2. Follow me on Medium
  3. Connect and reach me on LinkedIn
