EfficientNet: Scaling of Convolutional Neural Networks done right



How to intelligently scale a CNN to achieve accuracy gains


Photo by Lidya Nada on Unsplash

Ever since AlexNet won the 2012 ImageNet Challenge, Convolutional Neural Networks have become ubiquitous in the world of Computer Vision. They have even found applications in natural language processing, with state-of-the-art models using convolution operations to retain context and provide better predictions. However, one of the key issues in designing CNNs, as with all other neural networks, is model scaling, i.e. deciding how to increase the model size so as to provide better accuracy.

This is a tedious process, requiring manual trial and error until a sufficiently accurate model is produced that satisfies the resource constraints. The process is resource- and time-consuming and often yields models with sub-optimal accuracy and efficiency.

Taking this issue into consideration, Google released a paper in 2019 introducing a new family of CNNs: EfficientNet. These CNNs not only provide better accuracy but also improve the efficiency of the models by reducing the parameters and FLOPs (floating-point operations) manifold in comparison to state-of-the-art models such as GPipe. The main contributions of this paper are:

  • Designing a simple mobile-size baseline architecture: EfficientNet-B0
  • Providing an effective compound scaling method for increasing the model size to achieve maximum accuracy gains.

EfficientNet-B0 Architecture


Table 1. Architecture Details for the baseline network

The compound scaling method can be generalized to existing CNN architectures such as MobileNet and ResNet. However, choosing a good baseline network is critical for achieving the best results, since the compound scaling method only enhances the network's predictive capacity by replicating the base network's underlying convolutional operations and structure.

To this end, the authors used Neural Architecture Search to build an efficient network architecture, EfficientNet-B0. It achieves 77.3% accuracy on ImageNet with only 5.3M parameters and 0.39B FLOPs (ResNet-50 provides 76% accuracy with 26M parameters and 4.1B FLOPs).

The main building block of this network is MBConv, to which squeeze-and-excitation optimization is added. MBConv is similar to the inverted residual block used in MobileNetV2, which forms a shortcut connection between the beginning and end of a convolutional block. The input activation maps are first expanded using 1x1 convolutions to increase the depth of the feature maps. This is followed by a 3x3 depthwise convolution and a pointwise convolution that reduces the number of channels in the output feature map. The shortcut connections connect the narrow layers, while the wider layers sit between the skip connections. This structure helps decrease both the overall number of operations required and the model size.


Figure 1. Inverted residual block

The code for this block can be summarized as:

from keras.layers import Conv2D, DepthwiseConv2D, Add
def inverted_residual_block(x, expand=64, squeeze=16):
    # expand: 1x1 convolution increases the channel depth of the feature maps
    block = Conv2D(expand, (1, 1), activation='relu')(x)
    # 3x3 depthwise convolution; 'same' padding keeps the spatial size for the skip connection
    block = DepthwiseConv2D((3, 3), padding='same', activation='relu')(block)
    # project: 1x1 pointwise convolution reduces the channels back to `squeeze`
    block = Conv2D(squeeze, (1, 1), activation='relu')(block)
    # shortcut connection (assumes the input x also has `squeeze` channels)
    return Add()([block, x])
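
The text above also mentions that squeeze-and-excitation optimization is added to MBConv. As a minimal sketch of how such a block can be attached to the feature maps (not the exact EfficientNet implementation; the function name and the reduction `ratio` are illustrative assumptions), in Keras:

from keras.layers import GlobalAveragePooling2D, Dense, Reshape, Multiply
def squeeze_excitation(x, channels, ratio=4):
    # squeeze: global average pooling collapses each feature map to one value per channel
    se = GlobalAveragePooling2D()(x)
    # excitation: a small bottleneck MLP produces per-channel weights in [0, 1]
    se = Dense(channels // ratio, activation='relu')(se)
    se = Dense(channels, activation='sigmoid')(se)
    se = Reshape((1, 1, channels))(se)
    # reweight the original feature maps channel-wise
    return Multiply()([x, se])

The idea is that this channel-wise reweighting lets the block emphasize the most informative feature maps at very little extra cost.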

Compound Scaling


Figure 2. Model Scaling. (a) is a baseline network example; (b)-(d) are conventional scaling that only increases one dimension of network width, depth, or resolution. (e) is our proposed compound scaling method that uniformly scales all three dimensions with a fixed ratio.

A convolutional neural network can be scaled in three dimensions: depth, width, and resolution. The depth of the network corresponds to the number of layers in the network. The width is associated with the number of neurons in a layer or, more pertinently, the number of filters in a convolutional layer. The resolution is simply the height and width of the input image. Figure 2 above gives a clearer picture of scaling across these three dimensions.

Increasing the depth, by stacking more convolutional layers, allows the network to learn more complex features. However, deeper networks tend to suffer from vanishing gradients and become difficult to train. Although techniques such as batch normalization and skip connections are effective in resolving this problem, empirical studies suggest that the accuracy gains from only increasing the depth of the network quickly saturate. For instance, ResNet-1000 provides roughly the same accuracy as ResNet-101 despite all the extra layers.

Scaling the width of the network allows layers to learn more fine-grained features. This concept has been used extensively in numerous works such as Wide ResNet and MobileNet. However, as with increasing depth, very wide but shallow networks have difficulty learning complex features, resulting in diminishing accuracy gains.

Higher input resolution provides greater detail about the image and hence enhances the model's ability to reason about smaller objects and extract finer patterns. But like the other scaling dimensions, this too provides limited accuracy gains on its own.
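
To make the three dimensions concrete, here is a toy Keras sketch (not the actual EfficientNet architecture; `build_scaled_cnn` and its base settings are illustrative assumptions) that scales a simple ConvNet by depth, width, and resolution coefficients:

from keras.models import Model
from keras.layers import Input, Conv2D, GlobalAveragePooling2D, Dense
def build_scaled_cnn(d=1.0, w=1.0, r=1.0,
                     base_depth=4, base_width=32, base_res=224, num_classes=1000):
    # resolution scaling: a larger input image
    inputs = Input(shape=(int(base_res * r), int(base_res * r), 3))
    x = inputs
    # depth scaling: more convolutional layers
    for _ in range(int(round(base_depth * d))):
        # width scaling: more filters per layer
        x = Conv2D(int(base_width * w), (3, 3), padding='same', activation='relu')(x)
    x = GlobalAveragePooling2D()(x)
    outputs = Dense(num_classes, activation='softmax')(x)
    return Model(inputs, outputs)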


Figure 3. Scaling Up a Baseline Model with Different Network Width (w), Depth (d), and Resolution (r) Coefficients.

This leads to an important observation:

Observation 1 : Scaling up any dimension of network width, depth, or resolution improves accuracy, but the accuracy gain diminishes for bigger models.


Figure 4. Scaling Network Width for Different Baseline Networks.

This implies that scaling the network for an increase in accuracy should come from a combination of all three dimensions. This is corroborated by the empirical evidence in Figure 4, where the network's accuracy is plotted against increasing width for various depth and resolution settings.

The results show that scaling only one dimension (width) quickly stagnates the accuracy gains. However, coupling this with an increase in the number of layers (depth) or the input resolution enhances the model's predictive capabilities.

These observations are somewhat expected and can be explained by intuition. For instance, if the spatial resolution of the input image is increased, the number of convolutional layers should also be increased so that the receptive field is large enough to span the entire image, which now contains more pixels. This leads to the second observation:

Observation 2: In order to pursue better accuracy and efficiency, it is critical to balance all dimensions of network width, depth, and resolution during ConvNet scaling.

The proposed scaling method

A convolutional neural network can be thought of as a stack or composition of various convolutional layers. Furthermore, these layers can be partitioned into different stages, e.g. ResNet has five stages, and all layers in each stage have the same convolutional type. Therefore, a CNN can be represented mathematically as:

N = ⊙ᵢ₌₁…ₛ Fᵢ^Lᵢ ( X⟨Hᵢ, Wᵢ, Cᵢ⟩ )
Equation 1

where N denotes the network, i represents the stage number, Fᵢ represents the convolution operation for the i-th stage, and Lᵢ represents the number of times Fᵢ is repeated in stage i. Hᵢ, Wᵢ and Cᵢ simply denote the input tensor shape for stage i.

As can be deduced from Equation 1, Lᵢ controls the depth of the network, Cᵢ is responsible for the width of the network, whereas Hᵢ and Wᵢ affect the input resolution. Finding a good set of coefficients to scale these dimensions for each layer individually is practically impossible, since the search space is huge. So, in order to restrict the search space, the authors lay down a set of ground rules.

  • All the layers/stages in the scaled models will use the same convolution operations as the baseline network
  • All layers must be scaled uniformly with a constant ratio

With these rules established, Equation 1 can be parameterized as:

N(d, w, r) = ⊙ᵢ₌₁…ₛ F̂ᵢ^(d·L̂ᵢ) ( X⟨r·Ĥᵢ, r·Ŵᵢ, w·Ĉᵢ⟩ )
Equation 2

where w, d, r are coefficients for scaling network width, depth, and resolution; F̂ᵢ, L̂ᵢ, Ĥᵢ, Ŵᵢ, Ĉᵢ are predefined parameters in the baseline network.

The authors propose a simple, albeit effective scaling technique that uses a compound coefficient ɸ to uniformly scale network width, depth, and resolution in a principled way:

depth: d = α^ɸ,  width: w = β^ɸ,  resolution: r = γ^ɸ
subject to: α · β² · γ² ≈ 2 and α ≥ 1, β ≥ 1, γ ≥ 1

Equation 3

ɸ is a user-defined, global scaling factor (an integer) that controls how many resources are available, whereas α, β, and γ determine how to assign these resources to network depth, width, and resolution respectively. The FLOPs of a convolution operation are proportional to d, w², and r², since doubling the depth doubles the FLOPs, while doubling the width or resolution increases the FLOPs almost four times. So, scaling the network using Equation 3 increases the total FLOPs by (α · β² · γ²)^ɸ. Hence, in order to make sure that the total FLOPs don't exceed 2^ɸ, the constraint α · β² · γ² ≈ 2 is applied. What this means is that if we have twice the resources available, we can simply use a compound coefficient of 1 to scale the FLOPs by 2¹.

The parameters α, β, and γ can be determined using a grid search by setting ɸ = 1 and finding the values that result in the best accuracy. Once found, these parameters are fixed, and the compound coefficient ɸ is increased to obtain larger but more accurate models. This is how EfficientNet-B1 through EfficientNet-B7 are constructed, with the integer at the end of the name indicating the value of the compound coefficient.
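
As a rough numerical sketch of this procedure (using α = 1.2, β = 1.1, γ = 1.15, the values the paper reports from the grid search on the baseline), the per-dimension multipliers for a given ɸ can be computed as:

# compound scaling: derive per-dimension multipliers from a single coefficient phi
alpha, beta, gamma = 1.2, 1.1, 1.15  # grid-searched at phi = 1 (values reported in the paper)

def scaling_coefficients(phi):
    d = alpha ** phi    # depth multiplier
    w = beta ** phi     # width multiplier
    r = gamma ** phi    # resolution multiplier
    return d, w, r

for phi in range(1, 4):
    d, w, r = scaling_coefficients(phi)
    flops = (alpha * beta ** 2 * gamma ** 2) ** phi
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}, FLOPs x{flops:.2f}")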

Results

This technique allowed the authors to produce models that provide higher accuracy than existing ConvNets, and with a monumental reduction in overall FLOPs and model size.


Table 2. Comparison of EfficientNet with existing networks for ImageNet Challenge

This scaling method is generic and can be used with other architectures to effectively scale Convolutional Neural Networks and provide better accuracy.


Table 3. Scaling Up MobileNets and ResNet.
