I have been messing around with Keras and like it so far. There is one big issue I have run into when working with fairly deep networks: when calling model.train_on_batch, model.fit, etc., Keras allocates significantly more GPU memory than the model itself should need. This is not caused by trying to train on some really large images; the network model itself seems to require a lot of GPU memory. I have created this toy example to show what I mean. Here is essentially what is going on:
I first create a fairly deep network and use model.summary() to get the total number of parameters the network needs (in this case 206,538,153, corresponding to about 826 MB). I then use nvidia-smi to see how much GPU memory Keras has allocated, and I can see that it makes perfect sense (849 MiB).
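For reference, that 826 MB figure is just the parameter count times 4 bytes (assuming float32 weights, which is what the Theano GPU backend uses here):

# Size of the weights alone, assuming 4-byte float32 parameters.
n_params = 206538153
print(n_params * 4 / 1e6)      # ~826 MB
print(n_params * 4 / 2.0**20)  # ~788 MiB, close to the 849 MiB that nvidia-smi reports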
I then compile the network and can confirm that this does not increase GPU memory usage. As we can see in this case, I have nearly 1 GB of VRAM available at this point.
Then I try to feed the network a simple 16x16 image and a 1x1 ground truth, and everything blows up, because Keras starts allocating lots of memory again, for no reason that is obvious to me. Something about training the network seems to require much more memory than just holding the model, which does not make sense to me. I have trained significantly deeper networks on this GPU in other frameworks, so that makes me think I am using Keras wrong (or there is something wrong with my setup, or in Keras itself, but of course that is hard to know for sure).
Here is the code:
from scipy import misc
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation, Convolution2D, MaxPooling2D, Reshape, Flatten, ZeroPadding2D, Dropout
import os

model = Sequential()

# A small 16x16x1 input and two conv/pool blocks...
model.add(Convolution2D(256, 3, 3, border_mode='same', input_shape=(16, 16, 1)))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Convolution2D(512, 3, 3, border_mode='same'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))

# ...followed by a long stack of 1024-filter 3x3 convolutions, which is where
# almost all of the ~206M parameters live...
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(Convolution2D(1024, 3, 3, border_mode='same'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))

# ...and a small head down to a single output value.
model.add(Convolution2D(256, 3, 3, border_mode='same'))
model.add(Convolution2D(32, 3, 3, border_mode='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(4))
model.add(Dense(1))

model.summary()
os.system("nvidia-smi")
raw_input("Press Enter to continue...")

model.compile(optimizer='sgd', loss='mse', metrics=['accuracy'])
os.system("nvidia-smi")
raw_input("Compiled model. Press Enter to continue...")

# Train on a single random 16x16 sample with a single scalar target.
n_batches = 1
batch_size = 1
for ibatch in range(n_batches):
    x = np.random.rand(batch_size, 16, 16, 1)
    y = np.random.rand(batch_size, 1)
    os.system("nvidia-smi")
    raw_input("About to train one iteration. Press Enter to continue...")
    model.train_on_batch(x, y)
    print("Trained one iteration")
Which gives me the following output:
Using Theano backend.
Using gpu device 0: GeForce GTX 960 (CNMeM is disabled, cuDNN 5103)
/usr/local/lib/python2.7/dist-packages/theano/sandbox/cuda/__init__.py:600: UserWarning: Your cuDNN version is more recent than the one Theano officially supports. If you see any problems, try updating Theano or downgrading cuDNN to version 5.
  warnings.warn(warn)
____________________________________________________________________________________________________
Layer (type)                      Output Shape          Param #     Connected to
====================================================================================================
convolution2d_1 (Convolution2D)   (None, 16, 16, 256)   2560        convolution2d_input_1[0][0]
maxpooling2d_1 (MaxPooling2D)     (None, 8, 8, 256)     0           convolution2d_1[0][0]
convolution2d_2 (Convolution2D)   (None, 8, 8, 512)     1180160     maxpooling2d_1[0][0]
maxpooling2d_2 (MaxPooling2D)     (None, 4, 4, 512)     0           convolution2d_2[0][0]
convolution2d_3 (Convolution2D)   (None, 4, 4, 1024)    4719616     maxpooling2d_2[0][0]
convolution2d_4 (Convolution2D)   (None, 4, 4, 1024)    9438208     convolution2d_3[0][0]
convolution2d_5 (Convolution2D)   (None, 4, 4, 1024)    9438208     convolution2d_4[0][0]
convolution2d_6 (Convolution2D)   (None, 4, 4, 1024)    9438208     convolution2d_5[0][0]
convolution2d_7 (Convolution2D)   (None, 4, 4, 1024)    9438208     convolution2d_6[0][0]
convolution2d_8 (Convolution2D)   (None, 4, 4, 1024)    9438208     convolution2d_7[0][0]
convolution2d_9 (Convolution2D)   (None, 4, 4, 1024)    9438208     convolution2d_8[0][0]
convolution2d_10 (Convolution2D)  (None, 4, 4, 1024)    9438208     convolution2d_9[0][0]
convolution2d_11 (Convolution2D)  (None, 4, 4, 1024)    9438208     convolution2d_10[0][0]
convolution2d_12 (Convolution2D)  (None, 4, 4, 1024)    9438208     convolution2d_11[0][0]
convolution2d_13 (Convolution2D)  (None, 4, 4, 1024)    9438208     convolution2d_12[0][0]
convolution2d_14 (Convolution2D)  (None, 4, 4, 1024)    9438208     convolution2d_13[0][0]
convolution2d_15 (Convolution2D)  (None, 4, 4, 1024)    9438208     convolution2d_14[0][0]
convolution2d_16 (Convolution2D)  (None, 4, 4, 1024)    9438208     convolution2d_15[0][0]
convolution2d_17 (Convolution2D)  (None, 4, 4, 1024)    9438208     convolution2d_16[0][0]
convolution2d_18 (Convolution2D)  (None, 4, 4, 1024)    9438208     convolution2d_17[0][0]
convolution2d_19 (Convolution2D)  (None, 4, 4, 1024)    9438208     convolution2d_18[0][0]
convolution2d_20 (Convolution2D)  (None, 4, 4, 1024)    9438208     convolution2d_19[0][0]
convolution2d_21 (Convolution2D)  (None, 4, 4, 1024)    9438208     convolution2d_20[0][0]
convolution2d_22 (Convolution2D)  (None, 4, 4, 1024)    9438208     convolution2d_21[0][0]
convolution2d_23 (Convolution2D)  (None, 4, 4, 1024)    9438208     convolution2d_22[0][0]
convolution2d_24 (Convolution2D)  (None, 4, 4, 1024)    9438208     convolution2d_23[0][0]
maxpooling2d_3 (MaxPooling2D)     (None, 2, 2, 1024)    0           convolution2d_24[0][0]
convolution2d_25 (Convolution2D)  (None, 2, 2, 256)     2359552     maxpooling2d_3[0][0]
convolution2d_26 (Convolution2D)  (None, 2, 2, 32)      73760       convolution2d_25[0][0]
maxpooling2d_4 (MaxPooling2D)     (None, 1, 1, 32)      0           convolution2d_26[0][0]
flatten_1 (Flatten)               (None, 32)             0           maxpooling2d_4[0][0]
dense_1 (Dense)                   (None, 4)               132         flatten_1[0][0]
dense_2 (Dense)                   (None, 1)               5           dense_1[0][0]
====================================================================================================
Total params: 206538153
____________________________________________________________________________________________________
None
Thu Oct  6 09:05:42 2016
+------------------------------------------------------+
| NVIDIA-SMI 352.63     Driver Version: 352.63         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 960     Off  | 0000:01:00.0      On |                  N/A |
| 30%   37C    P2    28W / 120W |   1082MiB /  2044MiB |      9%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      1796    G   /usr/bin/X                                     155MiB |
|    0      2597    G   compiz                                          65MiB |
|    0      5966    C   python                                         849MiB |
+-----------------------------------------------------------------------------+
Press Enter to continue...
Thu Oct  6 09:05:44 2016
+------------------------------------------------------+
| NVIDIA-SMI 352.63     Driver Version: 352.63         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 960     Off  | 0000:01:00.0      On |                  N/A |
| 30%   38C    P2    28W / 120W |   1082MiB /  2044MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      1796    G   /usr/bin/X                                     155MiB |
|    0      2597    G   compiz                                          65MiB |
|    0      5966    C   python                                         849MiB |
+-----------------------------------------------------------------------------+
Compiled model. Press Enter to continue...
Thu Oct  6 09:05:44 2016
+------------------------------------------------------+
| NVIDIA-SMI 352.63     Driver Version: 352.63         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 960     Off  | 0000:01:00.0      On |                  N/A |
| 30%   38C    P2    28W / 120W |   1082MiB /  2044MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      1796    G   /usr/bin/X                                     155MiB |
|    0      2597    G   compiz                                          65MiB |
|    0      5966    C   python                                         849MiB |
+-----------------------------------------------------------------------------+
About to train one iteration. Press Enter to continue...
Error allocating 37748736 bytes of device memory (out of memory).
Driver report 34205696 bytes free and 2144010240 bytes total
Traceback (most recent call last):
  File "memtest.py", line 65, in <module>
    model.train_on_batch(x, y)
  File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 712, in train_on_batch
    class_weight=class_weight)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1221, in train_on_batch
    outputs = self.train_function(ins)
  File "/usr/local/lib/python2.7/dist-packages/keras/backend/theano_backend.py", line 717, in __call__
    return self.function(*inputs)
  File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 871, in __call__
    storage_map=getattr(self.fn, 'storage_map', None))
  File "/usr/local/lib/python2.7/dist-packages/theano/gof/link.py", line 314, in raise_with_op
    reraise(exc_type, exc_value, exc_trace)
  File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 859, in __call__
    outputs = self.fn()
MemoryError: Error allocating 37748736 bytes of device memory (out of memory).
Apply node that caused the error: GpuContiguous(GpuDimShuffle{3,2,0,1}.0)
Toposort index: 338
Inputs types: [CudaNdarrayType(float32, 4D)]
Inputs shapes: [(1024, 1024, 3, 3)]
Inputs strides: [(1, 1024, 3145728, 1048576)]
Inputs values: ['not shown']
Outputs clients: [[GpuDnnConv{algo='small', inplace=True}(GpuContiguous.0, GpuContiguous.0, GpuAllocEmpty.0, GpuDnnConvDesc{border_mode='half', subsample=(1, 1), conv_mode='conv', precision='float32'}.0, Constant{1.0}, Constant{0.0}), GpuDnnConvGradI{algo='none', inplace=True}(GpuContiguous.0, GpuContiguous.0, GpuAllocEmpty.0, GpuDnnConvDesc{border_mode='half', subsample=(1, 1), conv_mode='conv', precision='float32'}.0, Constant{1.0}, Constant{0.0})]]
HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.
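One detail worth noting about that traceback: 37,748,736 bytes is exactly the size of a single 1024x1024x3x3 float32 tensor, i.e. one copy of the weights of one of the big convolution layers (the failing GpuContiguous node appears to be making a contiguous copy of the dimshuffled filters for cuDNN):

# The failed allocation matches one 1024 -> 1024, 3x3 filter bank in float32:
print(1024 * 1024 * 3 * 3 * 4)  # 37748736 bytes, the exact figure in the error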
A few things to note:
> I have tried both the Theano and TensorFlow backends. Both have the same problem and run out of memory at the same line. In TensorFlow, Keras seems to preallocate a lot of memory up front (about 1.5 GB), so nvidia-smi does not help us track what is going on there, but I get the same out-of-memory exceptions. Again, this suggests an error in (my usage of) Keras (although it is hard to be certain about such things; it could be something in my setup). (A minimal backend-configuration sketch for taming that preallocation is included after this list.)
> I tried using CNMeM in Theano, which behaves like TensorFlow: it preallocates a large amount of memory (about 1.5 GB), but it crashes in the same place.
> There are some warnings about the cuDNN version. I tried running the Theano backend with CUDA but without cuDNN and got the same errors, so that is not the source of the problem.
> If you want to test this on your own GPU, you may need to make the network deeper or shallower depending on how much GPU memory you have to test with.
> My configuration: Ubuntu 14.04, GeForce GTX 960, CUDA 7.5.18, cuDNN 5.1.3, Python 2.7, Keras 1.1.0 (installed via pip).
> I have tried compiling the model with different optimizers and losses, but that does not seem to change anything.
> I have tried replacing the train_on_batch call with fit, but it has the same problem.
> I saw a similar question on StackOverflow – Why does this Keras model require over 6GB of memory? – but as far as I can tell, I do not have those issues in my configuration. I have never had multiple versions of CUDA installed, and I have double-checked my PATH, LD_LIBRARY_PATH and CUDA_ROOT variables more times than I can count.
> Julius suggested that the activation parameters themselves take up GPU memory. If this is true, can somebody explain it a bit more clearly? I have tried changing the activation functions of my convolution layers to functions that are clearly hard-coded with no learnable parameters as far as I can tell, and that does not change anything. It also seems unlikely that these parameters would take up almost as much memory as the rest of the network itself. (A rough estimate of what activations and gradients alone would occupy is sketched after this list.)
> After thorough testing, the largest network I can train has about 453 MB of parameters, out of my ~2 GB of GPU RAM. Is this normal?
> After testing Keras on some smaller CNNs that do fit on my GPU, I can see sudden spikes in GPU RAM usage. If I run a network with about 100 MB of parameters, 99% of the time during training it uses less than 200 MB of GPU RAM, but every once in a while the memory usage spikes to about 1.3 GB. It seems safe to assume that these spikes are what is causing my problems. I have never seen such spikes in other frameworks, but they might be there for a good reason? If anybody knows what causes them, and whether there is a way to avoid them, please chime in! (A profiling sketch for tracking them down follows this list.)
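On the backend preallocation mentioned above (TensorFlow grabbing ~1.5 GB up front, and CNMeM doing the same under Theano): a minimal configuration sketch for making nvidia-smi readings meaningful again. It does not fix the out-of-memory error itself, and it assumes the Keras 1.x TensorFlow backend's set_session helper, TensorFlow's allow_growth option, and Theano's lib.cnmem flag as they existed around these versions.

# TensorFlow backend: grow GPU allocations on demand instead of preallocating
# most of the card, so nvidia-smi reflects what the model actually uses.
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
set_session(tf.Session(config=config))

# Theano backend: the corresponding knob is the CNMeM pool size, set via
# THEANO_FLAGS before Python starts, e.g.
#   THEANO_FLAGS="device=gpu,floatX=float32,lib.cnmem=0.8" python memtest.py
# (lib.cnmem=0 disables the pool entirely, which matches the "CNMeM is disabled"
# run shown above.)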
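On the question of whether activations take up memory: a rough back-of-the-envelope sketch, assuming float32 everywhere and the Keras 1.x layer.output_shape and layer.count_params() attributes. It only counts the weights, one gradient buffer of the same total size (which SGD needs at minimum), and one stored forward activation per layer (which backprop needs), so it is a lower bound rather than what the backend actually allocates (convolution workspaces and temporaries come on top).

import numpy as np

def rough_training_footprint(model, batch_size=1, bytes_per_float=4):
    # Weights, plus one gradient buffer of the same total size.
    weight_bytes = sum(layer.count_params() for layer in model.layers) * bytes_per_float
    grad_bytes = weight_bytes
    # One stored forward activation per layer, kept around for backprop.
    act_elements = 0
    for layer in model.layers:
        output_shape = layer.output_shape  # e.g. (None, 4, 4, 1024)
        act_elements += batch_size * np.prod(output_shape[1:])
    act_bytes = act_elements * bytes_per_float
    return weight_bytes, grad_bytes, act_bytes

w, g, a = rough_training_footprint(model)
print("weights %.0f MB, gradients %.0f MB, activations %.0f MB"
      % (w / 1e6, g / 1e6, a / 1e6))

If this accounting is roughly right, the toy model already needs ~826 MB of weights plus another ~826 MB of gradients before activations or workspaces are counted, which would also be consistent with ~453 MB of parameters being the practical ceiling on a 2 GB card that is also driving a display.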
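On the memory spikes: one way to see which graph nodes the large allocations come from, assuming the Theano backend and Theano's profile/profile_memory flags (a sketch of the approach, not something I have used to pin this down yet):

import os
# Theano reads THEANO_FLAGS when it is first imported, so set this before
# importing keras/theano anywhere in the process.
os.environ["THEANO_FLAGS"] = "device=gpu,floatX=float32,profile=True,profile_memory=True"

import keras  # imported after the flags so the profiler is active

# ...build and train the model exactly as in the script above. When the process
# exits, Theano prints a per-apply-node profile that includes memory statistics,
# which should show where the large transient allocations happen (presumably
# convolution gradients / workspaces, but that is exactly the thing to verify).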