Smart Hyperparameter Optimization of any Deep Learning model using TPU and Talos


Keras API + Colab Tensor Processing Unit + Talos

What is a TPU?

A tensor processing unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google specifically for neural network machine learning. TPUs are optimized to perform fast, bulk matrix multiplications. Google says TPUs are 15x to 30x faster than contemporary GPUs and CPUs – source.

Prepping the TPU environment in Colab:

Create a new Python notebook in Google Colab and make sure to change the runtime type to TPU; you’ll be allocated ~12.72 GB of RAM. You can then increase the allocation to ~35.35 GB of RAM by running the following snippet.

d = []
while True:
    d.append('1')  # keep appending until the runtime exhausts its RAM and crashes

The snippet above keeps appending to a list until it exhausts the allocated RAM and crashes the runtime. After the crash, click ‘Get more RAM’ in the dialog that appears. For more such tips, feel free to refer to my previous blog.

TPU + Talos Pipeline:

I’m using the Kaggle IEEE-CIS Fraud Detection competition as the example. I’ll now break down, step by step, a fast hyperparameter-optimization deep learning pipeline in Colab using a TPU.

1–3. Preparation of data for Modeling:

I’ve reused the first three steps from my previous blog, Automate Kaggle Competition with the help of Google Colab, namely:

  1. Downloading the datasets from API calls.
  2. Pre-Processing and Data Wrangling.
  3. Feature Engineering and Feature Selection.

4. Scaling the Data:

In addition, we’ll scale the data to bring the values of the numeric columns in the dataset to a common scale. Since the features have very different ranges, it is important to scale the data before feeding it to a deep neural network.
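A minimal sketch of this step, assuming scikit-learn’s StandardScaler and train/test feature matrices X_train and X_test (the scaler choice isn’t shown in the original, so treat it as an assumption):

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)  # learn scaling statistics from training data only
X_test = scaler.transform(X_test)        # reuse the same statistics on the test set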

5. TPU Initialization:

To use the TPU effectively, with all the workers and cores provided by the Colab TPU, we need to initialize the TPU system. The following code initializes a TPU strategy object, which will be used later in model building.
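A minimal sketch of that initialization, assuming TensorFlow 2.x APIs (the original gist isn’t reproduced here):

import tensorflow as tf

# detect and connect to the Colab-provided TPU
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# the strategy object distributes model building across all TPU workers
strategy = tf.distribute.TPUStrategy(resolver)
print('TPU replicas:', strategy.num_replicas_in_sync)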

6. Model Building:

For Talos hyperparameter optimization, we first need to define a deep learning model. I’ve used Keras, since it provides a high-level API with TensorFlow as the backend. I’ll use a Keras Sequential model with two hidden layers and an output layer with sigmoid as the activation function.
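A sketch of such a model function in the form Talos expects, i.e. taking the data splits plus a params dictionary and returning the fit history and the model. The layer sizes and parameter keys here are illustrative assumptions; strategy is the TPU strategy object from step 5:

from tensorflow import keras

def ieee_fraud_model(x_train, y_train, x_val, y_val, params):
    with strategy.scope():  # build and compile on the TPU workers
        model = keras.Sequential([
            keras.layers.Dense(params['first_neuron'], activation=params['activation'],
                               input_shape=(x_train.shape[1],)),
            keras.layers.Dense(params['second_neuron'], activation=params['activation']),
            keras.layers.Dense(1, activation='sigmoid'),  # binary fraud probability
        ])
        model.compile(optimizer=params['optimizer'],
                      loss='binary_crossentropy',
                      metrics=['binary_accuracy'])
    out = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    batch_size=params['batch_size'],
                    epochs=params['epochs'],
                    verbose=0)
    return out, model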

7. Parameter Grid:

The parameter grid you choose can depend on various factors: the data, the time you can afford for modeling, and so on. I considered a grid along the following lines for the hyperparameter optimization.
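An illustrative grid with keys matching the model function above (the exact values used in the original notebook aren’t shown):

params = {
    'first_neuron': [64, 128, 256],
    'second_neuron': [32, 64],
    'activation': ['relu', 'elu'],
    'optimizer': ['adam', 'nadam'],
    'batch_size': [1024, 2048],
    'epochs': [10, 20],
}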

8. Talos hyperparameter scanning:

Then we scan across the parameter grid, tracking the loss and metric we compiled into the model. We use the following code to scan the parameter grid.
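A minimal sketch of the scan call, assuming Talos 0.6+ (the experiment name is an illustrative choice):

import talos

scan_results = talos.Scan(x=x_train, y=y_train,
                          model=ieee_fraud_model,
                          params=params,
                          experiment_name='ieee_fraud')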

Here we pass ieee_fraud_model as the model to scan. Since the model is built inside strategy.scope(), all the TPU workers are used to build it, which speeds up model building roughly 15x-20x.

[Figure: Talos scan live plot progress]

I’ve also included code for running Talos without a TPU in my Colab notebook, in case you want to compare or no TPU is available.

9. Prediction:

Then, from all the models scanned, we select the best one according to the metric (‘binary_accuracy’ in this case) and use it to generate predictions for the submission file, using the following code.
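A sketch using Talos’s Predict helper (the exact arguments vary between Talos versions, so treat them as assumptions; x_test is the prepared test matrix):

from talos import Predict

# pick the best model from the scan by 'binary_accuracy' and predict with it
best = Predict(scan_results)
preds = best.predict(x_test, metric='binary_accuracy', asc=False)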

10. Save and Restore the model for the submission file:

Further, you can deploy the best model (by the chosen metric) as a zip file, and later restore it to make predictions and create the submission file from them.
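A sketch using Talos’s Deploy and Restore utilities (the model name and metric are illustrative):

import talos

# package the best model by the chosen metric into ieee_fraud_best.zip
talos.Deploy(scan_results, model_name='ieee_fraud_best', metric='binary_accuracy')

# later: restore the packaged model and predict for the submission file
restored = talos.Restore('ieee_fraud_best.zip')
submission_preds = restored.model.predict(x_test)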

Please check the detailed Kaggle pipeline in this Colab; I received a score of ~0.91 on the submission file. We can go further with Keras autoencoders, adding layers, expanding the grid, or defining any other Keras DNN in the model-building step.

Conclusion:

Building a deep neural network is a time-consuming process. Hyperparameter tuning of the defined network can take days for large datasets like this one (~600,000 observations and 394 features). By using the Tensor Processing Unit to the fullest, we can drastically reduce the time needed to build any deep learning model, with better results.
