Smart Hyperparameter Optimization of any Deep Learning model using TPU and Talos


Keras API + Colab Tensor Processing Unit + Talos

What is a TPU?

A Tensor Processing Unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google specifically for neural network machine learning. TPUs are optimized to perform large matrix multiplications quickly, and Google says they are 15x to 30x faster than GPUs and CPUs – source .

Prepping the TPU environment in Colab:

Create a new Python notebook in Google Colab and make sure to change the runtime type to TPU; you’ll be allocated ~12.72 GB of RAM. You can then increase the allocation to ~35.35 GB of RAM by running the following snippet.

d = []
while True:
    d.append('1')  # grow the list until the runtime exhausts its RAM and crashes

The loop above keeps expanding the list until the runtime runs out of memory and crashes. After the crash, click ‘Get more RAM’ when Colab offers it, and you’ll be switched to the larger allocation. For more such tips, feel free to refer to my previous blog .

TPU + Talos Pipeline:

I’m using the Kaggle IEEE-CIS Fraud Detection competition as the working example. I’ll now break down, step by step, a fast hyperparameter-optimization deep learning pipeline in Colab using the TPU.

1–3. Preparation of data for Modeling:

I’ve reused the first three steps from my previous blog, Automate Kaggle Competition with the help of Google Colab , namely:

  1. Downloading the datasets via Kaggle API calls (a minimal sketch follows this list).
  2. Pre-Processing and Data Wrangling.
  3. Feature Engineering and Feature Selection.
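
For completeness, here is a minimal sketch of step 1 as a Colab cell. It assumes you have already uploaded your kaggle.json API token to the session’s working directory; the exact file names inside the downloaded archive may differ.

# Colab cell: fetch the IEEE-CIS Fraud Detection data via the Kaggle API
# (assumes kaggle.json was uploaded to the session beforehand)
!pip install -q kaggle
!mkdir -p ~/.kaggle && cp kaggle.json ~/.kaggle/ && chmod 600 ~/.kaggle/kaggle.json
!kaggle competitions download -c ieee-fraud-detection
!unzip -q '*.zip'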

4. Scaling the Data:

In addition, we scale the data so that the numeric columns share a common scale. Since the features span very different ranges, it is important to scale the data before feeding it to a deep neural network.
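
A minimal sketch using scikit-learn’s StandardScaler; the variable names X_train and X_test are assumptions, not the notebook’s exact identifiers.

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # fit statistics on training data only
X_test_scaled = scaler.transform(X_test)        # reuse the same statistics on test data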

5. TPU Initialization:

To use the TPU effectively, with all the workers and cores Colab provides, we need to initialize the TPU system. The following code creates a TPU strategy object that will be used later during model building.
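
A minimal TensorFlow 2.x initialization sketch; note that in older TF 2.x releases the strategy class lives at tf.distribute.experimental.TPUStrategy instead.

import tensorflow as tf

# Locate the Colab-provided TPU and initialize its workers
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# This strategy object is used later to place model building on the TPU
strategy = tf.distribute.TPUStrategy(resolver)
print('Number of replicas:', strategy.num_replicas_in_sync)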

6. Model Building:

To run Talos hyperparameter optimization, we first need to define a deep learning model. I’ve used Keras, which offers a high-level API on top of a TensorFlow backend. I’ll use a Keras Sequential model with two hidden layers and an output layer with a sigmoid activation.
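
Here is a sketch of the model function in the form Talos expects: it takes the train/validation splits plus a params dict and returns (history, model). The parameter keys such as 'first_neuron' are illustrative assumptions, and strategy comes from the TPU initialization step above.

from tensorflow import keras

def ieee_fraud_model(x_train, y_train, x_val, y_val, params):
    # Build inside strategy.scope() so all TPU workers are used (see step 8)
    with strategy.scope():
        model = keras.Sequential([
            keras.layers.Dense(params['first_neuron'],
                               activation=params['activation'],
                               input_shape=(x_train.shape[1],)),
            keras.layers.Dropout(params['dropout']),
            keras.layers.Dense(params['second_neuron'],
                               activation=params['activation']),
            keras.layers.Dense(1, activation='sigmoid'),  # fraud / not-fraud output
        ])
        model.compile(optimizer=keras.optimizers.Adam(params['lr']),
                      loss='binary_crossentropy',
                      metrics=['binary_accuracy'])
    history = model.fit(x_train, y_train,
                        validation_data=(x_val, y_val),
                        batch_size=params['batch_size'],
                        epochs=params['epochs'],
                        verbose=0)
    return history, model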

7. Parameter Grid:

The parameter grid you choose can depend on various factors, such as the data and the time available for modeling. I’ve considered the following grid for the hyperparameter optimization.
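
The values below are an illustrative stand-in for the notebook’s grid, matching the parameter keys used in the model function above.

params = {
    'first_neuron': [64, 128, 256],
    'second_neuron': [32, 64],
    'activation': ['relu', 'elu'],
    'dropout': [0.1, 0.25, 0.5],
    'lr': [1e-3, 1e-4],
    'batch_size': [1024, 2048],
    'epochs': [10, 20],
}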

8. Talos hyperparameter scanning:

Next, we scan across the parameter grid, tracking the metric and loss defined in the grid. We use the following code to scan the parameter grid, where:

We pass ieee_fraud_model as the model to scan. Since the model is built inside strategy.scope(), it uses all of the TPU workers, which makes model building roughly 15x-20x faster.
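
A minimal scan sketch; the fraction_limit argument, which samples a fraction of all grid permutations to keep the scan tractable, is an optional assumption here.

import talos

scan_object = talos.Scan(x=X_train_scaled,
                         y=y_train,                 # numpy array of 0/1 fraud labels
                         params=params,
                         model=ieee_fraud_model,
                         experiment_name='ieee_fraud',
                         fraction_limit=0.1)        # try 10% of all permutations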

Talos scan live plot progress

If you are interested, my Colab notebook also includes the code for Talos alone (without TPU), for comparison or for when no TPU is available.

9. Prediction:

Then, from all the models scanned, we select the best one according to the metric (‘binary_accuracy’ in this case) and use it to generate predictions for the submission file with the following code.
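
A sketch using Talos’s Predict helper, which picks the best model from the scan by the given metric; scan_object and X_test_scaled come from the earlier steps.

import talos

predict_object = talos.Predict(scan_object)
# asc=False selects the model with the highest binary_accuracy
preds = predict_object.predict(X_test_scaled, metric='binary_accuracy', asc=False)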

10. Save and Restore the Model for the Submission File:

Further, you can deploy the best model (selected by the metric) as a zip file, and later restore it to make predictions and create the submission file from them.
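
A sketch with Talos’s Deploy and Restore utilities; the model_name 'ieee_fraud_best' is an assumption.

import talos

# Package the best model (by binary_accuracy) into ieee_fraud_best.zip
talos.Deploy(scan_object=scan_object,
             model_name='ieee_fraud_best',
             metric='binary_accuracy',
             asc=False)

# Later: restore the packaged model and predict for the submission file
restored = talos.Restore('ieee_fraud_best.zip')
submission_preds = restored.model.predict(X_test_scaled)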

Please check the detailed Kaggle pipeline in this colab ; I received a score of ~0.91 on the submission file. You can take this further with Keras autoencoders, extra layers, a larger grid, or any other Keras DNN defined in the model-building step.

Conclusion:

Building a deep neural network is a time-consuming process, and hyperparameter tuning of the defined network can take days for big datasets like this one (~600,000 observations and 394 features). By using the Tensor Processing Unit to the fullest, we can drastically reduce the time it takes to build any deep learning model, with better results.
