Smart Hyperparameter Optimization of any Deep Learning model using TPU and Talos
Keras API + Colab Tensor Processing Unit + Talos
What is TPU?
A tensor processing unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google specifically for neural network machine learning. TPUs are optimized to perform fast, high-volume matrix multiplications. According to Google, TPUs are 15x to 30x faster than GPUs and CPUs – source.
Prepping of TPU environment in Colab:
Create a new Python notebook in Google Colab and make sure to change the runtime type to TPU; you'll be allocated ~12.72 GB of RAM. You can then increase the memory allocation to ~35.35 GB of RAM by running the following snippet.
d = []
while True:
    d.append('1')
The snippet above keeps appending to a list until it exhausts the available RAM and crashes the runtime. When the crash dialog appears, click 'Get more RAM' to restart the session with the larger allocation. For more such tips, feel free to refer to my previous blog.
TPU + Talos Pipeline:
I'm using the Kaggle IEEE-CIS Fraud Detection competition as the working example. Below I break down, step by step, a fast hyperparameter-optimization pipeline for a deep learning model in Colab using a TPU.
1–3. Preparation of data for Modeling:
I've reused the first three steps from my previous blog, Automate Kaggle Competition with the help of Google Colab, namely:
- Downloading the datasets from API calls.
- Pre-Processing and Data Wrangling.
- Feature Engineering and Feature Selection.
4. Scaling the Data:
Next, we scale the data so that the numeric columns in the dataset share a common scale. Since the features span very different ranges, scaling is essential before feeding them into a deep neural network.
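As a minimal sketch (assuming the engineered features are already in arrays named X_train and X_test; the exact preprocessing in the notebook may differ), scikit-learn's StandardScaler does the job:

from sklearn.preprocessing import StandardScaler

# Fit the scaler on the training data only, then apply the same
# transformation to the test data to avoid leakage.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)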
5. TPU Initialization:
To use the TPU effectively and exploit all the workers and cores that the Colab TPU provides, we need to initialize the TPU system and create a TPU strategy object, which is used later during model building.
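For TensorFlow 2.x the initialization typically looks like the sketch below (the original notebook may target an older API; in TF versions before 2.3, tf.distribute.experimental.TPUStrategy is used instead):

import tensorflow as tf

# Resolve the TPU address that Colab exposes to the runtime.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# The strategy object distributes model building and training across all TPU cores.
strategy = tf.distribute.TPUStrategy(resolver)
print('Number of TPU replicas:', strategy.num_replicas_in_sync)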
6. Model Building:
To run Talos hyperparameter optimization, we first need to define a deep learning model. I've used Keras because it offers a high-level API with TensorFlow as its backend. The model is a Keras Sequential network with two hidden layers and an output layer that uses sigmoid as its activation function.
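Here is a sketch of such a model function in the shape Talos expects: it must accept the data splits plus a params dict and return the History object and the model. The layer sizes and parameter names are illustrative, not necessarily the exact ones from the notebook, and strategy is the TPU strategy object created in step 5:

from tensorflow import keras

def ieee_fraud_model(x_train, y_train, x_val, y_val, params):
    # Build and compile the model inside strategy.scope() so it runs on the TPU.
    with strategy.scope():
        model = keras.Sequential([
            keras.layers.Dense(params['first_neuron'],
                               activation=params['activation'],
                               input_shape=(x_train.shape[1],)),
            keras.layers.Dropout(params['dropout']),
            keras.layers.Dense(params['second_neuron'],
                               activation=params['activation']),
            keras.layers.Dense(1, activation='sigmoid'),
        ])
        model.compile(optimizer=params['optimizer'],
                      loss='binary_crossentropy',
                      metrics=['binary_accuracy'])

    history = model.fit(x_train, y_train,
                        validation_data=(x_val, y_val),
                        batch_size=params['batch_size'],
                        epochs=params['epochs'],
                        verbose=0)
    # Talos expects the History object and the fitted model back.
    return history, model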
7. Parameter Grid:
The parameter grid you choose depends on various factors such as the data and how much time you can spend on modeling. I've used the following grid for the hyperparameter optimization.
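The actual grid lives in the Colab notebook; a representative Talos-style grid matching the parameter names used in the model sketch above might look like this:

# Talos takes a plain dict mapping each hyperparameter to a list of candidates.
params = {
    'first_neuron': [64, 128, 256],
    'second_neuron': [32, 64],
    'activation': ['relu', 'elu'],
    'dropout': [0.1, 0.25, 0.5],
    'optimizer': ['adam', 'nadam'],
    'batch_size': [512, 1024],
    'epochs': [10, 20],
}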
8. Talos hyperparameter scanning:
Then we scan over the parameter grid, tracking the metric and loss defined for the model. We use the following code to run the scan:
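A minimal invocation looks like the sketch below, assuming x_train and y_train are the prepared training arrays. Note that the Scan arguments differ slightly between Talos versions (older releases used dataset_name and experiment_no instead of experiment_name):

import talos

# Scan the grid: Talos trains one model per parameter combination
# and records the results on the returned Scan object.
scan_object = talos.Scan(x=x_train,
                         y=y_train,
                         params=params,
                         model=ieee_fraud_model,
                         experiment_name='ieee_fraud')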
We pass ieee_fraud_model as the model to scan. Because the model is built inside strategy.scope(), all the TPU workers participate in building and training it, which makes the whole process roughly 15–20x faster.
In the Colab notebook I've also included a Talos-only version without the TPU, in case you want to compare, or in case a TPU isn't available.
9. Prediction:
From all the models scanned, we then select the best one according to the chosen metric ('binary_accuracy' in this case) and use it to generate the predictions for the submission file with the following code.
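One way to do this is via the Scan object's best_model helper; treat this as illustrative, since helper names vary between Talos versions, and submission here is an assumed DataFrame holding the sample-submission template:

# Pick the model with the highest binary_accuracy from the scan results.
best_model = scan_object.best_model(metric='binary_accuracy', asc=False)

# Predict fraud probabilities for the test set and write the submission file.
submission['isFraud'] = best_model.predict(X_test)
submission.to_csv('submission.csv', index=False)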
10. Save and Restore the model for submission file:
Further, you can deploy the best model (chosen by the same metric) as a zip file, and later restore it to make predictions and build the submission file from them.
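With Talos this is a two-step affair, sketched below (the package name 'ieee_fraud_deploy' is illustrative):

import talos

# Package the best model (weights, architecture, results) into a zip file.
talos.Deploy(scan_object, 'ieee_fraud_deploy', metric='binary_accuracy')

# Later, even in a fresh session, restore the package and predict with it.
restored = talos.Restore('ieee_fraud_deploy.zip')
predictions = restored.model.predict(X_test)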
Please check the detailed Kaggle pipeline in this Colab; I received a score of ~0.91 on the submission file. You can go further by defining other architectures in the model-building step, such as Keras autoencoders or networks with more layers, and by expanding the parameter grid.
Conclusion:
Building a deep neural network is a time-consuming process, and hyperparameter tuning of the defined network can take days for big datasets like this one (~600,000 observations and 394 features). By using the Tensor Processing Unit to the fullest, we can drastically reduce the time needed to build any deep learning model, with better results.