Smart Hyperparameter Optimization of any Deep Learning model using TPU and Talos
Keras API + Colab Tensor Processing Unit + Talos
What is a TPU?
A tensor processing unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google specifically for neural-network machine learning. TPUs are optimized to perform fast, bulky matrix multiplications, and Google says they are 15x to 30x faster than GPUs and CPUs (source).
Prepping the TPU environment in Colab:
Create a new Python notebook in Google Colab and make sure to change the runtime to TPU. You’ll be allocated ~12.72 GB of RAM; you can then increase the allocation to ~35.35 GB by running the following snippet.
d = []
while True:
    d.append('1')  # keep growing the list until the runtime exhausts its RAM and crashes
The snippet above keeps expanding a list (via append) until it exhausts the allocated RAM and crashes the runtime. After the crash, click ‘Get more RAM’ when Colab offers it. For more such tips, feel free to refer to my previous blog.
TPU + Talos Pipeline:
I’m using the Kaggle IEEE-CIS Fraud Detection competition as the example. I’ll now break down, step by step, a fast hyperparameter-optimization deep learning pipeline in Colab using a TPU.
1–3. Preparation of data for Modeling:
I’ve reused the first three steps from my previous blog, Automate Kaggle Competition with the help of Google Colab, namely:
- Downloading the datasets from API calls.
- Pre-Processing and Data Wrangling.
- Feature Engineering and Feature Selection.
4. Scaling the Data:
In addition, we’ll scale the data to bring the values of the numeric columns in the dataset to a common scale. Since the features span very different ranges, it is important to scale the data before feeding it to a deep neural network.
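A minimal sketch using scikit-learn’s StandardScaler (the X_train/X_test names are placeholders for the feature matrices produced in steps 1–3):

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # fit on training data only
X_test_scaled = scaler.transform(X_test)        # reuse the same scale to avoid leakage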
5. TPU Initialization:
To use the TPU effectively, with all the workers and cores Colab provides, we need to initialize the TPU system. The following code creates a TPU strategy object, which is used later during model building.
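A minimal sketch for recent TensorFlow 2.x (the exact calls shift slightly between TF versions, so treat this as an assumption to verify against your runtime):

import tensorflow as tf

# Connect to the TPU cluster Colab attaches to the runtime.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# The strategy object distributes work across all TPU cores.
strategy = tf.distribute.TPUStrategy(resolver)
print('Replicas:', strategy.num_replicas_in_sync)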
6. Model Building:
To run Talos hyperparameter optimization, we first need to define a deep learning model. I’ve used Keras, as it offers a high-level API with TensorFlow as the backend. I’ll use a Keras Sequential model with two hidden layers and an output layer with sigmoid as the activation function, as sketched below.
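A sketch of a Talos-compatible model function (Talos expects the signature (x_train, y_train, x_val, y_val, params) and a (history, model) return value; the layer sizes and params keys here are illustrative assumptions, not the exact ones from the notebook):

from tensorflow import keras
from tensorflow.keras import layers

def ieee_fraud_model(x_train, y_train, x_val, y_val, params):
    # Build and compile inside strategy.scope() so the model lands on the TPU.
    with strategy.scope():
        model = keras.Sequential([
            layers.Dense(params['first_neuron'], activation=params['activation'],
                         input_shape=(x_train.shape[1],)),
            layers.Dropout(params['dropout']),
            layers.Dense(params['second_neuron'], activation=params['activation']),
            layers.Dense(1, activation='sigmoid'),  # fraud / not-fraud probability
        ])
        model.compile(optimizer=params['optimizer'],
                      loss='binary_crossentropy',
                      metrics=['binary_accuracy'])

    history = model.fit(x_train, y_train,
                        validation_data=(x_val, y_val),
                        batch_size=params['batch_size'],
                        epochs=params['epochs'],
                        verbose=0)
    return history, model  # Talos expects the History object and the model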
7. Parameter Grid:
The parameter grid you choose depends on factors like the data and the time you can spend on modeling. I’ve considered the following grid for the hyperparameter optimization.
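An illustrative grid (the keys must match the params lookups in the model function; the exact values in the original notebook may differ):

p = {
    'first_neuron': [64, 128, 256],
    'second_neuron': [32, 64],
    'activation': ['relu', 'elu'],
    'dropout': [0.1, 0.25],
    'optimizer': ['adam', 'rmsprop'],
    'batch_size': [1024, 2048],
    'epochs': [10, 20],
}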
8. Talos hyperparameter scanning:
Then we scan the parameter grid, tracking the metric and loss defined in the model. The scan call looks roughly as follows.
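A sketch of the scan (experiment_name is a label of my choosing; check the talos.Scan signature for your installed version):

import talos

scan_object = talos.Scan(x=X_train_scaled,
                         y=y_train,
                         params=p,
                         model=ieee_fraud_model,
                         experiment_name='ieee_fraud')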
We pass ieee_fraud_model as the model to scan. Because the model is built inside strategy.scope(), Talos uses all the TPU workers when building each candidate, which speeds up model building by roughly 15x–20x.
I’ve also included Talos-only code (without the TPU) in my Colab notebook, in case you want a comparison or no TPU is available.
9. Prediction:
Then, from all the models scanned, we select the best one according to the metric (‘binary_accuracy’ in this case) and use it to generate the predictions for the submission file with the following code.
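A hedged sketch using the Talos Scan object’s best_model utility (sample_submission.csv and the isFraud column follow the competition’s file format; verify the method names against your Talos version):

import pandas as pd

# Pick the model with the highest validation binary accuracy from the scan.
best_model = scan_object.best_model(metric='binary_accuracy', asc=False)
preds = best_model.predict(X_test_scaled).ravel()

submission = pd.read_csv('sample_submission.csv')
submission['isFraud'] = preds
submission.to_csv('submission.csv', index=False)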
10. Save and Restore the model for submission file:
Further, you can deploy the model selected by the metric as a zip file, then restore it later to make predictions and create the submission file from them.
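A sketch using Talos’s Deploy/Restore pair (the package name ‘ieee_fraud_deploy’ is my placeholder; API per Talos ~0.6, so confirm against your version):

# Package the best model (by binary accuracy) into ieee_fraud_deploy.zip.
talos.Deploy(scan_object, 'ieee_fraud_deploy', metric='binary_accuracy')

# Later, even in a fresh session, restore the model and predict.
restored = talos.Restore('ieee_fraud_deploy.zip')
preds = restored.model.predict(X_test_scaled)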
Please check the detailed Kaggle pipeline in this Colab; I received a score of ~0.91 on the submission file. You can go further by using Keras autoencoders, adding layers, enlarging the grid, or defining any other Keras DNN in the model-building step.
Conclusion:
Building a deep neural network is a time-consuming process. Hyperparameter tuning of the defined network can take days for big datasets like this one (~600,000 observations and 394 features). By using the Tensor Processing Unit to the fullest, we can drastically reduce the time needed to build any deep learning model, with better results.