Using data and ML to better track wildfire and assess its threat levels
Source: Using data and ML to better track wildfire and assess its threat levels from Google Cloud
As California’s recent wildfires have shown, it’s often hard to predict where a fire will travel. While firefighters rely heavily on third-party weather data sources like NOAA, they often benefit from complementing weather data with other sources of information. (In fact, there’s a good chance there’s no nearby weather station actively monitoring conditions in and around a wildfire.) How, then, is it possible to leverage modern technology to help firefighters plan for and contain blazes?
Last June we chatted with Aditya Shah and Sanjana Shah, two students at Monta Vista High School in Cupertino, California, who’ve been using machine learning in an effort to better predict the future path of a wildfire. These high school seniors had set about building a fire estimator, based on a model trained in TensorFlow, that measures the amount of dead fuel on the forest floor, a major wildfire risk. This month we checked back in with them to learn more about how they did it.
Why pick this challenge?
Aditya spends a fair bit of time outdoors in the Rancho San Antonio Open Space Preserve near where he lives, and wanted to protect it and other areas of natural beauty so close to home. Meanwhile, after being evacuated from Lawrence Berkeley National Lab in the summer of 2017 due to a nearby wildfire, Sanjana wanted to find a technical solution that reduces the risk of fire before it even occurs. Wildfires not only destroy natural habitat but also displace people, impact jobs, and cause extensive damage to homes and other property. Just as prevention is better than a cure, preventing a potential wildfire is more effective than fighting one.
With a common goal, the two joined forces to explore available technologies that might prove useful. They began by taking photos of the underbrush around the Rancho San Antonio Open Space Preserve, cataloging a broad spectrum of brush samples, from dry and easily ignited to green or wet brush, which would not ignite as easily. In all, they captured 200 photos across three categories of underbrush: gr1 (humid), gr2 (dry shrubs and leaves), and gr3 (no fuel, plain dirt/soil, or burnt fuel).
Aditya then trained a successful model with 150 sample (training) images (roughly 50 in each of the three classes) plus a 50-image test (evaluation) set. For training, Aditya turned to Keras, his preferred Python-based, easy-to-use deep learning library. Training the model in Keras has two benefits: it permits you to export the model as a TensorFlow estimator, which you can run on a variety of platforms and devices, and it allows for easy and fast prototyping, since it runs seamlessly on either CPU or GPU.
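As a hedged illustration of that export path (this is not the students’ published code; model stands for any compiled Keras model, such as the one sketched in the next section):

```python
import tensorflow as tf

# Wrap a compiled Keras model as a TensorFlow Estimator, which can then
# be trained and served across a variety of platforms and devices.
# `model` is assumed to be any compiled Keras model.
estimator = tf.keras.estimator.model_to_estimator(keras_model=model)
```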
Preparing the data
Before training the model, Aditya ran a preprocessing step on the data: resizing and flattening the images. He used an image_to_feature_vector helper, which accepts the raw pixel intensities of an input bitmap image and resizes that image to a fixed size, to ensure each image in the input dataset has the same feature-vector size. As many of the images were of different sizes, Aditya resized all of his captured images to 32×32 pixels. Since his Keras model takes a 1-dimensional feature vector as input, he needed to flatten each 32x32x3 image into a 3072-dimensional feature vector. He then defined the imagePaths to initialize the lists of data and labels, and looped over the imagePaths individually, loading each image from disk with the cv2.imread function. Next, Aditya extracted the class label (gr1, gr2, or gr3) from each image’s filename, converted each image to a feature vector using the image_to_feature_vector function, and updated the data and label lists to match.
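A minimal sketch of this preprocessing step, assuming a flat dataset directory and a gr1_001.jpg-style filename convention (both are assumptions, since the exact layout wasn’t published):

```python
import os
import cv2          # OpenCV, used to load and resize the raw images
import numpy as np

def image_to_feature_vector(image, size=(32, 32)):
    # Resize the raw image to a fixed 32x32 size, then flatten the
    # 32x32x3 pixel intensities into a 3072-dimensional feature vector.
    return cv2.resize(image, size).flatten()

data, labels = [], []
imagePaths = [os.path.join("dataset", f) for f in os.listdir("dataset")]

for imagePath in imagePaths:
    image = cv2.imread(imagePath)                      # raw pixels from disk
    label = os.path.basename(imagePath).split("_")[0]  # "gr1_001.jpg" -> "gr1"
    data.append(image_to_feature_vector(image))
    labels.append(label)

data = np.array(data, dtype="float") / 255.0  # scale intensities to [0, 1]
labels = np.array(labels)
```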
Aditya next discovered that the simplest way to build the model was to linearly stack layers into a sequential model, which simplified the organization of the hidden layers. Aditya was able to use the img2vec function, built into TensorFlow, as well as a support-vector-machine (SVM) layer.
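A hedged sketch of such a model in Keras: a Sequential stack that ends in an SVM-style output layer, here approximated with a linear-activation Dense layer paired with a hinge loss at compile time (the hidden-layer size is an assumption, not the students’ published architecture):

```python
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    # Hidden layer over the 3072-dimensional flattened pixel vectors.
    Dense(768, activation="relu", input_shape=(3072,)),
    # SVM-style output: a linear layer scoring the three classes
    # (gr1, gr2, gr3); the hinge loss is supplied when compiling.
    Dense(3, activation="linear"),
])
```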
Next, Aditya trained the model using a stochastic gradient descent (SGD) optimizer with a learning rate of 0.05. SGD is an iterative optimization method that updates the model by following the loss gradient computed on small random batches of the training data. There are a number of gradient descent methods typically used, including rmsprop, adagrad, adadelta, and adam. Aditya tried rmsprop, which yielded very low accuracy (~47%). Some methods, like adadelta and adagrad, yielded slightly higher accuracy but took more time to run. So Aditya decided to use SGD, as it offered good accuracy with a fast running time. In terms of hyperparameters, Aditya tried different numbers of training epochs (50, 100, 200) and batch_size values (10, 35, 50), and he achieved the highest accuracy (94%) with epochs = 200 and batch_size = 35.
In the testing phase, Aditya was able to achieve 94.11% accuracy utilizing only the raw pixel intensities of the fuel images.
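Putting the pieces together, compiling, training, and evaluating with those settings might look like the following sketch. The categorical_hinge loss stands in for the SVM objective, LabelBinarizer one-hot-encodes the gr1/gr2/gr3 labels, and the split mirrors the 150/50 counts above; all of these are assumptions rather than the students’ exact code:

```python
from keras.optimizers import SGD
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer

# One-hot encode the gr1/gr2/gr3 string labels.
lb = LabelBinarizer()
labels_onehot = lb.fit_transform(labels)

# Hold out a 50-image evaluation set, leaving ~150 images for training.
trainX, testX, trainY, testY = train_test_split(
    data, labels_onehot, test_size=50, random_state=42)

# SGD with the best-performing learning rate (0.05), and a hinge loss
# to pair with the SVM-style linear output layer.
model.compile(optimizer=SGD(lr=0.05),
              loss="categorical_hinge",
              metrics=["accuracy"])

model.fit(trainX, trainY, epochs=200, batch_size=35)

loss, accuracy = model.evaluate(testX, testY)
print("test accuracy: {:.2%}".format(accuracy))
```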
The biggest challenge in the whole process was the data preprocessing step, as it involved accurately labeling the images. This very tedious task took Aditya more than four weeks as he created his training dataset.
Modeling fire based on sensor data
Although Aditya and Sanjana now had a way to classify whether different types of underbrush were ripe for fire, they wondered how they might assess the current status of an entire area of land like the Rancho San Antonio Open Space Preserve.
To do it, the pair settled on a high-definition image sensor that connects over long-range, low-power LTE to capture and relay images to Aditya’s computer, where they can run the model and classify the new inbound images. The device also collects a number of other metrics, including wind speed, wind direction, humidity, and temperature, and can classify an area of roughly 100 square meters (during daylight hours) to determine whether the ground cover is likely to ignite. Aditya and Sanjana are currently collecting sensor data and testing the accuracy of their model at five different sites.
By combining real-time humidity and temperature data with NOAA-based wind speed and direction estimates, Aditya and Sanjana hope to determine in which direction a fire might travel. For the moment, the image-classification and the sensor-based prediction systems operate independently, but they plan to combine them in the future.
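On the image side of that pipeline, classifying a newly relayed sensor capture reduces to the same preprocessing plus a single predict call; a hedged sketch that reuses the model, lb, and image_to_feature_vector names from the sketches above (the file path is hypothetical):

```python
import cv2
import numpy as np

# Preprocess an inbound sensor image exactly like the training data.
new_image = cv2.imread("incoming/sensor_capture.jpg")  # hypothetical path
features = image_to_feature_vector(new_image).astype("float") / 255.0

# Score the three fuel classes and report the most likely one.
scores = model.predict(np.array([features]))
predicted_class = lb.classes_[scores.argmax()]  # "gr1", "gr2", or "gr3"
print("predicted fuel class:", predicted_class)
```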
What’s next
Although the pair currently run their TensorFlow models on Aditya’s gaming notebook PC (a tool of choice for data scientists on the go), Aditya plans to try out Cloud ML Engine in the future to enable more flexible scaling than is possible on a single laptop. Gathering images from remote forest areas is another challenge they want to tackle, and they’re experimenting with all-terrain ground and aerial drones to collect data for this purpose.
Aditya and Sanjana also plan to continue their efforts working with Cal Fire to deploy their device. Currently Cal Fire determines moisture levels by weighing known fuels, such as wood sticks, a process that requires human labor. Aditya and Sanjana’s device offers the potential to reduce the need for that labor.
Although fire is an often devastating force of nature, it is inspiring that a team of high schoolers hopes to provide an effective new tool to help agencies predict and track both wildfires and the weather factors that enable them to spread.
Except as otherwise noted, the content of this article is licensed under a Creative Commons Attribution 3.0 License, and code samples are licensed under the Apache 2.0 License. For details, see our Terms of Service.