BUILD A POWERFUL IMAGE CLASSIFIER IN LESS THAN 10 MINUTES
Using Google Images and the fastai library to build a powerful neural-network-based image classifier that can identify bicycles, tricycles and motorcycles.
We’ve all seen powerful and interesting applications of neural networks in many disparate fields such as image classification, image segmentation and time series forecasting. The paper by Kaiming He et al., Deep Residual Learning for Image Recognition, attracted tremendous attention, and since then residual neural networks have been the mainstream approach to image classification. We won’t dig into the theory behind them here, but instead focus on the actual implementation of ResNet. A great post on visualizing and understanding the ResNet architecture can be found here. In this article, we will use the ResNet34 model from the fastai library. Let’s get straight into it.
Get Started
Before we get started, I want to let you know that this post is not intended for beginners. However, any beginner can follow along pretty easily. To get the most out of it, please check out the free courses to learn more about deep learning. This tutorial’s code can be found here if you don’t have time to go through the whole thing.
Get the libraries you need for this implementation. Since the fastai library isn’t compatible with torch 1.5.0, we will install 1.4.0 instead; if you try to use 1.5.0, you will get an error later on when you create a DataBunch. If you are running this in Google Colab (recommended), it will work for sure and it’s free. Other platforms work most of the time, but I do not recommend running on your local machine unless you have at least a GTX 1050 or a K80.
!pip install fastai
!pip install "torch==1.4" "torchvision==0.5.0"

from fastai.vision import *
If you are using Colab, you can change the runtime type by clicking:
- Runtime → Change runtime type → Select GPU
This will result in faster (much faster) training.
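If you want to confirm that the GPU is actually available, here is a quick optional check (a minimal sketch using PyTorch directly):

import torch

# Should print True and the name of the Colab GPU (e.g. a Tesla T4 or K80)
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))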
Step 1: Download the Images For Training
Everyone uses Google Images. How about obtaining training data (images) from Google Images? We can do that in just a few steps:
- Google bicycle, tricycle and motorcycle (you can search for anything you want the computer to classify)
- Scroll all the way down till you cannot see more results
- Hit Command + Option + J (Mac) or Ctrl + Shift + J (Windows) to open the Developer Console in your browser (I used Chrome) and paste the following command to download a CSV of all the image links:
urls=Array.from(document.querySelectorAll('.rg_i')).map(el=> el.hasAttribute('data-src')?el.getAttribute('data-src'):el.getAttribute('data-iurl')); window.open('data:text/csv;charset=utf-8,' + escape(urls.join('\n')));
- You will see a file download; do this three times, once each for bicycle, tricycle and motorcycle. You will obtain three .csv files, which you can name whatever you want.
- Next, run the following code to download the images, assuming your .csv files are named bicycle.csv , tricycle.csv and motorcycle.csv :
folder = 'bicycle'
file = 'bicycle.csv'
path = Path('data/riders')
dest = path/folder
dest.mkdir(parents=True, exist_ok=True)
download_images(path/file, dest)

folder = 'tricycle'
file = 'tricycle.csv'
path = Path('data/riders')
dest = path/folder
dest.mkdir(parents=True, exist_ok=True)
download_images(path/file, dest)

folder = 'motorcycle'
file = 'motorcycle.csv'
path = Path('data/riders')
dest = path/folder
dest.mkdir(parents=True, exist_ok=True)
download_images(path/file, dest)
Some of the urls are invalid but that’s fine. I am pretty sure I obtained enough images to train an outstanding classifier.
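If you prefer to avoid the repetition above, the same three downloads can be written as a loop (a minimal sketch, assuming the three .csv files have already been uploaded to data/riders):

path = Path('data/riders')
for folder in ['bicycle', 'tricycle', 'motorcycle']:
    dest = path/folder
    dest.mkdir(parents=True, exist_ok=True)
    # each class has a matching <class>.csv file of image urls
    download_images(path/f'{folder}.csv', dest)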
Step 2: Create a DataBunch
DataBunch is a basic object in the fastai library for training a model. If you are interested in discovering more about creating a DataBunch and how to use the data block API, make sure you check out these links: DataBunch , data block API .
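For reference, the same kind of DataBunch we build below with a factory method could also be assembled with the data block API. A minimal sketch (assuming the images are already downloaded under data/riders):

data = (ImageList.from_folder(path)            # scan data/riders for images
        .split_by_rand_pct(0.2)                # hold out 20% for validation
        .label_from_folder()                   # label = parent folder name
        .transform(get_transforms(), size=224)
        .databunch()
        .normalize(imagenet_stats))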
First, let’s verify that each file is a valid image using the following:
classes = ['bicycle', 'motorcycle', 'tricycle']
for a in classes:
    print(a)
    verify_images(path/a, delete=True, max_size=500)
Any file that fails verification is automatically deleted. Then create a DataBunch using the following:
data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.2, ds_tfms=get_transforms(), size=224, num_workers=4).normalize(imagenet_stats)

data.show_batch(rows=3, figsize=(7,8))
You will see something similar on your machine, depending on what images you collected; show_batch displays a 3-by-3 grid of training images.
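As a quick optional sanity check, you can confirm that the classes and the train/validation split look right:

print(data.classes)                            # should list ['bicycle', 'motorcycle', 'tricycle']
print(data.c)                                  # number of classes, here 3
print(len(data.train_ds), len(data.valid_ds))  # roughly an 80/20 split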
Step 3: Training a Classifier
We will use the technique of transfer learning here: you take a model pre-trained on a large dataset, reuse its parameters, and then fine-tune it on your own dataset. This technique is very common in computer vision problems. We will be using a cnn_learner from the fastai library.
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)  # train for 4 epochs
learn.save('stage-1')
Since we are using a pre-trained model for transfer learning, we don’t need to train for many epochs, which also helps prevent overfitting. In the fastai library, models are frozen by default, meaning we are only training the newly added final layers (the head) of the network. We obtained an error_rate of about 7% after 4 epochs of training. That’s pretty good, isn’t it? To train all the layers, we need to do the following:
learn.unfreeze()
Another very cool feature of the fastai library is the built-in learning-rate finder. The learning rate is essential to gradient descent, the fundamental optimization step of training. To run the finder, we can use:
lr_find(learn)
learn.recorder.plot()
A good rule of thumb for a plot like this, as suggested by Jeremy Howard, founder of fastai, is to pick a learning rate roughly one order of magnitude (10x) below the point where the loss starts climbing for good. In this case, we’d pick a learning rate around 1e-04, since the loss keeps increasing after 1e-03.
learn.fit_one_cycle(3, max_lr=slice(1e-4, 1e-3))
learn.save('stage-2')
Cool! We are able to obtain an error rate of 5.8% after unfreezing the model and training a bit more. 94.2% accuracy is great for this problem. What else can we do to improve? Data cleaning! Yes, the images we obtained from Google Images aren’t perfect. For instance, a search for “bicycle” can return Bicycle-brand playing cards, which is obviously not what we are looking for. And sometimes motorcycles and bicycles are quite hard to distinguish. We can relabel or delete such images to improve our model using the ImageCleaner widget from the fastai library.
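Before cleaning, it can also help to see where the model struggles. An optional sketch using fastai’s ClassificationInterpretation shows which classes get confused and which images produced the highest losses:

interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()            # which classes get mixed up
interp.plot_top_losses(9, figsize=(7,7))  # the most confidently wrong images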
Step 4: Data Cleaning using a Widget
from fastai.widgets import *
db = (ImageList.from_folder(path)
      .split_none()
      .label_from_folder()
      .transform(get_transforms(), size=224)
      .databunch()
     )
learn_cln = cnn_learner(db, models.resnet34, metrics=error_rate)
learn_cln.load('stage-2');
ds, idxs = DatasetFormatter().from_toplosses(learn_cln)
ImageCleaner(ds, idxs, path)
After running this cell, a widget will open in which you can look at the images and decide whether to keep, relabel or delete each one. Note that the widget does not change anything on disk; it only records your decisions.
Step 5: Training the Cleaned-up Data
After running the widget, you will obtain a cleaned-up label file called cleaned.csv . We can then create a new DataBunch based on it:
np.random.seed(42)
data = ImageDataBunch.from_csv(path, folder=".", valid_pct=0.2, csv_labels='cleaned.csv', ds_tfms=get_transforms(), size=224, num_workers=4).normalize(imagenet_stats)
Similarly, we can train using cnn_learner:
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(8)
learn.export()  # writes export.pkl, which load_learner picks up later
Finally, we obtained a model with only 4.8% error, which corresponds to 95.2% accuracy. It’s deliverable and ready for production! Let’s try it with some images it has never seen.
Fin
img = open_image('tricycle1.jpg')
img
learn = load_learner(path)  # load the exported learner
pred_class, pred_idx, outputs = learn.predict(img)
print('The rider you are seeing is probably a', pred_class)
The rider you are seeing is probably a tricycle
Let’s try it with a bicycle,
img = open_image('bike.jpg')
img
pred_class, pred_idx, outputs = learn.predict(img)
print('The rider you are seeing is probably a', pred_class)
The rider you are seeing is probably a bicycle
Last but not least, my favorite Kawasaki Ninja motorcycle,
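The prediction works the same way as before; here is a sketch, with a placeholder filename standing in for your own motorcycle photo:

img = open_image('motorcycle.jpg')  # placeholder filename; use your own image
pred_class, pred_idx, outputs = learn.predict(img)
print('The rider you are seeing is probably a', pred_class)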
The rider you are seeing is probably a motorcycle
That’s the end of this tutorial. I hope you enjoyed it and learned something today. I am pretty sure you can build a similar classifier with today’s knowledge.
This tutorial’s code can be found here . It’s adapted from the class Jeremy Howard taught on fast.ai, and the original notebook can be found here . Feel free to contact me if you have any questions. Always open to constructive advice.