# Person_reID_baseline_pytorch
A tiny, friendly, strong baseline code for Person-reID (based on PyTorch).
- **Strong.** It is consistent with the new baseline results in several top-conference works, e.g., Beyond Part Models: Person Retrieval with Refined Part Pooling (ECCV18) and Camera Style Adaptation for Person Re-identification (CVPR18). We achieve Rank@1=88.24%, mAP=70.68% with only the softmax loss.
- **Small.** With fp16, our baseline can be trained with only 2GB of GPU memory.
- **Friendly.** You may use the off-the-shelf options to apply many state-of-the-art tricks in one line. Besides, if you are new to person re-ID, you may check out our Tutorial first (8 min read) :+1:.
## Features
Now we have supported:
- Float16 to save GPU memory based on apex
- Part-based Convolutional Baseline (PCB)
- Multiple Query Evaluation
- Re-Ranking
- Random Erasing
- ResNet/DenseNet
- Visualize Training Curves
- Visualize Ranking Result
Here we provide the hyperparameters and architectures that were used to generate the results. Some of them (e.g., the learning rate) are far from optimal. Do not hesitate to change them and see the effect.
P.S. With a similar structure, we achieved Rank@1=87.74%, mAP=69.46% with MatConvNet (batchsize=8, dropout=0.75). You may refer to Here. Different frameworks need to be tuned differently.
## Some News
What's new: FP16 has been added. It can be enabled by simply adding `--fp16`. You need to install apex and update your PyTorch to 1.0. Float16 saves about 50% of GPU memory usage without an accuracy drop, so our baseline can be trained with only 2GB of GPU memory.

```
python train.py --fp16
```
What's new: Visualizing the ranking result has been added.

```
python prepare.py
python train.py
python test.py
python demo.py --query_index 777
```
What's new: Multiple-query evaluation has been added. The multiple-query result is about Rank@1=91.95%, mAP=78.06%.

```
python prepare.py
python train.py
python test.py --multi
python evaluate_gpu.py
```
What's new: PCB has been added. You may use `--PCB` to use this model. It can achieve around Rank@1=92.73%, mAP=78.16%. I used a GPU (P40) with 24GB memory. You may try a smaller batch size and a smaller learning rate (for stability), for example `--batchsize 32 --lr 0.01 --PCB`.

```
python train.py --PCB --batchsize 64 --name PCB-64
python test.py --PCB --name PCB-64
```
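To make the part-based idea concrete: PCB replaces global average pooling with pooling into several horizontal stripes, each fed to its own classifier. The module below is a simplified illustration, not the exact code in `model.py`; the channel count (2048), part number (6), and class number (751) are assumptions based on ResNet-50 and Market-1501.

```python
import torch
import torch.nn as nn

class PCBHead(nn.Module):
    """Illustrative PCB-style head: split the backbone feature map into
    horizontal stripes, each with its own part-level classifier."""
    def __init__(self, in_channels=2048, num_parts=6, class_num=751):
        super().__init__()
        # Pool the H x W feature map into num_parts horizontal 1x1 strips.
        self.pool = nn.AdaptiveAvgPool2d((num_parts, 1))
        self.classifiers = nn.ModuleList(
            [nn.Linear(in_channels, class_num) for _ in range(num_parts)]
        )

    def forward(self, feat_map):                 # feat_map: (B, C, H, W)
        parts = self.pool(feat_map).squeeze(-1)  # -> (B, C, num_parts)
        # One softmax classifier per part; the losses are summed in training.
        return [clf(parts[:, :, i]) for i, clf in enumerate(self.classifiers)]
```

During training each of the six logits gets its own cross-entropy loss; at test time the part features are concatenated as the descriptor.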
What's new: You may try `evaluate_gpu.py` to conduct a faster evaluation with the GPU.
What's new: You may apply `--use_dense` to use DenseNet-121. It achieves around Rank@1=89.91%, mAP=73.58%.
What's new: Re-ranking has been added to the evaluation. The re-ranked result is about Rank@1=90.20%, mAP=84.76%.
What's new: Random Erasing has been added to training.
What's new: I added some code to generate training curves. The figure is saved into the model folder during training.
## Trained Model
I re-trained several models, and the results may differ from the original ones. Just for a quick reference, you may directly use these models. The download link is Here.
| Methods | Rank@1 | mAP | Reference |
| --- | --- | --- | --- |
| [ResNet-50] | 88.84% | 71.49% | `python train.py --train_all` |
| [DenseNet-121] | 90.11% | 73.51% | `python train.py --name ft_net_dense --use_dense --train_all` |
| [PCB] | 92.64% | 77.47% | `python train.py --name PCB --PCB --train_all --lr 0.02` |
| [ResNet-50 (fp16)] | 88.27% | 71.20% | `python train.py --name fp16 --fp16 --train_all` |
## Model Structure
You may learn more from `model.py`. We add one linear layer (the bottleneck), one batch normalization layer, and a ReLU.
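As an illustration, the added head could look like the following minimal sketch. The dimensions (2048-d ResNet-50 feature, 512-d bottleneck, 751 Market-1501 classes) and the dropout are assumptions; see `model.py` for the actual definition.

```python
import torch
import torch.nn as nn

class ClassBlock(nn.Module):
    """Illustrative bottleneck head: linear reduction -> batchnorm -> ReLU,
    followed by the softmax classifier."""
    def __init__(self, input_dim=2048, class_num=751, num_bottleneck=512):
        super().__init__()
        self.bottleneck = nn.Sequential(
            nn.Linear(input_dim, num_bottleneck),  # reduce 2048-d to 512-d
            nn.BatchNorm1d(num_bottleneck),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
        )
        self.classifier = nn.Linear(num_bottleneck, class_num)

    def forward(self, x):
        x = self.bottleneck(x)      # the 512-d feature used for retrieval
        return self.classifier(x)   # class logits, trained with softmax loss
```

At test time the classifier is discarded and the bottleneck feature is used as the image descriptor.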
## Prerequisites
- Python 3.6
- GPU Memory >= 6G
- Numpy
- Pytorch 0.3+
- [Optional] apex (for float16)
(Some reports found that updating numpy yields the right accuracy. If you only get 50~80% Top-1 accuracy, just try it.) We have successfully run the code with numpy 1.12.1 and 1.13.1.
## Getting started
### Installation
- Install Pytorch from http://pytorch.org/
- Install Torchvision from the source
```
git clone https://github.com/pytorch/vision
cd vision
python setup.py install
```
- [Optional] You may skip this step. Install apex from the source
```
git clone https://github.com/NVIDIA/apex.git
cd apex
python setup.py install --cuda_ext --cpp_ext
```
Because PyTorch and Torchvision are ongoing projects, we note that our code is tested with PyTorch 0.3.0/0.4.0/0.5.0/1.0.0 and Torchvision 0.2.0/0.2.1.
## Dataset & Preparation
Download Market1501 Dataset
Preparation: put the images with the same ID in one folder. You may use

```
python prepare.py
```
Remember to change the dataset path to your own path.
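For a rough idea of what this preparation step does, here is a hypothetical sketch (the function name and layout are my own, not the repo's): Market-1501 filenames start with the person ID (e.g. `0002_c1s1_000451_03.jpg`), so grouping by ID is a matter of parsing the first underscore-separated token.

```python
import os
import shutil

def organize_by_id(src_dir, dst_dir):
    """Copy Market-1501 style images into one sub-folder per person ID.

    In a name like 0002_c1s1_000451_03.jpg, the token before the first
    underscore ('0002') is the person ID.
    """
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        if not name.endswith('.jpg'):
            continue
        pid = name.split('_')[0]                  # person ID, e.g. '0002'
        id_dir = os.path.join(dst_dir, pid)
        os.makedirs(id_dir, exist_ok=True)
        shutil.copyfile(os.path.join(src_dir, name),
                        os.path.join(id_dir, name))
```

The resulting one-folder-per-ID layout is what `torchvision.datasets.ImageFolder` expects, which is why the baseline can reuse the standard classification data pipeline.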
Furthermore, you can also test our code on the DukeMTMC-reID dataset. Our baseline result is not as high on DukeMTMC-reID: Rank@1=64.23%, mAP=43.92%. The hyperparameters need to be tuned.
## Train
Train a model by

```
python train.py --gpu_ids 0 --name ft_ResNet50 --train_all --batchsize 32 --data_dir your_data_path
```
- `--gpu_ids`: which GPU to run on.
- `--name`: the name of the model.
- `--data_dir`: the path of the training data.
- `--train_all`: use all images to train.
- `--batchsize`: batch size.
- `--erasing_p`: random erasing probability.
Train a model with Random Erasing by

```
python train.py --gpu_ids 0 --name ft_ResNet50 --train_all --batchsize 32 --data_dir your_data_path --erasing_p 0.5
```
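For reference, Random Erasing (Zhong et al., 2017) occludes a random rectangle of the input during training to make the model robust to occlusion. Below is a minimal NumPy sketch; the parameter names and defaults follow the paper, not necessarily the exact transform in `train.py`.

```python
import numpy as np

def random_erasing(img, p=0.5, sl=0.02, sh=0.4, r1=0.3, rng=None):
    """Erase a random rectangle of an H x W x C image with random values.

    p       probability of applying the transform
    sl, sh  lower/upper bound of the erased area as a fraction of the image
    r1      lower bound of the rectangle's aspect ratio (upper bound is 1/r1)
    """
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() > p:
        return img
    h, w, c = img.shape
    for _ in range(100):                       # retry until the box fits
        area = h * w * rng.uniform(sl, sh)
        ratio = rng.uniform(r1, 1.0 / r1)
        eh = int(round(np.sqrt(area * ratio)))
        ew = int(round(np.sqrt(area / ratio)))
        if 0 < eh < h and 0 < ew < w:
            y = rng.integers(0, h - eh)
            x = rng.integers(0, w - ew)
            out = img.copy()
            out[y:y + eh, x:x + ew] = rng.random((eh, ew, c))
            return out
    return img
```

In the repo the same effect is enabled declaratively via `--erasing_p`, which inserts the transform into the training data augmentation pipeline.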
## Test
Use the trained model to extract features by

```
python test.py --gpu_ids 0 --name ft_ResNet50 --test_dir your_data_path --batchsize 32 --which_epoch 59
```
- `--gpu_ids`: which GPU to run on.
- `--batchsize`: batch size.
- `--name`: the directory name of the trained model.
- `--which_epoch`: select the i-th model.
- `--data_dir`: the path of the testing data.
## Evaluation

```
python evaluate.py
```
It will output Rank@1, Rank@5, Rank@10 and mAP results. You may also try `evaluate_gpu.py` to conduct a faster evaluation with the GPU.
For the mAP calculation, you can also refer to the C++ code for the Oxford Buildings dataset. We use the triangle mAP calculation (consistent with the original Market-1501 code).
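To make the "triangle" interpolation concrete, here is a minimal sketch of the average precision for a single query (my own helper, not the repo's `evaluate.py`): each hit contributes the average of the precision just before and at that rank, scaled by the recall step.

```python
def average_precision(good_index, ranked_index):
    """Triangle-interpolated AP as in the Market-1501 evaluation toolkit.

    good_index:   set of gallery indices that match the query
    ranked_index: gallery indices sorted by descending similarity
    """
    ngood = len(good_index)
    ap = 0.0
    hits = 0
    for rank, idx in enumerate(ranked_index):
        if idx in good_index:
            precision_at_hit = (hits + 1) / (rank + 1)
            # Precision just before this hit; defined as 1.0 for rank 0.
            precision_before = hits / rank if rank > 0 else 1.0
            # Each hit adds a recall step of 1/ngood; the trapezoid
            # (triangle) rule averages the two precisions.
            ap += (precision_at_hit + precision_before) / 2 / ngood
            hits += 1
    return ap
```

For example, a query whose single match is ranked first scores AP = 1.0, while the same match at rank 2 scores 0.25 under this interpolation.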
### Re-ranking

```
python evaluate_rerank.py
```
It may take more than 10GB of memory to run, so run it on a powerful machine if possible.
It will output Rank@1, Rank@5, Rank@10 and mAP results.
## Citation
As far as I know, the following papers may be the first two to use the bottleneck baseline. You may cite them in your paper.
```
@inproceedings{DBLP:journals/corr/SunZDW17,
  author    = {Yifan Sun and Liang Zheng and Weijian Deng and Shengjin Wang},
  title     = {SVDNet for Pedestrian Retrieval},
  booktitle = {ICCV},
  year      = {2017},
}

@article{hermans2017defense,
  title   = {In Defense of the Triplet Loss for Person Re-Identification},
  author  = {Hermans, Alexander and Beyer, Lucas and Leibe, Bastian},
  journal = {arXiv preprint arXiv:1703.07737},
  year    = {2017}
}
```
## Related Repos