Artificial intelligence in SportTech
Jun 23 · 5 min read
During the quarantine, we had limited physical activities and that was not good, especially for children.
But when I made my kid exercise, I met resistance and had to supervise the whole process closely.
It was fun and also I got an idea to automate the process. Although it was overkill in the situation, the inspiration turned out to be irresistible.
Looking for a starting point, I picked squats. A basic movement with distinct stages and a large amplitude looked like the best candidate.
Data Collection
A Raspberry Pi with a camera is a very handy way to take pictures at home with minimal effort.
OpenCV gets the images and writes them into the filesystem.
Movement recognition
Initially, I was going to find the person in the picture with image segmentation. But segmentation is a pretty heavy operation, especially given the Raspberry Pi's limited resources.
Also, segmentation ignores the fact that we have a sequence of frames, not a single picture. The sequence has obvious features, and we should use them.
So I proceeded with background removal algorithms from OpenCV. Combining this approach with some heuristics eventually provided a reliable result.
Background subtraction
First, create a background subtractor:
backSub = cv.createBackgroundSubtractorMOG2()
And feed it with frames:
mask = backSub.apply(frame)
Finally, we get a picture with the body outline:
Then dilate the image to highlight the contours.
mask = cv.dilate(mask, None, iterations=3)
Applying this algorithm to all frames gives pose masks. We are then going to classify each of them as a stand, a squat, or nothing.
The next step is to cut a figure from the picture. OpenCV can find contours:
cnts, _ = cv.findContours(img, cv.RETR_CCOMP, cv.CHAIN_APPROX_SIMPLE)
The idea is that the biggest contour more or less fits the figure.
Unfortunately, the results are not stable: the biggest contour could wrap only the body and miss the legs, for example.
Anyway, having a sequence of images helps a lot. Squats happen on the same spot, so we can assume all the action takes place inside some area, and that area is stable.
The bounding rect of the figure can then be built iteratively, growing it with each frame's biggest contour when needed.
Here is an example:
- the biggest contour is red
- the contour bounding rect is blue
- the figure bounding rect is green
Using this approach we can get a pose for further processing.
Classification
Next, the bounding rectangle is cut out of the image, padded to a square, and resized to 64x64.
These masks are the classifier's input:
For stands:
For squats:
I used Keras + Tensorflow for the classification.
Initially, I started with the classic LeNet-5 model. It worked well, and after reading an article about LeNet-5 variations, I decided to play around with simplifying the architecture.
It turned out a very simple CNN shows pretty much the same accuracy:
model = Sequential([
    Convolution2D(8, (5, 5), activation='relu', input_shape=input_shape),
    MaxPooling2D(),
    Flatten(),
    Dense(512, activation='relu'),
    Dense(3, activation='softmax')
])
model.compile(loss="categorical_crossentropy",
              optimizer=SGD(lr=0.01),
              metrics=["accuracy"])
Accuracy was 86% after 10 epochs, 94% after 20, and 96% after 30.
Longer training could cause overfitting, so it was time to try the model in real life.
Raspberry Pi
I am a big fan of the OpenCV DNN module and intended to use it to avoid a heavy Tensorflow setup.
Unfortunately, when I converted the Keras model to TF and ran it on the Raspberry, I got:
cv2.error: OpenCV(4.2.0) C:\projects\opencv-python\opencv\modules\dnn\src\dnn.cpp:562: error: (-2:Unspecified error) Can't create layer "flatten_1/Shape" of type "Shape" in function 'cv::dnn::dnn4_v20191202::LayerData::getLayerInstance'
This is a known issue on Stack Overflow, but the fix has not been released yet.
So there was no way around Tensorflow.
Google has supported TF on the Raspberry Pi for a couple of years, so there are no tricks to getting it working.
TF provides adapters for Keras models, so no conversion is necessary.
Load the model:
with open(MODEL_JSON, 'r') as f:
    model_data = f.read()
model = tf.keras.models.model_from_json(model_data)
model.load_weights(MODEL_H5)
graph = tf.get_default_graph()
And classify squat masks with it:
img = cv.imread(path + f, cv.IMREAD_GRAYSCALE)
img = np.reshape(img, [1, 64, 64, 1])
with graph.as_default():
    c = model.predict_classes(img)
return c[0] if c else None
A classification call with a 64x64 input takes about 60–70 ms on the Raspberry, which is close enough to realtime for this purpose.
Raspberry app
Now let's bring all the parts above together into a single app.
The service uses Flask with the following endpoints:
- GET / — an app page (more info below)
- GET /status — get current status, squats and frames number
- POST /start — start an exercise
- POST /stop — finish the exercise
- GET /stream — a video stream from the camera
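The service skeleton can be sketched as a minimal Flask app. This is a simplified illustration, not the original service: the in-memory `state` dict stands in for the real camera/TF pipeline, and the `/` page and `/stream` endpoint are omitted for brevity:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# simplified exercise state; the real app also drives the camera and TF
state = {"running": False, "squats": 0, "frames": 0}

@app.route("/status")
def status():
    # report current status: squat and frame counters
    return jsonify(state)

@app.route("/start", methods=["POST"])
def start():
    state.update(running=True, squats=0, frames=0)
    return jsonify(ok=True)

@app.route("/stop", methods=["POST"])
def stop():
    state["running"] = False
    return jsonify(ok=True)
```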
I initialized Tensorflow at service start. That is generally a bad idea, especially on a Raspberry Pi: TF consumes a lot of resources, the service becomes slow to respond, and it could die on hitting resource limits.
Normally I would start TF in a separate process and provide a channel for interprocess communication, but I took the simple route for this prototype.
And there is the already mentioned web app to control the squats activity. The app can:
- show a live video from the camera
- start/stop an exercise
- count squats and frames
When an exercise is started, the service writes pictures into the filesystem.
That is convenient for collecting training data for the neural network, but normally the pictures are not needed.
The service processes the sequence of pictures, classifies them with TF, and every time the Stand-Squat-Stand pattern is detected, it increments the squat counter.
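The counting logic is a tiny state machine over the classified poses. A sketch of that idea in plain Python; the class name `SquatCounter` is made up, and "nothing" classifications are simply ignored so noisy frames don't break a repetition:

```python
STAND, SQUAT = "stand", "squat"

class SquatCounter:
    """Count repetitions by tracking Stand -> Squat -> Stand transitions."""

    def __init__(self):
        self.count = 0
        self.in_squat = False

    def feed(self, pose):
        if pose == SQUAT:
            self.in_squat = True
        elif pose == STAND and self.in_squat:
            # completed a full Stand -> Squat -> Stand cycle
            self.count += 1
            self.in_squat = False
        return self.count
```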
Labeling tool
There is a simple labeling tool for manual classification. It is a GUI app built with Python and OpenCV.
The tool shows pictures with the main contour and bounding rectangles, and waits for a key: S (Stand), Q (sQuat), or N (Nothing). It then automatically moves the picture into the corresponding subfolder.
The labeled subfolders should then be copied into the Keras model's input folder and the training process repeated.
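The key-to-subfolder part of the tool can be sketched as below. The function name `move_to_label` and the subfolder names are assumptions for illustration; the real tool wires this into an OpenCV window's `cv.waitKey` loop:

```python
import os
import shutil

# key -> target subfolder, matching the S/Q/N scheme above
LABELS = {ord("s"): "stand", ord("q"): "squat", ord("n"): "nothing"}

def move_to_label(path, key, out_root):
    """Move a labeled picture into its class subfolder;
    return the destination path, or None for an unknown key."""
    label = LABELS.get(key)
    if label is None:
        return None
    dest_dir = os.path.join(out_root, label)
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, os.path.basename(path))
    shutil.move(path, dest)
    return dest
```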
Platforms
I ran the app on a Raspberry Pi, but nothing prevents using any Linux environment with Python, OpenCV, and a camera.
Problems
As is, it could be accepted as an MVP, but there is a lot to improve.
- Refine background removal. Shadows generate noisy blobs that confuse the classifier.
- Collect more data for the neural network.
- Review the classifier architecture. The simplest one shows satisfying results now but has its own limits.