
Commit 59ac422

update android package and readme
1 parent 88265d0 commit 59ac422


47 files changed: 589 additions, 1874 deletions

NOTICE

+5
@@ -0,0 +1,5 @@
+PoseMon 让爷康康
+Copyright (c) Lin Yi
+
+This project is forked from TensorFlow Examples -- Copyright 2021 The TensorFlow Authors. All Rights Reserved. -- available
+under the Apache 2.0 license (https://github.com/tensorflow/examples/blob/master/LICENSE).

README.md

+26 -54
@@ -1,74 +1,46 @@
-“让爷康康” is a mobile AI application that monitors poor sitting posture and gives voice alerts
-# TensorFlow Lite Pose Estimation Android Demo
+# PoseMon 让爷康康

-### Overview
-This is an app that continuously detects the body parts in the frames seen by
-your device's camera. These instructions walk you through building and running
-the demo on an Android device. Camera captures are discarded immediately after
-use, nothing is stored or saved.
+## Introduction

-The app demonstrates how to use 4 models:
+<image align="right" src="doc_images/screenshot_icon.jpg" alt="Application Icon" width=17%>

-* Single pose models: The model can estimate the pose of only one person in the
-input image. If the input image contains multiple persons, the detection result
-can be largely incorrect.
-  * PoseNet
-  * MoveNet Lightning
-  * MoveNet Thunder
-* Multi pose models: The model can estimate pose of multiple persons in the
-input image.
-  * MoveNet MultiPose: Support up to 6 persons.
+PoseMon 让爷康康 is an Android app that monitors poor sitting posture in real time and gives voice alerts. The project is built mainly on the [official TensorFlow Lite pose estimation example](https://github.com/tensorflow/examples/tree/master/lite/examples/pose_estimation/android); its AI components are [MoveNet](https://blog.tensorflow.org/2021/05/next-generation-pose-detection-with-movenet-and-tensorflowjs.html) for pose estimation and a [fully connected network](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/pose_classification.ipynb) for classifying the estimated pose. The app needs no network connection: every AI feature runs locally on the phone, no video frames are sent to any external server, and the only permission required is camera access for capturing posture. For a video introduction, see [bilibili]() or [YouTube]().

-See this [blog post](https://blog.tensorflow.org/2021/05/next-generation-pose-detection-with-movenet-and-tensorflowjs.html)
-for a comparison between these models.
+## Building and Running in Android Studio

-![Demo Image](posenetimage.png)
+### Prerequisites

-## Build the demo using Android Studio
+* Building the Android project requires Android Studio; visit the [official site](
+https://developer.android.com/studio/install?hl=zh-cn) and follow the instructions to download and install it.

-### Prerequisites
+* An Android phone is required.

-* If you don't have it already, install **[Android Studio](
-https://developer.android.com/studio/index.html)** 4.2 or
-above, following the instructions on the website.
+### Building

-* Android device and Android development environment with minimum API 21.
+* Clone this project with `git clone`, or download the project files as an archive and extract them.

-### Building
-* Open Android Studio, and from the `Welcome` screen, select
-`Open an existing Android Studio project`.
+* Open Android Studio and, on the initial `Welcome` screen, select
+`Open an existing Android Studio project`, then open the project's Android folder.

-* From the `Open File or Project` window that appears, navigate to and select
-the `lite/examples/pose_estimation/android` directory from wherever you
-cloned the `tensorflow/examples` GitHub repo. Click `OK`.
+* The Android project lives in this repository's `android/` folder; select that folder in the Android Studio dialog. Once the project opens, Android Studio may prompt for a Gradle sync; accept it and wait for the sync to finish.

-* If it asks you to do a `Gradle Sync`, click `OK`.
+* Connect a phone with developer mode enabled to your computer over USB; the [official guide](https://developer.android.com/studio/run/device?hl=zh-cn) explains the details. If your phone model appears at the right end of the toolbar, the device is connected.

-* You may also need to install various platforms and tools, if you get errors
-like `Failed to find target with hash string 'android-21'` and similar. Click
-the `Run` button (the green arrow) or select `Run` > `Run 'android'` from the
-top menu. You may need to rebuild the project using `Build` > `Rebuild Project`.
+* On a first-time Android Studio installation you may also need to install a series of development tools. Click the green triangle `Run 'app'` button at the top right to run the app directly; if any tools still need to be installed, the IDE will prompt you, and you can install them one by one as prompted.

-* If it asks you to use `Instant Run`, click `Proceed Without Instant Run`.
+### Models

-* Also, you need to have an Android device plugged in with developer options
-enabled at this point. See **[here](
-https://developer.android.com/studio/run/device)** for more details
-on setting up developer devices.
+This project uses two neural network model files, both already included in the repository, so no extra download is needed. The first is the MoveNet Thunder model in `int8` format; see the [official model page](https://tfhub.dev/google/lite-model/movenet/singlepose/thunder/tflite/int8/4) to learn more. [MoveNet](https://blog.tensorflow.org/2021/05/next-generation-pose-detection-with-movenet-and-tensorflowjs.html) is Google's lightweight human pose estimation model and comes in two versions, Thunder and Lightning; Thunder runs slower but is more accurate, and it is the version used here. Thunder is published in two data formats, `float16` and `int8`. The `float16` model runs only on a general-purpose GPU, while the `int8` model can run either on a general-purpose GPU or on the [Hexagon DSP](https://developer.qualcomm.com/software/hexagon-dsp-sdk/dsp-processor) in Qualcomm Snapdragon processors. On the Hexagon DSP, AI programs run faster and consume less power, so Hexagon is the recommended first choice when deploying AI models to mobile devices. Google has also introduced its own Google Tensor processor, currently the Tensor G2; how to use its AI acceleration unit is not yet clear, and this document will be updated once a device is available for testing.

+#### Training Your Own Classification Network

-### Model used
-Downloading, extraction and placement in assets folder has been managed
-automatically by `download.gradle`.
+<image align="right" src="doc_images/labeled_movenet_result.png" alt="17 Keypoints detected by MoveNet" width=17%>

-If you explicitly want to download the model, you can download it from here:
+Besides MoveNet Thunder, this project uses a simple fully connected network to classify the pose information output by MoveNet (the coordinates of the body's 17 keypoints) and decide whether the person on screen is in a "standard posture", "crossing legs", or "forward head and hunched back" state. For an introduction to this classification network and a hands-on demonstration of the training process, see the TensorFlow Lite [Jupyter Notebook tutorial](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/pose_classification.ipynb), or the modified and annotated [version]() in this project. To classify the three postures, about 300 photos per posture were collected as the training set (876 photos in total) and about 30 per posture as the test set (74 photos in total). The training and test sets feature different subjects, so that overfitting can be spotted promptly during training. Training data should be placed in the `standard`, `crossleg`, and `forwardhead`
+folders under `main/pose_data/train/`; test data goes under `main/pose_data/test/`. The [Jupyter Notebook]() used in this project to train the classifier converts the raw data into a training package automatically: it generates the MoveNet detection result for each photo, labels each photo as one of the three postures, stores all of this in `main/pose_data/train_data.csv` and `main/pose_data/test_data.csv`, and writes the label information to the text file `main/pose_data/pose_labels.txt`. When training in the notebook finishes, a `.tflite` weight file is generated under `main/pose_data/`; import it into the Android Studio project, replacing `android\app\src\main\assets\classifier.tflite`, and it is ready to use.

-* [Posenet](https://storage.googleapis.com/download.tensorflow.org/models/tflite/posenet_mobilenet_v1_100_257x257_multi_kpt_stripped.tflite)
-* [Movenet Lightning](https://tfhub.dev/google/movenet/singlepose/lightning/)
-* [Movenet Thunder](https://tfhub.dev/google/movenet/singlepose/thunder/)
-* [Movenet MultiPose](https://tfhub.dev/google/movenet/multipose/lightning/)
+### Demo

-### Additional Note
-_Please do not delete the assets folder content_. If you explicitly deleted the
-files, then please choose `Build` > `Rebuild` from menu to re-download the
-deleted model files into assets folder.
+
+## Acknowledgements
+This project is built mainly on the [TensorFlow Lite Pose Estimation example](https://github.com/tensorflow/examples/tree/master/lite/examples/pose_estimation/android), and it would not exist without open source frameworks and tools such as [TensorFlow](https://www.tensorflow.org/?hl=zh-cn) and [Jupyter Notebook](https://jupyter.org/). Thanks to all the developers who contribute to the open source community!
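The README's Models section above contrasts the `float16` and `int8` Thunder variants and their target accelerators. A minimal Kotlin sketch of how such a bundled model might be loaded with TensorFlow Lite on Android follows; the asset name `movenet_thunder.tflite` and the delegate choice are assumptions for illustration, not values taken from this commit.

```kotlin
import android.content.Context
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.GpuDelegate
import org.tensorflow.lite.support.common.FileUtil

// Hypothetical helper: load the bundled MoveNet Thunder model and pick a delegate.
fun createMoveNetInterpreter(context: Context, useGpu: Boolean): Interpreter {
    val options = Interpreter.Options()
    if (useGpu) {
        // The GPU delegate serves both variants; per the README, the int8 variant
        // could instead target Qualcomm's Hexagon DSP via the separate HexagonDelegate artifact.
        options.addDelegate(GpuDelegate())
    } else {
        options.setNumThreads(4) // plain CPU fallback
    }
    val modelBuffer = FileUtil.loadMappedFile(context, "movenet_thunder.tflite")
    return Interpreter(modelBuffer, options)
}
```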

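Likewise, for the posture classifier described under "Training Your Own Classification Network", on-device inference might look like the sketch below. The 51-float input layout (17 keypoints × y, x, score) follows the linked pose classification tutorial and should be verified against the actual `classifier.tflite`; the label order comes from `pose_labels.txt`.

```kotlin
import android.content.Context
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.support.common.FileUtil

// Hypothetical helper: classify one MoveNet result as standard / crossleg / forwardhead.
fun classifyPose(context: Context, keypoints: FloatArray): Int {
    require(keypoints.size == 51) { "expected 17 keypoints x (y, x, score)" }
    val interpreter = Interpreter(FileUtil.loadMappedFile(context, "classifier.tflite"))
    val scores = Array(1) { FloatArray(3) } // one row of three class scores
    interpreter.run(arrayOf(keypoints), scores) // input shape [1, 51]
    interpreter.close()
    return scores[0].indices.maxByOrNull { scores[0][it] } ?: 0 // winning label index
}
```

In the app itself the interpreter would be created once and reused across frames rather than rebuilt per call.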
android/app/build.gradle

+1 -4
@@ -8,7 +8,7 @@ android {
     buildToolsVersion "30.0.3"
 
     defaultConfig {
-        applicationId "org.tensorflow.lite.examples.poseestimation"
+        applicationId "lyi.linyi.posemon"
         minSdkVersion 23
         targetSdkVersion 30
         versionCode 1
@@ -32,9 +32,6 @@ android {
     }
 }
 
-// Download tflite model
-apply from:"download.gradle"
-
 dependencies {
 
     implementation "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"

android/app/src/androidTest/java/org/tensorflow/lite/examples/poseestimation/ml/EvaluationUtils.kt → android/app/src/androidTest/java/lyi/linyi/lite/examples/posemon/ml/EvaluationUtils.kt

+8 -4
@@ -14,7 +14,7 @@ limitations under the License.
 ==============================================================================
 */
 
-package org.tensorflow.lite.examples.poseestimation.ml
+package lyi.linyi.lite.examples.posemon.ml
 
 import android.graphics.Bitmap
 import android.graphics.BitmapFactory
@@ -23,8 +23,8 @@ import android.graphics.PointF
 import androidx.test.platform.app.InstrumentationRegistry
 import com.google.common.truth.Truth.assertThat
 import com.google.common.truth.Truth.assertWithMessage
-import org.tensorflow.lite.examples.poseestimation.data.BodyPart
-import org.tensorflow.lite.examples.poseestimation.data.Person
+import org.tensorflow.posemon.data.BodyPart
+import org.tensorflow.posemon.data.Person
 import java.io.BufferedReader
 import java.io.InputStreamReader
 import kotlin.math.pow
@@ -49,7 +49,11 @@ object EvaluationUtils {
         assertWithMessage("$bodyPart must exist").that(keypoint).isNotNull()
 
         val detectedPointF = keypoint!!.coordinate
-        val distanceFromExpectedPointF = distance(detectedPointF, expectedPointF)
+        val distanceFromExpectedPointF =
+            lyi.linyi.lite.examples.posemon.ml.EvaluationUtils.distance(
+                detectedPointF,
+                expectedPointF
+            )
         assertWithMessage("Detected $bodyPart must be close to expected result")
             .that(distanceFromExpectedPointF).isAtMost(acceptableError)
     }
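The `distance` helper that the new fully qualified call references is not shown in this hunk. Judging from the `kotlin.math.pow` import above, it is presumably a Euclidean distance along these lines (an assumption, not code from this commit):

```kotlin
import android.graphics.PointF
import kotlin.math.pow
import kotlin.math.sqrt

// Assumed shape of EvaluationUtils.distance: straight-line distance between a
// detected keypoint and its expected position, compared against acceptableError.
fun distance(a: PointF, b: PointF): Float =
    sqrt((a.x - b.x).pow(2) + (a.y - b.y).pow(2))
```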

android/app/src/androidTest/java/org/tensorflow/lite/examples/poseestimation/ml/MovenetLightningTest.kt → android/app/src/androidTest/java/lyi/linyi/lite/examples/posemon/ml/MovenetLightningTest.kt

+11 -8
@@ -14,7 +14,7 @@ limitations under the License.
 ==============================================================================
 */
 
-package org.tensorflow.lite.examples.poseestimation.ml
+package lyi.linyi.lite.examples.posemon.ml
 
 import android.content.Context
 import android.graphics.PointF
@@ -23,8 +23,11 @@ import androidx.test.platform.app.InstrumentationRegistry
 import org.junit.Before
 import org.junit.Test
 import org.junit.runner.RunWith
-import org.tensorflow.lite.examples.poseestimation.data.BodyPart
-import org.tensorflow.lite.examples.poseestimation.data.Device
+import org.tensorflow.posemon.data.BodyPart
+import org.tensorflow.posemon.data.Device
+import org.tensorflow.posemon.ml.ModelType
+import org.tensorflow.posemon.ml.MoveNet
+import org.tensorflow.posemon.ml.PoseDetector
 
 @RunWith(AndroidJUnit4::class)
 class MovenetLightningTest {
@@ -44,20 +47,20 @@ class MovenetLightningTest {
         appContext = InstrumentationRegistry.getInstrumentation().targetContext
         poseDetector = MoveNet.create(appContext, Device.CPU, ModelType.Lightning)
         expectedDetectionResult =
-            EvaluationUtils.loadCSVAsset("pose_landmark_truth.csv")
+            lyi.linyi.lite.examples.posemon.ml.EvaluationUtils.loadCSVAsset("pose_landmark_truth.csv")
     }
 
     @Test
     fun testPoseEstimationResultWithImage1() {
-        val input = EvaluationUtils.loadBitmapAssetByName(TEST_INPUT_IMAGE1)
+        val input = lyi.linyi.lite.examples.posemon.ml.EvaluationUtils.loadBitmapAssetByName(TEST_INPUT_IMAGE1)
 
         // As MoveNet uses the previous frame to optimize detection results, we run it
         // multiple times on the same image to improve the result.
         poseDetector.estimatePoses(input)
         poseDetector.estimatePoses(input)
         poseDetector.estimatePoses(input)
         val person = poseDetector.estimatePoses(input)[0]
-        EvaluationUtils.assertPoseDetectionResult(
+        lyi.linyi.lite.examples.posemon.ml.EvaluationUtils.assertPoseDetectionResult(
             person,
             expectedDetectionResult[0],
             ACCEPTABLE_ERROR
@@ -66,15 +69,15 @@ class MovenetLightningTest {
 
     @Test
     fun testPoseEstimationResultWithImage2() {
-        val input = EvaluationUtils.loadBitmapAssetByName(TEST_INPUT_IMAGE2)
+        val input = lyi.linyi.lite.examples.posemon.ml.EvaluationUtils.loadBitmapAssetByName(TEST_INPUT_IMAGE2)
 
         // As MoveNet uses the previous frame to optimize detection results, we run it
         // multiple times on the same image to improve the result.
         poseDetector.estimatePoses(input)
         poseDetector.estimatePoses(input)
         poseDetector.estimatePoses(input)
         val person = poseDetector.estimatePoses(input)[0]
-        EvaluationUtils.assertPoseDetectionResult(
+        lyi.linyi.lite.examples.posemon.ml.EvaluationUtils.assertPoseDetectionResult(
             person,
             expectedDetectionResult[1],
             ACCEPTABLE_ERROR

android/app/src/androidTest/java/org/tensorflow/lite/examples/poseestimation/ml/MovenetMultiPoseTest.kt → android/app/src/androidTest/java/lyi/linyi/lite/examples/posemon/ml/MovenetMultiPoseTest.kt

+11 -11
@@ -14,7 +14,7 @@ limitations under the License.
 ==============================================================================
 */
 
-package org.tensorflow.lite.examples.poseestimation.ml
+package lyi.linyi.lite.examples.posemon.ml
 
 import android.content.Context
 import android.graphics.Bitmap
@@ -24,10 +24,10 @@ import androidx.test.platform.app.InstrumentationRegistry
 import org.junit.Before
 import org.junit.Test
 import org.junit.runner.RunWith
-import org.tensorflow.lite.examples.poseestimation.data.BodyPart
-import org.tensorflow.lite.examples.poseestimation.data.Device
-import org.tensorflow.lite.examples.poseestimation.ml.MoveNetMultiPose
-import org.tensorflow.lite.examples.poseestimation.ml.Type
+import org.tensorflow.posemon.data.BodyPart
+import org.tensorflow.posemon.data.Device
+import org.tensorflow.posemon.ml.MoveNetMultiPose
+import org.tensorflow.posemon.ml.Type
 
 @RunWith(AndroidJUnit4::class)
 class MovenetMultiPoseTest {
@@ -46,11 +46,11 @@ class MovenetMultiPoseTest {
     fun setup() {
         appContext = InstrumentationRegistry.getInstrumentation().targetContext
         poseDetector = MoveNetMultiPose.create(appContext, Device.CPU, Type.Dynamic)
-        val input1 = EvaluationUtils.loadBitmapAssetByName(TEST_INPUT_IMAGE1)
-        val input2 = EvaluationUtils.loadBitmapAssetByName(TEST_INPUT_IMAGE2)
-        inputFinal = EvaluationUtils.hConcat(input1, input2)
+        val input1 = lyi.linyi.lite.examples.posemon.ml.EvaluationUtils.loadBitmapAssetByName(TEST_INPUT_IMAGE1)
+        val input2 = lyi.linyi.lite.examples.posemon.ml.EvaluationUtils.loadBitmapAssetByName(TEST_INPUT_IMAGE2)
+        inputFinal = lyi.linyi.lite.examples.posemon.ml.EvaluationUtils.hConcat(input1, input2)
         expectedDetectionResult =
-            EvaluationUtils.loadCSVAsset("pose_landmark_truth.csv")
+            lyi.linyi.lite.examples.posemon.ml.EvaluationUtils.loadCSVAsset("pose_landmark_truth.csv")
 
         // Update the coordinates from pose_landmark_truth.csv to match the new input image.
         for ((_, value) in expectedDetectionResult[1]) {
@@ -66,13 +66,13 @@ class MovenetMultiPoseTest {
         // Sort the results so that the person on the right side comes first.
         val sortedPersons = persons.sortedBy { it.boundingBox?.left }
 
-        EvaluationUtils.assertPoseDetectionResult(
+        lyi.linyi.lite.examples.posemon.ml.EvaluationUtils.assertPoseDetectionResult(
             sortedPersons[0],
             expectedDetectionResult[0],
             ACCEPTABLE_ERROR
         )
 
-        EvaluationUtils.assertPoseDetectionResult(
+        lyi.linyi.lite.examples.posemon.ml.EvaluationUtils.assertPoseDetectionResult(
             sortedPersons[1],
             expectedDetectionResult[1],
             ACCEPTABLE_ERROR

android/app/src/androidTest/java/org/tensorflow/lite/examples/poseestimation/ml/MovenetThunderTest.kt → android/app/src/androidTest/java/lyi/linyi/lite/examples/posemon/ml/MovenetThunderTest.kt

+11 -8
@@ -14,7 +14,7 @@ limitations under the License.
 ==============================================================================
 */
 
-package org.tensorflow.lite.examples.poseestimation.ml
+package lyi.linyi.lite.examples.posemon.ml
 
 import android.content.Context
 import android.graphics.PointF
@@ -23,8 +23,11 @@ import androidx.test.platform.app.InstrumentationRegistry
 import org.junit.Before
 import org.junit.Test
 import org.junit.runner.RunWith
-import org.tensorflow.lite.examples.poseestimation.data.BodyPart
-import org.tensorflow.lite.examples.poseestimation.data.Device
+import org.tensorflow.posemon.data.BodyPart
+import org.tensorflow.posemon.data.Device
+import org.tensorflow.posemon.ml.ModelType
+import org.tensorflow.posemon.ml.MoveNet
+import org.tensorflow.posemon.ml.PoseDetector
 
 @RunWith(AndroidJUnit4::class)
 class MovenetThunderTest {
@@ -44,20 +47,20 @@ class MovenetThunderTest {
         appContext = InstrumentationRegistry.getInstrumentation().targetContext
         poseDetector = MoveNet.create(appContext, Device.CPU, ModelType.Thunder)
         expectedDetectionResult =
-            EvaluationUtils.loadCSVAsset("pose_landmark_truth.csv")
+            lyi.linyi.lite.examples.posemon.ml.EvaluationUtils.loadCSVAsset("pose_landmark_truth.csv")
     }
 
     @Test
    fun testPoseEstimationResultWithImage1() {
-        val input = EvaluationUtils.loadBitmapAssetByName(TEST_INPUT_IMAGE1)
+        val input = lyi.linyi.lite.examples.posemon.ml.EvaluationUtils.loadBitmapAssetByName(TEST_INPUT_IMAGE1)
 
         // As MoveNet uses the previous frame to optimize detection results, we run it
         // multiple times on the same image to improve the result.
         poseDetector.estimatePoses(input)
         poseDetector.estimatePoses(input)
         poseDetector.estimatePoses(input)
         val person = poseDetector.estimatePoses(input)[0]
-        EvaluationUtils.assertPoseDetectionResult(
+        lyi.linyi.lite.examples.posemon.ml.EvaluationUtils.assertPoseDetectionResult(
             person,
             expectedDetectionResult[0],
             ACCEPTABLE_ERROR
@@ -66,15 +69,15 @@ class MovenetThunderTest {
 
     @Test
     fun testPoseEstimationResultWithImage2() {
-        val input = EvaluationUtils.loadBitmapAssetByName(TEST_INPUT_IMAGE2)
+        val input = lyi.linyi.lite.examples.posemon.ml.EvaluationUtils.loadBitmapAssetByName(TEST_INPUT_IMAGE2)
 
         // As MoveNet uses the previous frame to optimize detection results, we run it
         // multiple times on the same image to improve the result.
         poseDetector.estimatePoses(input)
         poseDetector.estimatePoses(input)
         poseDetector.estimatePoses(input)
         val person = poseDetector.estimatePoses(input)[0]
-        EvaluationUtils.assertPoseDetectionResult(
+        lyi.linyi.lite.examples.posemon.ml.EvaluationUtils.assertPoseDetectionResult(
             person,
             expectedDetectionResult[1],
             ACCEPTABLE_ERROR
