
Commit 3eeeb94

Fall and wave detection ROS nodes (#423)
* Overhauled fall detection node to be able to run on pose messages
* Added to_ros_box in bridge, needed for overhauled fall detection node
* Updated fall detection node section
* Minor fix ros1 fall node
* Added to_ros_box in ros2 bridge
* Updated ros2 fall detection node
* Initial version of ros1 wave detection node
* Renamed class
* Added performance to ros1 wave detection node
* Minor fixes in fall_detection_node.py
* Refactored wave_detection_node.py to work similar to fall detection node
* Applied minor fixes to ros2 fall_detection_node.py
* Removed unused import
* Fall detection ros1 - visualization mode now publishes bboxes too
* Fall detection ros2 - visualization mode now publishes bboxes too, fixed bug
* Fall detection ros1 doc minor updates
* Fall detection ros2 doc updated for newly updated node
* Wave detection ros1, added missing docstring and wave detection messages are published in both modes
* Added wave detection entry in node index list
* Added wave detection section entry and fixed minor thing in fall detection section
* Fixed broken link
* Added ros2 wave detection node entry in setup.py
* Added new ros2 wave detection node
* Fixed broken link
* Added wave detection entry in node index list
* Added wave detection section entry and fixed minor thing in fall detection section
* Removed unused import
* Fixed default ctor argument and simplified if/else as suggested by review
* Fixed default ctor argument as suggested by review
* Fixed default ctor argument as suggested by review
* Fixes as suggested by review
* Re-arranged docstring to match actual order of arguments
* Added performance to fall detection ROS1 node
* Fixed performance topic name
* Added performance topic arg entries for wave and fall detection nodes
* Re-arranged docstring to match actual order of arguments
* Added performance to fall detection ROS2 node
* Added performance topic arg entries for wave and fall detection nodes
* Fixed wrong publisher argument in ros2 wave/fall nodes
* Fixed fall_detection_node.py performance measurement
* Fixed wave_detection_node.py performance measurement
* Fixed ROS2 fall_detection_node.py performance measurement
* Fixed ROS2 wave_detection_node.py performance measurement
1 parent bcb181c commit 3eeeb94

12 files changed: +1241 −167 lines changed


Diff for: projects/opendr_ws/README.md

+13-12
@@ -69,18 +69,19 @@ Currently, apart from tools, opendr_ws contains the following ROS nodes (categor
 1. [Pose Estimation](src/opendr_perception/README.md#pose-estimation-ros-node)
 2. [High Resolution Pose Estimation](src/opendr_perception/README.md#high-resolution-pose-estimation-ros-node)
 3. [Fall Detection](src/opendr_perception/README.md#fall-detection-ros-node)
-4. [Face Detection](src/opendr_perception/README.md#face-detection-ros-node)
-5. [Face Recognition](src/opendr_perception/README.md#face-recognition-ros-node)
-6. [2D Object Detection](src/opendr_perception/README.md#2d-object-detection-ros-nodes)
-7. [2D Single Object Tracking](src/opendr_perception/README.md#2d-single-object-tracking-ros-node)
-8. [2D Object Tracking](src/opendr_perception/README.md#2d-object-tracking-ros-nodes)
-9. [Vision Based Panoptic Segmentation](src/opendr_perception/README.md#vision-based-panoptic-segmentation-ros-node)
-10. [Semantic Segmentation](src/opendr_perception/README.md#semantic-segmentation-ros-node)
-11. [Binary High Resolution](src/opendr_perception/README.md#binary-high-resolution-ros-node)
-12. [Image-based Facial Emotion Estimation](src/opendr_perception/README.md#image-based-facial-emotion-estimation-ros-node)
-13. [Landmark-based Facial Expression Recognition](src/opendr_perception/README.md#landmark-based-facial-expression-recognition-ros-node)
-14. [Skeleton-based Human Action Recognition](src/opendr_perception/README.md#skeleton-based-human-action-recognition-ros-node)
-15. [Video Human Activity Recognition](src/opendr_perception/README.md#video-human-activity-recognition-ros-node)
+4. [Wave Detection](src/opendr_perception/README.md#wave-detection-ros-node)
+5. [Face Detection](src/opendr_perception/README.md#face-detection-ros-node)
+6. [Face Recognition](src/opendr_perception/README.md#face-recognition-ros-node)
+7. [2D Object Detection](src/opendr_perception/README.md#2d-object-detection-ros-nodes)
+8. [2D Single Object Tracking](src/opendr_perception/README.md#2d-single-object-tracking-ros-node)
+9. [2D Object Tracking](src/opendr_perception/README.md#2d-object-tracking-ros-nodes)
+10. [Vision Based Panoptic Segmentation](src/opendr_perception/README.md#vision-based-panoptic-segmentation-ros-node)
+11. [Semantic Segmentation](src/opendr_perception/README.md#semantic-segmentation-ros-node)
+12. [Binary High Resolution](src/opendr_perception/README.md#binary-high-resolution-ros-node)
+13. [Image-based Facial Emotion Estimation](src/opendr_perception/README.md#image-based-facial-emotion-estimation-ros-node)
+14. [Landmark-based Facial Expression Recognition](src/opendr_perception/README.md#landmark-based-facial-expression-recognition-ros-node)
+15. [Skeleton-based Human Action Recognition](src/opendr_perception/README.md#skeleton-based-human-action-recognition-ros-nodes)
+16. [Video Human Activity Recognition](src/opendr_perception/README.md#video-human-activity-recognition-ros-node)
 
 ## RGB + Infrared input
 1. [End-to-End Multi-Modal Object Detection (GEM)](src/opendr_perception/README.md#2d-object-detection-gem-ros-node)

Diff for: projects/opendr_ws/src/opendr_bridge/src/opendr_bridge/bridge.py

+22
@@ -196,6 +196,28 @@ def from_ros_face(self, ros_hypothesis):
                             confidence=ros_hypothesis.score)
         return category
 
+    def to_ros_box(self, box):
+        """
+        Converts an OpenDR BoundingBox into a Detection2D that can carry the same information.
+        The bounding box is represented by its center coordinates as well as its width/height dimensions.
+        :param box: OpenDR bounding box to be converted
+        :type box: engine.target.BoundingBox
+        :return: ROS message with the Detection2D including the bounding box
+        :rtype: vision_msgs.msg.Detection2D
+        """
+        ros_box = Detection2D()
+        ros_box.bbox = BoundingBox2D()
+        ros_box.results.append(ObjectHypothesisWithPose())
+        ros_box.bbox.center = Pose2D()
+        ros_box.bbox.center.x = box.left + box.width / 2.
+        ros_box.bbox.center.y = box.top + box.height / 2.
+        ros_box.bbox.size_x = box.width
+        ros_box.bbox.size_y = box.height
+        ros_box.results[0].id = int(box.name)
+        if box.confidence:
+            ros_box.results[0].score = box.confidence
+        return ros_box
+
     def to_ros_boxes(self, box_list):
         """
         Converts an OpenDR BoundingBoxList into a Detection2DArray msg that can carry the same information.
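The new `to_ros_box` method converts OpenDR's corner-based boxes (`left`, `top`, `width`, `height`) into the center-based form that `vision_msgs/Detection2D` expects. The arithmetic can be sketched standalone; the `Box` dataclass below is a hypothetical stand-in for `engine.target.BoundingBox`, used only so the conversion runs without ROS:

```python
# Standalone sketch of the corner-to-center conversion performed by to_ros_box.
# Box is a simplified stand-in; the real class is engine.target.BoundingBox and
# the real output types live in vision_msgs.msg.
from dataclasses import dataclass


@dataclass
class Box:
    left: float    # x of the top-left corner
    top: float     # y of the top-left corner
    width: float
    height: float


def to_center_form(box):
    """Return (center_x, center_y, size_x, size_y) as Detection2D's bbox expects."""
    return (box.left + box.width / 2.0,
            box.top + box.height / 2.0,
            box.width,
            box.height)


# A 100x50 box whose top-left corner is at (10, 20) has its center at (60, 45).
print(to_center_form(Box(10, 20, 100, 50)))  # (60.0, 45.0, 100, 50)
```

The inverse (center form back to corner form) is `left = center_x - size_x / 2`, which is what a consumer of `Detection2D` would apply.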

Diff for: projects/opendr_ws/src/opendr_perception/CMakeLists.txt

+1
@@ -32,6 +32,7 @@ catkin_install_python(PROGRAMS
   scripts/pose_estimation_node.py
   scripts/hr_pose_estimation_node.py
   scripts/fall_detection_node.py
+  scripts/wave_detection_node.py
   scripts/object_detection_2d_nanodet_node.py
   scripts/object_detection_2d_yolov5_node.py
   scripts/object_detection_2d_detr_node.py

Diff for: projects/opendr_ws/src/opendr_perception/README.md

+103-13
@@ -127,33 +127,123 @@ The node publishes the detected poses in [OpenDR's 2D pose message format](../op
 
 You can find the fall detection ROS node python script [here](./scripts/fall_detection_node.py) to inspect the code and modify it as you wish to fit your needs.
 The node makes use of the toolkit's [fall detection tool](../../../../src/opendr/perception/fall_detection/fall_detector_learner.py) whose documentation can be found [here](../../../../docs/reference/fall-detection.md).
-Fall detection uses the toolkit's pose estimation tool internally.
+Fall detection is rule-based and works on top of pose estimation.
 
-<!-- TODO Should add information about taking advantage of the pose estimation ros node when running fall detection, see issue https://github.com/opendr-eu/opendr/issues/282 -->
+This node normally runs on `detection mode` where it subscribes to a topic of OpenDR poses and detects whether the poses are fallen persons or not.
+By providing an image topic the node runs on `visualization mode`. It also gets images, performs pose estimation internally and visualizes the output on an output image topic.
+Note that when providing an image topic the node has significantly worse performance in terms of speed, due to running pose estimation internally.
 
-#### Instructions for basic usage:
+- #### Instructions for basic usage in `detection mode`:
 
-1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+1. Start the node responsible for publishing poses. Refer to the [pose estimation node above](#pose-estimation-ros-node).
 
 2. You are then ready to start the fall detection node:
 
 ```shell
 rosrun opendr_perception fall_detection_node.py
 ```
-The following optional arguments are available:
+The following optional arguments are available and relevant for running fall detection on pose messages only:
 - `-h or --help`: show a help message and exit
-- `-i or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`/usb_cam/image_raw`)
-- `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image, `None` to stop the node from publishing on this topic (default=`/opendr/image_fallen_annotated`)
-- `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages, `None` to stop the node from publishing on this topic (default=`/opendr/fallen`)
-- `--performance_topic PERFORMANCE_TOPIC`: topic name for performance messages (default=`None`, disabled)
+- `-ip or --input_pose_topic INPUT_POSE_TOPIC`: topic name for input pose, `None` to stop the node from running detections on pose messages (default=`/opendr/poses`)
+- `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages (default=`/opendr/fallen`)
+- `--performance_topic PERFORMANCE_TOPIC`: topic name for performance messages, note that performance will be published to `PERFORMANCE_TOPIC/fallen` (default=`None`, disabled)
+
+3. Detections are published on the `detections_topic`
+
+- #### Instructions for `visualization mode`:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the fall detection node in `visualization mode`, which needs an input image topic to be provided:
+
+```shell
+rosrun opendr_perception fall_detection_node.py -ii /usb_cam/image_raw
+```
+The following optional arguments are available and relevant for running fall detection on images. Note that the
+`input_rgb_image_topic` is required for running in `visualization mode`:
+- `-h or --help`: show a help message and exit
+- `-ii or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`None`)
+- `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image (default=`/opendr/image_fallen_annotated`)
+- `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages (default=`/opendr/fallen`)
+- `--performance_topic PERFORMANCE_TOPIC`: topic name for performance messages, note that performance will be published to `PERFORMANCE_TOPIC/image` (default=`None`, disabled)
 - `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
 - `--accelerate`: acceleration flag that causes pose estimation that runs internally to run faster but with less accuracy
 
-3. Default output topics:
-  - Output images: `/opendr/image_fallen_annotated`
-  - Detection messages: `/opendr/fallen`
+- Default output topics:
+  - Detection messages: `/opendr/fallen`
+  - Output images: `/opendr/image_fallen_annotated`
 
-For viewing the output, refer to the [notes above.](#notes)
+For viewing the output, refer to the [notes above.](#notes)
+
+**Notes**
+
+Note that when the node runs on the default `detection mode` it is significantly faster than when it is provided with an
+input image topic. However, pose estimation needs to be performed externally on another node which publishes poses.
+When an input image topic is provided and the node runs in `visualization mode`, it runs pose estimation internally, and
+consequently it is recommended to only use it for testing purposes and not run other pose estimation nodes in parallel.
+The node can run in both modes in parallel or only on one of the two. To run the node only on `visualization mode` provide
+the argument `-ip None` to disable the `detection mode`. Detection messages on `detections_topic` are published in both modes.
+
+### Wave Detection ROS Node
+
+You can find the wave detection ROS node python script [here](./scripts/wave_detection_node.py) to inspect the code and modify it as you wish to fit your needs.
+The node is based on a [wave detection demo of the Lightweight OpenPose tool](../../../../projects/python/perception/pose_estimation/lightweight_open_pose/demos/wave_detection_demo.py).
+Wave detection is rule-based and works on top of pose estimation.
+
+This node normally runs on `detection mode` where it subscribes to a topic of OpenDR poses and detects whether the poses are waving or not.
+By providing an image topic the node runs on `visualization mode`. It also gets images, performs pose estimation internally and visualizes the output on an output image topic.
+Note that when providing an image topic the node has significantly worse performance in terms of speed, due to running pose estimation internally.
+
+- #### Instructions for basic usage in `detection mode`:
+
+1. Start the node responsible for publishing poses. Refer to the [pose estimation node above](#pose-estimation-ros-node).
+
+2. You are then ready to start the wave detection node:
+
+```shell
+rosrun opendr_perception wave_detection_node.py
+```
+The following optional arguments are available and relevant for running wave detection on pose messages only:
+- `-h or --help`: show a help message and exit
+- `-ip or --input_pose_topic INPUT_POSE_TOPIC`: topic name for input pose, `None` to stop the node from running detections on pose messages (default=`/opendr/poses`)
+- `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages (default=`/opendr/wave`)
+- `--performance_topic PERFORMANCE_TOPIC`: topic name for performance messages, note that performance will be published to `PERFORMANCE_TOPIC/wave` (default=`None`, disabled)
+
+3. Detections are published on the `detections_topic`
+
+- #### Instructions for `visualization mode`:
+
+1. Start the node responsible for publishing images. If you have a USB camera, then you can use the `usb_cam_node` as explained in the [prerequisites above](#prerequisites).
+
+2. You are then ready to start the wave detection node in `visualization mode`, which needs an input image topic to be provided:
+
+```shell
+rosrun opendr_perception wave_detection_node.py -ii /usb_cam/image_raw
+```
+The following optional arguments are available and relevant for running wave detection on images. Note that the
+`input_rgb_image_topic` is required for running in `visualization mode`:
+- `-h or --help`: show a help message and exit
+- `-ii or --input_rgb_image_topic INPUT_RGB_IMAGE_TOPIC`: topic name for input RGB image (default=`None`)
+- `-o or --output_rgb_image_topic OUTPUT_RGB_IMAGE_TOPIC`: topic name for output annotated RGB image (default=`/opendr/image_wave_annotated`)
+- `-d or --detections_topic DETECTIONS_TOPIC`: topic name for detection messages (default=`/opendr/wave`)
+- `--performance_topic PERFORMANCE_TOPIC`: topic name for performance messages, note that performance will be published to `PERFORMANCE_TOPIC/image` (default=`None`, disabled)
+- `--device DEVICE`: device to use, either `cpu` or `cuda`, falls back to `cpu` if GPU or CUDA is not found (default=`cuda`)
+- `--accelerate`: acceleration flag that causes pose estimation that runs internally to run faster but with less accuracy
+
+- Default output topics:
+  - Detection messages: `/opendr/wave`
+  - Output images: `/opendr/image_wave_annotated`
+
+For viewing the output, refer to the [notes above.](#notes)
+
+**Notes**
+
+Note that when the node runs on the default `detection mode` it is significantly faster than when it is provided with an
+input image topic. However, pose estimation needs to be performed externally on another node which publishes poses.
+When an input image topic is provided and the node runs in `visualization mode`, it runs pose estimation internally, and
+consequently it is recommended to only use it for testing purposes and not run other pose estimation nodes in parallel.
+The node can run in both modes in parallel or only on one of the two. To run the node only on `visualization mode` provide
+the argument `-ip None` to disable the `detection mode`. Detection messages on `detections_topic` are published in both modes.
 
 ### Face Detection ROS Node
 
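Both nodes' docs rely on passing the literal string `None` on the command line (e.g. `-ip None`) to disable a topic and hence a mode. A hypothetical argparse sketch of that convention follows; the actual argument handling lives in `fall_detection_node.py` / `wave_detection_node.py` and may differ in detail:

```python
# Hypothetical sketch of how the fall/wave nodes' CLI interprets the literal
# string "None" as "disable this topic"; not the nodes' actual code.
import argparse


def parse_mode(argv):
    """Return (detection_mode, visualization_mode) for a given argv list."""
    parser = argparse.ArgumentParser()
    parser.add_argument("-ip", "--input_pose_topic", default="/opendr/poses")
    parser.add_argument("-ii", "--input_rgb_image_topic", default=None)
    args = parser.parse_args(argv)
    # Treat the literal string "None" the same as an absent topic.
    pose_topic = None if args.input_pose_topic == "None" else args.input_pose_topic
    image_topic = None if args.input_rgb_image_topic in (None, "None") else args.input_rgb_image_topic
    return (pose_topic is not None, image_topic is not None)


print(parse_mode([]))                                            # (True, False): detection mode only
print(parse_mode(["-ii", "/usb_cam/image_raw"]))                 # (True, True): both modes in parallel
print(parse_mode(["-ip", "None", "-ii", "/usb_cam/image_raw"]))  # (False, True): visualization only
```

This mirrors the documented behavior: detection mode is on by default, an image topic adds visualization mode, and `-ip None` turns detection mode off.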